Open Access
Comparing Machine Learning Methods to Improve Fall Risk Detection in Elderly with Osteoporosis from Balance Data
Author(s) -
German Cuaya-Simbro,
Alberto Isaac Pérez Sanpablo,
Eduardo F. Morales,
Ivett Quiñones Urióstegui,
Lidia Núñez-Carrera
Publication year - 2021
Publication title - Journal of Healthcare Engineering
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.509
H-Index - 29
eISSN - 2040-2309
pISSN - 2040-2295
DOI - 10.1155/2021/8697805
Subject(s) - random forest, oversampling, feature selection, fall prevention, machine learning, artificial intelligence, medicine, poison control, computer science, physical therapy, injury prevention, medical emergency, computer network, bandwidth (computing)
Falls are a multifactorial cause of injuries for older people, and subjects with osteoporosis are particularly vulnerable to them. We study the performance of different computational methods for identifying people with osteoporosis who experience a fall by analyzing balance parameters. Balance parameters from eyes-open and eyes-closed posturographic studies, together with a prospective registration of falls, were obtained from a sample of 126 community-dwelling older women with osteoporosis (age 74.3 ± 6.3) using the World Health Organization Questionnaire for the study of falls over a follow-up of 2.5 years. We analyzed the performance of every developed model in determining falls and validated the relevance of the selected parameter sets. The principal findings of this research were that (1) models built using oversampling methods with either the IBk (KNN) or the Random Forest classifier can be considered good options for a predictive clinical test, and (2) the feature selection for minority class (FSMC) method selected previously unnoticed balance parameters, which implies that intelligent computing methods can extract useful information from attributes that would otherwise be disregarded by experts. Finally, the results suggest that the Random Forest classifier, using an oversampling method to balance the data, achieved the best overall performance independent of the set of variables used, in terms of sensitivity (>0.71), specificity (>0.18), positive predictive value (PPV >0.74), and negative predictive value (NPV >0.66). However, the IBk classifier built with oversampled data, considering information from both eyes open and eyes closed and using all variables, achieved the best performance (sensitivity >0.81, specificity >0.19, PPV = 0.97, and NPV = 0.66).
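The abstract describes a pipeline of oversampling the minority (faller) class, training Random Forest and IBk (KNN) classifiers on balance parameters, and reporting sensitivity, specificity, PPV, and NPV. The sketch below is a minimal illustration of that kind of pipeline, not the authors' implementation: the library choices (scikit-learn), random oversampling by resampling, hyperparameters, and the synthetic stand-in data are all assumptions made for demonstration.

```python
# Illustrative sketch only: oversample the minority class, then evaluate a
# Random Forest and a KNN (analogous to Weka's IBk) with the metrics reported
# in the abstract. The data here are synthetic placeholders, not the study's
# posturographic balance parameters.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.utils import resample

# Synthetic imbalanced dataset standing in for balance parameters (class 1 = faller).
X, y = make_classification(n_samples=126, n_features=20,
                           weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class in the training set only.
majority, minority = X_tr[y_tr == 0], X_tr[y_tr == 1]
minority_up = resample(minority, replace=True,
                       n_samples=len(majority), random_state=0)
X_bal = np.vstack([majority, minority_up])
y_bal = np.array([0] * len(majority) + [1] * len(minority_up))

for name, clf in [("Random Forest", RandomForestClassifier(random_state=0)),
                  ("KNN (IBk-like)", KNeighborsClassifier(n_neighbors=3))]:
    clf.fit(X_bal, y_bal)
    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    print(f"{name}: sensitivity={tp / (tp + fn):.2f} "
          f"specificity={tn / (tn + fp):.2f} "
          f"PPV={tp / (tp + fp):.2f} NPV={tn / (tn + fn):.2f}")
```

A real replication would use the study's posturographic features, the FSMC-selected variable subsets, and cross-validated evaluation rather than a single train/test split.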
