Open Access
Vision-based Fall Risk Assessment through Attention Augmented Neural Encoding and Data Augmentation
Author(s) - Chunhua Pan, Rui Miao, Qing Zhang, Boting Qu, Xin Wang
Publication year - 2025
Publication title - IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3596942
Subject(s) - aerospace; bioengineering; communication, networking and broadcast technologies; components, circuits, devices and systems; computing and processing; engineered materials, dielectrics and plasmas; engineering profession; fields, waves and electromagnetics; general topics for engineers; geoscience; nuclear engineering; photonics and electrooptics; power, energy and industry applications; robotics and control systems; signal processing and analysis; transportation
Falls among elderly individuals and other at-risk populations represent a significant public health concern, underscoring the need for proactive and accessible fall risk assessment methods. Although prior systems using wearable sensors or depth cameras have demonstrated potential, their widespread adoption is often constrained by user discomfort, setup complexity, and the high cost of specialized equipment such as the Kinect. To address these limitations, we propose GTAE-FRA, a novel vision-based deep learning framework for fall risk assessment that leverages standard, widely available RGB cameras to record Five Times Sit-to-Stand (FSTS) test videos. GTAE-FRA begins with a robust video processing pipeline that includes 3D pose estimation, noise filtering, and Dynamic Time Warping (DTW)-based matching to extract reliable Body Skeleton Key Point (BSKP) sequences. These sequences are then fed into GTrans, an attention-augmented encoder that combines a message-passing aggregation mechanism with Transformer-based spatial-temporal modeling for nuanced feature extraction. To overcome limited data availability, six skeleton-oriented data augmentation strategies are applied, significantly enhancing the diversity and robustness of the training samples. Experimental results on a dataset of 450 FSTS videos demonstrate the effectiveness of GTAE-FRA, which achieves a detection accuracy, weighted F1 score, macro F1 score, and AUC of 87.00%, 87.01%, 88.92%, and 97.96%, respectively. These results represent average improvements of 2.49%, 2.66%, 2.24%, and 1.78% over baseline methods across the respective metrics.
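The paper's implementation is not included on this page. As a rough illustration of the DTW-based matching step mentioned in the abstract, the minimal NumPy sketch below aligns two skeleton sequences with the classic dynamic-programming recurrence, using the mean per-joint Euclidean distance as the frame cost. The function name and the (frames × joints × 3) array layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def dtw_distance(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
    """DTW alignment cost between two (frames, joints, 3) skeleton sequences.

    Frame-to-frame cost is the mean Euclidean distance over joints; the
    standard O(T1*T2) dynamic-programming recurrence accumulates it.
    """
    t1, t2 = len(seq_a), len(seq_b)
    # Pairwise frame-cost matrix.
    cost = np.zeros((t1, t2))
    for i in range(t1):
        for j in range(t2):
            cost[i, j] = np.linalg.norm(seq_a[i] - seq_b[j], axis=-1).mean()

    # Accumulated cost with the standard DTW recurrence.
    acc = np.full((t1 + 1, t2 + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, t1 + 1):
        for j in range(1, t2 + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],      # insertion
                                                 acc[i, j - 1],      # deletion
                                                 acc[i - 1, j - 1])  # match
    return float(acc[t1, t2])
```

In a pipeline like the one described, such a distance could be used, for instance, to select among several tracked pose candidates the BSKP sequence that best matches a template FSTS sequence; the paper's exact matching criterion is not specified here.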
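The abstract describes GTrans only at a high level: a message-passing aggregation mechanism combined with Transformer-based spatial-temporal modeling. The PyTorch sketch below is one plausible reading of that combination, not the published architecture: a normalized-adjacency message-passing step over the skeleton joints in each frame, followed by a standard Transformer encoder over the time axis. The hidden size, head count, and the three-level risk label are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GraphTransformerEncoder(nn.Module):
    """Toy spatial-temporal encoder: per-frame message passing over the
    skeleton graph, then Transformer self-attention over the time axis."""

    def __init__(self, adjacency: torch.Tensor, in_dim: int = 3,
                 hid_dim: int = 64, n_heads: int = 4, n_layers: int = 2,
                 n_classes: int = 3):  # 3 risk levels assumed for illustration
        super().__init__()
        # Row-normalized adjacency with self-loops; the skeleton graph is fixed.
        a = adjacency + torch.eye(adjacency.size(0))
        self.register_buffer("adj", a / a.sum(dim=1, keepdim=True))
        self.embed = nn.Linear(in_dim, hid_dim)
        layer = nn.TransformerEncoderLayer(d_model=hid_dim, nhead=n_heads,
                                           batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(hid_dim, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, joints, 3) joint coordinates
        h = torch.relu(self.embed(x))                   # embed each joint
        h = torch.einsum("ij,btjd->btid", self.adj, h)  # aggregate neighbor joints
        h = h.mean(dim=2)                               # pool joints -> (B, T, D)
        h = self.temporal(h)                            # temporal self-attention
        return self.head(h.mean(dim=1))                 # clip-level logits
```

Pooling the joints before the temporal Transformer keeps the sketch short; the actual GTrans encoder presumably couples spatial aggregation and temporal attention more tightly than this.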
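The six skeleton-oriented augmentation strategies are not enumerated in the abstract. The sketch below shows four common skeleton augmentations (random rotation about the vertical axis, global scaling, Gaussian joint jitter, and temporal cropping) as stand-ins for the general idea; the strategies actually used in the paper may differ.

```python
import numpy as np

def rotate_about_vertical(skel: np.ndarray, max_deg: float = 15.0) -> np.ndarray:
    """Rotate every joint of a (frames, joints, 3) sequence around the y axis."""
    theta = np.deg2rad(np.random.uniform(-max_deg, max_deg))
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])
    return skel @ rot.T

def random_scale(skel: np.ndarray, low: float = 0.9, high: float = 1.1) -> np.ndarray:
    """Uniformly rescale the whole skeleton (simulates camera-distance variation)."""
    return skel * np.random.uniform(low, high)

def joint_jitter(skel: np.ndarray, sigma: float = 0.01) -> np.ndarray:
    """Add Gaussian noise to joint coordinates (simulates pose-estimation noise)."""
    return skel + np.random.normal(0.0, sigma, skel.shape)

def temporal_crop(skel: np.ndarray, keep_ratio: float = 0.9) -> np.ndarray:
    """Keep a random contiguous window of frames (simulates speed/phase variation)."""
    t = skel.shape[0]
    keep = max(1, int(t * keep_ratio))
    start = np.random.randint(0, t - keep + 1)
    return skel[start:start + keep]
```

Augmentations of this kind expand a small corpus such as the 450 FSTS videos by generating geometrically and temporally perturbed copies of each BSKP sequence while preserving its risk label.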
