Open Access
Multivideo Models for Classifying Hand Impairment After Stroke Using Egocentric Video
Author(s) -
Anne Mei,
Meng-Fen Tsai,
Jose Zariffa
Publication year - 2025
Publication title -
IEEE Transactions on Neural Systems and Rehabilitation Engineering
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 1.093
H-Index - 140
eISSN - 1558-0210
pISSN - 1534-4320
DOI - 10.1109/TNSRE.2025.3596488
Subject(s) - bioengineering , computing and processing , robotics and control systems , signal processing and analysis , communication, networking and broadcast technologies
Objectives: After stroke, hand function assessments are used as outcome measures to evaluate new rehabilitation therapies, but do not reflect true performance in natural environments. Wearable (egocentric) cameras provide a way to capture hand function information during activities of daily living (ADLs). However, while clinical assessments involve observing multiple functional tasks, existing deep learning methods developed to analyze hands in egocentric video are only capable of considering single ADLs. This study presents a novel multi-video architecture that processes multiple task videos to make improved estimations about hand impairment.

Methods: An egocentric video dataset of ADLs performed by stroke survivors in a home simulation lab was used to develop single- and multi-input video models for binary impairment classification. Using SlowFast as a base feature extractor, late fusion (majority voting, fully-connected network) and intermediate fusion (concatenation, Markov chain) were investigated for building multi-video architectures.

Results: Through evaluation with Leave-One-Participant-Out Cross-Validation, intermediate concatenation fusion was found to achieve the best performance of the fusion techniques for building multi-video models. The resulting multi-video model for cropped inputs achieved an F1-score of 0.778±0.129 and significantly outperformed its single-video counterpart (F1-score of 0.696±0.102). Similarly, the multi-video model for full-frame inputs (F1-score of 0.796±0.102) significantly outperformed its single-video counterpart (F1-score of 0.708±0.099).

Conclusion: Multi-video architectures are beneficial for estimating hand impairment from egocentric video after stroke.

Significance: The proposed deep learning solution is the first of its kind in multi-video analysis, and opens the door to further applications in automating other multi-observation assessments for clinical use.
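The late-fusion baseline described in the Methods (majority voting over per-task predictions) can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the label convention (0 = unimpaired, 1 = impaired), and the tie-breaking rule are all assumptions for illustration only; the paper's models additionally learn fused representations from SlowFast features, which is not shown here.

```python
from collections import Counter

def majority_vote(per_video_preds):
    """Late fusion by majority voting: combine binary impairment
    predictions (0 = unimpaired, 1 = impaired) made independently on
    each of a participant's task videos into one participant-level
    decision. Hypothetical sketch; label convention and tie-breaking
    toward the positive class are assumptions, not from the paper."""
    counts = Counter(per_video_preds)
    # Break ties toward 1 (impaired), erring on the side of flagging impairment.
    return 1 if counts[1] >= counts[0] else 0

# Hypothetical per-video predictions for one participant's five ADL videos.
participant_preds = [1, 0, 1, 1, 0]
print(majority_vote(participant_preds))
```

Intermediate concatenation fusion, which the paper found to perform best, instead joins the per-video feature vectors before the final classification layers, letting the classifier weigh evidence across tasks rather than merging hard per-video decisions.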