Open Access
Multimodal AI for Home Wound Patient Referral Decisions from Images with Specialist Annotations
Author(s) -
Reza Saadati Fard,
Emmanuel Agu,
Palawat Busaranuvong,
Deepak Kumar,
Shefalika Gautam,
Bengisu Tulu,
Diane Strong
Publication year - 2025
Publication title -
IEEE Journal of Translational Engineering in Health and Medicine
Language(s) - English
Resource type - Journal article
SCImago Journal Rank - 0.653
H-Index - 24
eISSN - 2168-2372
DOI - 10.1109/jtehm.2025.3588427
Subject(s) - Bioengineering; Communication, Networking and Broadcast Technologies; Components, Circuits, Devices and Systems; Computing and Processing; Signal Processing and Analysis; Robotics and Control Systems; General Topics for Engineers
Chronic wounds affect 8.5 million Americans, especially the elderly and patients with diabetes. Because regular care is critical for proper healing, many patients receive care in their homes from visiting nurses and caregivers with varying levels of wound expertise. Problematic, non-healing wounds should be referred to experts in wound clinics to avoid adverse outcomes such as limb amputations. Unfortunately, due to the lack of wound expertise, referral decisions made in non-clinical settings can be erroneous, delayed, or unnecessary. This paper proposes the Deep Multimodal Wound Assessment Tool (DM-WAT), a novel machine learning framework that supports visiting nurses by recommending wound referral decisions from smartphone-captured wound images and associated clinical notes. DM-WAT extracts visual features from wound images using DeiT-Base-Distilled, a Vision Transformer (ViT) architecture. Distillation-based training facilitates representation learning and knowledge transfer from a larger teacher model to DeiT-Base, enabling robust performance on our small dataset of 205 wound images. DM-WAT extracts text features from clinical notes using DeBERTa-base, which captures context by disentangling content and position information in the text. Visual and text features are combined using an intermediate fusion approach. To overcome the challenges posed by a small, imbalanced dataset, DM-WAT combines image and text augmentation with transfer learning via pre-trained feature extractors. In a rigorous evaluation, DM-WAT achieved an accuracy of 77% ± 3% and an F1 score of 70% ± 2%, outperforming the prior state of the art and all baseline single-modality and multimodal approaches. Additionally, to interpret DM-WAT’s recommendations, the Score-CAM and Captum interpretation algorithms provided insights into the specific parts of the image and text inputs that the model focused on during decision-making.
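
The abstract names the two encoders and an intermediate-fusion design. The sketch below illustrates that design using the Hugging Face checkpoints these names usually refer to (facebook/deit-base-distilled-patch16-224 and microsoft/deberta-base); the fusion head's width, dropout, and the number of referral classes are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class DMWATSketch(nn.Module):
    """Hypothetical intermediate-fusion model in the spirit of DM-WAT:
    a DeiT image encoder and a DeBERTa text encoder whose [CLS] features
    are concatenated and passed to a small classification head."""

    def __init__(self, num_classes: int = 2):  # number of referral classes is assumed
        super().__init__()
        # Pre-trained feature extractors named in the abstract.
        self.image_encoder = AutoModel.from_pretrained(
            "facebook/deit-base-distilled-patch16-224")
        self.text_encoder = AutoModel.from_pretrained("microsoft/deberta-base")
        fused_dim = (self.image_encoder.config.hidden_size
                     + self.text_encoder.config.hidden_size)  # 768 + 768
        # Head width and dropout are illustrative, not the paper's values.
        self.classifier = nn.Sequential(
            nn.Linear(fused_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, pixel_values, input_ids, attention_mask):
        # [CLS]-position embeddings from each encoder serve as modality features.
        img = self.image_encoder(pixel_values=pixel_values).last_hidden_state[:, 0]
        txt = self.text_encoder(input_ids=input_ids,
                                attention_mask=attention_mask).last_hidden_state[:, 0]
        # Intermediate fusion: concatenate features before the classifier head.
        return self.classifier(torch.cat([img, txt], dim=-1))
```

In training, both encoders would be fine-tuned on the augmented wound images and notes the abstract describes; moving the concatenation point earlier or later in the network would shift this design toward early or late fusion.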
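For the interpretation step, Score-CAM highlights salient image regions while Captum attributes predictions to the text input. Below is a minimal, hedged sketch of token attribution on the text branch using Captum's LayerIntegratedGradients; the DMWATSketch model, the invented clinical note, the random placeholder image tensor, and the [PAD]-token baseline are illustrative assumptions, not the authors' exact procedure.

```python
import torch
from transformers import AutoTokenizer
from captum.attr import LayerIntegratedGradients

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = DMWATSketch()  # hypothetical model from the previous sketch
model.eval()

note = "Wound edges macerated; drainage increased since last visit."  # invented note
enc = tokenizer(note, return_tensors="pt")
pixel_values = torch.randn(1, 3, 224, 224)  # placeholder preprocessed wound image

def text_forward(input_ids, attention_mask, pixel_values):
    # Wrap the multimodal forward so Captum can attribute over text tokens.
    return model(pixel_values, input_ids, attention_mask)

# Attribute class scores to the outputs of the text-embedding layer.
lig = LayerIntegratedGradients(text_forward, model.text_encoder.embeddings)

# Baseline: the same sequence with every token replaced by [PAD].
baseline_ids = torch.full_like(enc["input_ids"], tokenizer.pad_token_id)

with torch.no_grad():
    pred = text_forward(enc["input_ids"], enc["attention_mask"],
                        pixel_values).argmax(-1)

attrs = lig.attribute(
    inputs=enc["input_ids"],
    baselines=baseline_ids,
    additional_forward_args=(enc["attention_mask"], pixel_values),
    target=pred.item(),
)
token_scores = attrs.sum(dim=-1).squeeze(0)  # collapse the embedding dimension
for tok, score in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]),
                      token_scores):
    print(f"{tok:>15s}  {score.item():+.4f}")
```

A Score-CAM pass over the image branch would play the analogous role for the visual modality, producing a saliency map over wound regions rather than per-token scores.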
