Open Access
Machine learning‐based multimodal prediction of language outcomes in chronic aphasia
Author(s) -
Kristinsson Sigfus,
Zhang Wanfang,
Rorden Chris,
Newman-Norlund Roger,
Basilakos Alexandra,
Bonilha Leonardo,
Yourganov Grigori,
Xiao Feifei,
Hillis Argye,
Fridriksson Julius
Publication year - 2021
Publication title -
human brain mapping
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.005
H-Index - 191
eISSN - 1097-0193
pISSN - 1065-9471
DOI - 10.1002/hbm.25321
Subject(s) - aphasia, neuroimaging, modalities, modality (human–computer interaction), artificial intelligence, functional neuroimaging, functional magnetic resonance imaging, computer science, psychology, machine learning, cognitive psychology, neuroscience, social science, sociology
Recent studies have combined multiple neuroimaging modalities to gain further understanding of the neurobiological substrates of aphasia. Following this line of work, the current study uses machine learning approaches to predict aphasia severity and specific language measures based on a multimodal neuroimaging dataset. A total of 116 individuals with chronic left-hemisphere stroke were included in the study. Neuroimaging data included task-based functional magnetic resonance imaging (fMRI), diffusion-based fractional anisotropy (FA) values, cerebral blood flow (CBF), and lesion-load data. The Western Aphasia Battery was used to measure aphasia severity and specific language functions. As a primary analysis, we constructed support vector regression (SVR) models predicting language measures based on (i) each neuroimaging modality separately, (ii) lesion volume alone, and (iii) a combination of all modalities. Prediction accuracy was then statistically compared across models. Prediction accuracy varied substantially across modalities and language measures (predicted vs. empirical correlation range: r = .00–.67). The multimodal prediction model yielded the most accurate prediction in all cases (r = .53–.67). Statistical superiority in favor of the multimodal model was achieved in 28/30 model comparisons (p-value range: <.001–.046). Our results indicate that different neuroimaging modalities carry complementary information that can be integrated to more accurately depict how brain damage and the remaining functionality of intact brain tissue translate into language function in aphasia.
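
The analysis pipeline described in the abstract can be illustrated with a minimal sketch: fit an SVR model per modality, fit one on the concatenated multimodal features, and evaluate each by the correlation between out-of-sample predictions and empirical scores. Everything below is an assumption for illustration only; the kernel choice, the 5-fold cross-validation scheme, the feature dimensionality, and the synthetic data stand in for the study's actual feature extraction and hyperparameters, which are specified in the paper itself.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-ins for the four modalities named in the abstract
# (task fMRI, FA values, CBF, lesion load); feature counts are arbitrary.
n_subjects = 116
modalities = {
    "fmri": rng.normal(size=(n_subjects, 20)),
    "fa": rng.normal(size=(n_subjects, 20)),
    "cbf": rng.normal(size=(n_subjects, 20)),
    "lesion_load": rng.normal(size=(n_subjects, 20)),
}
# Outcome measure, e.g., a WAB-derived score (synthetic here).
y = rng.normal(loc=70, scale=15, size=n_subjects)

def cv_predict_corr(X, y, n_splits=5):
    """Out-of-sample SVR predictions; returns predicted-vs-empirical r."""
    preds = np.empty_like(y)
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
        model = make_pipeline(StandardScaler(), SVR(kernel="linear", C=1.0))
        model.fit(X[train], y[train])
        preds[test] = model.predict(X[test])
    return pearsonr(preds, y)[0]

# (i) each modality separately
for name, X in modalities.items():
    print(f"{name}: r = {cv_predict_corr(X, y):.2f}")

# (iii) all modalities concatenated into one multimodal feature set
X_multi = np.hstack(list(modalities.values()))
print(f"multimodal: r = {cv_predict_corr(X_multi, y):.2f}")
```

Concatenating the per-modality feature matrices is the simplest form of multimodal integration; the point of the comparison is that the combined model can exploit complementary information that no single modality carries on its own.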
