Visual attention as a model for interpretable neuroimage classification in dementia
Author(s) - Cole James, Wood David, Booth Thomas
Publication year - 2020
Publication title -
Alzheimer's & Dementia
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 6.713
H-Index - 118
eISSN - 1552-5279
pISSN - 1552-5260
DOI - 10.1002/alz.037351
Subject(s) - interpretability, neuroimaging, artificial intelligence, dementia, computer science, task (project management), convolutional neural network, machine learning, deep learning, psychology, medicine, disease, neuroscience, pathology, management, economics
Background
Deep learning has the potential to aid clinical decision-making in dementia by automatically classifying brain images. However, several key limitations currently prohibit clinical adoption: 1) network design must be optimised for 3D neuroimaging; 2) analysis must be computationally feasible; 3) model decisions must be interpretable. Interpretability is particularly crucial, as clinicians need to understand how and why each automated decision is made.

Method
We address these issues using a 3D recurrent visual attention model tailored for neuroimaging: NEURO-DRAM. The model comprises an agent which, trained by reinforcement learning, learns to navigate through volumetric images, selectively attending to the most informative regions for a given task. (An illustrative sketch of this class of model is given after the abstract.) We trained and tested NEURO-DRAM using T1-weighted MRIs from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, comprising n=162 Alzheimer's Disease (AD) patients and n=160 healthy controls (HCs), split into training (90%) and testing (10%) data. Classification generalisability was evaluated using data from independent AD patients (n=130) and HCs (n=100) in the Open Access Series of Imaging Studies (OASIS) dataset. Finally, we assessed the potential to transfer the classification task (i.e., with no extra training needed) to discriminating between the baseline MRIs of people with stable or progressive mild cognitive impairment (MCI).

Result
NEURO-DRAM achieved 98.5% balanced accuracy when classifying AD patients versus HCs in ADNI and 99.8% in OASIS, significantly outperforming a baseline convolutional neural network. When classifying stable versus progressive MCI, accuracy was 77.8%. For each test participant, an individualised trajectory was obtained, depicting the brain regions used to make that specific classification (Fig. 1). The regions 'visualised' by the model's trajectories included the hippocampus, parahippocampal gyrus and lateral ventricles. Training NEURO-DRAM was also substantially faster than training the baseline network (10 minutes versus 45 minutes).

Conclusion
Using a data-driven approach, near-perfect classification of AD patients from HCs can be achieved. To reach this high level of performance, our model learns to 'visually' attend to the areas of the brain radiologically associated with AD. Importantly, the neuroanatomical trajectory for each individual run through the analysis can be visualised, providing an intuitive way to interpret how NEURO-DRAM has reached a classification decision.
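To make the Method concrete, below is a minimal, illustrative sketch (in PyTorch) of a recurrent visual attention agent for 3D volumes, in the spirit of the DRAM family the abstract describes: a glimpse encoder embeds a small 3D patch, an LSTM core accumulates evidence across glimpses, a location head proposes the next patch to attend to (trained with REINFORCE), and a classification head makes the final decision. This is not the authors' NEURO-DRAM code; the layer sizes, glimpse size, reward definition, class coding (0 = HC, 1 = AD) and helper names such as extract_glimpse and RecurrentAttention3D are assumptions for illustration only.

```python
# Minimal sketch of a DRAM-style recurrent visual attention model for 3D
# MRI volumes. All sizes, names and training details here are illustrative
# assumptions, not the published NEURO-DRAM implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def extract_glimpse(volume, loc, size=16):
    """Crop a (size, size, size) patch centred at loc.

    volume: (B, 1, D, H, W); loc: (B, 3) normalised coordinates in [-1, 1].
    """
    B, _, D, H, W = volume.shape
    centres = (loc + 1) / 2 * torch.tensor([D, H, W], dtype=loc.dtype,
                                           device=volume.device)
    corners = (centres - size // 2).long()
    patches = []
    for b in range(B):
        z, y, x = [int(c.clamp(0, dim - size))
                   for c, dim in zip(corners[b], (D, H, W))]
        patches.append(volume[b:b + 1, :, z:z + size, y:y + size, x:x + size])
    return torch.cat(patches, dim=0)


class RecurrentAttention3D(nn.Module):
    """Glimpse encoder + LSTM core + location and classification heads."""

    def __init__(self, glimpse=16, hidden=256, n_classes=2, n_steps=6):
        super().__init__()
        self.hidden, self.n_steps = hidden, n_steps
        self.glimpse_net = nn.Sequential(                  # embeds a 3D patch
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (glimpse // 4) ** 3, hidden), nn.ReLU(),
        )
        self.loc_embed = nn.Linear(3, hidden)               # "where" pathway
        self.core = nn.LSTMCell(hidden, hidden)             # evidence over time
        self.locator = nn.Linear(hidden, 3)                 # next glimpse centre
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, volume):
        B = volume.size(0)
        h = volume.new_zeros(B, self.hidden)
        c = volume.new_zeros(B, self.hidden)
        loc = volume.new_zeros(B, 3)                        # start at the centre
        log_pi, trajectory = [], [loc]
        for _ in range(self.n_steps):
            g = self.glimpse_net(extract_glimpse(volume, loc))
            h, c = self.core(g + self.loc_embed(loc), (h, c))
            dist = torch.distributions.Normal(torch.tanh(self.locator(h)), 0.1)
            sample = dist.sample()                          # stochastic policy
            log_pi.append(dist.log_prob(sample).sum(-1))
            loc = sample.clamp(-1, 1)
            trajectory.append(loc)
        return self.classifier(h), torch.stack(log_pi), trajectory


# REINFORCE-style update: reward 1 for a correct classification, 0 otherwise.
model = RecurrentAttention3D()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

volume = torch.randn(2, 1, 96, 96, 96)      # dummy stand-in for T1 MRIs
labels = torch.tensor([0, 1])               # assumed coding: 0 = HC, 1 = AD

logits, log_pi, trajectory = model(volume)
reward = (logits.argmax(-1) == labels).float()
loss = F.cross_entropy(logits, labels) - (log_pi * reward).mean()
optimiser.zero_grad()
loss.backward()
optimiser.step()
# trajectory holds the sequence of attended locations for each scan: the
# interpretable "path through the brain" that the abstract describes.
```

The per-step locations returned in trajectory correspond to the kind of individualised glimpse path described in the Result section, which is what makes each classification decision inspectable.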
