
Manifold regularized multitask feature learning for multimodality disease classification
Author(s) - Jie Biao, Zhang Daoqiang, Cheng Bo, Shen Dinggang
Publication year - 2015
Publication title - Human Brain Mapping
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.005
H-Index - 191
eISSN - 1097-0193
pISSN - 1065-9471
DOI - 10.1002/hbm.22642
Subject(s) - feature selection , artificial intelligence , multi task learning , feature (linguistics) , modalities , modality (human–computer interaction) , pattern recognition (psychology) , computer science , machine learning , neuroimaging , discriminative model , feature learning , positron emission tomography , alzheimer's disease neuroimaging initiative , task (project management) , cognition , cognitive impairment , psychology , neuroscience , social science , linguistics , philosophy , management , sociology , economics
Abstract - Multimodality-based methods have shown great advantages in the classification of Alzheimer's disease (AD) and its prodromal stage, that is, mild cognitive impairment (MCI). Recently, multitask feature selection methods have typically been used to jointly select common features across multiple modalities. However, one disadvantage of existing multimodality-based methods is that they ignore the useful data distribution information in each modality, which is essential for subsequent classification. Accordingly, in this paper we propose a manifold regularized multitask feature learning method to preserve both the intrinsic relatedness among multiple modalities of data and the data distribution information in each modality. Specifically, we denote the feature learning on each modality as a single task, and use a group-sparsity regularizer to capture the intrinsic relatedness among multiple tasks (i.e., modalities) and jointly select the common features from multiple tasks. Furthermore, we introduce a new manifold-based Laplacian regularizer to preserve the data distribution information from each task. Finally, we use the multikernel support vector machine method to fuse multimodality data for eventual classification. In addition, we extend our method to the semisupervised setting, where only part of the data are labeled. We evaluate our method using the baseline magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET), and cerebrospinal fluid (CSF) data of subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The experimental results demonstrate that our proposed method can not only achieve improved classification performance, but also help to discover the disease-related brain regions useful for disease diagnosis. Hum Brain Mapp 36:489–507, 2015. © 2014 Wiley Periodicals, Inc.
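To make the pipeline described in the abstract more concrete, the following is a minimal sketch, not the authors' exact formulation or solver. It assumes all modalities share the same ROI-level feature dimension and binary (+1/-1) class labels, builds a k-NN graph Laplacian per modality for the manifold term, solves a group-sparse (l2,1-regularized) multitask least-squares objective by proximal gradient descent, and then fuses the selected features with a weighted sum of RBF kernels in a precomputed-kernel SVM. All function and parameter names (m2tfs, lam, beta, step, n_iter) are illustrative, and hyperparameters would need tuning in practice.

```python
# Sketch: manifold-regularized multitask feature learning + multikernel SVM fusion.
# Assumptions: M modalities with the same feature dimension d, labels y in {-1, +1}.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def graph_laplacian(X, k=5):
    """k-NN graph Laplacian L = D - S encoding the data distribution of one modality."""
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # pairwise squared distances
    sigma = np.median(d2) + 1e-12
    S = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]                         # skip the point itself
        S[i, nbrs] = np.exp(-d2[i, nbrs] / sigma)
    S = np.maximum(S, S.T)                                        # symmetrize the graph
    return np.diag(S.sum(axis=1)) - S

def m2tfs(Xs, y, lam=0.1, beta=0.1, step=1e-3, n_iter=500):
    """Proximal gradient for
       sum_m ||y - X_m w_m||^2 + beta * w_m^T X_m^T L_m X_m w_m + lam * ||W||_{2,1},
    where column m of W is the weight vector for modality (task) m."""
    d, M = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, M))
    Ls = [graph_laplacian(X) for X in Xs]
    for _ in range(n_iter):
        G = np.zeros_like(W)
        for m, X in enumerate(Xs):
            G[:, m] = 2 * X.T @ (X @ W[:, m] - y) \
                    + 2 * beta * X.T @ (Ls[m] @ (X @ W[:, m]))
        W -= step * G
        # proximal step for the group (l2,1) penalty: row-wise soft-thresholding
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        W *= np.maximum(0.0, 1.0 - step * lam / (norms + 1e-12))
    return W

def multikernel_svm(Xs_tr, Xs_te, y_tr, W, weights=None):
    """Keep features with nonzero rows of W, fuse modalities by a weighted sum of RBF kernels."""
    keep = np.linalg.norm(W, axis=1) > 1e-6
    M = len(Xs_tr)
    weights = weights if weights is not None else [1.0 / M] * M
    K_tr = sum(c * rbf_kernel(X[:, keep], X[:, keep]) for c, X in zip(weights, Xs_tr))
    K_te = sum(c * rbf_kernel(Xte[:, keep], Xtr[:, keep])
               for c, Xtr, Xte in zip(weights, Xs_tr, Xs_te))
    clf = SVC(kernel="precomputed").fit(K_tr, y_tr)
    return clf.predict(K_te)
```

In this sketch the group penalty couples the per-modality tasks so that a feature is either selected for all modalities or discarded for all of them, while each Laplacian term keeps predictions smooth over that modality's neighborhood graph; the kernel weights in the fusion step would typically be chosen by cross-validation.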