Open Access
Comparing within‐subject classification and regularization methods in fMRI for large and small sample sizes
Author(s) - Churchill Nathan W., Yourganov Grigori, Strother Stephen C.
Publication year - 2014
Publication title - Human Brain Mapping
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.005
H-Index - 191
eISSN - 1097-0193
pISSN - 1065-9471
DOI - 10.1002/hbm.22490
Subject(s) - pattern recognition (psychology) , artificial intelligence , classifier (uml) , principal component analysis , computer science , quadratic classifier , support vector machine , sample size determination , covariance , mathematics , statistics
In recent years, a variety of multivariate classifier models have been applied to fMRI, with different modeling assumptions. When classifying high-dimensional fMRI data, we must also regularize to improve model stability, and the interactions between classifier and regularization techniques are still being investigated. Classifiers are usually compared on large, multisubject fMRI datasets. However, it is unclear how classifier/regularizer models perform for within-subject analyses, as a function of signal strength and sample size. We compare four standard classifiers: Linear and Quadratic Discriminants, Logistic Regression, and Support Vector Machines. Classification was performed on data in the linear kernel (covariance) feature space, and classifiers were tuned with four commonly used regularizers: Principal Component and Independent Component Analysis, and penalization of kernel features using L1 and L2 norms. We evaluated prediction accuracy (P) and spatial reproducibility (R) of all classifier/regularizer combinations on single-subject analyses, over a range of three different block task contrasts and sample sizes for a BOLD fMRI experiment. We show that the classifier model has a small impact on signal detection compared to the choice of regularizer. PCA maximizes reproducibility and global SNR, whereas Lp-norms tend to maximize prediction. ICA produces low reproducibility, and its prediction accuracy is classifier-dependent. However, trade-offs in (P, R) depend partly on the optimization criterion, and PCA-based models are able to explore the widest range of (P, R) values. These trends are consistent across task contrasts and data sizes (training samples range from 6 to 96 scans). In addition, the trends in classifier performance are consistent for ROI-based classifier analyses. Hum Brain Mapp 35:4499–4517, 2014. © 2014 Wiley Periodicals, Inc.
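To make the evaluation scheme concrete, the following is a minimal sketch of scoring classifier/regularizer pairs on prediction accuracy (P) and split-half spatial reproducibility (R). It is not the authors' actual analysis pipeline: the synthetic data, the particular model subset (PCA+LDA, L2-penalized logistic regression, L2-penalized linear SVM), and the correlation-based R metric are illustrative assumptions.

```python
# Illustrative sketch: compare classifier/regularizer pairs on synthetic
# "fMRI-like" data, scoring prediction accuracy (P) on a held-out split and
# split-half reproducibility (R) of the voxel weight maps.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_scans, n_voxels = 96, 2000              # small-sample, high-dimensional regime
signal = rng.normal(size=n_voxels) * 0.3  # weak class-discriminative pattern
y = np.repeat([0, 1], n_scans // 2)
X = rng.normal(size=(n_scans, n_voxels)) + np.outer(y, signal)

models = {
    "PCA+LDA":   make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis()),
    "L2+LogReg": LogisticRegression(penalty="l2", C=1.0, max_iter=2000),
    "L2+SVM":    LinearSVC(C=1.0, max_iter=10000),
}

def weight_map(model, X, y):
    """Fit and return a voxel-space discriminant map (back-projected if PCA)."""
    model.fit(X, y)
    if hasattr(model, "steps"):                    # pipeline: PCA then LDA
        pca, clf = model.named_steps.values()
        return clf.coef_.ravel() @ pca.components_
    return model.coef_.ravel()

# Split-half resampling: P from cross-prediction, R from weight-map correlation.
half = n_scans // 2
idx = rng.permutation(n_scans)
s1, s2 = idx[:half], idx[half:]
for name, model in models.items():
    w1 = weight_map(model, X[s1], y[s1])
    P1 = model.score(X[s2], y[s2])                 # train on half 1, test on half 2
    w2 = weight_map(model, X[s2], y[s2])
    P2 = model.score(X[s1], y[s1])                 # train on half 2, test on half 1
    P = 0.5 * (P1 + P2)
    R = abs(np.corrcoef(w1, w2)[0, 1])             # spatial reproducibility
    print(f"{name:10s}  P={P:.2f}  R={R:.2f}")
```

In this sketch, the regularizer determines how the weight map is stabilized (dimensionality reduction via PCA versus L2 penalization of the features), which is the axis the paper finds matters most for the (P, R) trade-off.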