A case study of normalization, missing data and variable selection methods in lipidomics
Author(s) - Kujala M., Nevalainen J.
Publication year - 2014
Publication title - Statistics in Medicine
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.996
H-Index - 183
eISSN - 1097-0258
pISSN - 0277-6715
DOI - 10.1002/sim.6296
Subject(s) - normalization (statistics) , missing data , computer science , feature selection , preprocessor , imputation (statistics) , lipidomics , data mining , data preprocessing , artificial intelligence , machine learning , pattern recognition , bioinformatics , biology
Lipidomics is an emerging field of science that holds the potential to provide a readout of biomarkers for the early detection of disease. Our objective was to identify an efficient statistical methodology for lipidomics, especially for finding interpretable and predictive biomarkers useful in clinical practice. In two case studies, we address the need to preprocess the data before regression modeling of a binary response: a normalization step to remove experimental variability and a multiple imputation step to make full use of the incompletely observed data with potentially informative missingness. Finally, by cross‐validation, we compare stepwise variable selection with penalized regression models on stacked multiply imputed data sets and propose the use of a permutation test as a global test of association. Our results show that, depending on the design of the study, these preprocessing methods modestly improve the precision of classification, and no clear winner emerges among the variable selection methods. Lipidomics profiles are found to be highly important predictors in both case studies. Copyright © 2014 John Wiley & Sons, Ltd.
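As a rough illustration of the workflow the abstract describes, the sketch below stacks several imputed copies of an incomplete feature matrix, fits an L1-penalized logistic regression tuned by cross-validation, and applies a permutation test of the cross-validated AUC as a global test of association. It is a minimal sketch in Python/scikit-learn under stated assumptions: the data are synthetic placeholders, standardization stands in for the experiment-specific normalization, and none of the settings reflect the authors' actual pipeline.

```python
# Illustrative sketch only: synthetic data, simplified normalization and
# imputation; not the authors' lipidomics analysis.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from sklearn.model_selection import StratifiedKFold, permutation_test_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic "lipidomics" matrix: 100 subjects x 50 lipids, binary outcome,
# with ~15% of values set missing (stand-in for incompletely observed lipids).
n, p = 100, 50
X = rng.normal(size=(n, p))
y = (X[:, :3].sum(axis=1) + rng.normal(size=n) > 0).astype(int)
X[rng.random(size=(n, p)) < 0.15] = np.nan

# Multiple imputation: draw M completed data sets and stack them row-wise,
# replicating the outcome accordingly (one simple way to pool imputations).
M = 5
completed = []
for m in range(M):
    imputer = IterativeImputer(sample_posterior=True, random_state=m)
    completed.append(imputer.fit_transform(X))
X_stacked = np.vstack(completed)
y_stacked = np.tile(y, M)

# Penalized (L1) logistic regression with the regularization strength chosen
# by internal cross-validation; StandardScaler is a crude stand-in for the
# normalization step that removes experimental variability.
model = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(penalty="l1", solver="saga", Cs=10,
                         cv=5, scoring="roc_auc", max_iter=5000),
)
model.fit(X_stacked, y_stacked)

# Permutation test as a global test of association: permute the outcome and
# compare the observed cross-validated AUC with its null distribution.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
score, perm_scores, pvalue = permutation_test_score(
    make_pipeline(StandardScaler(),
                  LogisticRegression(penalty="l1", solver="saga",
                                     C=1.0, max_iter=5000)),
    X_stacked, y_stacked, cv=cv, scoring="roc_auc",
    n_permutations=100, random_state=0,
)
print(f"CV AUC = {score:.3f}, permutation p-value = {pvalue:.3f}")
```

Note that cross-validating over rows of a stacked imputation matrix lets copies of the same subject appear in both training and test folds; a faithful implementation would split by subject before stacking, which the sketch omits for brevity.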