Better prediction by use of co‐data: adaptive group‐regularized ridge regression
Author(s) -
van de Wiel Mark A.,
Lien Tonje G.,
Verlaat Wina,
van Wieringen Wessel N.,
Wilting Saskia M.
Publication year - 2015
Publication title -
Statistics in Medicine
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.996
H-Index - 183
eISSN - 1097-0258
pISSN - 0277-6715
DOI - 10.1002/sim.6732
Subject(s) - ridge regression , elastic net regularization , lasso (statistics) , logistic regression , feature selection , empirical Bayes , Bayesian probability , Bayes' theorem , data mining , statistics , machine learning , computer science , mathematics
For many high‐dimensional studies, additional information on the variables, like (genomic) annotation or external p‐values, is available. In the context of binary and continuous prediction, we develop a method for adaptive group‐regularized (logistic) ridge regression, which makes structural use of such ‘co‐data’. Here, ‘groups’ refer to a partition of the variables according to the co‐data. We derive empirical Bayes estimates of group‐specific penalties, which possess several nice properties: (i) They are analytical. (ii) They adapt to the informativeness of the co‐data for the data at hand. (iii) Only one global penalty parameter requires tuning by cross‐validation. In addition, the method allows use of multiple types of co‐data at little extra computational effort. We show that the group‐specific penalties may lead to a larger distinction between ‘near‐zero’ and relatively large regression parameters, which facilitates post hoc variable selection. The method, termed GRridge, is implemented in an easy‐to‐use R package. It is demonstrated on two cancer genomics studies, which both concern the discrimination of precancerous cervical lesions from normal cervix tissues using methylation microarray data. For both examples, GRridge clearly improves the predictive performance of ordinary logistic ridge regression and the group lasso. In addition, we show that for the second study, the relatively good predictive performance is maintained when selecting only 42 variables. Copyright © 2015 John Wiley & Sons, Ltd.
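To make the mechanics concrete, the sketch below illustrates the core idea of group‐specific ridge penalties in standard R software; it is not the authors' GRridge implementation. It uses glmnet's penalty.factor argument, which rescales a single cross‐validated global penalty per variable. The co‐data partition and the group multipliers here are hypothetical placeholders chosen for demonstration, whereas GRridge estimates the multipliers from the co‐data by empirical Bayes.

```r
## Illustrative sketch only: group-regularized logistic ridge via glmnet.
## The partition and multipliers are assumptions, not GRridge output.
library(glmnet)

set.seed(1)
n <- 100; p <- 200
X <- matrix(rnorm(n * p), n, p)   # simulated high-dimensional predictors
y <- rbinom(n, 1, 0.5)            # simulated binary response

## Hypothetical co-data partition: variables 1..100 form group 1, the rest group 2
group <- rep(1:2, each = 100)

## Placeholder group-specific multipliers; GRridge would estimate these
## from the co-data (informative group penalized less)
lambda.mult <- c(0.5, 2)
pf <- lambda.mult[group]          # per-variable penalty factors

## Logistic ridge (alpha = 0): one global penalty tuned by cross-validation,
## rescaled per variable by penalty.factor
fit <- cv.glmnet(X, y, family = "binomial", alpha = 0, penalty.factor = pf)
coef(fit, s = "lambda.min")[1:5]  # inspect a few fitted coefficients
```

Because only the global penalty is tuned and the group multipliers enter as fixed rescalings, the extra computational cost over ordinary ridge regression is small, which mirrors the paper's point that multiple types of co‐data can be used at little additional effort.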