Model selection in high dimensions: a quadratic‐risk‐based approach
Author(s) - Ray Surajit, Lindsay Bruce G.
Publication year - 2008
Publication title - Journal of the Royal Statistical Society: Series B (Statistical Methodology)
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 6.523
H-Index - 137
eISSN - 1467-9868
pISSN - 1369-7412
DOI - 10.1111/j.1467-9868.2007.00623.x
Subject(s) - Akaike information criterion, Bayesian information criterion, model selection, mathematics, quadratic equation, estimator, information criteria, mathematical optimization, kernel (algebra), kernel density estimation, statistics, geometry, combinatorics
Summary. We propose a general class of risk measures which can be used for data‐based evaluation of parametric models. The loss function is defined as the generalized quadratic distance between the true density and the model proposed. These distances are characterized by a simple quadratic form structure that is adaptable through the choice of a non‐negative definite kernel and a bandwidth parameter. Using asymptotic results for the quadratic distances, we build a quick‐to‐compute approximation for the risk function. Its derivation is analogous to that of the Akaike information criterion but, unlike the Akaike information criterion, the quadratic risk is a global comparison tool. The method does not require resampling, which is a great advantage when point estimators are expensive to compute. The method is illustrated on the problem of selecting the number of components in a mixture model, where it is shown that, with an appropriate choice of kernel, the method is computationally straightforward in arbitrarily high data dimensions. In this same context it is shown that the method has some clear advantages over the Akaike information criterion and the Bayesian information criterion.
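To make the quadratic-form structure described in the summary concrete, the sketch below computes a plug-in estimate of the generalized quadratic distance between the empirical distribution of the data and a fitted Gaussian mixture, using a Gaussian product kernel with bandwidth h. With this kernel every term reduces to a Gaussian convolution with a closed form, which is what keeps the computation straightforward in any dimension. This is only an illustrative sketch under stated assumptions: the function name quadratic_distance, the bandwidth value, and the use of scikit-learn's GaussianMixture are choices made here, and the sketch reports the raw distance without the AIC-like penalty term, so it is not the authors' full risk criterion.

# Hedged sketch: plug-in estimate of the quadratic distance d_K(F_hat, G_hat)
# between the empirical distribution of the data and a fitted Gaussian
# mixture, with a Gaussian kernel of bandwidth h.  All three terms are
# closed-form Gaussian evaluations, so the cost does not blow up with the
# data dimension.  Not the paper's risk estimator (no penalty term).

import numpy as np
from scipy.stats import multivariate_normal as mvn
from sklearn.mixture import GaussianMixture


def quadratic_distance(X, gmm, h):
    n, d = X.shape
    K = h ** 2 * np.eye(d)                       # Gaussian kernel covariance
    w, mu, Sig = gmm.weights_, gmm.means_, gmm.covariances_

    # data-data term: (1/n^2) sum_{i,j} N(x_i - x_j; 0, h^2 I)
    diffs = X[:, None, :] - X[None, :, :]
    dd = mvn.pdf(diffs.reshape(-1, d), mean=np.zeros(d), cov=K).mean()

    # data-model cross term: (1/n) sum_i sum_k w_k N(x_i; mu_k, Sig_k + h^2 I)
    dm = sum(w_k * mvn.pdf(X, mean=m_k, cov=S_k + K).mean()
             for w_k, m_k, S_k in zip(w, mu, Sig))

    # model-model term: sum_{k,l} w_k w_l N(mu_k; mu_l, Sig_k + Sig_l + h^2 I)
    mm = sum(w[k] * w[l] * mvn.pdf(mu[k], mean=mu[l], cov=Sig[k] + Sig[l] + K)
             for k in range(len(w)) for l in range(len(w)))

    return dd - 2 * dm + mm


# Illustrative use: compare candidate numbers of mixture components.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (150, 2)), rng.normal(2, 1, (150, 2))])
for k in range(1, 5):
    fit = GaussianMixture(n_components=k, random_state=0).fit(X)
    print(k, quadratic_distance(X, fit, h=0.5))

In a model-selection setting the distance would be traded off against model complexity, in the spirit of the quick-to-compute risk approximation described in the summary; the raw distance is printed here only to show that the kernel computation remains tractable as candidate mixtures are compared.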