Tilting methods for assessing the influence of components in a classifier
Author(s) - Hall Peter, Titterington D. M., Xue JingHao
Publication year - 2009
Publication title - Journal of the Royal Statistical Society: Series B (Statistical Methodology)
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 6.523
H-Index - 137
eISSN - 1467-9868
pISSN - 1369-7412
DOI - 10.1111/j.1467-9868.2009.00701.x
Subject(s) - computer science , constraint (computer aided design) , classifier (uml) , ranking (information retrieval) , lasso (statistics) , machine learning , data mining , variable (mathematics) , artificial intelligence , rank (graph theory) , mathematics , mathematical analysis , geometry , combinatorics , world wide web
Summary. Many contemporary classifiers are constructed to provide good performance for very high dimensional data. However, an issue that is at least as important as good classification is determining which of the many potential variables provide key information for good decisions. Responding to this issue can help us to determine which aspects of the data-generating mechanism (e.g. which genes in a genomic study) are of greatest importance in terms of distinguishing between populations. We introduce tilting methods for addressing this problem. We apply weights to the components of data vectors, rather than to the data vectors themselves (as is commonly the case in related work). In addition we tilt in a way that is governed by L2-distance between weight vectors, rather than by the more commonly used Kullback–Leibler distance. It is shown that this approach, together with the added constraint that the weights should be non-negative, produces an algorithm which eliminates vector components that have little influence on the classification decision. In particular, use of the L2-distance in this problem produces properties that are reminiscent of those that arise when L1-penalties are employed to eliminate explanatory variables in very high dimensional prediction problems, e.g. those involving the lasso. We introduce techniques that can be implemented very rapidly, and we show how to use bootstrap methods to assess the accuracy of our variable ranking and variable elimination procedures.
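The summary describes the approach only at a conceptual level. The sketch below is a minimal illustration of the general idea, not the authors' algorithm: it assumes a toy two-class data set, takes the squared standardised mean difference as a per-component influence score, and tilts a non-negative, sum-to-one weight vector away from uniform weights under an L2 penalty, which reduces to a Euclidean projection onto the simplex and sets the weights of low-influence components exactly to zero. The function names (influence_scores, project_to_simplex, tilted_weights), the penalty value lam, the data set and the bootstrap retention check are all hypothetical choices made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-class data: only the first 3 of 20 components are informative.
n, p = 200, 20
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, p))
X[y == 1, :3] += 1.5

def influence_scores(X, y):
    """Per-component influence: squared standardised mean difference (an assumed choice)."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    s2 = X[y == 0].var(axis=0) + X[y == 1].var(axis=0) + 1e-12
    return (m1 - m0) ** 2 / s2

def project_to_simplex(v):
    """Euclidean (L2) projection of v onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    k = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - (css - 1.0) / k > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def tilted_weights(X, y, lam=0.5):
    """Maximise sum_j w_j * c_j minus an L2 penalty on the departure of w from
    uniform weights, subject to w >= 0 and sum(w) = 1.  The maximiser is the
    projection of (uniform + c / (2 * lam)) onto the simplex, so weights of
    low-influence components are truncated exactly to zero."""
    d = X.shape[1]
    c = influence_scores(X, y)
    return project_to_simplex(np.full(d, 1.0 / d) + c / (2.0 * lam))

w = tilted_weights(X, y, lam=0.5)
print("retained components:", np.nonzero(w)[0])
print("eliminated components:", np.nonzero(w == 0)[0].size, "of", p)

# Bootstrap check of stability: how often is each component retained?
B = 200
retained = np.zeros(p)
for _ in range(B):
    idx = rng.integers(0, n, size=n)        # resample cases with replacement
    retained += tilted_weights(X[idx], y[idx], lam=0.5) > 0
print("bootstrap retention frequencies:", (retained / B).round(2))
```

In this toy setting, the exact zeros come from combining the non-negativity constraint with an L2 budget on the weight vector, which is the lasso-like behaviour the summary alludes to; the bootstrap loop simply resamples cases to gauge how stably each component survives the elimination step.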