Theoretical measures of relative performance of classifiers for high dimensional data with small sample sizes
Author(s) - Hall Peter, Pittelkow Yvonne, Ghosh Malay
Publication year - 2008
Publication title - Journal of the Royal Statistical Society: Series B (Statistical Methodology)
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 6.523
H-Index - 137
eISSN - 1467-9868
pISSN - 1369-7412
DOI - 10.1111/j.1467-9868.2007.00631.x
Subject(s) - pattern recognition (psychology), boundary (topology), artificial intelligence, computer science, sample (material), support vector machine, sample size determination, mathematics, data mining, statistics, mathematical analysis, chemistry, chromatography
Summary. We suggest a technique, related to the concept of ‘detection boundary’ developed by Ingster and by Donoho and Jin, for comparing the theoretical performance of classifiers constructed from small training samples of very large vectors. The resulting ‘classification boundaries’ are obtained for a variety of distance‐based methods, including the support vector machine, distance‐weighted discrimination and kth‐nearest‐neighbour classifiers, for thresholded forms of those methods, and for techniques based on Donoho and Jin's higher criticism approach to signal detection. Assessed in these terms, standard distance‐based methods are shown to be capable of detecting differences between populations only when those differences can be estimated consistently. However, the thresholded forms of distance‐based classifiers can do better, and in particular can correctly classify data even when differences between distributions are only detectable, not estimable. Other methods, including higher criticism classifiers, can on occasion perform better still, but they tend to be more limited in scope, requiring substantially more information about the marginal distributions. Moreover, as tail weight becomes heavier, the classification boundaries of methods designed for particular distribution types can converge to, and achieve, the boundary for thresholded nearest‐neighbour approaches. For example, for normal data higher criticism has a lower classification boundary, and in this sense performs better, but the boundaries are identical for exponentially distributed data when both sample sizes equal 1.
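The summary invokes Donoho and Jin's higher criticism statistic without stating it. As background, the following is a minimal Python sketch of that statistic applied to a vector of p-values; the function name higher_criticism, the alpha0 = 0.5 search fraction and the simulated sparse-signal example are illustrative assumptions, not the authors' classifier construction.

import numpy as np
from scipy.stats import norm

def higher_criticism(pvalues, alpha0=0.5):
    # Sort the p-values and standardize the gap between their empirical
    # distribution and the uniform distribution expected under the null.
    p = np.sort(np.asarray(pvalues, dtype=float))
    p = np.clip(p, 1e-12, 1 - 1e-12)  # guard against division by zero
    n = p.size
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    # Donoho and Jin maximize over the smallest alpha0 fraction of p-values.
    k = max(1, int(alpha0 * n))
    return hc[:k].max()

# Illustration: a sparse, faint mean shift spread over a few coordinates.
rng = np.random.default_rng(0)
n = 10_000
z_null = rng.standard_normal(n)
z_alt = z_null.copy()
z_alt[rng.choice(n, size=50, replace=False)] += 2.5
p_null = 2 * norm.sf(np.abs(z_null))
p_alt = 2 * norm.sf(np.abs(z_alt))
print(higher_criticism(p_null), higher_criticism(p_alt))

Under the global null the statistic stays moderate, while the sparse shift inflates it; this is the kind of faint, distributed signal that, per the summary, higher criticism classifiers can exploit even when the difference between populations is detectable but not consistently estimable.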