Partial order relations for classification comparisons
Author(s) - Chang Lo-Bin
Publication year - 2020
Publication title - Canadian Journal of Statistics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.804
H-Index - 51
eISSN - 1708-945X
pISSN - 0319-5724
DOI - 10.1002/cjs.11524
Subject(s) - outlier , constant false alarm rate , naive Bayes classifier , classifier , word error rate , Bayes' theorem , artificial intelligence , mathematics , machine learning , Bayes error rate , computer science , Bayes classifier , pattern recognition , statistics , data mining , Bayesian probability , support vector machine
The Bayes classification rule offers the optimal classifier, minimizing the classification error rate, whereas the Neyman–Pearson lemma offers the optimal family of classifiers, maximizing the detection rate at any given false alarm rate. These results motivate studies that compare classifiers based on their similarity to the optimal ones. In this article, we define partial order relations on classifiers and on families of classifiers, based on rankings of rate function values and rankings of test function values, respectively. Each partial order relation provides a sufficient condition that yields a better classification error rate or better performance in receiver operating characteristic analysis. Various examples and applications of the partial order theorems are discussed to compare classifiers and families of classifiers, including comparisons of cross-validation methods, of training data containing outliers, and of labelling errors in training data. The Canadian Journal of Statistics 48: 152–166; 2020 © 2019 Statistical Society of Canada
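The abstract's notion of ordering families of classifiers by detection rate at every false alarm rate can be illustrated concretely. The sketch below (an illustration, not the paper's construction; all function names are hypothetical) compares two score-based classifier families by checking ROC dominance: family A precedes family B in the partial order if A's detection rate is at least B's at every false alarm rate. Because some pairs of ROC curves cross, this relation is only a partial order, which is exactly why incomparable pairs arise.

```python
# Hedged sketch of ROC dominance as a partial order on families of
# classifiers induced by score (test) functions. Not the paper's method.

def roc_points(scores, labels):
    """Sweep all thresholds over the scores and return (fpr, tpr) points."""
    pairs = sorted(zip(scores, labels), reverse=True)  # descending scores
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, y in pairs:
        if y == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def tpr_at(points, fpr):
    """Best detection rate achievable at false alarm rate <= fpr."""
    return max(t for f, t in points if f <= fpr)

def dominates(points_a, points_b, grid=None):
    """Partial order check: A >= B at every false alarm rate on the grid."""
    grid = grid or [i / 100 for i in range(101)]
    return all(tpr_at(points_a, f) >= tpr_at(points_b, f) for f in grid)

# Toy data: the first scorer ranks every positive above every negative;
# the second mis-ranks one positive below a negative.
labels = [1, 1, 1, 0, 0, 0]
good = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
weak = [0.9, 0.4, 0.7, 0.8, 0.2, 0.1]
A = roc_points(good, labels)
B = roc_points(weak, labels)
print(dominates(A, B))  # the perfect ranker dominates the weaker one
print(dominates(B, A))
```

When neither `dominates(A, B)` nor `dominates(B, A)` holds, the two families are incomparable under this relation, mirroring the sense in which the article's order relations give only sufficient, not necessary, conditions for one classifier family to outperform another.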
