Classifier design for computer‐aided diagnosis: Effects of finite sample size on the mean performance of classical and neural network classifiers
Author(s) -
Chan Heang-Ping,
Sahiner Berkman,
Wagner Robert F.,
Petrick Nicholas
Publication year - 1999
Publication title - Medical Physics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.473
H-Index - 180
eISSN - 2473-4209
pISSN - 0094-2405
DOI - 10.1118/1.598805
Subject(s) - quadratic classifier, classifier (UML), sample size determination, pattern recognition (psychology), artificial intelligence, linear discriminant analysis, curse of dimensionality, covariance, mathematics, artificial neural network, covariance matrix, statistics, receiver operating characteristic, computer science
Classifier design is one of the key steps in the development of computer-aided diagnosis (CAD) algorithms. A classifier is designed with case samples drawn from the patient population. Generally, the sample size available for classifier design is limited, which introduces variance and bias into the performance of the trained classifier, relative to that obtained with an infinite sample size. For CAD applications, a commonly used performance index for a classifier is the area, A_z, under the receiver operating characteristic (ROC) curve. We have conducted a computer simulation study to investigate the dependence of the mean performance, in terms of A_z, on design sample size for a linear discriminant and two nonlinear classifiers, the quadratic discriminant and the backpropagation neural network (ANN). The performances of the classifiers were compared for four types of class distributions with specific properties: three pairs of multivariate normal distributions (equal covariance matrices and unequal means; unequal covariance matrices and unequal means; unequal covariance matrices and equal means), and a feature space in which the two classes were uniformly distributed in disjoint checkerboard regions. We evaluated the performances of the classifiers in feature spaces of dimensionality ranging from 3 to 15, and design sample sizes from 20 to 800 per class. The dependence of the resubstitution and hold-out performance on design (training) sample size, N_t, was investigated. For multivariate normal class distributions with equal covariance matrices, the linear discriminant is the optimal classifier. It was found that its A_z-versus-1/N_t curves can be closely approximated by linear dependences over the range of sample sizes studied. In the feature spaces with unequal covariance matrices, where the quadratic discriminant is optimal, the linear discriminant is inferior to the quadratic discriminant or the ANN when the design sample size is large. However, when the design sample is small, a relatively simple classifier, such as the linear discriminant or an ANN with very few hidden nodes, may be preferred because performance bias increases with the complexity of the classifier. In the regime where the classifier performance is dominated by the 1/N_t term, the performance in the limit of infinite sample size can be estimated as the intercept (1/N_t = 0) of a linear regression of A_z versus 1/N_t. An understanding of the performance of classifiers under the constraint of a finite design sample size is expected to facilitate the selection of a proper classifier for a given classification task and the design of an efficient resampling scheme.
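The extrapolation procedure described above can be illustrated with a short simulation. The following is a minimal sketch, not the authors' original code: it assumes multivariate normal classes with equal (identity) covariance matrices and unequal means, uses scikit-learn's LinearDiscriminantAnalysis as the linear discriminant, and approximates the hold-out A_z by the AUC on a large independent test set. The distribution parameters, sample-size grid, and number of repetitions are illustrative choices, not values from the paper.

```python
# Sketch: estimate the infinite-sample A_z of a linear discriminant as the
# intercept of a linear regression of mean hold-out A_z versus 1/N_t.
# All distribution parameters below are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
dim = 9                                        # feature-space dimensionality
mu0, mu1 = np.zeros(dim), np.full(dim, 0.5)    # unequal means, equal identity covariance

def sample(n, mu):
    """Draw n feature vectors from N(mu, I)."""
    return rng.normal(loc=mu, size=(n, dim))

def holdout_az(n_train, n_test=5000, n_rep=20):
    """Mean hold-out A_z over n_rep independent design samples of n_train cases per class."""
    azs = []
    for _ in range(n_rep):
        X = np.vstack([sample(n_train, mu0), sample(n_train, mu1)])
        y = np.r_[np.zeros(n_train), np.ones(n_train)]
        clf = LinearDiscriminantAnalysis().fit(X, y)
        Xt = np.vstack([sample(n_test, mu0), sample(n_test, mu1)])
        yt = np.r_[np.zeros(n_test), np.ones(n_test)]
        azs.append(roc_auc_score(yt, clf.decision_function(Xt)))
    return np.mean(azs)

sizes = np.array([20, 50, 100, 200, 400, 800])       # design samples per class
az = np.array([holdout_az(n) for n in sizes])

# Linear regression of A_z on 1/N_t; the intercept (1/N_t = 0) estimates
# the performance in the limit of infinite design sample size.
slope, intercept = np.polyfit(1.0 / sizes, az, 1)
print(f"estimated A_z at infinite sample size: {intercept:.4f}")
```

The hold-out A_z is approximated here with a large independent test set so that test-sample variability is small relative to the design-sample effect being measured; the resubstitution performance would instead be computed on the training cases themselves and would be optimistically biased.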
