Two‐Group Classification Using Neural Networks
Author(s) -
Patuwo, Eddy;
Hu, Michael Y.;
Hung, Ming S.
Publication year - 1993
Publication title -
Decision Sciences
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.238
H-Index - 108
eISSN - 1540-5915
pISSN - 0011-7315
DOI - 10.1111/j.1540-5915.1993.tb00491.x
Subject(s) - artificial neural network , generalizability theory , linear discriminant analysis , computer science , artificial intelligence , discriminant , sample size determination , nonparametric statistics , machine learning , k nearest neighbors algorithm , pattern recognition (psychology) , statistics , mathematics
Artificial neural networks are new methods for classification. We investigate two important issues in building neural network models: network architecture and training-sample size. Experiments were designed and carried out on two‐group classification problems to answer these model-building questions. The first experiment deals with the selection of architecture and sample size for different classification problems. Results show that the choice of architecture and the choice of sample size depend on the objective: to maximize the classification rate of training samples, or to maximize the generalizability of neural networks. The second experiment compares neural network models with classical models such as linear discriminant analysis and quadratic discriminant analysis, and with nonparametric methods such as k‐nearest‐neighbor and linear programming. Results show that neural networks are comparable to, if not better than, these other methods in terms of classification rates in the training samples but not in the test samples.
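The comparison the abstract describes can be illustrated with a minimal sketch (not the authors' original experiment): a small feed-forward neural network versus the classical two-group classifiers named above, evaluated on a synthetic two-class problem. It uses scikit-learn; all parameter choices (hidden-layer size, k, sample sizes) are illustrative assumptions, and the linear programming method is omitted for brevity.

```python
# Hedged sketch: compare training vs. test classification rates of a small
# neural network (MLP), LDA, QDA, and k-NN on a synthetic two-group problem.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Two-group classification data: two informative features, two classes.
X, y = make_classification(n_samples=500, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)

# Illustrative model settings; the paper's own architectures differ.
models = {
    "neural network": MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000,
                                    random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}

rates = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # Report both training and test (generalization) classification rates,
    # mirroring the training-sample vs. test-sample distinction above.
    rates[name] = (model.score(X_tr, y_tr), model.score(X_te, y_te))
    print(f"{name}: train={rates[name][0]:.3f}, test={rates[name][1]:.3f}")
```

Comparing the two columns of output makes the abstract's point concrete: a network can match or beat the classical methods on the training sample while its test-sample rate tells a different story.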