Reducing data dimensionality through optimizing neural network inputs
Author(s) -
Tan Shufeng,
Mayrovouniotis Michael L.
Publication year - 1995
Publication title -
AIChE Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.958
H-Index - 167
eISSN - 1547-5905
pISSN - 0001-1541
DOI - 10.1002/aic.690410612
Subject(s) - artificial neural network , curse of dimensionality , computer science , artificial intelligence , data mining
A neural network method for reducing data dimensionality based on the concept of input training, in which each input pattern is not fixed but adjusted along with internal network parameters to reproduce its corresponding output pattern, is presented. With input adjustment, a properly configured network can be trained to reproduce a given data set with minimum distortion; the trained network inputs provide the reduced data. A three‐layer network with input training can perform all functions of a five‐layer autoassociative network, essentially capturing nonlinear correlations among data. In addition, simultaneous training of a network and its inputs is shown to be significantly more efficient in reducing data dimensionality than training an autoassociative network. The concept of input training is closely related to principal component analysis (PCA) and the principal curve method, which is a nonlinear extension of PCA.
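The idea of input training can be sketched in a few lines of NumPy: a three-layer network maps a set of trainable low-dimensional "inputs" through a hidden layer to reconstruct the data, and gradient descent updates the weights and the inputs simultaneously. This is a minimal illustration under assumed choices (tanh hidden units, linear output, toy data on a one-dimensional curve, hand-picked learning rates), not the paper's exact algorithm or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points on a 1-D nonlinear curve embedded in 3-D space,
# so one reduced dimension should suffice for reconstruction.
n, d_red, d_hid, d_out = 200, 1, 8, 3
s = rng.uniform(-1.0, 1.0, size=(n, 1))
X = np.hstack([s, s**2, np.sin(np.pi * s)])          # data set to compress

# Trainable reduced inputs (one per data point) and network parameters.
T = rng.normal(scale=0.1, size=(n, d_red))           # the "input" patterns
W1 = rng.normal(scale=0.5, size=(d_red, d_hid)); b1 = np.zeros(d_hid)
W2 = rng.normal(scale=0.5, size=(d_hid, d_out)); b2 = np.zeros(d_out)

lr_w, lr_t = 0.05, 0.05                              # assumed learning rates

def mse():
    H = np.tanh(T @ W1 + b1)
    return float(np.mean((H @ W2 + b2 - X) ** 2))

initial_mse = mse()
for _ in range(3000):
    H = np.tanh(T @ W1 + b1)                         # hidden activations
    Y = H @ W2 + b2                                  # reconstructed data
    E = Y - X                                        # reconstruction error
    # Backpropagate through the weights AND the inputs (input training).
    dW2 = H.T @ E / n; db2 = E.mean(axis=0)
    dH = (E @ W2.T) * (1.0 - H**2)
    dW1 = T.T @ dH / n; db1 = dH.mean(axis=0)
    dT = dH @ W1.T                                   # per-sample input gradient
    W2 -= lr_w * dW2; b2 -= lr_w * db2
    W1 -= lr_w * dW1; b1 -= lr_w * db1
    T -= lr_t * dT                                   # adjust the inputs too

final_mse = mse()
# After training, T holds the reduced (1-D) representation of X.
```

Each row of `T` only influences its own output pattern, so its gradient is local to that sample; this is what lets a three-layer network with adjustable inputs play the role of the decoding half of a five-layer autoassociative network, with `T` taking the place of the bottleneck layer.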