
Two-Stage Backward Elimination Method for Neural Networks Model Reduction
Author(s) - Xiaoquan Tang, Long Zhang
Publication year - 2019
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1267/1/012087
Subject(s) - artificial neural network, computer science, pruning, lasso (programming language), artificial intelligence, process (computing), reduction (mathematics), algorithm, pattern recognition (psychology), machine learning, mathematics, geometry, world wide web, agronomy, biology, operating system
Single-hidden-layer neural networks (NNs) have been widely used for complex system identification. However, the number of hidden neurons is often determined by trial and error and is usually large, which commonly leads to overfitting and a time-consuming training process. In this paper, we propose a two-stage backward elimination (TSBE) method to obtain a parsimonious network that has fewer hidden neurons yet retains good performance and saves training time. In the first stage, a neural network with a predetermined number of hidden neurons is trained on part of the training data using the stochastic gradient descent (SGD) algorithm, and the least absolute shrinkage and selection operator (Lasso) is applied to drop redundant neurons, yielding a simplified neural model. In the second stage, the remaining training data are used to update the parameters of the simplified model. A simulation example validates the approach and shows that it gives a more compact model and a higher level of accuracy compared with a recently proposed pruning-based method.
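To make the two-stage procedure concrete, below is a minimal sketch in PyTorch, not the authors' implementation. The toy regression task, network sizes, Lasso strength lam, pruning threshold, and learning rates are all illustrative assumptions rather than values from the paper. Stage 1 trains a deliberately oversized single-hidden-layer network with SGD plus an L1 (Lasso) penalty on the output-layer weights using half of the data, then removes neurons whose outgoing weights have shrunk toward zero; stage 2 fine-tunes the reduced network on the remaining data.

```python
# Illustrative sketch of the TSBE idea (not the paper's code): SGD + Lasso
# on the output weights, prune near-zero neurons, then fine-tune.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy regression data, split into stage-1 and stage-2 portions (assumption).
X = torch.randn(400, 3)
y = torch.sin(X[:, :1]) + 0.5 * X[:, 1:2] ** 2 + 0.05 * torch.randn(400, 1)
X1, y1, X2, y2 = X[:200], y[:200], X[200:], y[200:]

def make_net(n_hidden):
    """Single-hidden-layer network: linear -> tanh -> linear."""
    return nn.Sequential(nn.Linear(3, n_hidden), nn.Tanh(), nn.Linear(n_hidden, 1))

# Stage 1: oversized hidden layer, trained with SGD and an L1 penalty on the
# output-layer weights so redundant neurons are driven toward zero.
net = make_net(50)
opt = torch.optim.SGD(net.parameters(), lr=0.05)
lam = 1e-3  # Lasso strength (illustrative value, not from the paper)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X1), y1)
    loss = loss + lam * net[2].weight.abs().sum()  # L1 on output weights
    loss.backward()
    opt.step()

# Backward elimination: keep only neurons whose output weight exceeds a
# small threshold (threshold is an assumption).
keep = net[2].weight.abs().squeeze(0) > 1e-2
small = make_net(int(keep.sum()))
with torch.no_grad():
    small[0].weight.copy_(net[0].weight[keep])
    small[0].bias.copy_(net[0].bias[keep])
    small[2].weight.copy_(net[2].weight[:, keep])
    small[2].bias.copy_(net[2].bias)

# Stage 2: update the simplified model on the remaining training data.
opt2 = torch.optim.SGD(small.parameters(), lr=0.05)
for _ in range(1000):
    opt2.zero_grad()
    nn.functional.mse_loss(small(X2), y2).backward()
    opt2.step()

print(f"kept {int(keep.sum())} of 50 hidden neurons")
```

The L1 penalty is placed on the output-layer weights because each hidden neuron's contribution to the prediction is gated entirely by its outgoing weight: once Lasso shrinks that weight to zero, the neuron is inert and can be eliminated, which is what makes the penalty act as a neuron-selection mechanism rather than mere weight decay.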