Differential evolution and sparse neural networks
Author(s) - Morgan, Peter H.
Publication year - 2008
Publication title - Expert Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.365
H-Index - 38
eISSN - 1468-0394
pISSN - 0266-4720
DOI - 10.1111/j.1468-0394.2008.00466.x
Subject(s) - overfitting , maxima and minima , computer science , differential evolution , artificial neural network , feedforward neural network , function (biology) , mathematical optimization , artificial intelligence , algorithm , mathematics , mathematical analysis , evolutionary biology , biology
The aim of this work is to avoid overfitting by seeking parsimonious neural network models and hence to provide better out‐of‐sample predictions. The resulting sparse networks are easier to interpret as simple rules which, in turn, could give greater insight into the structure of the data. Fully connected feedforward neural networks are pruned through optimization of an estimated Schwarz model selection criterion using differential evolution to produce a sparse network. A quantity, α, which indicates how close a parameter is to zero, is used to estimate the number of model parameters that are being pruned out. The value of α is incorporated into a function of the Schwarz information criterion to form an objective function whose maxima, as α tends to zero, define parsimonious neural network models for a given data set. Since there is a multiplicity of maxima, differential evolution, with its greater capacity for global optimization, is used to optimize this objective function. The value of α is progressively reduced during the evolution of the population of models in the manner of a sequential unconstrained optimization technique. The method is illustrated by results on four sets of data.
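The abstract does not give the exact functional forms, so the sketch below is only an illustrative reading of the approach: a small feedforward network whose flattened weight vector is searched by differential evolution, with a smooth count of "non-zero" parameters (here w²/(w² + α²), an assumed form) plugged into a Schwarz/BIC-style criterion, and α reduced over successive stages while the population is re-seeded around the current best solution. The network size, α schedule, data set, and soft-count formula are all assumptions for illustration, not the paper's specification.

```python
# Illustrative sketch only: soft parameter count, criterion, and alpha schedule
# are assumptions; the paper's exact formulation is not given in the abstract.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

# Toy regression data standing in for one of the paper's four data sets.
X = rng.uniform(-1.0, 1.0, size=(100, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)   # only the first input matters

n_in, n_hidden = X.shape[1], 4
n_params = n_hidden * (n_in + 1) + (n_hidden + 1)   # hidden weights+biases, output weights+bias

def forward(w, X):
    """Single-hidden-layer feedforward network with tanh units."""
    W1 = w[: n_hidden * n_in].reshape(n_hidden, n_in)
    b1 = w[n_hidden * n_in : n_hidden * (n_in + 1)]
    W2 = w[n_hidden * (n_in + 1) : n_hidden * (n_in + 1) + n_hidden]
    b2 = w[-1]
    return np.tanh(X @ W1.T + b1) @ W2 + b2

def soft_param_count(w, alpha):
    """Smooth count of parameters 'far from zero' at scale alpha (assumed form)."""
    return np.sum(w**2 / (w**2 + alpha**2))

def objective(w, alpha):
    """BIC-style criterion: fit term plus complexity term using the soft count."""
    n = len(y)
    rss = np.sum((y - forward(w, X)) ** 2)
    return n * np.log(rss / n) + soft_param_count(w, alpha) * np.log(n)

bounds = [(-5.0, 5.0)] * n_params
init = "latinhypercube"
best = None
for alpha in [1.0, 0.3, 0.1, 0.03, 0.01]:           # progressively reduce alpha
    result = differential_evolution(
        objective, bounds, args=(alpha,), maxiter=200, seed=1, init=init,
    )
    best = result.x
    # Re-seed the next stage's population around the current best solution,
    # loosely mimicking a population that evolves while alpha is annealed.
    init = np.clip(best + 0.2 * rng.normal(size=(30, n_params)), -5.0, 5.0)
    print(f"alpha={alpha:5.2f}  criterion={result.fun:8.2f}  "
          f"weights near zero={np.sum(np.abs(best) < 3 * alpha)}")

print("final weights:", np.round(best, 3))
```

Weights that the soft count treats as effectively zero would then be pruned, yielding the sparse network; the 3α threshold used in the progress report above is likewise only an illustrative choice.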
