Determining significance of input neurons for probabilistic neural network by sensitivity analysis procedure
Author(s) -
Kowalski Piotr A.,
Kusy Maciej
Publication year - 2018
Publication title - Computational Intelligence
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.353
H-Index - 52
eISSN - 1467-8640
pISSN - 0824-7935
DOI - 10.1111/coin.12149
Subject(s) - probabilistic neural network , computer science , artificial neural network , artificial intelligence , pattern recognition (psychology) , feedforward neural network , probabilistic logic , sensitivity (control systems) , set (abstract data type) , algorithm , time delay neural network , electronic engineering , engineering , programming language
In classical feedforward neural networks such as the multilayer perceptron, the radial basis function network, or the counter-propagation network, the neurons in the input layer correspond to the features of the training patterns. The number of these features may be large, and their relevance can vary considerably; therefore, the selection of appropriate input neurons should be considered. The aim of this paper is to present a complete step-by-step algorithm for determining the significance of particular input neurons of the probabilistic neural network (PNN). It is based on a sensitivity analysis procedure applied to a trained PNN. The proposed algorithm is utilized in the task of reducing the input layer of the considered network, which is achieved by removing the appropriately indicated features from the data set. For comparison purposes, the significance of the PNN's input neurons is also established by using the ReliefF and variable importance procedures, which provide the relevance of the input features in the data set. The performance of the reduced PNN is verified against the full-structure network in classification problems using real benchmark data sets from a publicly available machine learning repository. The achieved results are also compared with those attained by entropy-based algorithms. The prediction ability, expressed in terms of the number of misclassifications, is obtained by means of a 10-fold cross-validation procedure. The obtained outcomes reveal interesting properties of the proposed algorithm: the efficiency determined by all tested reduction methods is shown to be comparable.
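The abstract does not spell out the sensitivity computation itself, so the following is only a minimal sketch of the general idea: a Gaussian-kernel (Parzen-window) PNN whose summation-layer outputs are probed with a small perturbation of each input feature, and the average absolute response is used to rank input neurons. All function names, the perturbation-based measure, and the parameters (sigma, eps) are illustrative assumptions, not the procedure defined in the paper.

```python
import numpy as np

def pnn_class_scores(x, X_train, y_train, sigma=0.5):
    """Summation-layer scores of a Gaussian-kernel PNN for one sample x:
    the average kernel activation of the pattern neurons of each class."""
    scores = []
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)
        scores.append(np.mean(np.exp(-d2 / (2.0 * sigma ** 2))))
    return np.array(scores)

def feature_sensitivity(X_train, y_train, eps=1e-3, sigma=0.5):
    """Perturbation-based sensitivity (an assumed stand-in for the paper's
    procedure): average absolute change of the summation-layer outputs when
    each input feature is nudged by eps, taken over the training set."""
    n, m = X_train.shape
    S = np.zeros(m)
    for x in X_train:
        base = pnn_class_scores(x, X_train, y_train, sigma)
        for j in range(m):
            x_pert = x.copy()
            x_pert[j] += eps
            pert = pnn_class_scores(x_pert, X_train, y_train, sigma)
            S[j] += np.sum(np.abs(pert - base)) / eps
    return S / n  # larger value -> more significant input neuron

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # only two informative features
    sens = feature_sensitivity(X, y)
    ranking = np.argsort(sens)[::-1]
    print("input neurons, most to least significant:", ranking)
```

In this reading, input-layer reduction amounts to dropping the lowest-ranked features and retraining or re-evaluating the PNN, which mirrors the comparison against ReliefF and variable importance described above.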
