
Enhancing the Performance of the BackPropagation for Deep Neural Network
Author(s) - Ola Mohammed Surakhi, Walid A. Salameh
Publication year - 2014
Publication title - International Journal of Computer and Technology
Language(s) - English
Resource type - Journals
ISSN - 2277-3061
DOI - 10.24297/ijct.v13i12.5279
Subject(s) - backpropagation, artificial neural network, maxima and minima, computer science, convergence, activation function, algorithm, artificial intelligence, deep learning, error function, machine learning, mathematics
The standard Backpropagation Neural Network (BPNN) algorithm is widely used for solving many real-world problems. However, backpropagation suffers from several difficulties, such as slow convergence and entrapment in local minima. Many modifications have been proposed to improve the performance of the algorithm, such as careful selection of the initial weights and biases, the learning rate, the momentum term, the network topology, and the activation function. This paper presents a new version of the Backpropagation algorithm in which the modification is applied to the error signal function, using deep neural networks with more than one hidden layer. Experiments were conducted to compare and evaluate the convergence behavior of these training algorithms on two training problems: XOR and Iris plant classification. The results show that the proposed algorithm improves on the classical BP in terms of efficiency.
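The abstract does not spell out the modified error signal function, so the sketch below shows only the classical BP baseline that the paper compares against: a small fully connected network with more than one hidden layer trained on XOR. Everything here is an assumption for illustration, not taken from the paper: the 2-4-4-1 topology, sigmoid activations, mean-squared-error loss, the learning rate, and the epoch count.

```python
# Minimal sketch of the classical Backpropagation (BP) baseline on XOR.
# Assumed: sigmoid activations, MSE loss, fixed learning rate, full-batch
# updates. The paper's modified error signal function is NOT shown here.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR truth table: inputs and target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Deep topology with two hidden layers (2-4-4-1); layer sizes are illustrative.
sizes = [2, 4, 4, 1]
W = [rng.normal(0.0, 1.0, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
b = [np.zeros((1, n)) for n in sizes[1:]]

lr = 0.5  # assumed learning rate
for epoch in range(10000):
    # Forward pass: keep every layer's activation for the backward pass.
    a = [X]
    for Wl, bl in zip(W, b):
        a.append(sigmoid(a[-1] @ Wl + bl))

    # Classical BP error signal at the output layer for MSE with a
    # sigmoid unit: delta = (a - y) * a * (1 - a).
    delta = (a[-1] - y) * a[-1] * (1 - a[-1])
    for l in range(len(W) - 1, -1, -1):
        grad_W = a[l].T @ delta
        grad_b = delta.sum(axis=0, keepdims=True)
        if l > 0:
            # Propagate the error signal through the (pre-update) weights.
            delta = (delta @ W[l].T) * a[l] * (1 - a[l])
        W[l] -= lr * grad_W
        b[l] -= lr * grad_b

print(np.round(a[-1], 2))  # outputs should approach [[0], [1], [1], [0]]
```

In this baseline the output-layer delta above is where an alternative error signal function would be substituted; XOR serves as a small benchmark because it is not linearly separable.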