Open Access
An Enhanced Training Algorithm for Multilayer Neural Networks Based on Reference Output of Hidden Layer
Author(s) - Yan Li, A.B. Rad, Peng Wen
Publication year - 1999
Publication title - Neural Computing and Applications
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.713
H-Index - 80
eISSN - 1433-3058
pISSN - 0941-0643
DOI - 10.1007/s005210050024
Subject(s) - Broyden–Fletcher–Goldfarb–Shanno algorithm , backpropagation , artificial neural network , conjugate gradient method , convergence , mean squared error , algorithm , artificial intelligence , computer science , mathematics
In this paper, a new training algorithm is proposed that relies not only on the training samples but also on the output of the hidden layer. Both the connecting weights and the outputs of the hidden layer are adjusted based on the Least Square Backpropagation (LSB) algorithm. A set of 'required' outputs of the hidden layer is added to the input sets through a feedback path to accelerate convergence. Numerical simulation results demonstrate that the proposed algorithm outperforms the conventional BP, the quasi-Newton BFGS method (an alternative to conjugate gradient methods for fast optimisation), and the LSB algorithm in terms of convergence speed and training error. The proposed method also avoids the drawback of the LSB algorithm, whose training error cannot be reduced further after three iterations.
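
The abstract does not give the exact LSB update equations, so the following is only a minimal sketch of the general idea it describes: alternately solving the output-layer weights by least squares, forming a set of 'required' hidden-layer outputs, and solving the hidden-layer weights by least squares toward those outputs. The tanh activation, the gradient-like correction rule with step size eta, and all function and parameter names (train_with_hidden_targets, n_hidden, n_iter) are illustrative assumptions, not the authors' formulation.

import numpy as np

# Sketch: alternate (1) least-squares solve of the output-layer weights on the
# current hidden outputs, (2) a set of "required" hidden outputs obtained by
# nudging the current ones in the error-reducing direction, and (3) a
# least-squares solve of the hidden-layer weights toward those required outputs.
def train_with_hidden_targets(X, Y, n_hidden=10, n_iter=10, eta=0.5, seed=0):
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])          # inputs with bias column
    W1 = rng.normal(scale=0.5, size=(Xb.shape[1], n_hidden))
    for _ in range(n_iter):
        H = np.tanh(Xb @ W1)                                # current hidden outputs
        Hb = np.hstack([H, np.ones((H.shape[0], 1))])
        W2 = np.linalg.lstsq(Hb, Y, rcond=None)[0]          # least-squares output layer
        Y_hat = Hb @ W2
        # "Required" hidden outputs: move the hidden outputs in the direction
        # (Y - Y_hat) @ W2_hidden.T, i.e. the negative gradient of the squared
        # output error with respect to H, then keep them inside the tanh range.
        H_req = np.clip(H + eta * (Y - Y_hat) @ W2[:-1].T, -0.99, 0.99)
        # Least-squares solve of the hidden layer toward the required outputs.
        W1 = np.linalg.lstsq(Xb, np.arctanh(H_req), rcond=None)[0]
    H = np.tanh(Xb @ W1)
    Hb = np.hstack([H, np.ones((H.shape[0], 1))])
    W2 = np.linalg.lstsq(Hb, Y, rcond=None)[0]
    return W1, W2

def predict(X, W1, W2):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    Hb = np.hstack([np.tanh(Xb @ W1), np.ones((X.shape[0], 1))])
    return Hb @ W2

# Toy usage: fit a smooth 1-D function and report the training MSE.
x = np.linspace(-1.0, 1.0, 40).reshape(-1, 1)
y = np.sin(3.0 * x)
W1, W2 = train_with_hidden_targets(x, y)
print("training MSE:", float(np.mean((predict(x, W1, W2) - y) ** 2)))

The sketch uses pseudo-inverse (lstsq) solves for both layers to mirror the least-squares flavour of LSB; the actual algorithm in the paper, including how the required hidden outputs are fed back into the input sets, may differ in detail.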
