Open Access
An Adaptive Learning Rate with Limited Error Signals for Training of Multilayer Perceptrons
Author(s) - Oh SangHoon, Lee SooYoung
Publication year - 2000
Publication title - ETRI Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.295
H-Index - 46
eISSN - 2233-7326
pISSN - 1225-6463
DOI - 10.4218/etrij.00.0100.0302
Subject(s) - perceptron , computer science , word error rate , backpropagation , artificial intelligence , dependency (uml) , multilayer perceptron , pattern recognition (psychology) , algorithm , error function , entropy (arrow of time) , speech recognition , artificial neural network , machine learning , physics , quantum mechanics
Although an n‐th order cross‐entropy (nCE) error function resolves the incorrect saturation problem of the conventional error backpropagation (EBP) algorithm, the performance of multilayer perceptrons (MLPs) trained with the nCE function depends heavily on the order of nCE. In this paper, we propose an adaptive learning rate that markedly reduces the sensitivity of MLP performance to the order of nCE. Additionally, we propose limiting the error signal values at output nodes for stable learning with the adaptive learning rate. Through simulations of handwritten digit recognition and isolated‐word recognition tasks, it was verified that the proposed method successfully reduced the performance dependency of MLPs on the nCE order while maintaining the advantages of the nCE function.
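The abstract does not give the paper's exact update rules, but the two ideas it names — limiting the error signals at the output nodes and adapting the learning rate — can be illustrated with a minimal sketch. The functions below are hypothetical stand-ins, not the authors' formulas: `nce_error_signal` uses a signed n-th power of the output error as a placeholder for an nCE-style signal and clips it to a fixed range, and `adaptive_lr` shrinks a base learning rate as the mean signal magnitude grows.

```python
import numpy as np

def nce_error_signal(y, t, n=2, limit=0.25):
    """Hypothetical nCE-style output error signal, clipped for stability.

    y : network outputs in [0, 1]; t : target values.
    The signed n-th power |t - y|**n * sign(t - y) stands in for an
    n-th order cross-entropy error signal; clipping to [-limit, limit]
    mimics the paper's idea of limiting error signals at output nodes.
    """
    e = t - y
    signal = np.sign(e) * np.abs(e) ** n
    return np.clip(signal, -limit, limit)

def adaptive_lr(base_lr, signals):
    """Scale the learning rate down when error signals are large,
    so large (clipped) signals do not destabilize the weight update."""
    return base_lr / (1.0 + np.mean(np.abs(signals)))

# Example: outputs close to their targets give small signals and a
# learning rate near the base value.
y = np.array([0.9, 0.1])
t = np.array([1.0, 0.0])
s = nce_error_signal(y, t, n=2)
lr = adaptive_lr(0.1, s)
```

With n = 2 the error 0.1 is squared to 0.01, well inside the clip range, and the adapted learning rate stays just below the base rate of 0.1; a badly saturated output would instead produce a clipped signal and a reduced rate.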
