Open Access
Convergence of an Online Split-Complex Gradient Algorithm for Complex-Valued Neural Networks
Author(s) -
Huisheng Zhang,
Dongpo Xu,
Zhiping Wang
Publication year - 2010
Publication title - Discrete Dynamics in Nature and Society
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.264
H-Index - 39
eISSN - 1607-887X
pISSN - 1026-0226
DOI - 10.1155/2010/829692
Subject(s) - monotonic function , artificial neural network , convergence , sequence , algorithm , error function , gradient method , gradient descent , computer science , artificial intelligence , mathematics , mathematical analysis
The online gradient method has been widely used in training neural networks. In this paper we consider an online split-complex gradient algorithm for complex-valued neural networks, with an adaptive learning rate chosen during the training procedure. Under certain conditions, by first establishing the monotonicity of the error function, we prove that the gradient of the error function tends to zero and that the weight sequence tends to a fixed point. A numerical example is given to support the theoretical findings.
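The split-complex idea summarized in the abstract can be sketched in code: the real and imaginary parts of each complex weight are treated as independent real variables, and ordinary gradient descent is applied to both. The single-neuron model, the split tanh activation, and the error definition below are illustrative assumptions for the sketch, not the exact network or adaptive learning-rate rule analysed in the paper.

```python
import numpy as np

# Minimal sketch of an online split-complex gradient step for a single
# complex-valued neuron. The single-neuron model and the split tanh
# activation are assumptions made for illustration only.

def split_tanh(z):
    """Split activation: tanh applied separately to real and imaginary parts."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def online_split_complex_step(w, x, d, eta):
    """One online update of the complex weight w on a sample (x, d).

    The instantaneous error is E = 0.5 * |d - f(w * x)|^2, and its gradient
    is taken with respect to the real and imaginary parts of w treated as
    independent real variables (the split-complex viewpoint).
    """
    u = w * x                                # net input
    y = split_tanh(u)                        # split-complex output
    e = d - y                                # complex output error
    gR = 1.0 - np.tanh(u.real) ** 2          # tanh'(u_R)
    gI = 1.0 - np.tanh(u.imag) ** 2          # tanh'(u_I)
    # The two real gradient-descent updates
    #   w_R <- w_R + eta * (e_R * gR * x_R + e_I * gI * x_I)
    #   w_I <- w_I + eta * (e_I * gI * x_R - e_R * gR * x_I)
    # combine into a single complex expression:
    return w + eta * (e.real * gR + 1j * e.imag * gI) * np.conj(x)
```

Repeating this update over a stream of samples with a fixed or slowly decreasing learning rate drives the instantaneous error down, mirroring the monotonicity of the error function that the paper establishes under its conditions on the adaptive learning rate.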
