Open Access
Normalized stochastic gradient descent learning of general complex‐valued models
Author(s) - Paireder T., Motz C., Huemer M.
Publication year - 2021
Publication title - Electronics Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.375
H-Index - 146
eISSN - 1350-911X
pISSN - 0013-5194
DOI - 10.1049/ell2.12170
Subject(s) - stochastic gradient descent , gradient descent , stochastic approximation , mathematical optimization , nonlinear system , algorithm , artificial neural network , artificial intelligence , computer science , mathematics
The stochastic gradient descent (SGD) method is one of the most prominent first‐order iterative optimisation algorithms, enabling linear adaptive filters as well as general nonlinear learning schemes. It is applicable to a wide range of objective functions, while featuring low computational costs for online operation. However, without a suitable step‐size normalisation, the convergence and tracking behaviour of the stochastic gradient descent method might be degraded in practical applications. In this letter, a novel general normalisation approach is provided for the learning of (non‐)holomorphic models with multiple independent parameter sets. The advantages of the proposed method are demonstrated by means of a specific widely‐linear estimation example.
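The letter's own normalisation rule is not reproduced in this abstract. As a rough illustration of the underlying idea, the Python sketch below applies a classic NLMS-style normalisation to a widely-linear complex-valued model with two independent parameter sets (h and g). The model form, step size mu, regulariser eps, and all variable names are illustrative assumptions, not the authors' method.

import numpy as np

rng = np.random.default_rng(0)

N = 8          # number of coefficients per parameter set
mu = 0.5       # normalised step size (0 < mu < 2 for NLMS-type updates)
eps = 1e-8     # small regulariser to avoid division by zero

def wl_nlms_step(h, g, x, d):
    """One normalised SGD step for the widely-linear model
    y = h^H x + g^H conj(x), minimising the squared error |d - y|^2.
    (Illustrative stand-in, not the normalisation proposed in the letter.)"""
    y = np.vdot(h, x) + np.vdot(g, np.conj(x))
    e = d - y
    # The regressors x and conj(x) have equal energy, so normalise by
    # twice the instantaneous input power (plus regularisation).
    norm = 2.0 * np.vdot(x, x).real + eps
    # The Wirtinger gradient of |e|^2 w.r.t. conj(h) is -x*conj(e), hence
    # the conjugated error in the update; likewise for g with conj(x).
    h = h + (mu / norm) * x * np.conj(e)
    g = g + (mu / norm) * np.conj(x) * np.conj(e)
    return h, g, e

# Toy system identification with a noisy widely-linear target.
h_true = rng.standard_normal(N) + 1j * rng.standard_normal(N)
g_true = rng.standard_normal(N) + 1j * rng.standard_normal(N)
h = np.zeros(N, dtype=complex)
g = np.zeros(N, dtype=complex)
for _ in range(5000):
    x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    d = np.vdot(h_true, x) + np.vdot(g_true, np.conj(x)) \
        + 0.01 * (rng.standard_normal() + 1j * rng.standard_normal())
    h, g, _ = wl_nlms_step(h, g, x, d)

print("||h - h_true|| =", np.linalg.norm(h - h_true))
print("||g - g_true|| =", np.linalg.norm(g - g_true))

Normalising by the instantaneous regressor energy makes the effective step size invariant to the input signal scale, which is the property that keeps convergence and tracking behaviour stable across varying signal power, the practical issue the abstract highlights.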
