Maximum likelihood extended gradient‐based estimation algorithms for the input nonlinear controlled autoregressive moving average system with variable‐gain nonlinearity
Author(s) - Liu Ximei, Fan Yamin
Publication year - 2021
Publication title - International Journal of Robust and Nonlinear Control
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.361
H-Index - 106
eISSN - 1099-1239
pISSN - 1049-8923
DOI - 10.1002/rnc.5450
Subject(s) - nonlinear system , estimation theory , mathematics , algorithm , autoregressive model , iterative method , likelihood function , expectation–maximization algorithm , gradient method , maximum likelihood sequence estimation , variable (mathematics) , mathematical optimization , maximum likelihood , control theory (sociology) , computer science , statistics , artificial intelligence , mathematical analysis , physics , control (management) , quantum mechanics
Abstract Variable-gain nonlinearity is a piecewise-linear characteristic that describes a process with different gains in different input regions. This article studies the parameter estimation problem of the input nonlinear controlled autoregressive moving average system with variable-gain nonlinearity. By introducing a suitable switching function, we express the variable-gain nonlinearity in a linear-in-parameter form and derive the identification model of the system. Based on the obtained identification model, a maximum likelihood extended stochastic gradient algorithm is presented to estimate the unknown parameters. To make full use of the observation data and improve the identification accuracy, we derive a maximum likelihood (multi-innovation) extended gradient-based iterative algorithm by using the maximum likelihood principle. An extended gradient-based iterative algorithm is given for comparison. A simulation example validates that the proposed algorithms effectively identify the unknown parameters, and that the maximum likelihood extended gradient-based iterative algorithm achieves better estimation accuracy and fitting performance than both the maximum likelihood extended stochastic gradient algorithm and the extended gradient-based iterative algorithm.
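The abstract does not give the model equations or the algorithm details, so the following is only a minimal numerical sketch of the two ideas it mentions: writing a variable-gain nonlinearity in a linear-in-parameter form via a switching function, and estimating the resulting parameters with an extended stochastic-gradient update. The two-segment gain, the switching point c, the first-order system structure, and all numerical values below are illustrative assumptions, not the authors' setup.

```python
# Illustrative sketch (not the paper's algorithm): a two-segment variable-gain
# nonlinearity with a known switching point, followed by a plain extended
# stochastic-gradient update on the resulting linear-in-parameter regression.
import numpy as np

def switching(u, c):
    """Switching function: 1 when the input lies in the high-gain region (u > c), else 0."""
    return 1.0 if u > c else 0.0

def variable_gain(u, g1, g2, c):
    """Variable-gain nonlinearity in linear-in-parameter form:
    f(u) = g1*u*(1 - h(u)) + g2*u*h(u), with h the switching function."""
    h = switching(u, c)
    return g1 * u * (1.0 - h) + g2 * u * h

# Simulate a simple input nonlinear system with moving-average noise,
#   y(t) = b1*f(u(t-1)) + d1*v(t-1) + v(t),
# first order in both the plant and the noise model, purely for illustration.
rng = np.random.default_rng(0)
c_true, g1_true, g2_true, b1_true, d1_true = 1.0, 0.5, 2.0, 1.0, 0.3
N = 5000
u = rng.uniform(-2.0, 2.0, N)
v = rng.normal(0.0, 0.1, N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = b1_true * variable_gain(u[t-1], g1_true, g2_true, c_true) + d1_true * v[t-1] + v[t]

# Extended stochastic-gradient estimation of theta = [b1*g1, b1*g2, d1] using the
# information vector phi(t) = [u(t-1)*(1-h), u(t-1)*h, v_hat(t-1)], where the
# unmeasurable noise v(t-1) is replaced by its estimate (the "extended" part).
theta = np.zeros(3)
v_hat = np.zeros(N)
r = 1.0                               # normalising factor of the gradient step
for t in range(1, N):
    h = switching(u[t-1], c_true)
    phi = np.array([u[t-1] * (1.0 - h), u[t-1] * h, v_hat[t-1]])
    r += phi @ phi
    e = y[t] - phi @ theta            # innovation
    theta = theta + (phi / r) * e     # stochastic-gradient update
    v_hat[t] = e                      # noise estimate used at the next step

print("estimated [b1*g1, b1*g2, d1]:", np.round(theta, 3))
print("true      [b1*g1, b1*g2, d1]:", [b1_true * g1_true, b1_true * g2_true, d1_true])
```

The maximum likelihood and multi-innovation refinements reported in the article replace this simple update with a likelihood-based gradient and a stacked block of recent innovations; the sketch above only illustrates the underlying switching-function parameterisation and the basic gradient recursion.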