A framework of iterative learning control under random data dropouts: Mean square and almost sure convergence
Author(s) - Shen Dong, Xu JianXin
Publication year - 2017
Publication title - International Journal of Adaptive Control and Signal Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.73
H-Index - 66
eISSN - 1099-1115
pISSN - 0890-6327
DOI - 10.1002/acs.2802
Subject(s) - iterative learning control , dropout (neural networks) , convergence (economics) , sequence (biology) , computer science , convergence of random variables , markov chain , bernoulli's principle , position (finance) , mathematical optimization , random variable , mathematics , control (management) , artificial intelligence , statistics , machine learning , engineering , finance , aerospace engineering , biology , economics , genetics , economic growth
Summary This paper addresses the iterative learning control problem under random data dropouts. Recent progress on iterative learning control in the presence of data dropouts is first reviewed from three aspects, namely, the data dropout model, the data dropout position, and the meaning of convergence. A general framework is then proposed for the convergence analysis of all three kinds of data dropout models, namely, the stochastic sequence model, the Bernoulli variable model, and the Markov chain model. Both mean square and almost sure convergence of the input sequence to the desired input are strictly established for noise-free systems and stochastic systems, respectively, where the measurement output suffers from random data dropouts. Illustrative simulations are provided to verify the theoretical results.
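To make the setting concrete, the following minimal sketch simulates a P-type iterative learning control update in which the measured output is lost at random instants modelled by Bernoulli variables. The plant parameters, learning gain, dropout probability, and the intermittent-update rule are illustrative assumptions chosen for exposition; they are not taken from the paper, which treats stochastic sequence, Bernoulli, and Markov chain dropout models in a unified framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative discrete-time SISO plant (assumed, not from the paper)
A, B, C = 0.8, 1.0, 1.0
T = 50               # trial length
N = 60               # number of learning iterations
L = 0.5              # learning gain, chosen so that |1 - L*C*B| < 1
p = 0.7              # probability that a measurement is successfully received

# Desired output over one trial
y_d = np.sin(2 * np.pi * np.arange(1, T + 1) / T)

def run_trial(u):
    """Simulate one trial from zero initial state and return the output."""
    x, y = 0.0, np.zeros(T)
    for t in range(T):
        x = A * x + B * u[t]
        y[t] = C * x
    return y

u = np.zeros(T)      # initial input profile
errors = []
for i in range(N):
    y = run_trial(u)
    e = y_d - y
    # Bernoulli data dropout at the measurement side:
    # gamma[t] = 1 means e(t) reaches the learning controller, 0 means it is lost.
    gamma = rng.random(T) < p
    # Intermittent P-type update: only the instants whose data arrived are updated;
    # the remaining input entries are held from the previous iteration.
    u = u + L * gamma * e
    errors.append(np.max(np.abs(e)))

print("max tracking error, first vs. last iteration:", errors[0], errors[-1])
```

In this sketch the tracking error decreases across iterations even though roughly 30% of the measurements are discarded each trial, which mirrors, in simulation only, the kind of convergence behaviour the paper establishes rigorously in the mean square and almost sure senses.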
