An iterative approach for the discrete‐time dynamic control of Markov jump linear systems with partial information
Author(s) -
Oliveira, André Marcorin,
Costa, O. L. V.
Publication year - 2019
Publication title -
International Journal of Robust and Nonlinear Control
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.361
H-Index - 106
eISSN - 1099-1239
pISSN - 1049-8923
DOI - 10.1002/rnc.4771
Subject(s) - Markov chain , Markov process , Markov decision process , mathematics , mathematical optimization , iterative method , upper and lower bounds , control theory , computer science
Summary - The H2, H∞, and mixed H2/H∞ dynamic output feedback control of Markov jump linear systems in a partial-observation context is studied through an iterative approach. By partial information, we mean that neither the state variable x(k) nor the Markov chain θ(k) is available to the controller. Instead, we assume that the controller relies only on an output y(k) and a measured variable θ̂(k) coming from a detector, which provides the only information on the Markov chain θ(k). To solve the problem, we resort to an iterative method that starts with a state-feedback controller and, at each iteration, solves a linear matrix inequality optimization problem. It is shown that this iterative algorithm yields a nonincreasing sequence of upper-bound costs, so that it converges to a minimum value. The effectiveness of the iterative procedure is illustrated by means of two examples in which the conservatism between the upper bounds and the actual costs is significantly reduced.
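The structure described in the summary (a nonconvex joint design made tractable by fixing one block of variables so that the remaining subproblem is convex, with the cost upper bound guaranteed not to increase between iterations) can be sketched with a toy coordinate-descent analogue in Python. This is only an illustrative sketch of the monotone-bound pattern, not the paper's actual LMI formulation: the quadratic surrogate, variable names, and closed-form subproblem solver below are all assumptions introduced for the example.

```python
import numpy as np

# Toy analogue of the iterative scheme: the joint problem is nonconvex in
# (x, y) taken together, but fixing one block leaves a convex quadratic
# subproblem in the other. Alternating the two convex steps produces a
# nonincreasing sequence of cost upper bounds, which therefore converges.

def cost(x, y, Q):
    """Surrogate upper-bound cost: a coupled quadratic in the two blocks."""
    z = np.concatenate([x, y])
    return float(z @ Q @ z)

def solve_subproblem(Q, fixed, optimize_second_block, n):
    """Minimize the cost over one block with the other held fixed.

    Because Q is positive definite, each subproblem is an unconstrained
    convex QP with a closed-form solution (here standing in for the
    LMI step solved at each iteration in the paper's method).
    """
    if optimize_second_block:        # optimize y with x = fixed
        Qyy, Qyx = Q[n:, n:], Q[n:, :n]
        return -np.linalg.solve(Qyy, Qyx @ fixed)
    else:                            # optimize x with y = fixed
        Qxx, Qxy = Q[:n, :n], Q[:n, n:]
        return -np.linalg.solve(Qxx, Qxy @ fixed)

rng = np.random.default_rng(0)
n = 3
M = rng.standard_normal((2 * n, 2 * n))
Q = M @ M.T + 0.1 * np.eye(2 * n)    # positive definite coupling matrix

x = rng.standard_normal(n)           # plays the role of the initial design
y = rng.standard_normal(n)
bounds = [cost(x, y, Q)]
for _ in range(20):
    y = solve_subproblem(Q, x, True, n)    # convex step with x fixed
    x = solve_subproblem(Q, y, False, n)   # convex step with y fixed
    bounds.append(cost(x, y, Q))

# Mirrors the guarantee stated in the summary: the bound never increases.
assert all(b1 <= b0 + 1e-12 for b0, b1 in zip(bounds, bounds[1:]))
```

The key design point the example isolates is that monotonicity of the bound is a by-product of each step being an exact minimization of the same surrogate over a subset of the variables; convergence of the bound sequence then follows because it is nonincreasing and bounded below.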
