Online solution of nonquadratic two‐player zero‐sum games arising in the H∞ control of constrained input systems
Author(s) -
Hamidreza Modares,
Frank L. Lewis,
Mohammad-Bagher Naghibi-Sistani
Publication year - 2012
Publication title -
International Journal of Adaptive Control and Signal Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.73
H-Index - 66
eISSN - 1099-1115
pISSN - 0890-6327
DOI - 10.1002/acs.2348
Subject(s) - control theory , optimal control , zero-sum game , differential game , nonlinear system , saddle point , Nash equilibrium , convergence , stability , bounded function , disturbance , mathematical optimization , computer science , mathematics
SUMMARY In this paper, we present an online learning algorithm to find the solution to the H∞ control problem of continuous‐time systems with input constraints. A suitable nonquadratic functional is utilized to encode the input constraints into the H∞ control problem, and the resulting H∞ control problem is formulated as a two‐player zero‐sum game with a nonquadratic performance. Then, a policy iteration algorithm on an actor–critic–disturbance structure is developed to solve the Hamilton–Jacobi–Isaacs (HJI) equation associated with this nonquadratic zero‐sum game. That is, three NN approximators, namely, actor, critic, and disturbance, are tuned online and simultaneously to approximate the HJI solution. The value of the actor and disturbance policies is approximated continuously by the critic NN, and on the basis of this value estimate, the actor and disturbance NNs are updated in real time to improve their policies. The disturbance NN tries to generate the worst‐case disturbance, whereas the actor NN tries to produce the best control input. A persistence of excitation condition is shown to guarantee convergence to the optimal saddle point solution. Stability of the closed‐loop system is also guaranteed. A simulation on a nonlinear benchmark problem is performed to validate the effectiveness of the proposed approach. Copyright © 2012 John Wiley & Sons, Ltd.
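The zero‐sum game structure in the abstract can be illustrated on a scalar linear‐quadratic case. Note this is only a minimal sketch: the system parameters are invented for illustration, and the iteration below is offline, model‐based policy iteration on a known scalar model, whereas the paper's algorithm tunes three neural networks online and simultaneously for a constrained nonlinear system.

```python
# Illustrative scalar zero-sum game (all numbers are assumptions, not from the paper):
#   dynamics:  xdot = a*x + b*u + d*w   (u = control player, w = disturbance player)
#   value:     V(x) = p * x**2, with cost rate q*x**2 + r*u**2 - gamma**2 * w**2
a, b, d = -1.0, 1.0, 0.5
q, r, gamma = 1.0, 1.0, 2.0   # state/control weights and H-infinity attenuation level

def solve_zero_sum_pi(iters=50):
    """Policy iteration: alternate policy evaluation (scalar Lyapunov equation)
    and policy improvement for both players until the saddle point is reached."""
    k, ell = 0.0, 0.0                  # actor gain u = -k*x, disturbance gain w = ell*x
    p = 0.0
    for _ in range(iters):
        a_cl = a - b * k + d * ell                 # closed-loop dynamics coefficient
        q_cl = q + r * k**2 - gamma**2 * ell**2    # closed-loop cost rate coefficient
        p = -q_cl / (2.0 * a_cl)                   # policy evaluation: 2*a_cl*p + q_cl = 0
        k = (b / r) * p                            # policy improvement (actor)
        ell = (d / gamma**2) * p                   # policy improvement (disturbance)
    return p, k, ell

p, k, ell = solve_zero_sum_pi()
# At the saddle point, the scalar HJI (game Riccati) residual vanishes:
residual = 2 * a * p + q + p**2 * (d**2 / gamma**2 - b**2 / r)
print(round(p, 4), abs(residual) < 1e-6)  # → 0.4181 True
```

The disturbance improvement step maximizes the Hamiltonian while the actor step minimizes it, which is the min–max interplay the abstract describes; the paper replaces the exact scalar evaluation here with an online critic NN estimate.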
