Online solution of nonlinear two‐player zero‐sum games using synchronous policy iteration
Author(s) - Vamvoudakis Kyriakos G., Lewis F.L.
Publication year - 2011
Publication title - International Journal of Robust and Nonlinear Control
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.361
H-Index - 106
eISSN - 1099-1239
pISSN - 1049-8923
DOI - 10.1002/rnc.1760
Subject(s) - saddle point , nonlinear system , convergence (economics) , control theory (sociology) , bounded function , computer science , stability (learning theory) , mathematical optimization , optimal control , saddle , reinforcement learning , bellman equation , mathematics , control (management) , artificial intelligence , mathematical analysis , geometry , quantum mechanics , machine learning , economics , economic growth , physics
SUMMARY The two-player zero-sum (ZS) game problem provides the solution to the bounded L2-gain problem and so is important for robust control. However, its solution depends on solving a design Hamilton–Jacobi–Isaacs (HJI) equation, which is generally intractable for nonlinear systems. In this paper, we present an online adaptive learning algorithm based on policy iteration that solves the continuous-time two-player ZS game with infinite-horizon cost for nonlinear systems with known dynamics. That is, the algorithm learns, online and in real time, an approximate local solution to the game HJI equation. The method finds, in real time, suitable approximations of the optimal value and of the saddle-point feedback control and disturbance policies, while also guaranteeing closed-loop stability. The adaptive algorithm is implemented as an actor/critic/disturbance structure that involves simultaneous continuous-time adaptation of critic, actor, and disturbance neural networks. We call this online gaming algorithm 'synchronous' ZS game policy iteration. A persistence-of-excitation condition is shown to guarantee convergence of the critic to the actual optimal value function. Novel tuning algorithms are given for the critic, actor, and disturbance networks. Convergence to the optimal saddle-point solution is proven, and stability of the system is guaranteed. Simulation examples show the effectiveness of the new algorithm in solving the HJI equation online for a linear system and a complex nonlinear system. Copyright © 2011 John Wiley & Sons, Ltd.
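For readers new to this problem class, the following is a standard statement of the game HJI equation the abstract refers to; the notation (f, g, k, Q, R, gamma) is assumed here for illustration and is not quoted from the paper. For dynamics $\dot{x} = f(x) + g(x)u + k(x)d$ with cost

```latex
V(x_0) = \int_0^\infty \left( Q(x) + u^T R u - \gamma^2 \|d\|^2 \right) dt,
```

the optimal value $V^*$ satisfies the HJI equation

```latex
0 = Q(x) + \nabla V^{*T} f(x)
    - \tfrac{1}{4}\nabla V^{*T} g(x) R^{-1} g^T(x) \nabla V^*
    + \tfrac{1}{4\gamma^2}\nabla V^{*T} k(x) k^T(x) \nabla V^*,
\qquad V^*(0) = 0,
```

with saddle-point policies $u^*(x) = -\tfrac{1}{2} R^{-1} g^T(x) \nabla V^*$ and $d^*(x) = \tfrac{1}{2\gamma^2} k^T(x) \nabla V^*$.

The sketch below illustrates, on a scalar linear plant, the idea of tuning a critic on the continuous-time Bellman (HJI) residual while the policies it induces act on the system. It is a minimal illustration under assumed dynamics, basis, gains, and reset schedule; it is not the authors' tuning laws, which use separate actor and disturbance networks with stability-preserving terms.

```python
# Minimal sketch (not the paper's algorithm): a single critic weight is
# tuned on the continuous-time Bellman/HJI residual while the policies it
# induces act on the plant.  All numbers below are assumptions.
import numpy as np

# Scalar linear plant  dx/dt = a*x + b*u + k*d, quadratic cost weights.
a, b, k = -1.0, 1.0, 0.5
Q, R, gamma = 1.0, 1.0, 1.0

# Critic: V(x) ~ w * phi(x) with basis phi(x) = x^2, so dV/dx = 2*w*x.
w = 0.1            # critic weight estimate
x = 1.0            # plant state
dt = 1e-3
alpha = 5.0        # critic learning rate
rng = np.random.default_rng(0)

for step in range(100_000):
    gradV = 2.0 * x * w
    # Policies induced by the current critic (standard saddle-point forms).
    u = -0.5 / R * b * gradV
    d = 0.5 / gamma**2 * k * gradV

    xdot = a * x + b * u + k * d
    # Continuous-time Bellman (HJI) residual for the current weight.
    e = Q * x**2 + R * u**2 - gamma**2 * d**2 + gradV * xdot
    # Normalized gradient descent on e^2; sigma is the residual's gradient
    # w.r.t. w, treating xdot, u, d as measured signals.
    sigma = 2.0 * x * xdot
    w -= alpha * dt * sigma * e / (1.0 + sigma**2)

    x += dt * xdot
    # Periodic state resets keep the regressor exciting -- a crude stand-in
    # for the persistence-of-excitation condition the abstract mentions.
    if step % 2000 == 1999:
        x = rng.uniform(-2.0, 2.0)

# For these numbers the game algebraic Riccati equation gives
# V*(x) = p*x^2 with 0.75*p^2 + 2*p - 1 = 0, i.e. p ~= 0.4305.
print(f"learned w = {w:.4f}  (analytic p = {(-2 + 7**0.5)/1.5:.4f})")
```

For the chosen parameters the learned weight should settle near the analytic value p of about 0.43, at which point the residual vanishes and the induced control and disturbance policies coincide with the saddle point of the scalar game.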