Nonlinear–nonquadratic optimal and inverse optimal control for stochastic dynamical systems
Author(s) -
Rajpurohit Tanmay,
Haddad Wassim M.
Publication year - 2017
Publication title -
International Journal of Robust and Nonlinear Control
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.361
H-Index - 106
eISSN - 1099-1239
pISSN - 1049-8923
DOI - 10.1002/rnc.3829
Subject(s) - nonlinear system, Lyapunov function, stochastic control, optimal control, Hamilton–Jacobi–Bellman equation, multilinear map, control theory (sociology), mathematics, stochastic optimization, mathematical optimization, dynamical systems theory, computer science, control (management), physics, quantum mechanics, artificial intelligence, pure mathematics
Summary In this paper, we develop a unified framework to address the problem of optimal nonlinear analysis and feedback control for nonlinear stochastic dynamical systems. Specifically, we provide a simplified and tutorial framework for stochastic optimal control and focus on connections between stochastic Lyapunov theory and stochastic Hamilton–Jacobi–Bellman theory. In particular, we show that asymptotic stability in probability of the closed‐loop nonlinear system is guaranteed by means of a Lyapunov function that can clearly be seen to be the solution to the steady‐state form of the stochastic Hamilton–Jacobi–Bellman equation, thereby guaranteeing both stochastic stability and optimality. In addition, we develop optimal feedback controllers for affine nonlinear systems using an inverse optimality framework tailored to the stochastic stabilization problem. These results are then used to provide extensions of the nonlinear feedback controllers obtained in the literature that minimize general polynomial and multilinear performance criteria. Copyright © 2017 John Wiley & Sons, Ltd.
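As a brief illustration of the connection the summary describes (a standard textbook form, not the paper's exact formulation; the symbols below are assumptions), the steady-state stochastic Hamilton–Jacobi–Bellman equation for a controlled Itô diffusion can be sketched as:

```latex
% Illustrative steady-state stochastic HJB equation (standard form; symbols
% f, g, D, L, V are assumed here, not taken from the paper).
% For the affine stochastic system
%   dx = [f(x) + g(x)u]\,dt + D(x)\,dw,
% with running cost L(x,u), the value function V satisfies
0 = \min_{u}\Big[\, L(x,u) \;+\; V'(x)\big(f(x) + g(x)u\big)
      \;+\; \tfrac{1}{2}\operatorname{tr}\!\big( D(x)^{\mathsf{T}} V''(x)\, D(x) \big) \Big],
\qquad V(0) = 0.
```

Under suitable regularity and positivity conditions, such a solution V also serves as a stochastic Lyapunov function for the closed-loop system, which is the sense in which stability and optimality are obtained jointly.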