Online optimal and adaptive integral tracking control for varying discrete‐time systems using reinforcement learning
Author(s) - Ibrahim Sanusi, Andrew Mills, Tony Dodd, George Konstantopoulos
Publication year - 2020
Publication title - International Journal of Adaptive Control and Signal Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.73
H-Index - 66
eISSN - 1099-1115
pISSN - 0890-6327
DOI - 10.1002/acs.3115
Subject(s) - reinforcement learning , optimal control , control theory , tracking error , Bellman equation , algebraic Riccati equation , discrete time and continuous time , linear quadratic regulator , Riccati equation , mathematical optimization , convergence , feedforward , adaptive control , computer science , control engineering , mathematics , engineering
Summary The conventional closed-form solution to the optimal control problem, obtained from optimal control theory, is available only under the assumption that the system dynamics are known and described by differential equations. Without such models, reinforcement learning (RL) has been successfully applied as a candidate technique to iteratively solve the optimal control problem for unknown or varying systems. For the optimal tracking control problem, existing RL techniques in the literature assume either a predetermined feedforward input for the tracking control, restrictive assumptions on the reference model dynamics, or discounted tracking costs. Moreover, with discounted tracking costs, zero steady-state error cannot be guaranteed by the existing RL methods. This article therefore presents an online optimal RL tracking control framework for discrete-time (DT) systems that does not impose the restrictive assumptions of the existing methods and guarantees zero steady-state tracking error. This is achieved by augmenting the original system dynamics with the integral of the error between the reference inputs and the tracked outputs before applying the online RL framework. It is further shown that the resulting value function for the DT linear quadratic tracker under the augmented formulation with integral control remains quadratic. This enables the development of Bellman equations that use only system measurements to solve the corresponding DT algebraic Riccati equation and obtain the optimal tracking control inputs online. Two RL strategies are then proposed, based on value function approximation and Q-learning, together with bounds on the excitation required for convergence of the parameter estimates. Simulation case studies demonstrate the effectiveness of the proposed approach.
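
The sketch below illustrates, in simplified form, the kind of measurement-driven tracking design the summary describes. It is not the authors' formulation: the scalar plant, the cost weights, the probing-noise level, and the initial gain are all illustrative assumptions, and the integral action is embedded through a velocity-form (incremental) augmentation rather than the paper's explicit integral-of-error state. The learning step is the standard least-squares Q-learning policy iteration, one of the two families of strategies mentioned in the summary; a model-based Riccati iteration is appended purely to check the data-driven gain.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar plant x_{k+1} = a*x_k + b*u_k, y_k = x_k (numbers are made up).
a, b = 0.9, 0.5

# Velocity-form augmentation with state z = [dx, e], where dx_k = x_k - x_{k-1} and
# e_k = r - y_k is the tracking error for a constant reference r:
#   dx_{k+1} = a*dx_k + b*du_k,   e_{k+1} = e_k - (a*dx_k + b*du_k).
# The applied input u_k = u_{k-1} + du_k accumulates the error feedback, i.e. the
# controller contains integral action, so zero steady-state error needs no discounting.
Abar = np.array([[a, 0.0], [-a, 1.0]])
Bbar = np.array([[b], [-b]])
Qz = np.diag([0.1, 1.0])          # weight mainly the tracking error
R = np.array([[0.1]])
nz, m = 2, 1
d = nz + m                        # dimension of the joint vector [z; du]

def quad_basis(v):
    # Basis such that theta @ quad_basis(v) == v @ H @ v for a symmetric H.
    iu = np.triu_indices(len(v))
    scale = np.where(iu[0] == iu[1], 1.0, 2.0)
    return scale * np.outer(v, v)[iu]

def theta_to_H(theta):
    H = np.zeros((d, d))
    H[np.triu_indices(d)] = theta
    return H + H.T - np.diag(np.diag(H))

# Policy iteration needs an initial stabilising gain; this one places the poles of
# Abar - Bbar*K at 0.4 and 0.5 (chosen by hand for this particular plant).
K = np.array([[1.4, -0.6]])

for it in range(8):
    # Policy evaluation: least-squares Q-learning using only measured data.
    Phi, c = [], []
    z = np.array([0.0, 1.0])                             # start with unit tracking error
    for k in range(300):
        du = -K @ z + 0.2 * rng.standard_normal(m)       # probing noise for excitation
        z1 = Abar @ z + Bbar @ du
        du1 = -K @ z1                                     # on-policy action at next state
        # Bellman equation in temporal-difference form:
        #   theta @ (phi(z, du) - phi(z1, du1)) = stage cost at (z, du).
        Phi.append(quad_basis(np.concatenate([z, du])) -
                   quad_basis(np.concatenate([z1, du1])))
        c.append(float(z @ Qz @ z + du @ R @ du))
        z = z1
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(c), rcond=None)
    H = theta_to_H(theta)
    # Policy improvement: minimise the learned quadratic Q-function over du.
    Huu, Huz = H[nz:, nz:], H[nz:, :nz]
    K = np.linalg.solve(Huu, Huz)
    print(f"iteration {it}: K = {K.ravel()}")

# Model-based Riccati value iteration, used here only to verify the data-driven gain.
P = np.zeros((nz, nz))
for _ in range(2000):
    Kstar = np.linalg.solve(R + Bbar.T @ P @ Bbar, Bbar.T @ P @ Abar)
    P = Qz + Abar.T @ P @ (Abar - Bbar @ Kstar)
print("Riccati gain for comparison:", Kstar.ravel())

Because the learned gain acts on the incremental input, the reconstructed control integrates the error feedback, which is the mechanism by which zero steady-state tracking error is obtained without a discounted cost.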
