Robust reinforcement learning control with static and dynamic stability
Author(s) -
Kretchmar, R. Matthew,
Young, Peter M.,
Anderson, Charles W.,
Hittle, Douglas C.,
Anderson, Michael L.,
Delnero, Christopher C.
Publication year - 2001
Publication title -
International Journal of Robust and Nonlinear Control
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.361
H-Index - 106
eISSN - 1099-1239
pISSN - 1049-8923
DOI - 10.1002/rnc.670
Subject(s) - reinforcement learning , robustness , artificial neural network , computer science , control theory , robust control , stability , nonlinear system , controller , control engineering , artificial intelligence , control system , engineering , machine learning , electrical engineering
Robust control theory is used to design stable controllers in the presence of uncertainties. This provides powerful closed‐loop robustness guarantees, but can result in controllers that are conservative with regard to performance. Here we present an approach to learning a better controller through observing actual controlled behaviour. A neural network is placed in parallel with the robust controller and is trained through reinforcement learning to optimize performance over time. By analysing nonlinear and time‐varying aspects of a neural network via uncertainty models, a robust reinforcement learning procedure results that is guaranteed to remain stable even as the neural network is being trained. The behaviour of this procedure is demonstrated and analysed on two control tasks. Results show that at intermediate stages the system without robust constraints goes through a period of unstable behaviour that is avoided when the robust constraints are included. Copyright © 2001 John Wiley & Sons, Ltd.
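The architecture described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: the plant (a discrete-time double integrator), the fixed stabilizing gain, the one-hidden-layer network, the reward, and the learning rule are all illustrative assumptions. In the paper the stability guarantee comes from analysing the network via uncertainty models; here a simple norm projection on the output weights stands in for that constraint, showing only the overall shape of the scheme (learned action added in parallel with a robust controller, updated online while a bound keeps the learned part small).

```python
import numpy as np

rng = np.random.default_rng(0)

def robust_controller(x):
    # Fixed stabilizing state-feedback gain (assumed, for a double integrator).
    K = np.array([-1.0, -1.5])
    return float(K @ x)

class ParallelNN:
    """One-hidden-layer net whose output is ADDED to the robust control action."""
    def __init__(self, n_in=2, n_hidden=8, w_max=0.5):
        self.W1 = 0.1 * rng.standard_normal((n_hidden, n_in))
        self.W2 = 0.1 * rng.standard_normal(n_hidden)
        self.w_max = w_max  # stand-in for the paper's stability constraint

    def act(self, x):
        self.h = np.tanh(self.W1 @ x)  # tanh(0) = 0, so the net vanishes at the origin
        return float(self.W2 @ self.h)

    def update(self, reinforcement, lr=1e-3):
        # Crude reinforcement-style correlation update on the output layer only.
        self.W2 += lr * reinforcement * self.h
        # Project the output weights back inside the "stability" ball.
        n = np.linalg.norm(self.W2)
        if n > self.w_max:
            self.W2 *= self.w_max / n

def step(x, u, dt=0.05):
    # Assumed plant: double integrator, x1' = x2, x2' = u (Euler discretized).
    return x + dt * np.array([x[1], u])

x = np.array([1.0, 0.0])
net = ParallelNN()
for _ in range(2000):
    u = robust_controller(x) + net.act(x)   # robust action + learned correction
    x = step(x, u)
    reward = -float(x @ x)                  # penalize distance from the origin
    net.update(reward)

final_norm = np.linalg.norm(x)
```

Because the learned correction is bounded by the projection and vanishes at the equilibrium, the base controller's stability survives training in this toy setting; the paper replaces the crude norm bound with a genuine robustness analysis of the time-varying network.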