Deterministic learning from neural control for uncertain nonlinear pure‐feedback systems by output feedback
Author(s) - Zhang Fukai, Wang Cong
Publication year - 2020
Publication title - International Journal of Robust and Nonlinear Control
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.361
H-Index - 106
eISSN - 1099-1239
pISSN - 1049-8923
DOI - 10.1002/rnc.4902
Subject(s) - control theory , computer science , backstepping , artificial neural network , convergence , nonlinear system , controller , tracking error , observer , adaptive control , artificial intelligence , mathematics
Summary - The essence of intelligence lies in the acquisition/learning and utilization of knowledge. However, how to implement learning in dynamical environments for nonlinear systems remains a challenging issue. This article investigates the deterministic learning (DL) control problem for uncertain pure‐feedback systems by output feedback, achieving human‐like learning and control in a simple way. To reduce the complexity of control design and analysis, first, by means of an appropriate system transformation, the original pure‐feedback system is transformed into a simple normal nonaffine system. An observer is then introduced to estimate the transformed system states. Based on the backstepping and dynamic surface control techniques, a simple adaptive neural control scheme is developed to guarantee the finite‐time convergence of the tracking error using only one neural network (NN) approximator. Second, through DL, exponential convergence of the NN weights is obtained under the satisfaction of a partial persistent excitation condition. Thus, locally accurate approximation/learning of the transformed unknown system dynamics is achieved and stored in the form of constant NNs. Finally, by utilizing the stored knowledge, an experience‐based controller is constructed and a novel learning control scheme is further proposed to improve the control performance without any further online adaptation of the estimated neural weights. Simulation results illustrate that the proposed scheme not only can learn and memorize knowledge like humans but also can utilize experience to achieve superior control performance.
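For orientation, the following is a schematic sketch of the deterministic-learning mechanism summarized above, written in generic form; the regressor S(Z), the gains \Gamma and \sigma, the error signal z, and the recording interval [t_a, t_b] are illustrative placeholders and are not taken from this record. The transformed unknown dynamics are approximated by a radial-basis-function NN, the weights are adapted online, and after convergence the learned knowledge is stored as constant weights by time-averaging:

f(Z) \approx W^{*T} S(Z), \qquad
\dot{\hat{W}} = \Gamma \big( S(Z)\, z - \sigma \hat{W} \big), \qquad
\bar{W} = \frac{1}{t_b - t_a} \int_{t_a}^{t_b} \hat{W}(\tau)\, \mathrm{d}\tau .

The experience-based controller then reuses the constant approximation \bar{W}^{T} S(Z) in place of the online estimate \hat{W}^{T} S(Z), which is why no further weight adaptation is needed when the stored knowledge is utilized.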