Design of Self-Constructing Recurrent-Neural-Network-Based Adaptive Control
Author(s) -
Chun-Fei Hsu,
Chih-Min Lin
Publication year - 2008
Publication title -
InTech ebooks
Language(s) - English
Resource type - Book series
DOI - 10.5772/5538
Subject(s) - computer science, control (management), artificial neural network, cognitive science, psychology, artificial intelligence
Recently, neural-network-based adaptive control techniques have attracted increasing attention because they provide an efficient and effective way to control complex nonlinear or ill-defined systems (Duarte-Mermoud et al., 2005; Hsu et al., 2006; Lin & Hsu, 2003; Lin et al., 1999; Peng et al., 2004). The key element of this success is the approximation capability of neural networks: after learning, a parameterized neural network can approximate the unknown system dynamics or the ideal tracking controller. One must distinguish between two classes of control applications, open-loop identification and closed-loop feedback control. Identification applications are similar to signal processing and classification, so the same open-loop algorithms may often be used; however, a tremendous amount of training data and considerable training time are required. In closed-loop feedback applications, on the other hand, the neural network is inside the control loop, so special steps must be taken to ensure that the tracking error and the network weights remain bounded in the closed-loop system. The basic issue in neural network closed-loop feedback control is to provide online learning algorithms that do not require preliminary off-line tuning. Some of these learning algorithms are based on the backpropagation algorithm; however, such approaches have difficulty guaranteeing the stability and robustness of the closed-loop system (Duarte-Mermoud et al., 2005; Lin et al., 1999). Other learning algorithms are based on the Lyapunov stability theorem, with tuning laws designed to guarantee system stability in the Lyapunov sense (Hsu et al., 2006; Lin & Hsu, 2003; Peng et al., 2004). However, the neural networks in these works are feedforward networks, which are static mapping networks; without the aid of tapped delays, a feedforward neural network cannot represent a dynamic mapping.
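The flavor of a Lyapunov-based tuning law can be illustrated with a minimal sketch. This is an illustrative assumption, not the design from the chapter: a scalar plant x_dot = a*x + u with a unknown, a certainty-equivalence controller, and the adaptation law a_hat_dot = gamma*e*x, which follows from the Lyapunov function V = e^2/2 + (a - a_hat)^2/(2*gamma) and yields V_dot = -k*e^2 <= 0, so the tracking error e is driven toward zero:

```python
import math

# Illustrative Lyapunov-based adaptive control sketch (not the chapter's design):
#   plant:      x_dot = a*x + u, with parameter a unknown to the controller
#   reference:  x_d(t) = sin(t)
#   control:    u = x_d_dot - a_hat*x - k*e, where e = x - x_d
#   adaptation: a_hat_dot = gamma*e*x, derived from V = e^2/2 + (a - a_hat)^2/(2*gamma)
def simulate(a=2.0, k=5.0, gamma=10.0, dt=1e-3, T=20.0):
    x, a_hat, t = 0.0, 0.0, 0.0
    while t < T:
        x_d, x_d_dot = math.sin(t), math.cos(t)
        e = x - x_d
        u = x_d_dot - a_hat * x - k * e   # certainty-equivalence control law
        x += dt * (a * x + u)             # plant step (forward-Euler integration)
        a_hat += dt * (gamma * e * x)     # Lyapunov-derived tuning law
        t += dt
    return e, a_hat

final_error, a_est = simulate()
```

Here the tuning law is chosen precisely so that the cross terms in V_dot cancel, leaving V_dot = -k*e^2; boundedness of the weights (a_hat) and convergence of the tracking error then follow without any off-line training phase.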
The recurrent neural network (RNN) has capabilities superior to those of feedforward neural networks, such as its dynamic response and its ability to store information (Lee & Teng, 2000; Lin & Hsu, 2004). Because an RNN has an internal feedback loop, it captures the dynamic response of a system without external feedback through delays; thus, an RNN is a dynamic mapping network. Owing to this dynamic characteristic and its relatively simple architecture, the RNN is a useful tool for most real-time applications (Lin & Chen, 2006; Lin & Hsu, 2004; Tian et al., 2004; Wai et al., 2004). Although the control performance of the neural-network-based adaptive controllers in the literature above is acceptable, their learning algorithms include only parameter learning and cannot adjust the network structure.
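The static-versus-dynamic mapping distinction can be made concrete with a small sketch (the weights and function names are illustrative assumptions, not from the chapter): a memoryless feedforward map returns the same output whenever the current input is the same, whereas a single recurrent neuron, whose hidden state feeds back into itself, returns different outputs for inputs with different histories:

```python
import math

def feedforward(x, w=0.8):
    # Static mapping: the output depends only on the current input x.
    return math.tanh(w * x)

def recurrent(seq, w_in=0.8, w_rec=0.5):
    # Dynamic mapping: the hidden state h feeds back through w_rec,
    # so the output depends on the entire input history.
    h = 0.0
    for x in seq:
        h = math.tanh(w_in * x + w_rec * h)
    return h

seq_a = [1.0, 0.0, 0.3]
seq_b = [-1.0, 0.0, 0.3]  # same final input as seq_a, different history
```

For these two sequences the feedforward map cannot distinguish them (both end in the same input 0.3), while the recurrent neuron produces different outputs; a feedforward network would need an external tapped-delay line to recover this history dependence.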