Landing an Unmanned Air Vehicle: Vision Based Motion Estimation and Nonlinear Control
Author(s) -
Shakernia Omid,
Ma Yi,
Koo T. John,
Sastry Shankar
Publication year - 1999
Publication title - Asian Journal of Control
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.769
H-Index - 53
eISSN - 1934-6093
pISSN - 1561-8625
DOI - 10.1111/j.1934-6093.1999.tb00014.x
Subject(s) - control theory , nonlinear system , controller , observer , computer science , computer vision , artificial intelligence , engineering , control
In this paper, we use computer vision as a feedback sensor in a control loop for landing an unmanned air vehicle (UAV) on a landing pad. The vision problem we address here is a special case of the classic ego-motion estimation problem, since all feature points lie on a planar surface (the landing pad). We study the discrete and differential versions of ego-motion estimation together in order to obtain both the position and the velocity of the UAV relative to the landing pad. After briefly reviewing existing algorithms for the discrete case, we present, in a unified geometric framework, a new estimation scheme for solving the differential case. We further show how the resulting algorithms allow the vision sensor to be placed in the feedback loop as a state observer for landing control. These algorithms are linear, numerically robust, and computationally inexpensive, and hence suitable for real-time implementation. We present a thorough performance evaluation of the motion estimation algorithms under varying levels of image measurement noise, altitudes of the camera above the landing pad, and camera motions relative to the landing pad. A landing controller is then designed for a full dynamic model of the UAV. Using geometric nonlinear control theory, the dynamics of the UAV are decoupled into an inner system and an outer system. The proposed control scheme is based on the differential flatness of the outer system. For the overall closed-loop system, conditions are provided under which exponential stability can be guaranteed. In the closed-loop system, the controller is tightly coupled with the vision-based state estimation, and the only auxiliary sensors are accelerometers that measure the acceleration of the UAV. Finally, we show through simulation results that the designed vision-in-the-loop controller generates stable landing maneuvers even under large levels of image measurement noise. Experiments on a real UAV will be presented in future work.
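The paper's own linear estimation schemes are not reproduced in this record. As a purely illustrative sketch of the discrete planar case described in the abstract, the following Python snippet recovers candidate camera motions between two views of coplanar landing-pad features by estimating and decomposing a planar homography with OpenCV; the intrinsic matrix K and the function name are assumptions made for this example, not taken from the paper.

# Illustrative sketch only (not the paper's algorithm): discrete planar
# ego-motion from two views of coplanar points via homography decomposition.
import cv2
import numpy as np

# Assumed pinhole camera intrinsics for the example.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def planar_motion_from_two_views(pts_prev, pts_curr):
    # pts_prev, pts_curr: Nx2 arrays of corresponding pixel coordinates of
    # landing-pad features in the previous and current image.
    H, _ = cv2.findHomography(pts_prev, pts_curr, cv2.RANSAC)
    # Decompose H into candidate (rotation, translation/depth, plane normal)
    # solutions; the physically consistent one must be selected with extra
    # constraints, e.g. requiring the points to lie in front of the camera.
    _, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    return list(zip(rotations, translations, normals))

Selecting among the candidate decompositions, and handling the differential (velocity) case, are precisely the issues the paper's unified geometric framework and its new differential estimation scheme address.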