
Vision Positioning method for Autonomous Precise Landing of UAV Based on Square Landing Mark
Author(s) -
Jiaju Chen,
Rui Wang,
Rumo Wang
Publication year - 2020
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1651/1/012182
Subject(s) - computer vision , artificial intelligence , computer science , pixel , calibration , heading (navigation) , monocular vision , frame (networking) , noise (video) , image (mathematics) , engineering , mathematics , aerospace engineering
The rotorcraft is a kind of VTOL (vertical take-off and landing) UAV widely used in many fields. Among related research, vision-guided autonomous landing of rotorcraft has been a hot spot, in which vision positioning is the most crucial step. The core of the algorithm is to calculate position and attitude information from the change of the visual image of the same object across time or frames. Based on real-time self-calibration of an airborne monocular camera, this paper proposes a method that takes a designed landing mark composed of black and white squares as the cooperative target and solves the relative pose between the camera and the target to position the UAV for landing; it also realizes fully automatic sub-pixel linear edge detection to ensure the accuracy of visual positioning. A series of simulation experiments shows that, for a 768 × 576 pixel image with the camera about 12 m from the target and a noise deviation of up to 3 pixels, the total time for edge extraction and calibration is 0.511 s and the position and attitude estimates remain satisfactory, which indicates that the proposed algorithm can effectively support autonomous landing of a UAV.
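The paper's exact algorithm is not reproduced in this record, but the pose-from-square step it describes can be illustrated with a standard homography decomposition for a planar target. The sketch below is a minimal NumPy illustration, not the authors' implementation: it assumes the four outer corners of the square mark have already been located in the image (the sub-pixel edge-detection stage is separate), that the camera intrinsics `K` are known from calibration, and that the mark lies in the plane Z = 0. All function names and numeric values are illustrative.

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: homography mapping src (x, y) to dst (u, v)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)         # null vector of A gives H up to scale
    return H / H[2, 2]

def pose_from_square(corners_px, side, K):
    """Recover camera rotation R and translation t from the four image
    corners of a square cooperative target of known side length (metres)."""
    # Corners of the square mark in its own plane (Z = 0), counter-clockwise
    obj = np.array([[0, 0], [side, 0], [side, side], [0, side]], float)
    H = homography(obj, corners_px)
    B = np.linalg.inv(K) @ H         # B ~ [r1 r2 t] up to scale
    lam = 1.0 / np.linalg.norm(B[:, 0])
    if B[2, 2] < 0:                  # keep the target in front of the camera
        lam = -lam
    r1, r2, t = lam * B[:, 0], lam * B[:, 1], lam * B[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)      # re-project onto a proper rotation
    return U @ Vt, t
```

With noise-free corners the recovered pose matches the true one to numerical precision; with the ~3-pixel corner noise mentioned in the abstract, accuracy would instead depend on the sub-pixel edge refinement the paper emphasizes.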