
Where to look: a collection of methods for MAV heading correction in underground tunnels
Author(s) -
Kanellakis Christoforos,
Sharif Mansouri Sina,
Castaño Miguel,
Karvelis Petros,
Kominiak Dariusz,
Nikolakopoulos G.
Publication year - 2020
Publication title -
IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2019.1423
Subject(s) - heading (navigation) , convolutional neural network , artificial intelligence , computer science , lidar , computer vision , robotics , process (computing) , range (aeronautics) , remote sensing , robot , engineering , geography , aerospace engineering , operating system
Degraded subterranean environments are an attractive use case for miniature aerial vehicles (MAVs), since there is a constant need to increase the safety of operations in underground mines. The starting point for integrating aerial vehicles into the mining process is the capability to reliably navigate along tunnels. Inspired by recent advancements, this paper presents a collection of different, experimentally verified methods tackling the problem of MAV heading regulation while navigating in dark and textureless tunnel areas. More specifically, four methods are presented, sharing the common goal of identifying open space in the tunnel and aligning the MAV heading with it: a visual sensor is used in methods (a) single image depth estimation, (b) darkness contour detection, and (c) convolutional neural network (CNN) regression, while a 2D lidar sensor is used in method (d) range geometry. In methods (a)-(c), the dark scene in the middle of the tunnel is considered open space and is processed and converted into a yaw-rate command, while method (d) examines the geometry of the range measurements to calculate the yaw-rate command. Experimental results from a real underground tunnel demonstrate the performance of the methods in the field, while laying the ground for further developments in the aerial robotics community.
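
To illustrate the darkness-contour idea behind method (b), the following is a minimal, hypothetical sketch rather than the paper's implementation: it assumes an onboard camera frame, uses OpenCV thresholding and contour extraction to locate the darkest blob (taken as the open tunnel ahead), and converts its horizontal offset from the image centre into a proportional yaw-rate command. The function name darkness_contour_yaw_rate, the gain k_yaw, the threshold dark_thresh, and the sign convention are all illustrative assumptions.

```python
import cv2
import numpy as np


def darkness_contour_yaw_rate(frame_bgr, k_yaw=0.5, dark_thresh=40):
    """Hypothetical sketch: treat the darkest blob in the image as the open
    tunnel and turn its horizontal offset from the image centre into a
    proportional yaw-rate command (rad/s). Gains, thresholds, and the sign
    convention are illustrative, not the values used in the paper."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Pixels darker than dark_thresh become foreground in the mask.
    _, mask = cv2.threshold(gray, dark_thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0  # no dark region found: hold the current heading

    # The largest dark contour is assumed to be the tunnel opening.
    c = max(contours, key=cv2.contourArea)
    m = cv2.moments(c)
    if m["m00"] == 0:
        return 0.0
    cx = m["m10"] / m["m00"]

    # Normalised horizontal error in [-1, 1]; positive means the opening
    # lies to the right of the image centre.
    half_width = frame_bgr.shape[1] / 2.0
    err = (cx - half_width) / half_width
    return -k_yaw * err  # sign convention assumed: positive yaw = turn left
```

In this sketch the yaw-rate command goes to zero when the dark opening is centred in the image, which matches the abstract's description of converting the open-space location into a heading correction; the proportional gain and thresholds would have to be tuned for a specific platform and camera.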