Scale‐aware camera localization in 3D LiDAR maps with a monocular visual odometry
Author(s) - Sun Manhui, Yang Shaowu, Liu Henzhu
Publication year - 2019
Publication title - Computer Animation and Virtual Worlds
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.225
H-Index - 49
eISSN - 1546-427X
pISSN - 1546-4261
DOI - 10.1002/cav.1879
Subject(s) - computer vision , artificial intelligence , computer science , visual odometry , lidar , point cloud , monocular , simultaneous localization and mapping , feature (linguistics) , focus (optics) , scale (ratio) , odometry , monocular vision , mobile robot , robot , remote sensing , geography , cartography , linguistics , philosophy , physics , optics
Localization information is essential for mobile robot systems in navigation tasks. Many vision-based approaches focus on localizing a robot within prior maps acquired with cameras, which is critical where the Global Positioning System signal is unreliable. In contrast to conventional methods that localize a camera in an image-based map, we propose a novel approach that localizes a monocular camera within a given three-dimensional (3D) light detection and ranging (LiDAR) map. We employ visual odometry to reconstruct a semidense set of 3D points from the monocular camera images. These points are continuously matched against the prior 3D LiDAR map by a modified feature-based point cloud registration method to track a full six-degree-of-freedom camera pose. Because a monocular camera lacks depth information and therefore suffers from scale drift, the proposed method addresses this problem with an updatable scale estimation. Experiments on a large-scale public data set demonstrate that our method solves the camera-LiDAR multimodal data matching problem and achieves localization accuracy comparable to that of state-of-the-art approaches.
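The abstract gives no implementation details, but the core idea of registering up-to-scale monocular points against a metric LiDAR map can be illustrated with a standard similarity-transform alignment. The sketch below uses the classic closed-form Umeyama (1991) solution, which recovers scale, rotation, and translation from point correspondences. It is a minimal stand-in for the paper's modified feature-based registration, not the authors' method; the function name, parameters, and toy data are all illustrative assumptions.

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Closed-form similarity transform (Umeyama, 1991): find scale s,
    rotation R, and translation t minimizing ||dst - (s * R @ src + t)||.
    src, dst: (N, 3) arrays of corresponding 3D points."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    # Cross-covariance between the centered point sets.
    cov = dst_c.T @ src_c / src.shape[0]
    U, D, Vt = np.linalg.svd(cov)
    # Force a proper rotation (det(R) = +1), guarding against reflections.
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / src.shape[0]  # mean squared deviation of src
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Toy check: monocular VO points are a scaled, rotated, shifted copy of
# the LiDAR map; the alignment should recover the scale correction.
rng = np.random.default_rng(0)
map_pts = rng.uniform(-10.0, 10.0, size=(500, 3))           # metric LiDAR points
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
vo_pts = 0.4 * map_pts @ Rz.T + np.array([1.0, -2.0, 0.5])  # VO scale drift 0.4
s, R, t = umeyama_alignment(vo_pts, map_pts)
print(round(s, 3))  # 2.5, i.e. 1 / 0.4, the correction applied to VO points
```

In the paper's setting, one would presumably re-estimate such a correction continuously as new semidense VO points are matched against the map, which is a plausible reading of the abstract's "updatable scale estimation" rather than a confirmed detail.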