Multimodal localization: Stereo over LiDAR map
Author(s) -
Zuo Xingxing,
Ye Wenlong,
Yang Yulin,
Zheng Renjie,
Vidal-Calleja Teresa,
Huang Guoquan,
Liu Yong
Publication year - 2020
Publication title -
Journal of Field Robotics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.152
H-Index - 96
eISSN - 1556-4967
pISSN - 1556-4959
DOI - 10.1002/rob.21936
Subject(s) - lidar, computer vision, artificial intelligence, computer science, point cloud, visual odometry, feature, stereo cameras, structure from motion, optical flow, stereopsis, motion estimation, remote sensing, robot
In this paper, we present a real-time, high-precision visual localization system for an autonomous vehicle that employs only low-cost stereo cameras to localize the vehicle within a prior map built using a more expensive 3D LiDAR sensor. To this end, we construct two different visual maps: a sparse feature visual map for visual odometry (VO) based motion tracking, and a semidense visual map for registration against the prior LiDAR map. To register two point clouds sourced from different modalities (i.e., cameras and LiDAR), we leverage the probabilistic weighted normal distributions transformation (ProW-NDT), which explicitly takes into account the uncertainty of the source point cloud. The registration results are then fused via pose graph optimization to correct the VO drift. Moreover, surfels extracted from the prior LiDAR map are used to refine the sparse 3D visual features, which further improves VO-based motion estimation. The proposed system has been tested extensively in both simulated and real-world experiments, showing that robust, high-precision, real-time localization can be achieved.
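To make the cross-modal registration step concrete, here is a minimal Python sketch of the general weighted-NDT idea underlying ProW-NDT: the prior LiDAR map is voxelized into per-voxel Gaussians, and each point of the semidense visual cloud contributes a Mahalanobis residual whose covariance combines the voxel distribution with the point's own uncertainty rotated into the map frame. This is an illustrative sketch under stated assumptions, not the paper's exact formulation; the function names (build_ndt_map, prob_weighted_ndt_score), the voxel size, and the minimum-points threshold are all hypothetical.

import numpy as np
from collections import defaultdict

def build_ndt_map(target_points, voxel_size=1.0):
    # Voxelize the target (LiDAR) cloud and fit one Gaussian per voxel.
    voxels = defaultdict(list)
    for p in target_points:
        voxels[tuple(np.floor(p / voxel_size).astype(int))].append(p)
    ndt = {}
    for key, pts in voxels.items():
        pts = np.asarray(pts)
        if len(pts) >= 5:  # need enough points for a stable covariance
            mean = pts.mean(axis=0)
            cov = np.cov(pts.T) + 1e-6 * np.eye(3)  # regularize near-planar voxels
            ndt[key] = (mean, cov)
    return ndt

def prob_weighted_ndt_score(R, t, source_points, source_covs, ndt, voxel_size=1.0):
    # Negative log-likelihood of the transformed source (visual) cloud under
    # the NDT map. Each source point's own covariance Cp is folded into the
    # per-match covariance; this is the "probabilistic weighting", since
    # uncertain visual points are automatically down-weighted.
    score = 0.0
    for p, Cp in zip(source_points, source_covs):
        q = R @ p + t                                  # transform into map frame
        key = tuple(np.floor(q / voxel_size).astype(int))
        if key not in ndt:
            continue                                   # no Gaussian for this voxel
        mu, Cv = ndt[key]
        C = Cv + R @ Cp @ R.T                          # combined match covariance
        r = q - mu
        score += 0.5 * r @ np.linalg.solve(C, r)       # Mahalanobis residual
    return score

# A full system would minimize this score over SE(3), e.g., with a
# Gauss-Newton loop on a pose parameterization, rather than evaluate it
# at a single candidate pose as shown here.

In the system described by the abstract, the optimized registration pose then enters the pose graph as a prior-map constraint alongside the VO odometry edges, so that accumulated drift is corrected whenever a confident alignment against the LiDAR map is found.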