Open Access
Mono-Camera-Based Robust Self-Localization Using LIDAR Intensity Map
Author(s) - Kei Sato, Keisuke Yoneda, Ryo Yanase, Naoki Suganuma
Publication year - 2020
Publication title - Journal of Robotics and Mechatronics
Language(s) - English
Resource type - Journals
eISSN - 1883-8049
pISSN - 0915-3942
DOI - 10.20965/jrm.2020.p0624
Subject(s) - computer vision , artificial intelligence , robustness , lidar , computer science , similarity , image sensor , template matching , remote sensing
An image-based self-localization method for automated vehicles is proposed herein. A general self-localization method estimates a vehicle’s location by collating a predefined map with a sensor’s observation values, and the same sensor, typically light detection and ranging (LIDAR), is used to acquire both the map data and the observations. In this study, to develop a low-cost self-localization system, we estimate the vehicle’s location on a LIDAR-created map using images captured by a mono-camera. A similarity distribution between the map and a bird’s-eye image, obtained by transforming the mono-camera image, is created by template matching the two images. Furthermore, a method to estimate the vehicle’s location from the acquired similarity distribution is proposed. The proposed self-localization method is evaluated on driving data from urban public roads; the results show that it improves the robustness of self-localization compared with a previous camera-based method.
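The core idea in the abstract — matching a camera-derived bird's-eye image against a LIDAR intensity map and reading the vehicle's offset from the peak of a similarity distribution — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the bird's-eye transform has already been applied, uses zero-mean normalized cross-correlation as the similarity measure (the paper's exact metric is not specified here), and searches only over 2D pixel offsets; the function names `ncc` and `localize` and the search-window size are illustrative choices.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two equal-size patches.
    Returns a score in [-1, 1]; 1.0 means a perfect match."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def localize(bird_eye, intensity_map, search=5):
    """Slide the bird's-eye template over the map within +/- `search` pixels,
    building the similarity distribution, and return the (dy, dx) offset at
    its peak along with the full score grid."""
    h, w = bird_eye.shape
    scores = np.zeros((2 * search + 1, 2 * search + 1))
    for i, dy in enumerate(range(-search, search + 1)):
        for j, dx in enumerate(range(-search, search + 1)):
            patch = intensity_map[search + dy: search + dy + h,
                                  search + dx: search + dx + w]
            scores[i, j] = ncc(bird_eye, patch)
    peak = np.unravel_index(np.argmax(scores), scores.shape)
    return peak[0] - search, peak[1] - search, scores

# Toy demo: embed the template in a larger map at a known (2, -1) offset
# and check that the similarity peak recovers it.
rng = np.random.default_rng(0)
template = rng.random((20, 20))
map_img = rng.random((30, 30))
map_img[5 + 2:25 + 2, 5 - 1:25 - 1] = template
dy, dx, scores = localize(template, map_img, search=5)
print(dy, dx)  # → 2 -1
```

In the paper's setting the score grid `scores` plays the role of the similarity distribution: rather than taking only the argmax, the distribution itself can be fed to the location estimator, which is what makes the approach robust when the peak is ambiguous.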
