Lighting‐invariant Visual Teach and Repeat Using Appearance‐based Lidar
Author(s) -
McManus, Colin
Furgale, Paul
Stenning, Braden
Barfoot, Timothy D.
Publication year - 2012
Publication title - Journal of Field Robotics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.152
H-Index - 96
eISSN - 1556-4967
pISSN - 1556-4959
DOI - 10.1002/rob.21444
Subject(s) - computer vision, artificial intelligence, computer science, invariant (physics), lidar, mathematics, remote sensing, geography, mathematical physics
Visual Teach and Repeat (VT&R) is an effective method of enabling a vehicle to repeat any previously driven route using just a visual sensor, without a global positioning system. However, one of the major challenges in recognizing previously visited locations is lighting change, which can drastically alter the appearance of the scene. In an effort to achieve lighting invariance, this paper details the design of a VT&R system that uses a laser scanner as the primary sensor. Unlike a traditional scan-matching approach, we apply appearance-based computer vision techniques to laser intensity images for motion estimation, providing the benefit of lighting invariance. Field tests were conducted in an outdoor, planetary analogue environment over an entire diurnal cycle, repeating a 1.1 km route more than 10 times with an autonomy rate of 99.7% by distance. We describe our experimental setup and results in detail, as well as how we address the various off-nominal scenarios related to feature-poor environments, hardware failures, and estimation drift. An analysis of motion distortion and a comparison with a stereo-based system are also presented. We show that, even without motion compensation, our system is robust enough to repeat long-range routes accurately and reliably. © 2012 Wiley Periodicals, Inc.
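To illustrate the core idea the abstract describes, treating laser intensity returns as camera-like images so that standard sparse-feature matching works regardless of ambient light, here is a minimal Python/OpenCV sketch. It is not the authors' implementation: ORB features and a RANSAC-fitted 2D similarity transform stand in for whatever feature extractor and full 6-DOF pose solver the paper actually uses, and the image file names are hypothetical.

import cv2
import numpy as np

# Lidar intensity images from the teach pass and the repeat pass.
# Because they are formed from active laser illumination, they look
# much the same day or night (hypothetical file names).
teach = cv2.imread("teach_intensity.png", cv2.IMREAD_GRAYSCALE)
repeat = cv2.imread("repeat_intensity.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe sparse keypoints on the intensity images, exactly
# as one would on ordinary camera images (ORB as a stand-in feature).
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(teach, None)
kp2, des2 = orb.detectAndCompute(repeat, None)

# Brute-force matching with cross-checking to reject asymmetric matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# RANSAC-estimated 2D similarity transform as a proxy for the relative
# motion; a real VT&R pipeline would back-project matches to 3D using
# the scanner's range data and solve for a full SE(3) pose.
M, inliers = cv2.estimateAffinePartial2D(
    pts1, pts2, method=cv2.RANSAC, ransacReprojThreshold=3.0)

print("inlier matches:", int(inliers.sum()), "of", len(matches))
print("estimated transform:\n", M)

The point of the sketch is the substitution the paper exploits: by feeding intensity imagery into an appearance-based pipeline, the same matching machinery that fails on camera images under severe lighting change keeps working across an entire diurnal cycle.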