Bag‐of‐words‐driven, single‐camera simultaneous localization and mapping
Author(s) - Botterill Tom, Mills Steven, Green Richard
Publication year - 2010
Publication title - Journal of Field Robotics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.152
H-Index - 96
eISSN - 1556-4967
pISSN - 1556-4959
DOI - 10.1002/rob.20368
Subject(s) - artificial intelligence , computer vision , monocular , simultaneous localization and mapping , computer science , outlier , graph , set (abstract data type) , robot , trajectory , mobile robot , theoretical computer science
This paper describes BoWSLAM, a scheme for a robot to reliably navigate and map previously unknown environments, in real time, using monocular vision alone. BoWSLAM can navigate challenging dynamic and self‐similar environments and can recover from gross errors. Key innovations allowing this include new uses for the bag‐of‐words image representation; this is used to select the best set of frames from which to reconstruct positions and to give efficient wide‐baseline correspondences between many pairs of frames, providing multiple position hypotheses. A graph‐based representation of these position hypotheses enables the modeling and optimization of errors in scale in a dual graph and the selection of only reliable position estimates in the presence of gross outliers. BoWSLAM is demonstrated mapping a 25‐min, 2.5‐km trajectory through a challenging and dynamic outdoor environment without any other sensor input, considerably farther than previous single‐camera simultaneous localization and mapping (SLAM) schemes. © 2010 Wiley Periodicals, Inc.
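To make the two key ideas in the abstract concrete, the sketch below (not the authors' code; all names and the cost model are illustrative assumptions) shows how a TF-IDF-weighted bag-of-words score can rank earlier frames as candidates for wide-baseline matching, and how a graph of relative-pose hypotheses can be pruned with a shortest-path search so that only the most reliable chain of position estimates is kept when gross outliers are present.

```python
# Minimal sketch, assuming a BoW-based frame-selection step and a graph of
# relative-pose hypotheses; function names and edge costs are hypothetical.
import heapq
import numpy as np

def bow_similarity(hist_a, hist_b, doc_freq, n_docs):
    """Cosine similarity between two TF-IDF-weighted visual-word histograms."""
    idf = np.log(n_docs / (1.0 + doc_freq))
    a, b = hist_a * idf, hist_b * idf
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def best_matching_frames(query_hist, past_hists, doc_freq, k=3):
    """Rank earlier frames by BoW similarity; the top-k become candidates
    for wide-baseline correspondence and relative-pose estimation."""
    n = max(len(past_hists), 1)
    scores = [(bow_similarity(query_hist, h, doc_freq, n), i)
              for i, h in enumerate(past_hists)]
    return [i for _, i in sorted(scores, reverse=True)[:k]]

def most_reliable_positions(n_frames, edges, source=0):
    """Each edge (u, v, cost) is one relative-pose hypothesis, with cost
    growing as the estimate becomes less trustworthy. Dijkstra keeps, for
    every frame, only the lowest-cost chain of hypotheses back to the
    source, so positions reached only through gross outliers are ignored."""
    adj = {i: [] for i in range(n_frames)}
    for u, v, cost in edges:
        adj[u].append((v, cost))
        adj[v].append((u, cost))
    best = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > best.get(u, float("inf")):
            continue
        for v, cost in adj[u]:
            nd = d + cost
            if nd < best.get(v, float("inf")):
                best[v] = nd
                heapq.heappush(heap, (nd, v))
    return best  # frame index -> accumulated reliability cost
```

In this reading, each new frame queries the bag-of-words index for its best-matching predecessors, relative poses estimated against those frames become weighted edges in the hypothesis graph, and the shortest-path pass selects which hypotheses contribute to the final trajectory; the scale-drift optimization over a dual graph mentioned in the abstract is not shown here.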