Selective memory: Recalling relevant experience for long‐term visual localization
Author(s) - Kirk MacTavish, Michael Paton, Timothy D. Barfoot
Publication year - 2018
Publication title - Journal of Field Robotics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.152
H-Index - 96
eISSN - 1556-4967
pISSN - 1556-4959
DOI - 10.1002/rob.21838
Subject(s) - computer science , artificial intelligence , computer vision , real-time computing , human–computer interaction , landmark , initialization , modalities
Abstract - Visual navigation is a key enabling technology for autonomous mobile vehicles. The ability to provide large‐scale, long‐term navigation using low‐cost, low‐power vision sensors is appealing for industrial applications. A crucial requirement for long‐term navigation systems is the ability to localize in environments whose appearance is constantly changing over time—due to lighting, weather, seasons, and physical changes. This paper presents a multiexperience localization (MEL) system that uses a powerful map representation—storing every visual experience in layers—that does not make assumptions about underlying appearance modalities and generators. Our localization system provides real‐time performance by selecting, online, a subset of experiences against which to localize. We achieve this through a novel experience‐triage algorithm based on collaborative filtering, which selects experiences relevant to the live view, outperforming competing techniques. Based on classical memory‐based recommender systems, this technique also enables landmark‐level recommendations, is entirely online, and requires no training data. We demonstrate the capabilities of the MEL system in the context of long‐term autonomous path following in unstructured outdoor environments with a challenging 100‐day field experiment through day, night, snow, spring, and summer. We furthermore provide an offline analysis comparing our system to several state‐of‐the‐art alternatives. We show that the combination of the novel methods presented in this paper enables full use of incredibly rich multiexperience maps, opening the door to robust long‐term visual localization.
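The abstract describes experience triage as a memory-based recommender: score each stored experience by its relevance to the live view and localize against only the top-ranked subset. The paper's exact formulation is not given here, so the sketch below is only illustrative: it assumes each experience is summarized by a binary landmark-match vector (1 if a landmark was matched in that experience) and ranks experiences by cosine similarity to the live view's match vector, in the spirit of memory-based collaborative filtering. The function names and data layout are hypothetical.

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two match vectors; 0.0 if either is all zeros.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_experiences(match_matrix, live_matches, k=3):
    # match_matrix: one row per stored experience, one column per landmark
    # (1 = landmark matched in that experience). live_matches: the live
    # view's landmark-match vector. Returns indices of the k most similar
    # experiences, best first -- the subset to localize against.
    scores = [(cosine(row, live_matches), idx)
              for idx, row in enumerate(match_matrix)]
    scores.sort(reverse=True)
    return [idx for _, idx in scores[:k]]

# Three stored experiences over four landmarks; the live view matches
# landmarks 0-2, so experience 1 (which saw all three) should rank first.
experiences = [
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
]
live = [1, 1, 1, 0]
best = rank_experiences(experiences, live, k=2)  # -> [1, 0]
```

Because the scoring needs only the live view's matches, such a scheme runs fully online with no training phase, consistent with the abstract's claims; per-landmark scores could similarly support the landmark-level recommendations it mentions.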