Edge Enhanced Direct Visual Odometry
Author(s) - Xin Wang, Wei Dong, Mingcai Zhou, Renju Li, Hongbin Zha
Publication year - 2016
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.30.35
Subject(s) - visual odometry, computer science, computer vision, odometry, artificial intelligence, mobile robot, robot
We propose an RGB-D visual odometry method that both minimizes the photometric error and aligns the edges between frames. The combination of direct photometric information and edge features leads to higher tracking accuracy and allows the approach to deal with challenging texture-less scenes. In contrast to traditional line-feature-based methods, we use all edges rather than only line segments, avoiding the aperture problem and the uncertainty of endpoints. Instead of explicitly matching edge features, we design a dense representation of edges to align them, bridging the direct methods and the feature-based methods in tracking. Image alignment and feature matching are performed in a general framework, where not only pixels but also salient visual landmarks are aligned. Evaluations on real-world benchmark datasets show that our method achieves competitive results in indoor scenes, especially in texture-less scenes, where it outperforms state-of-the-art algorithms.
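The abstract's core idea of a "dense representation of edges" that can be aligned without explicit matching is commonly realized with a distance transform: each pixel stores its distance to the nearest edge, so warped edge points can be scored by a smooth, dense cost. The sketch below illustrates one such combined objective, a photometric term plus an edge-distance term. It is a minimal illustration under assumptions, not the paper's implementation; the function names (`edge_distance_field`, `combined_cost`), the fixed weight `w_edge`, and the use of a Euclidean distance transform are choices made here for clarity.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_distance_field(edge_mask):
    """Dense edge representation (assumed): distance from every pixel
    to the nearest edge pixel, so edge alignment becomes a smooth cost."""
    return distance_transform_edt(~edge_mask)

def combined_cost(ref_intensity, cur_intensity, cur_edge_mask,
                  warped_edge_coords, w_edge=0.5):
    """Score a candidate warp by (photometric error) + w_edge * (edge error).

    ref_intensity, cur_intensity: grayscale images (H, W), already warped
        into a common frame for the photometric comparison.
    cur_edge_mask: boolean edge map of the current frame.
    warped_edge_coords: (N, 2) row/col positions of reference-frame edge
        pixels after warping into the current frame.
    """
    dist = edge_distance_field(cur_edge_mask)
    # Photometric term: sum of squared per-pixel intensity differences.
    photo = np.sum((ref_intensity.astype(float) - cur_intensity.astype(float)) ** 2)
    # Edge term: warped reference edges should land on current edges,
    # i.e. at pixels where the distance field is zero.
    rows = np.clip(np.round(warped_edge_coords[:, 0]).astype(int), 0, dist.shape[0] - 1)
    cols = np.clip(np.round(warped_edge_coords[:, 1]).astype(int), 0, dist.shape[1] - 1)
    edge = np.sum(dist[rows, cols] ** 2)
    return photo + w_edge * edge
```

A pose optimizer would minimize this cost over candidate camera motions; because the distance field is dense and (piecewise) smooth, gradient-based alignment works without the feature correspondence step that line-segment methods require.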