Dissecting scale from pose estimation in visual odometry
Author(s) - Rong Yuan, Hongyi Fan, Benjamin B. Kimia
Publication year - 2017
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.31.170
Subject(s) - visual odometry, pose, artificial intelligence, computer vision, computer science, scale (ratio), odometry, mobile robot, geography, robot, cartography
Abstract - Traditional visual odometry approaches often rely on estimating the world in the form of a 3D cloud of points from key frames, which is then projected onto other frames to determine their absolute poses. The resulting trajectory is obtained by integrating these incremental estimates. In this process, both in the initial world reconstruction and in the subsequent PnP (Perspective-n-Point) estimation, a rotation matrix and a translation vector are the unknowns solved for via a numerical process. We observe that involving all of these variables in the numerical process is unnecessary, costing both computational time and accuracy. Rather, the relative pose of a pair of frames can be independently estimated from a set of common features, up to scale, with high accuracy. This scale is a free parameter for each pair of frames, and its estimation is the only obstacle to integrating these local estimates. This paper presents an approach for relating this free parameter across neighboring pairs of frames, thereby integrating the entire estimation process and leaving only a single global scale variable. The odometry results are more accurate, and the computational efficiency is significantly improved, owing to the analytic solution of both the relative pose and the relative scale.
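The abstract describes the idea at a high level only. The following is a minimal sketch of the standard pipeline it builds on: relative pose from the essential matrix (translation recovered only up to scale) and scale propagation between neighboring frame pairs via ratios of triangulated inter-point distances. It is an illustration under stated assumptions, not the authors' analytic solver; the OpenCV-based implementation, the function names `relative_pose_up_to_scale` and `propagate_scale`, and the three-frame feature-track interface are all assumptions made here for clarity.

```python
import numpy as np
import cv2


def relative_pose_up_to_scale(pts_a, pts_b, K):
    """Relative pose between two frames from matched pixel coordinates (Nx2).

    Essential-matrix estimation with RANSAC, then a cheirality check to pick
    the valid (R, t) decomposition. t is returned with unit norm, so the
    metric scale of this pair is the free parameter the abstract refers to.
    """
    E, inliers = cv2.findEssentialMat(pts_a, pts_b, K,
                                      method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inliers)
    return R, t  # t has unit norm


def propagate_scale(R01, t01, R12, t12, pts0, pts1, pts2, K):
    """Relate the free scale of pair (1,2) to that of pair (0,1).

    pts0, pts1, pts2 are the SAME N features tracked through frames 0, 1, 2
    (Nx2 pixel coordinates, in matching row order). Each pair is triangulated
    with its own unit-norm translation; the ratio of inter-point distances is
    invariant to the rigid motion between the two reconstructions, so it
    recovers the relative scale directly (here, a standard heuristic, not the
    paper's analytic relation).
    """
    P_ref = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    X01 = cv2.triangulatePoints(P_ref, K @ np.hstack([R01, t01]),
                                pts0.T.astype(np.float64), pts1.T.astype(np.float64))
    X12 = cv2.triangulatePoints(P_ref, K @ np.hstack([R12, t12]),
                                pts1.T.astype(np.float64), pts2.T.astype(np.float64))
    X01 = (X01[:3] / X01[3]).T  # Nx3 points in frame-0 coordinates, ||t01|| = 1 units
    X12 = (X12[:3] / X12[3]).T  # Nx3 points in frame-1 coordinates, ||t12|| = 1 units
    d01 = np.linalg.norm(np.diff(X01, axis=0), axis=1)
    d12 = np.linalg.norm(np.diff(X12, axis=0), axis=1)
    return float(np.median(d01 / np.maximum(d12, 1e-12)))
```

In a monocular chain one would scale each new unit-norm translation by the propagated factor (e.g. `t12_consistent = s * t12` with `s = propagate_scale(...)`), so that all relative poses share one unit and only a single global scale remains undetermined, which is the integration the abstract describes.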