Open Access
Relative scale estimation approach for monocular visual odometry
Author(s) - Aldo Diaz, Paulo Roberto Gardel Kurka
Publication year - 2021
Language(s) - English
Resource type - Conference proceedings
DOI - 10.52591/lxai2021062516
Subject(s) - visual odometry, monocular, artificial intelligence, computer vision, computer science, odometry, scale (ratio), redundancy (engineering), motion estimation, robot, mobile robot, geography, cartography, operating system
Determining the scale of relative motion is key to achieving consistency in monocular motion estimation, where trajectories are recovered only up to a scale factor. In this paper, we introduce a novel method to estimate the relative scale in monocular visual odometry using a calibrated camera. Our algorithm exploits redundancy in point depth information to obtain robust relative scale estimates. The performance of the method is evaluated on the public KITTI dataset for autonomous vehicles using the standard KITTI benchmark metrics. The results demonstrate the effectiveness of robust relative scale estimation, yielding 3.06% less drift than visual odometry without scale correction and a total average translation error of 33.23%.
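A minimal Python sketch of the depth-redundancy idea is given below. It assumes that the same physical points are triangulated (up to scale) in two consecutive frame pairs; each shared point then yields one scale ratio, and a robust statistic over the many ratios gives the relative scale. The function name relative_scale and its inputs are illustrative assumptions, not the authors' implementation.

import numpy as np

def relative_scale(depths_prev, depths_curr):
    # Estimate the relative scale between two up-to-scale monocular
    # reconstructions that share a set of triangulated points.
    #
    # depths_prev, depths_curr: depths (or camera-to-point distances) of
    # the SAME physical points, expressed in the scale of the previous
    # and the current frame pair, respectively.
    depths_prev = np.asarray(depths_prev, dtype=float)
    depths_curr = np.asarray(depths_curr, dtype=float)

    # Each shared point gives one scale ratio; redundancy across many
    # points allows a robust (median) estimate that tolerates outliers.
    valid = (depths_prev > 0) & (depths_curr > 0)
    ratios = depths_prev[valid] / depths_curr[valid]
    return float(np.median(ratios))

# Usage (hypothetical): rescale the current relative translation before
# concatenating it into the trajectory, e.g.
#   t_curr_scaled = relative_scale(d_prev, d_curr) * t_curr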
