Open Access
Monocular Visual Odometry based on joint unsupervised learning of depth and optical flow with geometric constraints
Author(s) - Xiangrui Meng, Bo Sun
Publication year - 2021
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1906/1/012056
Subject(s) - visual odometry, epipolar geometry, artificial intelligence, computer vision, optical flow, monocular, computer science, odometry, ground truth, pixel, scale (ratio), motion (physics), robot, image (mathematics), geography, mobile robot, cartography
Inferring camera ego-motion from consecutive images is essential in visual odometry (VO). In this work, we present a jointly unsupervised learning system for monocular VO, consisting of single-view depth, two-view optical flow, and camera-motion estimation modules. Our work mitigates the scale-drift issue, which can otherwise degrade performance on long sequences. We achieve this by incorporating standard epipolar geometry into the framework. Specifically, we extract correspondences from the predicted optical flow and then recover ego-motion from them. Additionally, we obtain pseudo-ground-truth depth by triangulating the 2D-2D pixel matches, which ties the depth scale closely to the pose scale. Experiments on the KITTI driving dataset show performance competitive with established methods.
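
The abstract describes a standard two-view geometric pipeline: correspondences are taken from the predicted optical flow, ego-motion is recovered via epipolar geometry, and the inlier matches are triangulated into pseudo-ground-truth depth whose scale is tied to the recovered pose. The following is a minimal illustrative sketch of that pipeline, not the authors' implementation; it assumes OpenCV and NumPy, a hypothetical dense flow field `flow` of shape (H, W, 2) from the flow network, and a known 3x3 intrinsic matrix `K`.

    # Sketch (assumed setup, not the paper's code): pose and sparse pseudo-GT
    # depth from flow correspondences via the essential matrix.
    import numpy as np
    import cv2

    def pose_and_pseudo_depth(flow, K):
        h, w = flow.shape[:2]

        # Build 2D-2D correspondences from the predicted flow field.
        ys, xs = np.mgrid[0:h, 0:w]
        pts1 = np.stack([xs, ys], axis=-1).reshape(-1, 2).astype(np.float32)
        pts2 = pts1 + flow.reshape(-1, 2).astype(np.float32)

        # Subsample correspondences to keep RANSAC tractable.
        idx = np.random.choice(len(pts1), size=min(2000, len(pts1)), replace=False)
        pts1, pts2 = pts1[idx], pts2[idx]

        # Standard epipolar geometry: essential matrix with RANSAC, then pose.
        E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
        _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

        # Triangulate inlier matches; the resulting depth is expressed in the
        # (up-to-scale) translation units of the recovered pose, which is what
        # ties the depth scale to the pose scale.
        good = mask.ravel().astype(bool)
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])
        pts4d = cv2.triangulatePoints(P1, P2, pts1[good].T, pts2[good].T)
        pts3d = (pts4d[:3] / pts4d[3]).T
        pseudo_depth = pts3d[:, 2]  # sparse pseudo-ground-truth depth samples
        return R, t, pts1[good], pseudo_depth

In a learning setup along these lines, the sparse triangulated depths would supervise the single-view depth network at the pixels where inlier correspondences were found, while the recovered pose constrains the camera-motion branch.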
