Open Access
Direct Visual Odometry by Fusing Luminosity and Depth Information
Author(s) -
Jinming Yin,
Haibo Zhou,
Haoxin Zhang,
Chenming Li,
Guoqing Sun
Publication year - 2022
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/2203/1/012004
Subject(s) - artificial intelligence , visual odometry , computer vision , computer science , odometry , error function , rgb color model , algorithm , mobile robot , robot
In traditional direct visual odometry, the photometric-invariance assumption is difficult to satisfy under the illumination changes of real environments, which leads to errors and drift. This paper proposes an improved direct visual odometry system that combines luminosity and depth information. The proposed algorithm uses a Kinect 2 to collect RGB images with corresponding depth information, selects points with large gray-gradient changes to construct a luminosity (photometric) error function, and uses the corresponding depth information to construct a depth error function. The two error functions are merged into a single function and converted into a least-squares problem in the camera pose, which is solved with the Levenberg-Marquardt algorithm. Finally, graph-optimization theory and the g2o library are used to optimize the initial pose. Experiments show that the algorithm reduces error to a certain extent and reduces the drift caused by illumination changes.
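The abstract's core step, stacking a photometric error and a depth error into one least-squares problem and solving for the camera pose with Levenberg-Marquardt, can be sketched as follows. This is not the paper's implementation: it assumes a translation-only camera motion model, a synthetic analytic "image" in place of real Kinect 2 data, and illustrative intrinsics; the Levenberg-Marquardt solve is supplied by SciPy's `least_squares(method='lm')` rather than a custom solver, and the g2o graph-optimization stage is omitted.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic smooth "image": intensity as an analytic function of pixel
# coordinates, standing in for a real grayscale frame (assumption).
def intensity(u, v):
    return np.sin(0.1 * u) + np.cos(0.08 * v)

fx = fy = 300.0          # illustrative pinhole focal lengths (assumption)
cx = cy = 120.0          # illustrative principal point (assumption)

rng = np.random.default_rng(0)
# Random 3D points in front of the camera; in the paper these would be
# pixels with large gray-gradient changes, back-projected with Kinect depth.
P = np.column_stack([rng.uniform(-1.0, 1.0, 200),
                     rng.uniform(-1.0, 1.0, 200),
                     rng.uniform(2.0, 4.0, 200)])

def project(Q):
    """Pinhole projection of 3D points to pixel coordinates."""
    return fx * Q[:, 0] / Q[:, 2] + cx, fy * Q[:, 1] / Q[:, 2] + cy

t_true = np.array([0.05, -0.03, 0.02])   # ground-truth translation (assumption)

# Reference intensities and measured depths, generated so that the true
# pose yields zero residuals.
u_t, v_t = project(P - t_true)
I_ref = intensity(u_t, v_t)
d_meas = (P - t_true)[:, 2]

def residuals(t, w_depth=1.0):
    """Stacked photometric + depth residuals for a candidate translation t."""
    Q = P - t                                # translation-only motion model
    u, v = project(Q)
    r_photo = intensity(u, v) - I_ref        # luminosity (photometric) error
    r_depth = w_depth * (Q[:, 2] - d_meas)   # depth error
    return np.concatenate([r_photo, r_depth])

# Levenberg-Marquardt solve of the merged least-squares problem.
sol = least_squares(residuals, x0=np.zeros(3), method='lm')
print("estimated t:", sol.x)   # should be close to t_true
```

The depth residuals directly constrain the motion along the optical axis, which the photometric term alone constrains weakly; this is the intuition behind fusing the two error functions into one objective.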
