
Orientation Correction of Kinect’s 3D Depth Data for Mapping
Author(s) - A A A Mosed, Kamarulzaman Kamarudin, S Puveneswari
Publication year - 2019
Publication title - IOP Conference Series: Materials Science and Engineering
Language(s) - English
Resource type - Journals
eISSN - 1757-899X
pISSN - 1757-8981
DOI - 10.1088/1757-899X/705/1/012030
Subject(s) - ransac , computer vision , artificial intelligence , orientation (vector space) , computer science , process (computing) , sonar , mobile robot , robot , obstacle , ground plane , geography , image (mathematics) , mathematics , telecommunications , geometry , antenna (radio) , operating system , archaeology
Simultaneous localization and mapping (SLAM) is considered one of the primary tasks in the autonomous robot navigation process. The first requirement for a robot to begin navigating is to build an accurate map of its surroundings, including the ground plane, so that it can estimate its own location. Mobile robots that rely on 1D or 2D sensors such as laser range finders, ultrasonic sensors or sonar for mapping cannot provide enough information on obstacle locations, whereas 3D sensors such as the Kinect can. However, if the 3D sensor is not mounted correctly, the orientation of the acquired 3D image is also affected, which produces inaccurate maps. In this project, a Microsoft Kinect was used to scan the environment. The RANSAC algorithm was implemented to detect the ground plane, the orientation of the ground plane was corrected, and the depth data acquired by the Kinect were then converted into a 2D map. It was found that the applied methods successfully mapped the detected obstacles.
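The abstract describes the pipeline only at a high level: RANSAC ground-plane detection, correction of the sensor orientation, and projection of the depth data onto a 2D map. The sketch below illustrates one way such a pipeline could look in Python; it is not the authors' implementation, and the RANSAC routine, the Rodrigues-based tilt correction, the grid parameters and the placeholder point cloud are all assumptions made for illustration.

```python
# Minimal sketch (not the paper's code): RANSAC ground-plane detection on a
# Kinect-style point cloud, orientation correction, and projection of the
# remaining points onto a 2D obstacle grid. Thresholds and grid sizes are
# illustrative assumptions.
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.02, rng=None):
    """Fit a plane n.x + d = 0 with RANSAC; return (normal, d, inlier mask)."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers

def rotation_aligning(normal, target=np.array([0.0, 0.0, 1.0])):
    """Rotation matrix turning `normal` onto `target` (Rodrigues' formula)."""
    if normal @ target < 0:                  # plane normals are sign-ambiguous
        normal = -normal
    v = np.cross(normal, target)
    c = normal @ target
    if np.linalg.norm(v) < 1e-9:             # already aligned
        return np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))

def to_2d_map(points, cell=0.05, size=200):
    """Mark obstacle points in an occupancy-style 2D grid (assumed parameters)."""
    grid = np.zeros((size, size), dtype=np.uint8)
    ij = np.floor(points[:, :2] / cell).astype(int) + size // 2
    valid = (ij >= 0).all(axis=1) & (ij < size).all(axis=1)
    grid[ij[valid, 1], ij[valid, 0]] = 1
    return grid

# Usage with a placeholder cloud; in practice `cloud` would be the (N, 3)
# array of Kinect depth points in metres.
cloud = np.random.rand(5000, 3)
normal, d, ground = ransac_plane(cloud)
R = rotation_aligning(normal)                # correct the sensor tilt
levelled = cloud @ R.T                       # ground plane now spans XY
obstacles = levelled[~ground]                # drop ground inliers
occupancy = to_2d_map(obstacles)             # 2D map of detected obstacles
```

The design choice of aligning the estimated ground normal with a fixed axis mirrors the orientation-correction step described in the abstract: once the ground plane is level, every remaining point can be projected straight down onto the 2D map without the tilt of the mounted sensor distorting obstacle positions.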