
Semantic Segmentation based Dense RGB-D SLAM in Dynamic Environments
Author(s) -
Jianbo Zhang,
Yanjie Liu,
Junguo Chen,
Liulong Ma,
Dong Jin,
Jiao Chen
Publication year - 2019
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1267/1/012095
Subject(s) - computer vision , artificial intelligence , computer science , rgb color model , simultaneous localization and mapping , segmentation , trajectory , robot , tracking , mobile robot , motion (physics)
Visual Simultaneous Localization and Mapping (SLAM) based on RGB-D data has become a fundamental capability for intelligent mobile robots. However, most existing SLAM algorithms assume that the environment is static and are therefore not suitable for dynamic environments, because moving objects interfere with camera pose tracking and cause undesired objects to be integrated into the map. In this paper, we modify an existing RGB-D SLAM framework for dynamic environments to reduce the influence of moving objects and reconstruct the background. The method first performs semantic segmentation and moving-point detection, then removes feature points located on moving objects. Meanwhile, a clean and accurate semantic map is produced by a semantic-information maintenance module. Quantitative experiments on the TUM RGB-D dataset show that both absolute trajectory accuracy and real-time performance in dynamic scenes are improved.
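The core filtering step described in the abstract, removing feature points that fall on segmented dynamic objects before pose tracking, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and the NumPy-based mask lookup are assumptions, and in practice the mask would come from a semantic segmentation network (e.g. masking out the "person" class).

```python
import numpy as np

def filter_dynamic_keypoints(keypoints, dynamic_mask):
    """Keep only keypoints that do not lie on pixels flagged as dynamic.

    keypoints    : (N, 2) array of (x, y) pixel coordinates
    dynamic_mask : (H, W) boolean array, True where the segmentation
                   labeled a movable object class (hypothetical input)
    """
    xs = keypoints[:, 0].astype(int)
    ys = keypoints[:, 1].astype(int)
    on_dynamic = dynamic_mask[ys, xs]   # mask is indexed row-first: (y, x)
    return keypoints[~on_dynamic]

# Toy example: a 4x4 image whose left two columns are a dynamic object.
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True
pts = np.array([[0, 0], [3, 3], [1, 2]])
kept = filter_dynamic_keypoints(pts, mask)  # only (3, 3) survives
```

Only the surviving static-background keypoints would then be passed to the SLAM front end for camera pose estimation and map integration.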