
Multi-modal RGB-D Image Segmentation from Appearance and Geometric Depth Maps
Author(s) -
Isail Salazar Acosta,
Said Pertuz,
Fabio Martínez
Publication year - 2020
Publication title -
TecnoLógicas
Language(s) - English
Resource type - Journals
eISSN - 2256-5337
pISSN - 0123-7799
DOI - 10.22430/22565337.1538
Subject(s) - minimum spanning tree based segmentation , segmentation , artificial intelligence , scale space segmentation , rgb color model , computer science , image segmentation , computer vision , segmentation based object categorization , tree (set theory) , pattern recognition (psychology) , range segmentation , region growing , mathematics , mathematical analysis
Classical image segmentation algorithms exploit the detection of similarities and discontinuities of different visual cues to define and differentiate multiple regions of interest in images. However, due to the high variability and uncertainty of image data, producing accurate results is difficult. In particular, segmentation based on color alone is often insufficient for a large fraction of real-life scenes. This work presents a novel multi-modal segmentation strategy that integrates depth and appearance cues from RGB-D images by building a hierarchical region-based representation, i.e., a multi-modal segmentation tree (MM-tree). For this purpose, RGB-D image pairs are represented in a complementary fashion by different segmentation maps. Based on color images, a color segmentation tree (C-tree) is created to obtain segmented and over-segmented maps. From depth images, two independent segmentation maps are derived by computing planar and 3D edge primitives. Then, an iterative region merging process locally groups the previously obtained maps into the MM-tree. Finally, the top emerging MM-tree level coherently integrates the available information from the depth and appearance maps. Experiments were conducted on the NYU-Depth V2 RGB-D dataset, where our strategy achieved competitive results compared to state-of-the-art segmentation methods. Specifically, on the test images, our method reached average scores of 0.56 in Segmentation Covering and 2.13 in Variation of Information.
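To make the idea of depth-guided region merging more concrete, the following is a minimal Python sketch, not the authors' MM-tree implementation. It assumes the color over-segmentation and the depth-derived planar segmentation are given as integer label maps of the same shape, and it merges two adjacent color regions when they lie on the same dominant depth plane. The function names, the merging rule, and the toy data are illustrative assumptions only.

import numpy as np

def adjacent_pairs(labels):
    # Collect pairs of region labels that share a horizontal or vertical edge.
    pairs = set()
    a, b = labels[:, :-1].ravel(), labels[:, 1:].ravel()   # horizontal neighbours
    c, d = labels[:-1, :].ravel(), labels[1:, :].ravel()   # vertical neighbours
    for x, y in zip(np.concatenate([a, c]), np.concatenate([b, d])):
        if x != y:
            pairs.add((min(x, y), max(x, y)))
    return pairs

def dominant_plane(labels, planes, region):
    # Most frequent depth-plane label inside a given color region.
    vals, counts = np.unique(planes[labels == region], return_counts=True)
    return vals[np.argmax(counts)]

def merge_by_depth_planes(color_labels, plane_labels):
    # Iteratively merge adjacent color regions whose dominant depth plane agrees.
    labels = color_labels.copy()
    changed = True
    while changed:
        changed = False
        for r1, r2 in sorted(adjacent_pairs(labels)):
            if dominant_plane(labels, plane_labels, r1) == dominant_plane(labels, plane_labels, r2):
                labels[labels == r2] = r1   # absorb r2 into r1
                changed = True
                break                       # adjacency changed; recompute pairs
    return labels

# Toy example: four color regions lying on two depth planes.
color = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 3, 3],
                  [2, 2, 3, 3]])
planes = np.array([[0, 0, 0, 0],
                   [0, 0, 0, 0],
                   [1, 1, 1, 1],
                   [1, 1, 1, 1]])
merged = merge_by_depth_planes(color, planes)
print(merged)   # regions 0/1 and 2/3 collapse onto their shared depth planes

In the paper's strategy, such merges are not applied only once: they are organized hierarchically, so that successive levels of grouping form the MM-tree whose top level integrates the appearance and depth maps.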