Active Scene Understanding via Online Semantic Reconstruction
Author(s) - Zheng Lintao, Zhu Chenyang, Zhang Jiazhao, Zhao Hang, Huang Hui, Niessner Matthias, Xu Kai
Publication year - 2019
Publication title - Computer Graphics Forum
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.578
H-Index - 120
eISSN - 1467-8659
pISSN - 0167-7055
DOI - 10.1111/cgf.13820
Subject(s) - computer science, artificial intelligence, computer vision, segmentation, robot, voxel, trajectory, path (computing), scene parsing, pattern recognition, algorithm
Abstract We propose a novel approach to robot‐operated active understanding of unknown indoor scenes, based on online RGB‐D reconstruction with semantic segmentation. In our method, exploratory robot scanning is both driven by and targeted at the recognition and segmentation of semantic objects in the scene. Our algorithm is built on top of a volumetric depth fusion framework and performs real‐time voxel‐based semantic labeling over the online reconstructed volume. The robot is guided by an online estimated discrete viewing score field (VSF) parameterized over the 3D space of 2D location and azimuth rotation. The VSF stores, for each grid cell, the score of the corresponding view, which measures how much that view reduces the uncertainty (entropy) of both the geometric reconstruction and the semantic labeling. Based on the VSF, we select the next best view (NBV) as the target for each time step. We then jointly optimize the traverse path and camera trajectory between two adjacent NBVs by maximizing the integral viewing score (information gain) along the path and trajectory. Through extensive evaluation, we show that our method achieves efficient and accurate online scene parsing during exploratory scanning.
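The entropy-driven view scoring described in the abstract can be illustrated with a rough sketch. This is not the paper's implementation: the discretization, the visibility lookup, and all names here are hypothetical, and the score is simplified to the summed entropy of the voxels each candidate view would observe.

```python
import numpy as np

def entropy(p):
    """Elementwise binary entropy of occupancy/label probabilities."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def viewing_score_field(prob_grid, visible_voxels):
    """Score each discrete view (x, y, azimuth) by the total uncertainty
    of the voxels it would observe -- a stand-in for the expected
    information gain of scanning from that view.

    prob_grid      : dict voxel id -> current probability estimate
    visible_voxels : dict view tuple -> list of voxel ids seen from it
    """
    vsf = {}
    for view, voxels in visible_voxels.items():
        vsf[view] = float(np.sum(entropy(np.array([prob_grid[v] for v in voxels]))))
    return vsf

def next_best_view(vsf):
    """Pick the highest-scoring view as the NBV target for this step."""
    return max(vsf, key=vsf.get)

# Toy example: two candidate views over a three-voxel map.
probs = {"a": 0.5, "b": 0.9, "c": 0.5}   # 0.5 = maximally uncertain
vis = {(0, 0, 0): ["a", "b"], (1, 0, 90): ["a", "c"]}
vsf = viewing_score_field(probs, vis)
print(next_best_view(vsf))   # -> (1, 0, 90), the view seeing two uncertain voxels
```

In the paper the score additionally combines geometric and semantic entropy terms and the robot's path between consecutive NBVs is optimized to maximize the integral of this score; the sketch above covers only the per-view scoring and argmax selection.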
