
Robust approach to reconstructing transparent objects using a time-of-flight depth camera
Author(s) - Kyungmin Kim, Hyunjung Shim
Publication year - 2017
Publication title - Optics Express
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.394
H-Index - 271
ISSN - 1094-4087
DOI - 10.1364/oe.25.002666
Subject(s) - computer science , computer vision , artificial intelligence , prior probability , object (grammar) , noise (video) , optics , image (mathematics) , physics , bayesian probability
This study presents a robust approach to reconstructing a three-dimensional (3-D) translucent object using a single time-of-flight depth camera with simple user marks. Because the appearance of a translucent object depends on how light interacts with the surrounding environment, depth camera measurements of such objects are considerably biased or invalid. Although several existing methods attempt to model the depth error of translucent objects, their models remain partial because of restrictive object assumptions and sensitivity to noise. In this study, we introduce a ground plane and a piecewise-linear surface model as priors and construct a robust 3-D reconstruction framework for translucent objects. These two depth priors are combined with a depth error model built on the time-of-flight principle. Extensive evaluation on various real data shows that the proposed method substantially improves the accuracy and reliability of 3-D reconstruction for translucent objects.
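The depth bias described above follows directly from the continuous-wave time-of-flight principle: the camera recovers a single phase shift from the returned light, so when a translucent surface both reflects light and transmits light that returns from the background, the two returns mix into one phasor and the recovered depth falls between the two true distances. The following minimal Python sketch illustrates this effect; it is not code from the paper, and the 20 MHz modulation frequency and the return amplitudes are illustrative assumptions.

```python
import numpy as np

C = 3e8       # speed of light (m/s)
F_MOD = 20e6  # assumed modulation frequency (Hz), illustrative only

def tof_depth_from_returns(depths, amplitudes, f_mod=F_MOD):
    """Depth a continuous-wave ToF camera would report when the received
    light is a mixture of returns from several distances.

    Each return at distance d contributes a phasor a * exp(-i * phi) with
    round-trip phase shift phi = 4*pi*f_mod*d / c.  The camera recovers a
    single phase from the summed phasor, so a mixture of returns yields
    one biased depth estimate."""
    phases = 4.0 * np.pi * f_mod * np.asarray(depths, dtype=float) / C
    mixed = np.sum(np.asarray(amplitudes, dtype=float) * np.exp(-1j * phases))
    measured_phase = -np.angle(mixed) % (2.0 * np.pi)
    return C * measured_phase / (4.0 * np.pi * f_mod)

# A translucent surface at 1.0 m partially transmits light that also
# reflects off the background at 1.5 m (amplitudes are illustrative).
surface_d, background_d = 1.0, 1.5
biased = tof_depth_from_returns([surface_d, background_d], [0.4, 0.6])
print(f"true surface depth : {surface_d:.3f} m")
print(f"measured ToF depth : {biased:.3f} m (biased toward background)")
```

With these assumed values the sketch reports a measured depth of roughly 1.30 m for a surface at 1.0 m in front of a background at 1.5 m, i.e., the kind of bias that the depth error model mentioned in the abstract accounts for.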