Open Access
Real‐time depth enhancement by fusion for RGB‐D cameras
Author(s) - Garcia Frederic, Aouada Djamila, Solignac Thomas, Mirbach Bruno, Ottersten Björn
Publication year - 2013
Publication title - IET Computer Vision
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.38
H-Index - 37
eISSN - 1751-9640
pISSN - 1751-9632
DOI - 10.1049/iet-cvi.2012.0289
Subject(s) - computer vision , artificial intelligence , rgb color model , computer science , filter (signal processing) , grayscale , image fusion , fusion , depth map , image (mathematics)
This study presents a real‐time refinement procedure for depth data acquired by RGB‐D cameras. Data from RGB‐D cameras suffer from undesired artefacts such as edge inaccuracies or holes owing to occlusions or low object remission. In this work, the authors take recent depth enhancement filters intended for time‐of‐flight cameras and extend them to structured‐light depth cameras such as the Kinect. Thus, given a depth map and its corresponding two‐dimensional image, the depth measurements are corrected by treating their unreliable regions separately. To that end, the authors propose specific confidence maps to tackle areas in the scene that require special treatment. Furthermore, in the case of filtering artefacts, the authors introduce RGB guidance images as an alternative to real‐time state‐of‐the‐art fusion filters that use greyscale guidance images. The experimental results show that the proposed fusion filter provides dense depth maps in which erroneous or invalid depth measurements are corrected and depth edges are adjusted. In addition, the authors propose a mathematical formulation that enables the filter to be used in real‐time applications.
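The guided fusion described in the abstract belongs to the family of joint (cross) bilateral filtering, where an intensity or colour image steers the smoothing of the depth map so that depth edges align with image edges and invalid measurements are filled from reliable neighbours. Below is a minimal illustrative sketch of that general idea, not the authors' exact filter or formulation; the function name, the parameters, and the convention that a zero depth value marks a hole are all assumptions for the example:

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Refine a depth map using a guidance image (illustrative sketch).

    depth : 2-D array; zeros are treated as invalid/hole pixels.
    guide : 2-D greyscale or 3-D colour image aligned with `depth`.
    """
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=float)
    # Precompute the spatial Gaussian kernel once.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    valid = depth > 0  # assumed convention: zero depth = invalid measurement
    for i in range(h):
        for j in range(w):
            i0, i1 = max(i - radius, 0), min(i + radius + 1, h)
            j0, j1 = max(j - radius, 0), min(j + radius + 1, w)
            d = depth[i0:i1, j0:j1]
            g = guide[i0:i1, j0:j1]
            m = valid[i0:i1, j0:j1]
            # Range weight from the guidance image, not from the depth itself.
            diff = g - guide[i, j]
            if diff.ndim == 3:  # colour guide: distance over channels
                range_w = np.exp(-np.sum(diff**2, axis=-1) / (2.0 * sigma_r**2))
            else:
                range_w = np.exp(-diff**2 / (2.0 * sigma_r**2))
            sw = spatial[i0 - i + radius:i1 - i + radius,
                         j0 - j + radius:j1 - j + radius]
            wgt = sw * range_w * m  # invalid depth pixels carry zero weight
            s = wgt.sum()
            out[i, j] = (wgt * d).sum() / s if s > 1e-8 else depth[i, j]
    return out
```

Because only valid depth samples receive nonzero weight, hole pixels are filled from guidance-similar neighbours, which is the qualitative behaviour the paper's confidence-map-driven fusion targets; the published method additionally adapts the treatment per region and is formulated for real-time execution.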
