Open Access
Learning scene and blur model for active chromatic depth from defocus
Author(s) -
Benjamin Buat,
Pauline Trouvé-Peloux,
Frédéric Champagnat,
Guy Le Besnerais
Publication year - 2021
Publication title -
Applied Optics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.668
H-Index - 197
eISSN - 2155-3165
pISSN - 1559-128X
DOI - 10.1364/ao.439139
Subject(s) - artificial intelligence , computer vision , computer science , projector , monocular , chromatic aberration , chromatic scale , benchmark (surveying) , projection (relational algebra) , optics , algorithm , physics , geology , geodesy
In this paper, we propose what we believe is a new monocular depth estimation algorithm based on local estimation of defocus blur, an approach referred to as depth from defocus (DFD). Using a limited set of calibration images, we directly learn the image covariance, which encodes both scene and blur (i.e., depth) information. Depth is then estimated from a single image patch with a maximum likelihood criterion built on the learned covariance. This approach is applied here within a new active DFD setup that combines a dense textured projection with a chromatic lens for image acquisition. The projector adds texture to low-textured objects, which are usually a limitation of DFD, and the chromatic aberration extends the estimated depth range with respect to conventional DFD. We provide quantitative evaluations of the depth estimation performance of our method on simulated and real data of fronto-parallel untextured scenes. The proposed method is then evaluated qualitatively in experiments on a 3D-printed benchmark.
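The abstract describes the core idea: learn one covariance per candidate depth from calibration patches, then assign a new patch the depth whose covariance maximizes a zero-mean Gaussian log-likelihood. The following sketch illustrates that scheme under stated assumptions; the patch size, depth grid, and synthetic "calibration" data are hypothetical stand-ins, not the paper's actual setup or blur model.

```python
import numpy as np

rng = np.random.default_rng(0)
patch_dim = 16            # flattened patch size (hypothetical, e.g. 4x4 pixels)
depths = [1.0, 1.5, 2.0]  # candidate depth hypotheses (illustrative grid)

# "Calibration" step: estimate one empirical covariance per candidate depth
# from patches observed at that depth. Here depth-dependent blur is faked
# with a different random mixing matrix per depth; in the paper this comes
# from real calibration images.
covariances = {}
for i, d in enumerate(depths):
    A = rng.normal(size=(patch_dim, patch_dim)) / (i + 1)
    samples = rng.normal(size=(500, patch_dim)) @ A.T
    # Small diagonal loading keeps the covariance well conditioned.
    covariances[d] = np.cov(samples, rowvar=False) + 1e-6 * np.eye(patch_dim)

def ml_depth(patch, covariances):
    """Return the depth whose learned covariance maximizes the
    zero-mean Gaussian log-likelihood of the (flattened) patch."""
    best_d, best_ll = None, -np.inf
    for d, S in covariances.items():
        _, logdet = np.linalg.slogdet(S)
        ll = -0.5 * (patch @ np.linalg.solve(S, patch) + logdet)
        if ll > best_ll:
            best_d, best_ll = d, ll
    return best_d

# Usage: a patch drawn from the depth-1.5 model should map back to ~1.5.
test_patch = rng.multivariate_normal(np.zeros(patch_dim), covariances[1.5])
print(ml_depth(test_patch, covariances))
```

In the paper the per-depth covariances are learned from a limited set of real calibration images rather than simulated, and the likelihood is evaluated per patch over the full depth range; the argmax structure above is the same.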

