Open Access
Robust depth-from-defocus for autofocusing in the presence of image shifts
Author(s) - Younsik Kang, Xue Tu, S. Dutta, Murali Subbarao
Publication year - 2008
Publication title - Proceedings of SPIE, the International Society for Optical Engineering
Language(s) - English
Resource type - Conference proceedings
SCImago Journal Rank - 0.192
H-Index - 176
eISSN - 1996-756X
pISSN - 0277-786X
DOI - 10.1117/12.792769
Subject(s) - computer vision, artificial intelligence, computer science, focus (optics), pixel, image (mathematics), ranging, relation (database), optics, telecommunications, physics, database
A new passive ranging technique named Robust Depth-from-Defocus (RDFD) is presented for autofocusing in digital cameras. It is adapted to work in the presence of image shifts and scale changes caused by camera, hand, or object motion. RDFD is comparable to spatial-domain Depth-from-Defocus (DFD) techniques in computational efficiency, but it does not require pixel-level correspondence between the two images captured at different defocus levels. Instead, it requires only approximate correspondence between image regions in different frames, as in Depth-from-Focus (DFF) techniques. The theory and computational algorithms are presented for two variations of RDFD. Experimental results show that RDFD is robust against image shifts and useful in practical applications. RDFD also provides insight into the close relation between DFF and DFD techniques.
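The abstract does not give the RDFD algorithm itself, but the key idea it states — comparing focus levels over approximately corresponding *regions* rather than exact pixels — can be illustrated with a DFF-style focus measure. The sketch below is an assumption for illustration only (the function names `laplacian_energy` and `sharper_region`, the Laplacian-energy focus measure, and the checkerboard test data are not from the paper): it compares the same approximate window in two frames taken at different focus settings and, because the measure is aggregated over the whole region, a small image shift between the frames does not change the outcome.

```python
import numpy as np

def laplacian_energy(region):
    """Sum of squared discrete Laplacian responses over a region.
    Sharper (less defocused) image content yields higher energy."""
    lap = (-4.0 * region[1:-1, 1:-1]
           + region[:-2, 1:-1] + region[2:, 1:-1]
           + region[1:-1, :-2] + region[1:-1, 2:])
    return float(np.sum(lap ** 2))

def sharper_region(img_a, img_b, top, left, size):
    """Decide which frame is better focused in a given window.
    Only approximate region-level alignment is assumed, so a
    pixel-scale shift between the frames is tolerated."""
    ra = img_a[top:top + size, left:left + size]
    rb = img_b[top:top + size, left:left + size]
    return 'A' if laplacian_energy(ra) >= laplacian_energy(rb) else 'B'

def box_blur(img):
    """Crude 3x3 box blur standing in for optical defocus."""
    out = img.astype(float).copy()
    out[1:-1, 1:-1] = (img[:-2, :-2] + img[:-2, 1:-1] + img[:-2, 2:]
                       + img[1:-1, :-2] + img[1:-1, 1:-1] + img[1:-1, 2:]
                       + img[2:, :-2] + img[2:, 1:-1] + img[2:, 2:]) / 9.0
    return out

# Synthetic test scene: a sharp checkerboard and a defocused copy.
sharp = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)
blurred = box_blur(sharp)
# A one-pixel shift of the blurred frame mimics camera/hand motion.
blurred_shifted = np.roll(blurred, 1, axis=1)
```

Because the decision depends on energy summed over the window, `sharper_region(sharp, blurred_shifted, 4, 4, 16)` still selects the sharp frame even though the blurred frame has been displaced — a toy version of the shift robustness the abstract claims for RDFD.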
