Open Access
Synthetic aperture integral imaging using edge depth maps of unstructured monocular video
Author(s) -
Jian Wei,
Shigang Wang,
Yan Zhao,
Mei-Lan Piao
Publication year - 2018
Publication title -
Optics Express
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.394
H-Index - 271
ISSN - 1094-4087
DOI - 10.1364/oe.26.034894
Subject(s) - computer science , computer vision , artificial intelligence , rendering (computer graphics) , integral imaging , monocular , depth map , computer graphics (images) , image (mathematics)
Synthetic aperture integral imaging (SAII) using monocular video captured along an arbitrary camera trajectory enables casual acquisition of three-dimensional information of scenes at any scale. This paper presents a novel algorithm for computational reconstruction and imaging of scenes in such an SAII system. Because dense geometry recovery and virtual view rendering are both required to handle unstructured input, we reduce computational cost and artifacts in both stages by assuming flat surfaces in homogeneous areas and fully exploiting the per-frame edges that are accurately reconstructed beforehand. A dense depth map of each real view is first estimated by successively generating two complete depth maps, termed the smoothest-surface and densest-surface maps, both respecting local cues, and then merging them via Markov random field (MRF) global optimization. In this way, high-quality perspective images for any virtual camera array can be synthesized simply by back-projecting the obtained closest surfaces into the new views. The pixel-level operations throughout most of the pipeline allow a high degree of parallelism. Simulation results show that the proposed approach is robust to view-dependent occlusions and to a lack of texture in the original frames, and can produce recognizable slice images at different depths.
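The view-synthesis step described above rests on standard pinhole back-projection: pixels of a real view are lifted to 3-D points using the estimated depth map, then reprojected into a virtual camera. The following is a minimal sketch of that geometric core, not the authors' implementation; the shared intrinsic matrix `K` and the relative pose `(R, t)` of the virtual camera are illustrative assumptions.

```python
import numpy as np

def back_project(depth, K, R, t, h, w):
    """Warp a real view into a virtual camera via depth back-projection.

    depth : (h, w) per-pixel depth of the real view
    K     : (3, 3) camera intrinsics (assumed shared by all views)
    R, t  : rotation and translation of the virtual camera w.r.t. the real one
    Returns the (2, h, w) sub-pixel coordinates of each real-view pixel in the
    virtual view, plus its (h, w) depth there (used for z-buffering).
    """
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous pixels
    # Lift pixels to 3-D points in the real camera frame: X = depth * K^{-1} p
    pts = np.linalg.inv(K) @ pix * depth.ravel()
    # Transform into the virtual camera frame and project back through K
    pts_v = R @ pts + t.reshape(3, 1)
    proj = K @ pts_v
    uv = proj[:2] / proj[2]
    return uv.reshape(2, h, w), pts_v[2].reshape(h, w)
```

With the identity pose the warp is a no-op, which gives a quick sanity check; in the full pipeline, points from the closest surfaces would be splatted into each virtual view with a depth test to resolve occlusions.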
