Hallucinating Stereoscopy from a Single Image
Author(s) -
Zeng Qiong,
Chen Wenzheng,
Wang Huan,
Tu Changhe,
Cohen-Or Daniel,
Lischinski Dani,
Chen Baoquan
Publication year - 2015
Publication title - Computer Graphics Forum
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.578
H-Index - 120
eISSN - 1467-8659
pISSN - 0167-7055
DOI - 10.1111/cgf.12536
Subject(s) - hallucinating, stereoscopy, artificial intelligence, computer vision, computer science, object (grammar), depth map, prior probability, depth perception, image (mathematics), perception, bayesian probability, neuroscience, biology
We introduce a novel method for enabling stereoscopic viewing of a scene from a single pre‐segmented image. Rather than attempting full 3D reconstruction or accurate depth map recovery, we hallucinate a rough approximation of the scene's 3D model using a number of simple depth and occlusion cues and shape priors. We begin by depth‐sorting the segments, each of which is assumed to represent a separate object in the scene, resulting in a collection of depth layers. The shapes and textures of the partially occluded segments are then completed using symmetry and convexity priors. Next, each completed segment is converted to a union of generalized cylinders, yielding a rough 3D model for each object. Finally, the object depths are refined using an iterative ground fitting process. The hallucinated 3D model of the scene may then be used to generate a stereoscopic image pair, or to produce images from novel viewpoints within a small neighborhood of the original view. Despite the simplicity of our approach, we show that it compares favorably with state‐of‐the‐art depth ordering methods. A user study shows that our method produces more convincing stereoscopic images than existing semi‐interactive and automatic single‐image depth recovery methods.
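The last step of the pipeline, rendering a stereoscopic pair from the hallucinated geometry, can be illustrated with a small depth-image-based rendering sketch in Python/NumPy. The code below assumes a per-pixel depth map has already been derived from the depth layers and rough 3D object models; the function name render_stereo_pair, the depth convention, the disparity scaling, and the hole-filling strategy are illustrative assumptions, not the authors' implementation.

# A minimal sketch of the final rendering step only: given the input RGB image
# and a per-pixel depth map assumed to come from the hallucinated layer model
# (0 = far, 1 = near), forward-warp pixels horizontally into a left/right pair.
# Function name, depth convention, and hole filling are illustrative choices,
# not the paper's implementation.
import numpy as np

def render_stereo_pair(image, depth, max_disparity=12):
    """image: (H, W, 3) uint8; depth: (H, W) float in [0, 1]. Returns (left, right)."""
    h, w, _ = image.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    l_filled = np.zeros((h, w), dtype=bool)
    r_filled = np.zeros((h, w), dtype=bool)

    for y in range(h):
        # Paint far-to-near so nearer pixels overwrite the ones they occlude.
        for x in np.argsort(depth[y]):
            d = int(depth[y, x] * max_disparity)
            lx = min(max(x + d // 2, 0), w - 1)  # near pixels shift right in the left view
            rx = min(max(x - d // 2, 0), w - 1)  # and shift left in the right view
            left[y, lx] = image[y, x]
            l_filled[y, lx] = True
            right[y, rx] = image[y, x]
            r_filled[y, rx] = True

    # Crude disocclusion handling: propagate the nearest filled pixel from the left.
    # (Holes touching the left image border are left untouched in this sketch.)
    for view, filled in ((left, l_filled), (right, r_filled)):
        for y in range(h):
            for x in range(1, w):
                if not filled[y, x] and filled[y, x - 1]:
                    view[y, x] = view[y, x - 1]
                    filled[y, x] = True
    return left, right

if __name__ == "__main__":
    # Toy scene: one bright square "object" hallucinated nearer than the background.
    img = np.full((64, 64, 3), 40, dtype=np.uint8)
    img[20:44, 20:44] = 200
    depth = np.zeros((64, 64), dtype=np.float32)
    depth[20:44, 20:44] = 1.0
    left_view, right_view = render_stereo_pair(img, depth)

The paper's actual renderer works from the fitted generalized-cylinder models and refined object depths rather than a raw per-pixel shift, but the sketch shows why even a rough, correctly ordered depth assignment is enough to produce a plausible stereo pair for a small range of viewpoints.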
