Viewpoint image generation for head tracking 3D display using multi‐camera and approximate depth information
Author(s) - Date Munekazu,
Takada Hideaki,
Kojima Akira
Publication year - 2015
Publication title -
Journal of the Society for Information Display
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.578
H-Index - 52
eISSN - 1938-3657
pISSN - 1071-0922
DOI - 10.1002/jsid.315
Subject(s) - computer vision , artificial intelligence , computer science , parallax , stereoscopy , stereo display , observer (physics) , tracking (education) , perspective (graphical) , computer graphics (images) , depth perception , depth map , stereo camera , image (mathematics) , perception , psychology , pedagogy , physics , quantum mechanics , neuroscience , biology
A simple, high-image-quality method is proposed for synthesizing viewpoint images from multi‐camera images for a head-tracked stereoscopic 3D display. In this method, slices of the images are made for depth layers using approximate depth information, the slices are linearly blended at each layer according to the distance between the viewpoint and the cameras, and the layers are overlaid from the perspective of the viewpoint. Because the linear blending automatically compensates for depth error through the visual effects of depth‐fused 3D (DFD), the resulting image appears natural to the observer. Smooth motion parallax over a wide depth range, induced by viewpoint movement in the left‐and‐right and front‐and‐back directions, is achieved using multi‐camera images and approximate depth information. Because the calculation algorithm is very simple, it is suitable for real-time 3D display applications.
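The layer-blend step of the abstract can be sketched in NumPy. This is a hypothetical illustration, not the authors' implementation: the function name `synthesize_view`, the shared depth map, and the two-nearest-camera blend weighting are all simplifying assumptions, and the per-layer perspective shift used in the full method is omitted here for brevity.

```python
import numpy as np

def synthesize_view(cam_images, cam_x, depth, layer_edges, view_x):
    """Illustrative sketch: slice images into depth layers and linearly
    blend the two cameras nearest the viewpoint within each layer.

    cam_images : (C, H, W, 3) float array of camera images
    cam_x      : (C,) sorted horizontal camera positions
    depth      : (H, W) approximate depth map (shared across cameras here,
                 a simplification for illustration)
    layer_edges: (L+1,) ascending depth boundaries defining the layers
    view_x     : horizontal viewpoint position
    """
    C, H, W, _ = cam_images.shape
    out = np.zeros((H, W, 3))
    # Pick the two cameras bracketing the viewpoint and a linear weight t.
    right = int(np.clip(np.searchsorted(cam_x, view_x), 1, C - 1))
    left = right - 1
    t = float(np.clip((view_x - cam_x[left]) /
                      (cam_x[right] - cam_x[left]), 0.0, 1.0))
    # Composite layers from far to near; each layer's pixels come from the
    # linear blend of the two nearest camera images.  (The full method
    # would also shift each layer according to the viewpoint perspective.)
    for k in range(len(layer_edges) - 1, 0, -1):
        mask = (depth >= layer_edges[k - 1]) & (depth < layer_edges[k])
        blended = (1.0 - t) * cam_images[left] + t * cam_images[right]
        out[mask] = blended[mask]
    return out
```

Because the per-pixel work is a mask and a weighted sum, the cost is linear in the number of pixels and layers, which is consistent with the abstract's claim that the algorithm is simple enough for real-time use.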
