Open Access
View Synthesis Using Stereo Vision
Author(s) - Daniel Scharstein
Publication year - 1999
Publication title - Lecture Notes in Computer Science
Language(s) - English
Resource type - Book series
SCImago Journal Rank - 0.249
H-Index - 400
eISSN - 1611-3349
pISSN - 0302-9743
DOI - 10.1007/3-540-48725-5
Subject(s) - computer science, computer vision, stereopsis, artificial intelligence, computer stereo vision, stereo cameras, computer graphics (images)
This thesis investigates the use of stereo vision for the application of view synthesis. View synthesis, the problem of creating images of a scene as it would appear from novel viewpoints, has traditionally been approached using methods from computer graphics. These methods, however, suffer from low rendering speed, limited achievable realism, and, most severely, their dependence on a global scene model, which typically needs to be constructed manually.

In this thesis, we present a new approach to view synthesis that avoids the above problems by synthesizing new views from existing images of a scene. Using an image-based representation of scene geometry computed by stereo vision methods, a global model can be avoided, and realistic new views can be synthesized quickly using image warping.

The new application of stereo for view synthesis makes it necessary to re-evaluate the requirements on stereo algorithms. We compare view synthesis to several traditional applications of stereo and conclude that stereo vision is better suited for view synthesis than for applications requiring explicit 3D reconstruction. We also discuss ways of dealing with partially occluded regions of unknown depth and with completely occluded regions of unknown texture, and present experiments demonstrating that it is possible to efficiently synthesize realistic new views even from inaccurate and incomplete depth information.

This thesis also contributes several novel stereo algorithms that are motivated by the specific requirements imposed by view synthesis. We introduce a new evidence measure based on intensity gradients for establishing correspondences between images. This measure combines the notions of similarity and confidence, and allows stable matching and easy assignment of canonical depth interpretations in image regions of insufficient information. We also present new diffusion-based stereo algorithms that are motivated by the need to correctly recover object boundaries. In particular, we develop a novel Bayesian estimation technique that significantly outperforms area-based algorithms using fixed-size windows. We provide experimental results for all algorithms on both synthetic and real images.
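The core rendering idea described above, synthesizing a new view by warping an existing image according to its disparity map, can be illustrated with a minimal sketch. This is not the thesis's implementation; the function name, the shift direction, and the simple z-buffer resolution of conflicts are assumptions made for illustration. Each pixel is shifted horizontally by a fraction `s` of its disparity (a view partway along the baseline); where two pixels land on the same target, the one with larger disparity (i.e., the closer one) wins, and unfilled targets remain as holes of unknown texture.

```python
import numpy as np

def forward_warp(image, disparity, s):
    """Forward-warp `image` toward a viewpoint at fraction s of the baseline.

    Illustrative sketch only: shift direction is convention-dependent,
    and holes (unmapped pixels) are simply left at zero.
    """
    h, w = disparity.shape
    out = np.zeros_like(image)
    zbuf = np.full((h, w), -np.inf)  # z-buffer storing largest disparity seen
    for y in range(h):
        for x in range(w):
            xs = int(round(x - s * disparity[y, x]))  # shifted column
            if 0 <= xs < w and disparity[y, x] > zbuf[y, xs]:
                zbuf[y, xs] = disparity[y, x]  # closer pixel overwrites farther
                out[y, xs] = image[y, x]
    return out
```

Note how the z-buffer resolves visibility without any global 3D model: ordering by disparity alone suffices for views along the baseline, which is one reason the image-based representation makes rendering fast.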
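The contrast drawn above between diffusion-based aggregation and fixed-size windows can also be sketched. The following is a toy illustration, not the thesis's algorithm: per-pixel squared-difference costs are smoothed by a few iterations of uniform 4-neighbor diffusion (a hypothetical choice of update rule and weight `lam`), and the disparity with minimum aggregated cost is then selected per pixel. Unlike a fixed window, the support region here grows gradually with the number of iterations.

```python
import numpy as np

def diffusion_stereo(left, right, max_disp, iters=10, lam=0.25):
    """Toy diffusion-based stereo matcher (illustrative sketch only)."""
    h, w = left.shape
    costs = np.empty((max_disp + 1, h, w))
    for d in range(max_disp + 1):
        shifted = np.roll(right, d, axis=1)   # right image shifted by disparity d
        costs[d] = (left - shifted) ** 2      # per-pixel squared-difference cost
    for _ in range(iters):
        # one uniform 4-neighbor diffusion step (np.roll wraps at borders)
        nb = (np.roll(costs, 1, axis=1) + np.roll(costs, -1, axis=1)
              + np.roll(costs, 1, axis=2) + np.roll(costs, -1, axis=2))
        costs = (1 - 4 * lam) * costs + lam * nb
    return np.argmin(costs, axis=0)           # winner-take-all disparity map
```

The design point this sketch makes is that support is implicit: a fixed window commits to one shape everywhere, which blurs object boundaries, whereas diffusion lets low-cost evidence spread locally and stop sooner near boundaries when the update rule is made boundary-sensitive (as in the Bayesian formulation the thesis develops).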
