Hole Filling for View Synthesis Using Depth Guided Global Optimization
Author(s) - Guibo Luo, Yuesheng Zhu
Publication year - 2018
Publication title - IEEE Access
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.587
H-Index - 127
ISSN - 2169-3536
DOI - 10.1109/ACCESS.2018.2847312
Subject(s) - aerospace, bioengineering, communication, networking and broadcast technologies, components, circuits, devices and systems, computing and processing, engineered materials, dielectrics and plasmas, engineering profession, fields, waves and electromagnetics, general topics for engineers, geoscience, nuclear engineering, photonics and electrooptics, power, energy and industry applications, robotics and control systems, signal processing and analysis, transportation
View synthesis is an effective way to generate multi-view content from a limited number of views, and can be utilized for 2-D-to-3-D video conversion, multi-view video compression, and virtual reality. Among view synthesis techniques, depth-image-based rendering (DIBR) is an important method for generating a virtual view from a video-plus-depth sequence. However, holes might be produced in the DIBR process. Many hole filling methods have been proposed to tackle this issue, but most of them cannot achieve global coherence or produce trustworthy content. In this paper, a hole filling method with depth-guided global optimization is proposed for view synthesis. The global optimization is achieved by iterating a spatio-temporal approximate nearest neighbor (ANN) search step and a video reconstruction step. Directly applying global optimization might introduce foreground artifacts into the synthesized video. To prevent this problem, two strategies are developed in this paper: the depth information is applied to guide the spatio-temporal ANN search, and the initialization step of the global optimization procedure is specified. Our experimental results demonstrate that the proposed method outperforms other methods in terms of visual quality, trustworthy textures, and temporal consistency in the synthesized video.
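The iteration the abstract describes (alternating an ANN patch search with a reconstruction step, guided by depth) can be sketched in a minimal single-frame form. This is a hypothetical simplification, not the paper's implementation: a brute-force patch search stands in for the spatio-temporal ANN search, "depth guidance" is reduced to restricting source patches to background pixels (larger depth values here), and the hole is initialized from the background mean as an assumed stand-in for the paper's specified initialization.

```python
import numpy as np

def fill_holes(image, hole_mask, depth, patch=3, iters=5):
    """Patch-based hole filling sketch: iterate (1) a nearest-neighbor
    patch search restricted to background-depth sources and (2) a
    reconstruction step that averages overlapping patch votes."""
    img = image.astype(float).copy()
    h, w = img.shape
    r = patch // 2
    known = ~hole_mask
    # Depth guidance (assumption): treat pixels at or beyond the median
    # known depth as background, and draw source patches only from there.
    bg = known & (depth >= np.median(depth[known]))
    # Initialization: fill the hole with the mean background value.
    img[hole_mask] = img[bg].mean() if bg.any() else img[known].mean()
    # Candidate source patch centers: fully known patches on background.
    srcs = [(y, x) for y in range(r, h - r) for x in range(r, w - r)
            if known[y - r:y + r + 1, x - r:x + r + 1].all() and bg[y, x]]
    holes = [(y, x) for y in range(r, h - r) for x in range(r, w - r)
             if hole_mask[y, x]]
    for _ in range(iters):
        acc = np.zeros_like(img)
        cnt = np.zeros_like(img)
        for (y, x) in holes:
            tgt = img[y - r:y + r + 1, x - r:x + r + 1]
            # "ANN search" step: brute-force best-matching source patch.
            best = min(srcs, key=lambda s: np.sum(
                (img[s[0] - r:s[0] + r + 1, s[1] - r:s[1] + r + 1] - tgt) ** 2))
            acc[y - r:y + r + 1, x - r:x + r + 1] += \
                img[best[0] - r:best[0] + r + 1, best[1] - r:best[1] + r + 1]
            cnt[y - r:y + r + 1, x - r:x + r + 1] += 1
        # Reconstruction step: average overlapping votes inside the hole.
        upd = hole_mask & (cnt > 0)
        img[upd] = acc[upd] / cnt[upd]
    return img
```

The background-only source set is what keeps foreground texture from bleeding into the disocclusion hole, which is the artifact the depth guidance in the paper is meant to suppress; the real method additionally searches across time and iterates at multiple scales.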