Open Access
Novel Pixel Recovery Method Based on Motion Vector Disparity and Compensation Difference
Author(s) -
Ting-Lan Lin,
Xutao Wei,
Xubo Wei,
Tzu-Hao Su,
Yu-Liang Chiang
Publication year - 2018
Publication title -
IEEE Access
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.587
H-Index - 127
ISSN - 2169-3536
DOI - 10.1109/ACCESS.2018.2803733
Subject(s) - aerospace; bioengineering; communication, networking and broadcast technologies; components, circuits, devices and systems; computing and processing; engineered materials, dielectrics and plasmas; engineering profession; fields, waves and electromagnetics; general topics for engineers; geoscience; nuclear engineering; photonics and electrooptics; power, energy and industry applications; robotics and control systems; signal processing and analysis; transportation
As compressed videos are transmitted over communication networks, video packet loss inevitably occurs. This problem can be mitigated by error concealment methods. We use the motion vectors of the available neighboring blocks to estimate the lost motion vector of a lost block, and these estimates propagate to predict the remaining missing motion vectors. We further improve this scheme by using the motion vector disparities between neighboring available blocks to modify the motion vector weightings. In addition, the differences between the motion-compensated pixels and the decoded pixels in the neighboring blocks are computed as another weighting. These two novelties are combined into a final indicator for the prediction weightings. Compared against the state-of-the-art method, the four proposed algorithms increase the peak signal-to-noise ratio (PSNR) by up to 1.86, 1.93, 1.94, and 2.04 dB on average, showing the gradual improvement of our designs. For other video quality measures, the average gains of the proposed work over the state-of-the-art work can be up to 0.0575 in structural similarity index metric (SSIM), -0.0278 in video quality metric (VQM) (lower is better), -0.0008 in motion-based video integrity evaluation (MOVIE) (lower is better), and 2.77 in subjective evaluation. The proposed work performs slightly worse than a pixel-based state-of-the-art method in PSNR and SSIM, but performs better in VQM, MOVIE (both of which correlate better with human perception), and subjective experiments, with much lower computational complexity.
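The weighting idea in the abstract can be sketched in code: each available neighboring block contributes its motion vector, weighted down when its disparity with surrounding motion vectors is large or when its motion-compensated pixels differ strongly from the decoded pixels. The following is a minimal illustrative sketch, not the paper's exact formulas; the inverse-error weighting, function names, and example values are assumptions for demonstration only.

```python
import numpy as np

def estimate_lost_mv(neighbor_mvs, disparities, comp_diffs, eps=1e-6):
    """Estimate a lost block's motion vector as a weighted average of
    neighboring blocks' motion vectors. Neighbors whose motion vectors
    agree with their surroundings (low disparity) and whose compensated
    pixels match the decoded pixels (low compensation difference) get
    larger weights. Illustrative only; not the paper's exact definitions.
    """
    neighbor_mvs = np.asarray(neighbor_mvs, dtype=float)  # shape (N, 2)
    disparities = np.asarray(disparities, dtype=float)    # shape (N,)
    comp_diffs = np.asarray(comp_diffs, dtype=float)      # shape (N,)

    # Inverse-error weighting: lower disparity / difference -> larger weight.
    w = (1.0 / (disparities + eps)) * (1.0 / (comp_diffs + eps))
    w /= w.sum()                  # normalize the combined indicator

    return w @ neighbor_mvs       # weighted-average motion vector, shape (2,)

# Hypothetical example: three available neighbors, the third an outlier
# with high disparity and poor compensation match, so it contributes little.
mvs = [(2.0, 0.0), (2.5, 0.5), (8.0, -4.0)]
disp = [0.4, 0.5, 5.0]
diff = [1.0, 1.2, 9.0]
print(estimate_lost_mv(mvs, disp, diff))
```

In this sketch the two indicators are combined multiplicatively, so a neighbor must score well on both disparity and compensation difference to dominate the estimate; the paper's actual combination rule may differ.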
