
Estimation of measurements for block‐based compressed video sensing: study of correlation noise in measurement domain
Author(s) - Song Bin, Guo Jie, Li Lingquan, Liu Haixiao
Publication year - 2014
Publication title - IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2013.0380
Subject(s) - compressed sensing , computer science , frequency domain , block (permutation group theory) , redundancy (engineering) , algorithm , frame (networking) , position (finance) , nyquist rate , time domain , noise (video) , nyquist–shannon sampling theorem , artificial intelligence , mathematics , computer vision , sampling (signal processing) , telecommunications , image (mathematics) , geometry , finance , filter (signal processing) , economics , operating system
Compressed video sensing (CVS) is an application of compressed sensing theory that samples a signal below the Shannon–Nyquist rate. However, previous research on CVS has largely ignored inter-frame correlation analysis in the measurement domain and is therefore unable to remove temporal redundancy. In this study, the authors consider estimating the measurements of a block at any possible position in a frame by introducing a correlation noise (CN) between the actual and the estimated measurements. They first establish a correlation model (CM) in the pixel domain between a block at an arbitrary, unknown position in a frame and the adjacent non-overlapping blocks whose measurements are already available. Then, a novel measurement-domain CM is presented to approximate the measurements of the arbitrarily positioned block. Finally, the CN is employed to characterise the accuracy of the measurement-domain CM. Simulation results show that the proposed model yields an accurate estimate of the actual measurements of an arbitrary block in a frame and that, by using the proposed CN to perform motion estimation, the peak signal-to-noise ratio of the reconstructed video sequences is improved by 0.1–1.7 dB compared with existing methods.
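To illustrate the idea of estimating measurements of an arbitrarily positioned block from the measurements of its grid-aligned neighbours, the sketch below builds a toy block-based CS setup in Python/NumPy. It is a minimal illustration only, not the paper's exact correlation model: the shared Gaussian sensing matrix `Phi`, the pseudo-inverse back-projection used to lift block measurements into the pixel domain, and the helper `selection_matrix` are all assumptions introduced here for demonstration. The residual between the actual and estimated measurements plays the role of the correlation noise.

```python
import numpy as np

rng = np.random.default_rng(0)

B = 8                    # block size in pixels (assumed)
M = 32                   # measurements per block, M < B*B (sub-Nyquist)
Phi = rng.standard_normal((M, B * B)) / np.sqrt(M)   # shared sensing matrix (assumed Gaussian)

# Toy "frame": two horizontally adjacent, non-overlapping grid blocks.
frame = rng.random((B, 2 * B))
x_left, x_right = frame[:, :B], frame[:, B:]

# Block-based CS: only per-block measurements are available at the decoder.
y_left = Phi @ x_left.reshape(-1)
y_right = Phi @ x_right.reshape(-1)

# Target: a block shifted by d pixels (0 < d < B); its pixels straddle the
# two grid-aligned blocks, so its measurements are never sensed directly.
d = 3
x_shift = frame[:, d:d + B]
y_true = Phi @ x_shift.reshape(-1)   # actual measurements (unobservable in CVS)

def selection_matrix(src_cols, dst_cols):
    """Pixel-domain selection/shift: copy columns src_cols of a BxB block
    into columns dst_cols of the shifted block's coordinate frame."""
    S = np.zeros((B * B, B * B))
    for sc, dc in zip(src_cols, dst_cols):
        for r in range(B):
            S[r * B + dc, r * B + sc] = 1.0
    return S

# Pixel-domain CM: x_shift = S_l x_left + S_r x_right (exact for this layout).
S_l = selection_matrix(range(d, B), range(0, B - d))   # right part of left block
S_r = selection_matrix(range(0, d), range(B - d, B))   # left part of right block

# Measurement-domain estimate: back-project each block's measurements with the
# pseudo-inverse, shift in the pixel domain, and re-sense. This back-projection
# is a hypothetical stand-in for the paper's measurement-domain CM.
Phi_pinv = np.linalg.pinv(Phi)
y_est = Phi @ (S_l @ (Phi_pinv @ y_left) + S_r @ (Phi_pinv @ y_right))

# Correlation noise: discrepancy between actual and estimated measurements.
cn = y_true - y_est
print("relative CN energy:", np.linalg.norm(cn) / np.linalg.norm(y_true))
```

In a CVS motion-estimation setting, such a CN term would quantify how well measurements of a candidate block position in a reference frame predict the measurements of the current block, allowing block matching to be carried out in the measurement domain without first reconstructing the frame.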