
Estimating just‐noticeable distortion for images/videos in pixel domain
Author(s) - Uzair Muhammad, Dony Robert D.
Publication year - 2017
Publication title - IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2016.1120
Subject(s) - foveal , artificial intelligence , luminance , computer science , human visual system model , computer vision , pixel , distortion , just noticeable difference , masking , discrete cosine transform , human eye , contrast (vision) , visibility , pattern recognition (psychology) , optics
Existing pixel‐based just noticeable distortion (JND) models account only for luminance adaptation and texture masking (TM). Similarly, existing discrete cosine transform (DCT) based models neither consider foveal vision effects nor estimate TM efficiently. Since the human visual system (HVS) is insensitive to distortion below the JND threshold, estimation of this perceptual visibility threshold is widely used in digital image and video processing applications. The authors propose a comprehensive and efficient pixel‐based JND model incorporating all major factors that contribute to JND estimation. Contrast masking (CM) is evaluated by distinguishing edge masking from TM according to the entropy‐masking properties of the HVS. Foveal vision effects are likewise taken into account for a comprehensive estimate of the contrast sensitivity function (CSF). The proposed pixel‐based JND model therefore incorporates the spatio‐temporal CSF, foveal vision effects, the influence of eye movement, luminance adaptation and CM, making it more consistent with human perception. The incorporation of these factors makes the proposed model, by the authors' account, the most comprehensive and efficient in the current literature. Psychophysical experiments were performed to evaluate the model, and the results show that it outperforms existing models.
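To make the abstract's ingredients concrete, the sketch below computes a simple pixel-domain JND map in the spirit of classic models (Chou–Li style luminance adaptation plus a gradient-based texture-masking term, combined with a NAMM-style overlap reduction). This is an illustrative approximation, not the authors' model: the constants (17, 3, 127, `alpha`, `c`), the 5×5 background window, and the gradient-based masking term are assumptions borrowed from the earlier pixel-domain JND literature, and the foveal/CSF/eye-movement factors described in the paper are omitted.

```python
import numpy as np

def local_mean(img, k=5):
    """Mean background luminance over a k x k neighbourhood (edge padding)."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def luminance_adaptation(bg):
    """Chou-Li style luminance-adaptation threshold (illustrative constants)."""
    return np.where(bg <= 127,
                    17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0,
                    (3.0 / 128.0) * (bg - 127.0) + 3.0)

def texture_masking(img, alpha=0.117):
    """Gradient-magnitude based masking term; alpha is an assumed weight."""
    gy, gx = np.gradient(img.astype(float))
    return alpha * np.hypot(gx, gy)

def jnd_map(img, c=0.3):
    """Combine LA and TM, subtracting c * overlap (NAMM-style combination)."""
    la = luminance_adaptation(local_mean(img))
    tm = texture_masking(img)
    return la + tm - c * np.minimum(la, tm)

# Usage: a flat dark patch yields a uniform threshold driven purely by
# luminance adaptation (bg = 0 gives LA = 17 + 3 = 20, TM = 0).
flat = np.zeros((16, 16))
print(jnd_map(flat)[0, 0])  # → 20.0
```

Distortion whose magnitude stays below this per-pixel map would, under the model's assumptions, be invisible to the HVS, which is what makes such maps useful for perceptual coding and watermarking.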