Open Access
3D Layout encoding network for spatial‐aware 3D saliency modelling
Author(s) -
Yuan Jing,
Cao Yang,
Kang Yu,
Song Weiguo,
Yin Zhongcheng,
Ba Rui,
Ma Qing
Publication year - 2019
Publication title -
IET Computer Vision
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.38
H-Index - 37
eISSN - 1751-9640
pISSN - 1751-9632
DOI - 10.1049/iet-cvi.2018.5591
Subject(s) - computer science , rgb color model , leverage (statistics) , artificial intelligence , encode , encoding (memory) , computer vision , pattern recognition (psychology) , depth map , image (mathematics)
Three‐dimensional (3D) [red, green and blue (RGB) + depth] saliency modelling supports popular 3D multimedia applications. However, depth images produced by existing 3D devices are often of low quality, e.g. containing noise and holes. In this study, rather than relying on features or predictions derived directly from single depth images, the authors propose to encode deep layout features to facilitate spatial‐aware saliency prediction. Specifically, they first generate coarse depth‐induced saliency cues that are insensitive to fine depth details. Then, to leverage the information in the high‐quality RGB image, they embed both low‐level and high‐level RGB deep features to refine the final prediction. In this way, they account for bottom‐up and top‐down cues together with spatial layout, achieving better saliency modelling results. Experiments on five public datasets show the superiority of the proposed method.
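The two‐stage idea in the abstract (a coarse, detail‐insensitive depth layout cue, then RGB‐based refinement) can be illustrated with a minimal NumPy sketch. This is not the authors' network: the block‐averaged inverse‐depth cue stands in for the deep layout encoding, and a simple luminance‐contrast map stands in for the low‐ and high‐level RGB deep features; both functions and their parameters are hypothetical.

```python
import numpy as np

def coarse_depth_saliency(depth, block=8):
    """Coarse depth-induced cue: nearer regions score higher.
    Block averaging deliberately discards fine depth detail, so the
    cue is robust to noise and holes (a hypothetical stand-in for
    the paper's layout encoding, not the actual network)."""
    h, w = depth.shape
    valid = depth > 0
    # fill holes (zero depth) with the mean of valid depths
    filled = np.where(valid, depth, depth[valid].mean())
    # average over coarse blocks to capture only the spatial layout
    hb, wb = h // block, w // block
    coarse = (filled[:hb * block, :wb * block]
              .reshape(hb, block, wb, block)
              .mean(axis=(1, 3)))
    # invert (nearer = more salient) and normalise to [0, 1]
    sal = coarse.max() - coarse
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
    # upsample back to the input resolution by repetition
    return np.repeat(np.repeat(sal, block, axis=0), block, axis=1)

def refine_with_rgb(coarse_sal, rgb):
    """Refine the coarse layout cue with an RGB feature. A simple
    luminance-contrast map is used here as a crude proxy for the
    deep RGB features described in the paper."""
    lum = rgb.mean(axis=2)
    contrast = np.abs(lum - lum.mean())
    contrast = (contrast - contrast.min()) / (contrast.max() - contrast.min() + 1e-8)
    # fixed equal weighting is an illustrative choice only
    return 0.5 * coarse_sal + 0.5 * contrast
```

For example, a synthetic scene with one near object on a far background yields a map whose values peak over the object even when the depth image contains holes.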