
Locality and context‐aware top‐down saliency
Author(s) -
Li Junxia,
Rajan Deepu,
Yang Jian
Publication year - 2018
Publication title -
IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2017.0251
Subject(s) - locality , computer science , pooling , artificial intelligence , discriminative model , pattern recognition (psychology) , coding , feature (linguistics) , saliency map , machine learning , image (mathematics) , mathematics
In this study, the authors propose a novel framework for top‐down (TD) saliency detection, which is well suited to locating category‐specific objects in natural images. Saliency value is defined as the probability of a target given its visual features. They introduce an effective coding strategy called locality constrained contextual coding (LCCC) that enforces both locality and contextual constraints. Furthermore, a contextual pooling operation is presented to take advantage of contextual information among features. Benefiting from LCCC and contextual pooling, the resulting feature representation has high discriminative power, which enables the authors' saliency detection method to achieve competitive results against existing saliency detection algorithms. They also incorporate bottom‐up cues into their framework to supplement the proposed TD saliency algorithm. Experimental results on three datasets (Graz‐02, Weizmann Horse and PASCAL VOC 2007) show that the proposed framework outperforms state‐of‐the‐art methods in terms of visual quality and accuracy.
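To make the coding-and-pooling pipeline concrete, the sketch below implements standard locality-constrained linear coding (LLC, the scheme LCCC builds on) with simple max pooling. This is an illustrative approximation, not the authors' exact LCCC or contextual pooling: the contextual constraints and context-weighted pooling described in the paper are omitted, and all names, dimensions, and data here are hypothetical.

```python
import numpy as np

def locality_constrained_code(x, dictionary, k=5, reg=1e-4):
    """Encode feature x over `dictionary` (K x d) using its k nearest
    atoms, following standard locality-constrained linear coding (LLC).
    The paper's LCCC additionally enforces contextual constraints."""
    # Locality constraint: keep only the k nearest dictionary atoms.
    dists = np.linalg.norm(dictionary - x, axis=1)
    idx = np.argsort(dists)[:k]
    B = dictionary[idx]                      # k x d local basis
    # Analytical LLC solution: min ||x - c^T B||^2 s.t. sum(c) = 1.
    z = B - x                                # shift basis to the feature
    C = z @ z.T + reg * np.eye(k)            # regularised local covariance
    c = np.linalg.solve(C, np.ones(k))
    c /= c.sum()                             # enforce sum-to-one constraint
    code = np.zeros(len(dictionary))
    code[idx] = c
    return code

def max_pool(codes):
    """Pool per-patch codes into one image descriptor (plain max pooling);
    the paper's contextual pooling instead exploits feature context."""
    return codes.max(axis=0)

rng = np.random.default_rng(0)
dictionary = rng.standard_normal((64, 16))   # 64 atoms, 16-dim features
patches = rng.standard_normal((10, 16))      # 10 local patch features
codes = np.array([locality_constrained_code(p, dictionary) for p in patches])
descriptor = max_pool(codes)
print(descriptor.shape)                      # (64,)
```

In a top-down saliency setting, a per-patch descriptor built this way would be scored by a category-specific classifier, and the score mapped back to the patch location to form the saliency map.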