
Multi‐scale deep neural network for salient object detection
Author(s) -
Xiao Fen,
Deng Wenzheng,
Peng Liangchan,
Cao Chunhong,
Hu Kai,
Gao Xieping
Publication year - 2018
Publication title -
IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2018.5631
Subject(s) - artificial intelligence , computer science , convolutional neural network , salient , pattern recognition (psychology) , deep learning , benchmark (surveying) , context (archaeology) , convolution (computer science) , feature extraction , object detection , feature (linguistics) , artificial neural network , representation (politics) , object (grammar) , scale (ratio) , pixel , computer vision , paleontology , linguistics , philosophy , physics , geodesy , quantum mechanics , politics , political science , law , biology , geography
Salient object detection is a fundamental problem in computer vision and has received a great deal of attention. Recently, deep learning models have become powerful tools for image feature extraction. In this study, the authors propose a multi‐scale deep neural network (MSDNN) for salient object detection. The proposed model first extracts global high‐level features and context information over the whole source image with a recurrent convolutional neural network. Several stacked deconvolutional layers are then adopted to obtain a multi‐scale feature representation and produce a series of saliency maps. Finally, the authors introduce a fusion convolution module to build the final pixel‐level saliency map. The proposed model is extensively evaluated on six salient object detection benchmark datasets. Results show that the authors' deep model significantly outperforms 12 other state‐of‐the‐art approaches.
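To make the described pipeline concrete, the following is a minimal PyTorch sketch of the same idea, not the authors' implementation: it assumes a plain convolutional encoder in place of the paper's recurrent convolutional network, arbitrary channel sizes, and three deconvolutional stages, each emitting a saliency map that a small fusion convolution combines into the final pixel-level prediction.

```python
# Illustrative sketch only (hypothetical module names and channel sizes),
# assuming a simple convolutional encoder instead of the paper's recurrent CNN.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MSDNNSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: stand-in for the network that extracts global high-level
        # features and context information over the whole source image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Stacked deconvolutional (transposed-conv) layers: each stage doubles
        # the spatial resolution and provides features for one saliency map.
        self.deconv1 = nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1)
        self.deconv2 = nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1)
        self.deconv3 = nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1)
        self.side1 = nn.Conv2d(128, 1, 1)  # coarse-scale saliency map
        self.side2 = nn.Conv2d(64, 1, 1)   # mid-scale saliency map
        self.side3 = nn.Conv2d(32, 1, 1)   # fine-scale saliency map
        # Fusion convolution: combines the upsampled multi-scale maps
        # into a single pixel-level saliency map.
        self.fuse = nn.Conv2d(3, 1, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feat = self.encoder(x)
        d1 = F.relu(self.deconv1(feat))
        d2 = F.relu(self.deconv2(d1))
        d3 = F.relu(self.deconv3(d2))
        # Saliency maps at three scales, upsampled to the input resolution.
        maps = [
            F.interpolate(side(d), size=(h, w), mode="bilinear",
                          align_corners=False)
            for side, d in ((self.side1, d1), (self.side2, d2), (self.side3, d3))
        ]
        fused = self.fuse(torch.cat(maps, dim=1))
        return torch.sigmoid(fused)  # final saliency map with values in [0, 1]


if __name__ == "__main__":
    net = MSDNNSketch()
    out = net(torch.randn(1, 3, 224, 224))
    print(out.shape)  # torch.Size([1, 1, 224, 224])
```

In this sketch the per-scale maps are kept as separate side outputs before fusion, mirroring the abstract's "series of saliency maps"; in practice each side output could also be supervised with the ground-truth mask during training.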