
Depth Guided Cross-modal Residual Adaptive Network for RGB-D Salient Object Detection
Author(s) - Zhengyun Zhao, Qingpeng Yang, Shangqin Yang, Jun Wang
Publication year - 2021
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1873/1/012024
Subject(s) - rgb color model , artificial intelligence , computer science , residual , computer vision , cross-modal , pattern recognition , feature extraction , algorithm
Depth modal features can provide complementary information for salient object detection (SOD). Most existing RGB-D SOD methods focus on fully combining RGB and depth features without distinguishing between the two modalities. In this paper, we propose a new depth-guided cross-modal residual adaptive network for RGB-D SOD. We use two independent ResNet-50 backbones to extract features from the two modalities. A cross-modal channel-wise refinement module is then designed to obtain complementary modal information, and a cross-modal guided module lets this complementary information guide RGB feature extraction. Finally, a residual adaptive selection module enhances the mutual spatial attention between the two modal features to achieve multimodal information fusion. Experimental results show that our method achieves a more balanced fusion of RGB and depth information and verify the effectiveness of the final saliency model.
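
The abstract gives no implementation details, so the following is a minimal PyTorch sketch of one plausible reading of the cross-modal channel-wise refinement step: depth features are pooled into channel-attention weights that re-weight the RGB features, with a residual connection preserving the original RGB information. The module and parameter names (ChannelWiseRefinement, reduction) are illustrative assumptions, not the authors' code.

# Hypothetical sketch of the cross-modal channel-wise refinement idea,
# assuming a squeeze-and-excitation-style channel attention driven by
# the depth branch. Not the paper's actual implementation.
import torch
import torch.nn as nn


class ChannelWiseRefinement(nn.Module):
    """Re-weights RGB feature channels using depth-derived attention."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # channel attention weights in [0, 1]
        )

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = depth_feat.shape
        # Squeeze depth features into per-channel weights.
        weights = self.fc(self.pool(depth_feat).view(b, c)).view(b, c, 1, 1)
        # Depth-derived weights gate the RGB features; the residual
        # connection keeps the original RGB information intact.
        return rgb_feat + rgb_feat * weights


if __name__ == "__main__":
    rgb = torch.randn(2, 256, 32, 32)
    depth = torch.randn(2, 256, 32, 32)
    refined = ChannelWiseRefinement(256)(rgb, depth)
    print(refined.shape)  # torch.Size([2, 256, 32, 32])

The cross-modal guided and residual adaptive selection modules described in the abstract could plausibly follow the same gating-plus-residual pattern, with spatial rather than channel-wise attention for the final fusion stage.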