Open Access
Multi-scale Feature Fusion for 3D Saliency Detection
Author(s) - Gang Pan, Anzhi Wang, Baolei Xu, Weihua Ou
Publication year - 2020
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1651/1/012128
Subject(s) - saliency map, artificial intelligence, computer vision, pattern recognition, contrast (vision), Kadir–Brady saliency detector, depth map, feature, fusion, computation, algorithm, computer science
3D saliency detection aims to leverage disparity maps, depth maps, and color information to automatically detect informative objects in natural scenes. Although research has concentrated on this problem in recent years, challenges remain, such as how to exploit the disparity or depth map effectively to compute depth-induced saliency, and how to optimally fuse multiple visual features and cues. A novel 3D saliency detection approach is proposed that fuses local contrast, region contrast, texture features, a depth cue, and a location cue into a unified saliency computation framework. Results show that the proposed approach achieves significant and consistent improvements over other state-of-the-art methods on the RGBD1000 dataset.
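The abstract does not specify the fusion rule, so the following is a minimal sketch of one plausible scheme, assuming each cue map is rescaled to [0, 1] and combined by a weighted linear sum. The function names, cue stand-ins, and weights are hypothetical illustrations, not taken from the paper.

import numpy as np

def normalize(cue):
    """Rescale a cue map to [0, 1]; a constant map becomes all zeros."""
    span = cue.max() - cue.min()
    return (cue - cue.min()) / span if span > 0 else np.zeros_like(cue)

def fuse_saliency(cues, weights):
    """Weighted linear fusion of per-pixel cue maps into one saliency map (assumed scheme)."""
    fused = sum(w * normalize(c) for c, w in zip(cues, weights))
    return normalize(fused)

# Toy stand-ins for the five cues named in the abstract: local contrast,
# region contrast, texture, depth, and location (random maps used here).
rng = np.random.default_rng(0)
height, width = 48, 64
cues = [rng.random((height, width)) for _ in range(5)]
weights = [0.25, 0.25, 0.2, 0.2, 0.1]  # assumed weights, not from the paper

saliency = fuse_saliency(cues, weights)
print(saliency.shape, float(saliency.min()), float(saliency.max()))

In practice the weights would be tuned or learned per dataset, and the random maps would be replaced by real contrast, texture, depth, and location responses.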
