
Multi-scale feature enhancement for saliency object detection algorithm
Author(s) -
Su Li,
Rugang Wang,
Feng Zhou,
Yuanyuan Wang,
Naihong Guo
Publication year - 2023
Publication title -
IEEE Access
Language(s) - English
Resource type - Journals
ISSN - 2169-3536
DOI - 10.1109/ACCESS.2023.3317901
Subject(s) - aerospace, bioengineering, communication, networking and broadcast technologies, components, circuits, devices and systems, computing and processing, engineered materials, dielectrics and plasmas, engineering profession, fields, waves and electromagnetics, general topics for engineers, geoscience, nuclear engineering, photonics and electrooptics, power, energy and industry applications, robotics and control systems, signal processing and analysis, transportation
To address the problems of foreground/background misclassification and blurred edges in existing salient object detection models, this study proposes a multi-scale feature enhancement algorithm. In this algorithm, feature maps of salient objects are extracted with VGG16. A Multi-scale Feature Fusion Module is added to enhance the detail information of the second feature layer and the semantic information of the fifth feature layer, which effectively improves how well the second layer characterizes salient-object edges and how well the fifth layer characterizes the salient objects themselves. In addition, a Feature Enhancement Fusion Module fully fuses local detail information with global semantic information through layer-by-layer fusion from deep to shallow, yielding a feature map with complete feature information. Finally, training the network model produces a complete prediction map with clear edges. The algorithm was evaluated on the HKU-IS, ECSSD, DUT-OMRON, and DUTS-TE datasets, achieving MAE (Mean Absolute Error) values of 0.031, 0.040, 0.057, and 0.040; F-measure values of 0.912, 0.923, 0.771, and 0.876; E-measure values of 0.956, 0.931, 0.871, and 0.887; and S-measure values of 0.919, 0.928, 0.894, and 0.879, respectively. Compared with existing algorithms, the proposed algorithm identifies all regions of salient objects more accurately and obtains better detection results.
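The deep-to-shallow, layer-by-layer fusion described above can be illustrated with a minimal NumPy sketch. This is not the paper's actual Feature Enhancement Fusion Module (which the abstract does not specify in detail); it only shows the general idea of upsampling a deep, semantically rich map and merging it into progressively shallower, detail-rich maps. The function names, the element-wise addition as the fusion operation, and the assumption that all maps share one channel width are illustrative assumptions.

```python
import numpy as np

def upsample2x(feat):
    # Nearest-neighbour 2x upsampling of a (C, H, W) feature map.
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def fuse_deep_to_shallow(features):
    """Illustrative deep-to-shallow fusion (assumed, not the paper's module).

    `features` is ordered shallow -> deep, each level half the spatial size
    of the previous one, all with the same channel count (assume a 1x1
    projection has already aligned channels). Starting from the deepest map,
    each step upsamples the running result and adds it into the next
    shallower map, so global semantics propagate into local detail.
    """
    fused = features[-1]
    for shallow in reversed(features[:-1]):
        fused = shallow + upsample2x(fused)
    return fused

# Toy pyramid: 8x8 (shallow), 4x4, 2x2 (deep), one channel each.
pyramid = [np.ones((1, 8, 8)), np.ones((1, 4, 4)), np.ones((1, 2, 2))]
out = fuse_deep_to_shallow(pyramid)  # shape (1, 8, 8)
```

Real models typically use learned upsampling and concatenation followed by convolutions rather than plain addition; the addition here just keeps the sketch self-contained.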
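The MAE and F-measure figures reported above follow standard definitions in saliency evaluation, which can be sketched as follows. The fixed threshold and the beta^2 = 0.3 weighting are conventional choices in the saliency literature, not details taken from this paper (papers often report the maximum F-measure over all thresholds instead).

```python
import numpy as np

def mae(pred, gt):
    # Mean Absolute Error between a predicted saliency map and the
    # ground-truth mask, both with values in [0, 1].
    return float(np.mean(np.abs(pred - gt)))

def f_measure(pred, gt, beta_sq=0.3, thresh=0.5):
    # Weighted harmonic mean of precision and recall after binarizing the
    # prediction at `thresh`; beta_sq = 0.3 emphasizes precision, as is
    # conventional in saliency evaluation.
    binary = pred >= thresh
    gt_bin = gt >= 0.5
    tp = np.logical_and(binary, gt_bin).sum()
    precision = tp / max(binary.sum(), 1)
    recall = tp / max(gt_bin.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return float((1 + beta_sq) * precision * recall
                 / (beta_sq * precision + recall))

pred = np.array([[0.8, 0.2], [0.6, 0.4]])
gt = np.array([[1.0, 0.0], [1.0, 0.0]])
print(mae(pred, gt))        # 0.3
print(f_measure(pred, gt))  # 1.0 (perfect precision and recall at thresh 0.5)
```

Lower MAE and higher F-measure are better, which is the sense in which the reported 0.031-0.057 MAE and 0.771-0.923 F-measure values are compared against prior algorithms.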