
Deformable Feature Fusion and Accurate Anchors Prediction for Lightweight SAR Ship Detector Based on Dynamic Hierarchical Model Pruning
Author(s) -
Yue Guo,
Shiqi Chen,
Ronghui Zhan,
Wei Wang,
Jun Zhang
Publication year - 2025
Publication title -
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.246
H-Index - 88
eISSN - 2151-1535
pISSN - 1939-1404
DOI - 10.1109/jstars.2025.3574184
Subject(s) - geoscience, signal processing and analysis, power, energy and industry applications
In recent years, Convolutional Neural Networks (CNNs) have been extensively utilized for Synthetic Aperture Radar (SAR) ship detection tasks. However, the fixed square shape of kernels in traditional convolutions limits their ability to extract features, and the large number of parameters in CNNs restricts deployment on platforms with limited resources. To address these challenges, this paper proposes a novel SAR ship detection network, DFES-Net, which incorporates deformable feature fusion and accurate anchor prediction to enhance detection performance. DFES-Net employs depthwise separable deformable convolution (DWDCN) and a lightweight deformable feature fusion module (LDFFM) to improve detection accuracy in complex scenes. Specifically, DWDCN is integrated into the backbone network to adapt sampling positions of the convolution to the shape of ship targets, thereby enhancing feature extraction in complex scenarios. Additionally, a receptive field enhancement module (RFEM) based on dilated convolutions is introduced to enlarge the effective receptive field and improve detection of nearshore and densely packed small targets. An effective regression loss function, CADIoU, is proposed to generate accurate bounding boxes for SAR ship targets. Finally, a magnitude-based dynamic hierarchical pruning algorithm (LMDHP) is introduced to dynamically prune parameters across various network structures, reducing the model's parameter count. Extensive experiments on the SSDD and HRSID datasets demonstrate that our method achieves a 2.0× speedup with a model memory of only 4.7 MB (73.4% smaller than YOLOv7-tiny), while attaining mAP scores of 97.9% and 92.2% on SSDD and HRSID, respectively.
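The abstract's emphasis on lightweight design rests partly on the depthwise separable factorization inside DWDCN. A quick parameter-count sketch (illustrative arithmetic only, not the paper's exact layer configuration) shows why factoring a standard convolution into a depthwise and a pointwise stage shrinks the model:

```python
def conv_params(c_in, c_out, k):
    # Standard convolution: one k x k kernel per (input, output) channel pair.
    return c_in * c_out * k * k

def dw_separable_params(c_in, c_out, k):
    # Depthwise stage: one k x k kernel per input channel.
    # Pointwise stage: a 1 x 1 convolution that mixes channels.
    return c_in * k * k + c_in * c_out

# Example layer sizes (assumed for illustration, not taken from DFES-Net):
std = conv_params(128, 128, 3)          # 147456 parameters
dws = dw_separable_params(128, 128, 3)  # 17536 parameters
print(std, dws, round(std / dws, 1))    # roughly an 8.4x reduction
```

The same factorization carries over when the depthwise stage is made deformable: the learned sampling offsets add only a small offset-prediction branch on top of the depthwise kernels.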
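RFEM's use of dilated convolutions can be motivated with the standard receptive-field recurrence: for stride-1 layers, each layer with kernel size k and dilation d adds (k - 1) * d to the receptive field. A minimal sketch (the layer stack below is an assumed example, not RFEM's actual configuration):

```python
def receptive_field(layers):
    """Receptive field of a stack of stride-1 conv layers.

    layers: list of (kernel_size, dilation) tuples.
    Each layer grows the receptive field by (k - 1) * d.
    """
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf

# Three plain 3x3 convs vs. three 3x3 convs with dilations 1, 2, 4:
plain = receptive_field([(3, 1), (3, 1), (3, 1)])      # 7
dilated = receptive_field([(3, 1), (3, 2), (3, 4)])    # 15
print(plain, dilated)
```

Doubling the receptive field at identical parameter cost is what makes dilated stacks attractive for nearshore scenes, where context beyond the small ship target helps suppress clutter.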
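The abstract describes LMDHP only at a high level; its hierarchical, per-structure ratio selection is not specified here. The magnitude criterion it builds on, however, is standard and can be sketched as follows (a generic per-layer magnitude pruner under a given ratio, not the paper's algorithm):

```python
import numpy as np

def magnitude_prune(weights, ratio):
    """Zero out the fraction `ratio` of weights with smallest magnitude.

    Generic unstructured magnitude pruning; LMDHP would additionally
    choose `ratio` dynamically per network structure, which is not
    reproduced here.
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * ratio)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))          # stand-in for one layer's weights
pruned = magnitude_prune(w, 0.5)
print(float(np.mean(pruned == 0)))   # sparsity of the pruned layer
```

In a full pipeline each pruning step is typically followed by fine-tuning to recover accuracy, which is consistent with the abstract's reported trade-off: a 73.4% smaller model at near-baseline mAP.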