View Intervention and Feature Alignment Aggregation Framework for Multi-View SAR Target Recognition
Author(s) -
Qijun Dai,
Gong Zhang,
Biao Xue,
Lifeng Liu,
Lipo Wang
Publication year - 2025
Publication title -
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 1.246
H-Index - 88
eISSN - 2151-1535
pISSN - 1939-1404
DOI - 10.1109/jstars.2025.3614695
Subject(s) - geoscience, signal processing and analysis, power, energy and industry applications
Multi-view synthetic aperture radar (SAR) automatic target recognition (ATR) has attracted increasing attention for its ability to integrate effective information from multiple images. However, existing algorithms ignore the interplay between the multi-view combination and the multi-view network, failing to exploit the inherent coupling relationship within multi-view images. To tackle these issues, a multi-view SAR ATR framework called view intervention and feature alignment aggregation (VIFA) is proposed. First, a deep clustering-based multi-view combination is designed: images with sufficient complementary information are selected from the raw SAR data of each category to form multi-view sets according to image features, namely the latent features obtained by an autoencoder (AE). Next, an efficient multi-view feature alignment aggregation (Mv-FAA) network is proposed, in which the encoder of the AE serves as the feature extraction module. A hybrid loss function guides the training of the Mv-FAA network so that it extracts complementary features from multi-view images while retaining certain consistent features, yielding holistic target features for discrimination. The proposed framework strengthens the link between the multi-view combination and the multi-view network to reconcile the complementary and consistent information within multi-view images, providing valuable insights for advancing multi-view SAR ATR research. Experiments on the Moving and Stationary Target Acquisition and Recognition (MSTAR) and Full Aspect Stationary Targets-Vehicle (FAST-Vehicle) datasets show that the proposed framework achieves state-of-the-art performance.
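The abstract outlines a three-step pipeline: AE latent features, deep-clustering-driven grouping of views, and a hybrid loss that balances complementary and consistent information. The sketch below illustrates those ideas in PyTorch under stated assumptions; the names (LatentAE, build_view_groups, hybrid_loss), layer sizes, and the specific consistency term are illustrative placeholders, not the authors' Mv-FAA architecture or loss.

```python
# Minimal sketch of a VIFA-style pipeline (hypothetical, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans


class LatentAE(nn.Module):
    """Toy autoencoder; its encoder doubles as the multi-view feature extractor."""

    def __init__(self, in_dim=128 * 128, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)


def build_view_groups(latents, n_groups):
    """Cluster (N, D) latent features of one target class; drawing images from
    different clusters approximates the deep-clustering-based view combination."""
    return KMeans(n_clusters=n_groups, n_init=10).fit_predict(latents)


def hybrid_loss(logits, targets, view_feats, lambda_cons=0.1):
    """Classification loss plus a simple consistency penalty that keeps per-view
    features close to their mean (a stand-in for the paper's hybrid loss)."""
    ce = F.cross_entropy(logits, targets)
    mean_feat = view_feats.mean(dim=1, keepdim=True)   # (B, 1, D)
    cons = ((view_feats - mean_feat) ** 2).mean()
    return ce + lambda_cons * cons


# Usage sketch: encode three views per target, aggregate by averaging, classify.
ae = LatentAE()
clf = nn.Linear(64, 10)                      # 10 target classes, MSTAR-like setup
views = torch.rand(8, 3, 128 * 128)          # batch of 8 targets, 3 views each
feats = torch.stack([ae.encoder(views[:, v]) for v in range(3)], dim=1)  # (8, 3, 64)
logits = clf(feats.mean(dim=1))              # holistic feature -> class scores
loss = hybrid_loss(logits, torch.randint(0, 10, (8,)), feats)
```

The mean-based aggregation and squared-distance consistency term above are deliberately simple; the paper's Mv-FAA network aligns and aggregates view features with its own learned modules, which the abstract does not specify.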