Open Access
Spectral-Spatial Dual-Branch Selective Fusion Enhanced Transformer for Hyperspectral Image Classification (2025)
Author(s) - Haixin Sun, Jingwen Xu, Fanlei Meng, Qiuguang Cao, Mengdi Cheng
Publication year - 2025
Publication title - IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 1.246
H-Index - 88
eISSN - 2151-1535
pISSN - 1939-1404
DOI - 10.1109/JSTARS.2025.3593531
Subject(s) - Geoscience; Signal Processing and Analysis; Power, Energy and Industry Applications
In recent years, deep learning approaches that integrate convolutional neural networks (CNNs) with Transformer architectures have greatly improved the accuracy of hyperspectral image (HSI) classification. However, the combination of the two has neither fully explored the deep correlations between spectral and spatial features nor effectively exploited the discriminative information carried by the central pixels. To address these issues, we propose a novel spectral-spatial dual-branch selective fusion enhanced Transformer (DSFEFormer) network. Specifically, a spectral-spatial dual-branch structure is designed to extract comprehensive and rich spectral and spatial features, thereby enhancing the feature representation capability. A spectral-spatial selection attention (SSSA) module is then introduced to guide the deep fusion of spectral and spatial features, adaptively focusing on the most discriminative regions while effectively suppressing redundant information. In addition, to fully exploit the deeply fused features, we design a Center Manhattan Transformer Encoder (CMTE) composed of two key modules, Pooling Manhattan Attention (PMA) and a Convolutional Gated Feed-Forward Network (CGFFN), which model the relationships between the central pixels and their neighboring pixels and further enhance the ability to perceive subtle feature differences. Comparative experiments and analyses are conducted on four publicly available datasets: Pavia University, Salinas, Xuzhou, and WHU-Hi-LongKou. The experimental results indicate that DSFEFormer achieves superior classification performance while maintaining low computational complexity, and it also exhibits a degree of robustness and generalization potential.
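
Since the abstract only names the building blocks (dual-branch extraction, SSSA fusion, CMTE with PMA and CGFFN) without describing their implementation, the PyTorch sketch below is purely illustrative of the general pattern: a spectral branch (pointwise convolutions mixing bands), a spatial branch (neighborhood convolutions), a gated selective fusion that weighs the two branches, and a classification head applied to the center pixel of the patch. All class names, layer choices, and hyperparameters are assumptions for illustration, not the DSFEFormer architecture itself.

```python
import torch
import torch.nn as nn


class SpectralBranch(nn.Module):
    """Illustrative spectral branch: 1x1 (pointwise) convolutions mix bands per pixel."""
    def __init__(self, in_bands, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, dim, kernel_size=1),
            nn.BatchNorm2d(dim),
            nn.GELU(),
            nn.Conv2d(dim, dim, kernel_size=1),
        )

    def forward(self, x):
        return self.net(x)


class SpatialBranch(nn.Module):
    """Illustrative spatial branch: 3x3 convolutions aggregate neighborhood context."""
    def __init__(self, in_bands, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, dim, kernel_size=3, padding=1),
            nn.BatchNorm2d(dim),
            nn.GELU(),
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


class SelectiveFusion(nn.Module):
    """Toy stand-in for the SSSA idea: per-channel gates, derived from global context,
    decide how much of each branch to keep in the fused feature."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),          # global context over the patch
            nn.Conv2d(2 * dim, dim, kernel_size=1),
            nn.GELU(),
            nn.Conv2d(dim, 2 * dim, kernel_size=1),
        )

    def forward(self, f_spec, f_spat):
        ctx = torch.cat([f_spec, f_spat], dim=1)          # (B, 2*dim, H, W)
        w = self.gate(ctx)                                # (B, 2*dim, 1, 1)
        w_spec, w_spat = torch.softmax(
            w.view(w.size(0), 2, -1, 1, 1), dim=1).unbind(dim=1)
        return w_spec * f_spec + w_spat * f_spat


class DualBranchClassifier(nn.Module):
    """Minimal end-to-end sketch: dual-branch features -> selective fusion ->
    classification of the patch's center pixel."""
    def __init__(self, in_bands, dim, num_classes):
        super().__init__()
        self.spectral = SpectralBranch(in_bands, dim)
        self.spatial = SpatialBranch(in_bands, dim)
        self.fusion = SelectiveFusion(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, patch):                             # patch: (B, bands, H, W)
        fused = self.fusion(self.spectral(patch), self.spatial(patch))
        center = fused[:, :, fused.size(2) // 2, fused.size(3) // 2]
        return self.head(center)


# Usage: classify the center pixel of 9x9 patches from a 103-band HSI
# (e.g. Pavia University with 9 land-cover classes).
model = DualBranchClassifier(in_bands=103, dim=64, num_classes=9)
logits = model(torch.randn(8, 103, 9, 9))                 # -> (8, 9)
```

The softmax over the two branch gates is a rough analogue of the "selective" aspect described in the abstract: for each channel, the fused feature leans toward whichever branch the global context deems more informative, suppressing the more redundant one.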
