Open Access
GTCFN: A Graph-based Transformer and Convolution Fusion Network for Hyperspectral Image Classification
Author(s) -
Xiaofeng Zhao,
Junyi Ma,
Lei Wang,
Jiayi Shi,
Yao Ding,
Zhili Zhang,
Jie Feng
Publication year - 2025
Publication title -
IEEE Transactions on Geoscience and Remote Sensing
Language(s) - English
Resource type - Journal article
SCImago Journal Rank - 2.141
H-Index - 254
eISSN - 1558-0644
pISSN - 0196-2892
DOI - 10.1109/tgrs.2025.3618962
Subject(s) - geoscience , signal processing and analysis
Graph Neural Networks (GNNs) can model complex non-Euclidean structures through message passing, and have therefore been widely used for Hyperspectral Image (HSI) classification. However, conventional GNNs often struggle with regular grid data, losing positional information and spatial coherence, and with capturing long-range dependencies, which degrades their performance under heterogeneous and limited-sample conditions. To address these limitations, this paper proposes a novel Graph-based Transformer and Convolution Fusion Network (GTCFN) that integrates the local representation power of Convolutional Neural Networks (CNNs) with the global reasoning capability of graph-based Transformers. GTCFN consists of two synergistic branches: a Graph Transformer sub-network (GTsN) that models high-level semantic structures among superpixels via attention-based topology learning, and a Spectral–Spatial Convolutional sub-network (S²CsN) that extracts multi-scale fine-grained features using 5×5, 7×7, and 9×9 convolutional kernels. To improve efficiency and generalization, GTCFN incorporates kernelized attention with random feature mapping, reducing the attention complexity from O(M²) to O(M), and avoids attention over-smoothing through a Gumbel-based multi-head random aggregation mechanism. Experiments on four benchmark datasets (Indian Pines, Pavia University, Salinas, and WHU-Hi-HongHu) show that GTCFN achieves state-of-the-art performance, with overall accuracies (OA) of 95.62%, 98.34%, 97.88%, and 96.69%, respectively, significantly outperforming 12 competing algorithms, including CNN-based, graph-based, and hybrid network models. The core code for GTCFN is available at https://github.com/Majunyi310321/GTCFN.
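The O(M²)-to-O(M) reduction claimed above is the standard benefit of kernelized attention with random feature maps: queries and keys are projected through a randomized feature function so that attention can be computed as two associative matrix products instead of an explicit M×M score matrix. A minimal NumPy sketch of this style of linear attention (Performer-style positive random features; all function names are illustrative and not taken from the GTCFN codebase):

```python
import numpy as np

def random_feature_map(x, proj):
    """Positive random features approximating the softmax kernel.

    x:    (M, d) queries or keys
    proj: (d, m) Gaussian random projection matrix
    """
    d = x.shape[-1]
    xw = (x @ proj) / d ** 0.25                     # scaled random projections
    norm = np.sum(x ** 2, axis=-1, keepdims=True) / (2 * np.sqrt(d))
    return np.exp(xw - norm) / np.sqrt(proj.shape[1])  # (M, m), strictly positive

def linear_attention(Q, K, V, m=64, seed=0):
    """Attention in O(M) memory/time w.r.t. sequence length M."""
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((Q.shape[-1], m))
    Qp = random_feature_map(Q, proj)                # (M, m)
    Kp = random_feature_map(K, proj)                # (M, m)
    KV = Kp.T @ V                                   # (m, d): no M x M matrix formed
    num = Qp @ KV                                   # (M, d)
    denom = Qp @ Kp.sum(axis=0, keepdims=True).T    # (M, 1) row normalizer
    return num / denom
```

The key step is associativity: computing `Kp.T @ V` first keeps every intermediate at size (m, d) or (M, m), so cost grows linearly in the number of tokens (here, superpixels) rather than quadratically.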
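The Gumbel-based multi-head random aggregation mentioned in the abstract is not detailed here, but the general idea behind Gumbel-softmax aggregation is to sample a relaxed, stochastic set of head weights instead of averaging heads deterministically, which injects diversity and counteracts over-smoothing. A hedged sketch under that interpretation (the function names and the tensor layout are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def gumbel_softmax_weights(logits, tau=1.0, rng=None):
    """Sample a relaxed one-hot weight vector via the Gumbel-softmax trick."""
    rng = rng or np.random.default_rng()
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = y - y.max()                                       # numerical stability
    e = np.exp(y)
    return e / e.sum()                                    # sums to 1

def aggregate_heads(head_outputs, logits, tau=1.0, rng=None):
    """Randomly weight attention heads instead of plain averaging.

    head_outputs: (H, M, d) per-head features
    logits:       (H,) learnable head scores
    """
    w = gumbel_softmax_weights(logits, tau, rng)          # (H,)
    return np.tensordot(w, head_outputs, axes=1)          # (M, d)
```

A low temperature `tau` pushes the sampled weights toward a hard selection of a single head per forward pass; a higher `tau` approaches uniform averaging.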
