
Attention-Plus-Plus Network for Lightweight Image Super-Resolution
Author(s) -
Wei Wu,
Xianglin Hao,
Xueliang Luo,
Zhu Li
Publication year - 2025
Publication title -
IEEE Signal Processing Letters
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.815
H-Index - 138
eISSN - 1558-2361
pISSN - 1070-9908
DOI - 10.1109/LSP.2025.3596018
Subject(s) - signal processing and analysis , computing and processing , communication, networking and broadcast technologies
Fig. 1. Parameters vs. PSNR vs. FLOPs on the Manga109 dataset (×4).
Abstract - Transformer-based image super-resolution (SR) algorithms have achieved remarkable progress owing to their powerful long-range modeling capability. However, they still suffer from high computational complexity and insufficient multi-dimensional feature interaction, limiting their practical deployment in real-world scenarios. To address these issues, we propose a novel lightweight attention paradigm, dubbed the Attention-Plus-Plus (A++) network. A++ first extracts self-attention information along the channel dimension, and then exploits cross-attention between these self-attention features and decomposed 2D spatial information along the horizontal and vertical directions. Moreover, a multi-level feature extraction method is developed to deepen feature analysis in exploring local intrinsic characteristics. Furthermore, we design a multi-scale feed-forward block to achieve adaptive feature integration. This attention-cross-attention scheme strengthens multi-dimensional feature interaction while effectively reducing computational overhead. Experimental results demonstrate that the proposed A++ achieves a superior performance-complexity tradeoff compared to other state-of-the-art lightweight SR models, especially at large SR scales.
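The abstract describes an attention-cross-attention scheme: channel self-attention first, then cross-attention between those features and spatial information decomposed along the horizontal and vertical axes. The paper's actual layer definitions are not given here, so the following is only a minimal NumPy sketch of that general idea; all function names, the pooling-based 1D decomposition, the scaling, and the residual fusion are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_self_attention(x):
    """Self-attention over the channel dimension of a (C, H, W) feature map.
    Channels are the tokens; flattened spatial positions are their features."""
    C, H, W = x.shape
    tokens = x.reshape(C, H * W)
    attn = softmax(tokens @ tokens.T / np.sqrt(H * W))  # (C, C) channel affinities
    return (attn @ tokens).reshape(C, H, W)

def axis_cross_attention(q_feat, x, axis):
    """Cross-attention between channel-attended features (queries) and the
    input decomposed to 1D along one spatial axis (keys), a hypothetical
    stand-in for the paper's horizontal/vertical decomposition."""
    C, H, W = x.shape
    q = q_feat.mean(axis=axis)  # (C, L) 1D descriptors per channel
    k = x.mean(axis=axis)       # (C, L)
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]))       # (C, C)
    return (attn @ x.reshape(C, H * W)).reshape(C, H, W)

def attention_plus_plus_block(x):
    # Channel self-attention, then cross-attention along each spatial axis,
    # fused with a residual connection (fusion choice is illustrative).
    ca = channel_self_attention(x)
    h = axis_cross_attention(ca, x, axis=2)  # pooled along width
    v = axis_cross_attention(ca, x, axis=1)  # pooled along height
    return x + h + v

x = np.random.randn(8, 16, 16)
y = attention_plus_plus_block(x)
print(y.shape)  # (8, 16, 16)
```

Note that both attention maps here are only C×C, never (HW)×(HW), which reflects the complexity argument in the abstract: attending over channels and 1D-decomposed axes avoids the quadratic spatial cost of full self-attention.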