Frequency-Prompted Image Restoration to Enhance Perception in Intelligent Transportation Systems
Author(s) - Yuning Cui, Mingyu Liu, Xiongfei Su, Alois Knoll
Publication year - 2025
Publication title -
IEEE Transactions on Intelligent Transportation Systems
Language(s) - English
Resource type - Journal
SCImago Journal Rank - 1.591
H-Index - 153
eISSN - 1558-0016
pISSN - 1524-9050
DOI - 10.1109/tits.2025.3617507
Subject(s) - Transportation; Aerospace; Communication, Networking and Broadcast Technologies; Computing and Processing; Robotics and Control Systems; Signal Processing and Analysis
Abstract - High perceptual image quality is crucial for intelligent transportation systems (ITS), including autonomous vehicles, digital twins, and surveillance infrastructure. However, images captured in adverse weather conditions or dynamic environments often suffer from various visibility degradations. To address this issue, image restoration aims to recover missing details and remove distortions from degraded observations, thereby enhancing the usability of visual data in intelligent transportation applications. Inspired by the success of prompt learning in natural language processing, recent studies have explored prompt-based approaches for various image restoration tasks. However, most of these methods operate in the spatial domain. Given the importance of frequency learning in image restoration, particularly in reducing the spectral discrepancy between degraded and sharp image pairs, this study investigates the use of frequency prompts through a plug-and-play mechanism consisting of a prompt generation module and a prompt integration module. Specifically, the prompt generation module encodes frequency information by aggregating pre-defined learnable parameters, guided by the implicitly decomposed spectra of the input features. The learned prompts are then integrated into the feature spectra via dual-dimensional attention, dynamically guiding the reconstruction process and enabling more effective frequency-aware learning. To validate the effectiveness of the proposed plug-in module, we integrate it into both CNN-based and Transformer-based backbones. Extensive experiments demonstrate that the CNN-based variant achieves state-of-the-art performance on 15 datasets across five representative image restoration tasks. Furthermore, it generalizes well to composite degradation scenarios. The Transformer-based model performs competitively with state-of-the-art methods under two all-in-one image restoration settings. Finally, the effectiveness of our models in enhancing perception for ITS is empirically verified.
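To make the two-module pipeline concrete, the following is a minimal NumPy sketch of the general idea, not the paper's implementation: a bank of learnable frequency prompts is aggregated with weights derived from the input feature's spectral band energies (a simple stand-in for the "implicitly decomposed spectra" guidance), and the aggregated prompt then modulates the feature spectrum multiplicatively (a stand-in for the paper's dual-dimensional attention). All names, shapes, and the radial-band weighting scheme are assumptions for illustration.

```python
import numpy as np

def frequency_prompt(feat, prompt_bank):
    """Illustrative frequency-prompt mechanism (assumed design, not the paper's).

    feat:        (C, H, W) real-valued feature map.
    prompt_bank: (K, C, H, W//2+1) learnable real-valued spectral gains,
                 one per prompt component (assumption: real gains instead
                 of complex attention maps).
    Returns a restored (C, H, W) feature map.
    """
    C, H, W = feat.shape
    K = prompt_bank.shape[0]

    # Decompose features into the frequency domain (half-spectrum of a real input).
    spec = np.fft.rfft2(feat, axes=(-2, -1))            # (C, H, W//2+1)

    # --- Prompt generation: weight the bank by per-band spectral energy. ---
    fy = np.fft.fftfreq(H)[:, None]                     # (H, 1) vertical frequencies
    fx = np.fft.rfftfreq(W)[None, :]                    # (1, W//2+1) horizontal frequencies
    radius = np.sqrt(fy**2 + fx**2)                     # radial frequency per bin
    edges = np.linspace(0.0, radius.max() + 1e-8, K + 1)
    band_energy = np.array([
        np.abs(spec[..., (radius >= lo) & (radius < hi)]).mean()
        for lo, hi in zip(edges[:-1], edges[1:])
    ])                                                  # (K,) mean magnitude per band
    w = np.exp(band_energy) / np.exp(band_energy).sum() # softmax aggregation weights
    prompt = np.tensordot(w, prompt_bank, axes=1)       # (C, H, W//2+1)

    # --- Prompt integration: modulate the spectrum, return to the spatial domain. ---
    return np.fft.irfft2(spec * prompt, s=(H, W), axes=(-2, -1))
```

Because the aggregation weights are input-dependent, the same module adapts its spectral modulation to different degradations (e.g. haze suppresses high-frequency energy, rain streaks add directional components), which is the motivation for frequency-aware prompting; in the actual model the bank entries are trained end-to-end with the backbone.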