Downstream Task-Aware Cloud Removal for Very-High-Resolution Remote Sensing Images: An Information Loss Perspective
Author(s) -
Ziyao Wang,
Xianping Ma,
Man-On Pun
Publication year - 2025
Publication title - IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Language(s) - English
Resource type - Journal article
SCImago Journal Rank - 1.246
H-Index - 88
eISSN - 2151-1535
pISSN - 1939-1404
DOI - 10.1109/jstars.2025.3610641
Subject(s) - Geoscience; Signal processing and analysis; Power, energy and industry applications
Cloud removal (CR) methods have been widely studied to address cloud occlusion in Earth observation tasks. Existing CR methods rely heavily on image similarity metrics such as the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM) to evaluate the quality of CR results. However, owing to factors such as rapid landform changes and viewpoint differences between cloudy and reference images, image similarity metrics can be ineffective or even misleading. To address these challenges, this study evaluates CR algorithms by whether they produce information beneficial for downstream tasks. We introduce CUHKCR-EXT, the first very-high-resolution CR dataset explicitly designed for assessing post-CR downstream task performance. Furthermore, we propose DFCFormer, a dynamic filter-based transformer that generates adaptive kernels conditioned on cloud characteristics, enabling more precise recovery across diverse cloud types within a unified framework. In addition, we design a feature alignment loss that enforces semantic-level consistency between cloud-removed and reference features, guiding the model to retain landform-relevant information crucial for downstream analysis. Using scene classification as a representative downstream task, we conduct extensive experiments and evaluate performance with both image similarity and information loss metrics. The results demonstrate that the proposed method achieves strong performance across all evaluated metrics. More importantly, the improvements lie not only in image similarity but also in the preservation of task-relevant semantics, which enhances the effective quality of the output images for downstream applications rather than merely their visual fidelity. The project code will be released at https://github.com/wzy6055/DFCFormer.
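The abstract does not give implementation details, so the sketch below is only a rough illustration of the two ideas it names: a dynamic filter whose per-sample kernels are predicted from a pooled "cloud descriptor", and a feature alignment loss that compares cloud-removed and reference images through a frozen semantic encoder. The module and function names, the depthwise-filter formulation, the ResNet-18 backbone, the cosine-distance loss, and the 0.1 loss weight are all assumptions made here for illustration; they are not the DFCFormer implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights


class DynamicDepthwiseFilter(nn.Module):
    """Toy dynamic filter (assumed design): predict a per-sample depthwise
    kernel from a globally pooled descriptor of the input and apply it to
    the feature map via a grouped convolution."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.kernel_size = kernel_size
        self.kernel_head = nn.Linear(channels, channels * kernel_size * kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        k = self.kernel_size
        desc = x.mean(dim=(2, 3))                     # (B, C) "cloud descriptor"
        kernels = self.kernel_head(desc).view(b * c, 1, k, k)
        # Grouped conv applies each sample's own kernels to its own channels.
        out = F.conv2d(x.reshape(1, b * c, h, w), kernels,
                       padding=k // 2, groups=b * c)
        return out.view(b, c, h, w)


def feature_alignment_loss(pred: torch.Tensor,
                           ref: torch.Tensor,
                           encoder: nn.Module) -> torch.Tensor:
    """Penalize semantic disagreement between the cloud-removed output and
    the cloud-free reference, measured as mean cosine distance between
    feature maps from a frozen encoder (an assumed formulation)."""
    with torch.no_grad():
        ref_feat = encoder(ref)
    pred_feat = encoder(pred)
    cos = F.cosine_similarity(pred_feat, ref_feat, dim=1)  # (B, h, w)
    return (1.0 - cos).mean()


if __name__ == "__main__":
    # Frozen ImageNet ResNet-18 trunk as a stand-in semantic encoder.
    trunk = nn.Sequential(
        *list(resnet18(weights=ResNet18_Weights.DEFAULT).children())[:-2]
    ).eval()
    for p in trunk.parameters():
        p.requires_grad = False

    pred = torch.rand(2, 3, 256, 256)   # cloud-removed output (stand-in)
    ref = torch.rand(2, 3, 256, 256)    # cloud-free reference (stand-in)
    # Combine an ordinary pixel loss with the semantic alignment term;
    # the 0.1 weight is arbitrary.
    total = F.l1_loss(pred, ref) + 0.1 * feature_alignment_loss(pred, ref, trunk)
    print(float(total))

    feats = torch.rand(2, 64, 32, 32)
    print(DynamicDepthwiseFilter(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```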
