Open Access
Deep Covariance Alignment for Domain Adaptive Remote Sensing Image Segmentation
Author(s): Linshan Wu, Ming Lu, Leyuan Fang
Publication year: 2024
Unsupervised domain adaptive (UDA) image segmentation has recently gained increasing attention, aiming to improve the generalization capability for transferring knowledge from the source domain to the target domain. However, in high spatial resolution remote sensing images (RSIs), the same category from different domains (e.g., urban and rural) can appear totally different, with extremely inconsistent distributions, which heavily limits UDA accuracy. To address this problem, in this paper we propose a novel Deep Covariance Alignment (DCA) model for UDA RSI segmentation. The DCA explicitly aligns category features to learn shared domain-invariant discriminative feature representations, which enhances the model's generalization ability. Specifically, a Category Feature Pooling (CFP) module is first employed to extract category features by combining the coarse outputs and the deep features. Then, we leverage a novel Covariance Regularization (CR) to enforce intra-category features to be closer and inter-category features to be further separated. Compared with existing category alignment methods, our CR regularizes the correlation between different dimensions of the features and thus performs more robustly when dealing with divergent category features of imbalanced and inconsistent distributions. Finally, we propose a stagewise procedure to train the DCA in order to alleviate error accumulation. Experiments on both the Rural-to-Urban and Urban-to-Rural scenarios of the LoveDA dataset demonstrate the superiority of our proposed DCA over other state-of-the-art UDA segmentation methods. Code is available at https://github.com/Luffy03/DCA.
Language(s): English
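The abstract names two concrete components: CFP, which pools deep features into one vector per category using the coarse segmentation outputs as soft masks, and CR, which pushes intra-category correlations toward 1 and inter-category correlations toward 0. The PyTorch sketch below illustrates one plausible reading of these two steps; the function names, tensor shapes, and the softmax/cosine-correlation choices are assumptions, not the authors' implementation (see https://github.com/Luffy03/DCA for the official code).

```python
# Hypothetical sketch of Category Feature Pooling (CFP) and a
# covariance/correlation-based alignment loss, as described in the abstract.
# All names and shapes are assumptions; the official DCA code may differ.

import torch
import torch.nn.functional as F


def category_feature_pooling(features, logits):
    """Pool deep features into one vector per category, weighting each
    pixel by the coarse per-pixel category scores (masked average pooling).

    features: (B, C, H, W) deep feature maps
    logits:   (B, K, H, W) coarse per-pixel category scores
    returns:  (K, C) one pooled feature vector per category
    """
    b, c, h, w = features.shape
    k = logits.shape[1]
    probs = torch.softmax(logits, dim=1)           # soft category masks
    feats = features.reshape(b, c, h * w)          # (B, C, HW)
    masks = probs.reshape(b, k, h * w)             # (B, K, HW)
    # Weighted sum of pixel features per category, normalized by mask mass.
    pooled = torch.einsum('bkn,bcn->kc', masks, feats)
    weight = masks.sum(dim=(0, 2)).clamp(min=1e-6).unsqueeze(1)  # (K, 1)
    return pooled / weight


def covariance_regularization(src_feats, tgt_feats):
    """Align category features across domains through their correlation
    matrix: same-category pairs are pushed toward correlation 1,
    different-category pairs toward 0.

    src_feats, tgt_feats: (K, C) pooled category features per domain.
    """
    src = F.normalize(src_feats, dim=1)            # unit-norm category vectors
    tgt = F.normalize(tgt_feats, dim=1)
    corr = src @ tgt.t()                           # (K, K) correlation matrix
    target = torch.eye(corr.shape[0], device=corr.device)
    return F.mse_loss(corr, target)
```

Under this reading, driving the K x K correlation matrix toward the identity simultaneously tightens intra-category features (diagonal toward 1) and separates inter-category features (off-diagonal toward 0), operating on whole normalized feature vectors rather than per-element distances, which is consistent with the abstract's claim of robustness to imbalanced and inconsistent category distributions.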
