
Open Access
Frequency Domain Modality-invariant Feature Learning for Visible-infrared Person Re-Identification
Author(s): Yulin Li, Tianzhu Zhang, Yongdong Zhang
Publication year: 2024
Abstract: Visible-infrared person re-identification (VI-ReID) is challenging due to the significant cross-modality discrepancies between visible and infrared images. While existing methods have focused on designing complex network architectures or using metric learning constraints to learn modality-invariant features, they often overlook which specific component of the image causes the modality discrepancy problem. In this paper, we first reveal that the difference in the amplitude component of visible and infrared images is the primary factor that causes the modality discrepancy and further propose a novel Frequency Domain modality-invariant feature learning framework (FDMNet) to reduce modality discrepancy from the frequency domain perspective. Our framework introduces two novel modules, namely the Instance-Adaptive Amplitude Filter (IAF) module and the Phrase-Preserving Normalization (PPNorm) module, to enhance the modality-invariant amplitude component and suppress the modality-specific component at both the image- and feature-levels. Extensive experimental results on two standard benchmarks, SYSU-MM01 and RegDB, demonstrate the superior performance of our FDMNet against state-of-the-art methods.
Language(s): English
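
The abstract's central claim, that the amplitude spectrum of an image carries the visible/infrared modality gap while the phase spectrum carries the identity-relevant structure, can be illustrated with a short amplitude-swap experiment. The sketch below is a hypothetical PyTorch illustration of that frequency-domain decomposition, not the authors' FDMNet code; the function name swap_amplitude and the assumed (C, H, W) tensor layout are choices made for this example.

```python
# Hypothetical sketch (not the authors' released code): decompose images
# with the 2D FFT, treating the amplitude spectrum as the modality-specific
# component and the phase spectrum as the modality-invariant component.
import torch


def swap_amplitude(visible: torch.Tensor, infrared: torch.Tensor) -> torch.Tensor:
    """Rebuild the visible image's structure under the infrared amplitude.

    Both inputs are (C, H, W) float tensors of the same shape. If the
    amplitude component is indeed the primary source of modality
    discrepancy, the result should keep the visible image's content while
    taking on an infrared-like appearance.
    """
    fft_vis = torch.fft.fft2(visible)
    fft_ir = torch.fft.fft2(infrared)
    phase_vis = torch.angle(fft_vis)  # structure / identity cues
    amp_ir = torch.abs(fft_ir)        # modality-specific statistics
    # Recombine: infrared amplitude with visible phase.
    mixed = torch.polar(amp_ir, phase_vis)
    return torch.fft.ifft2(mixed).real
```

The paper's PPNorm module applies the analogous idea at the feature level, suppressing the modality-specific component while preserving the phase; the sketch above only illustrates the image-level decomposition that motivates it.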
