
Deep Attention Network for RGB-Infrared Cross-Modality Person Re-Identification
Author(s) -
Yang Li,
Huahu Xu
Publication year - 2020
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1642/1/012015
Subject(s) - computer science , artificial intelligence , rgb color model , feature (linguistics) , pattern recognition (psychology) , modality (human–computer interaction) , identification (biology) , backbone network , computer vision , task (project management) , ranking (information retrieval) , deep learning , process (computing) , telecommunications , engineering , philosophy , linguistics , botany , systems engineering , biology , operating system
RGB-Infrared cross-modality person re-identification is an important task for 24-hour, full-time intelligent video surveillance. The task is challenging because of cross-modal heterogeneity and intra-modal variation. This paper proposes a novel deep attention network that handles these challenges by increasing the discriminability of the learned person features. The method comprises three elements: (1) a dual-path CNN that extracts feature maps from RGB images and infrared images respectively; (2) a dual-attention mechanism combining spatial attention and channel attention to enhance the discriminability of the extracted features; and (3) a joint loss function combining a bi-directional ranking loss with an identity loss to constrain the training process and further increase accuracy. Extensive experiments on two public datasets demonstrate the effectiveness of the proposed method, which achieves higher performance than state-of-the-art methods.
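The three elements of the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' exact formulation: the gating functions, the margin value, and all function names (`dual_attention`, `joint_loss`, etc.) are illustrative assumptions, using a common squeeze-style channel gate, a cross-channel-mean spatial gate, and a triplet-style hinge for the bi-directional ranking term.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """Gate each channel by a sigmoid of its global average (assumed squeeze-style gate)."""
    w = sigmoid(feat.mean(axis=(1, 2)))        # (C,)
    return feat * w[:, None, None]

def spatial_attention(feat):
    """Gate each spatial location by a sigmoid of its cross-channel mean."""
    m = sigmoid(feat.mean(axis=0))             # (H, W)
    return feat * m[None, :, :]

def dual_attention(feat):
    """Combine channel and spatial attention on a (C, H, W) feature map."""
    return spatial_attention(channel_attention(feat))

def ranking_loss(anchor, positive, negative, margin=0.3):
    """Triplet-style hinge: same-identity pairs should be closer than cross-identity ones."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, margin + d_ap - d_an)

def identity_loss(logits, label):
    """Softmax cross-entropy over identity classes."""
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label])

def joint_loss(rgb_feat, ir_pos, ir_neg, logits, label, lam=1.0):
    """Bi-directional ranking (RGB->IR and IR->RGB anchors) plus identity loss."""
    rank = (ranking_loss(rgb_feat, ir_pos, ir_neg)
            + ranking_loss(ir_pos, rgb_feat, ir_neg))
    return rank + lam * identity_loss(logits, label)
```

In the paper's setup each modality would pass through its own CNN path before `dual_attention`; here the feature maps are taken as given, and the weight `lam` balancing the two loss terms is an assumed hyperparameter.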