Open Access
HPILN: a feature learning framework for cross‐modality person re‐identification
Author(s) -
Zhao YunBo,
Lin JianWu,
Xuan Qi,
Xi Xugang
Publication year - 2019
Publication title -
IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2019.0699
Subject(s) - modality (human–computer interaction) , artificial intelligence , computer science , benchmark (surveying) , feature (linguistics) , identification (biology) , rgb color model , modalities , pattern recognition (psychology) , computer vision , brightness , position (finance) , machine learning , linguistics , philosophy , botany , biology , social science , physics , geodesy , finance , sociology , optics , economics , geography
Most video surveillance systems use both RGB and infrared cameras, making it a vital technique to re‐identify a person across the RGB and infrared modalities. This task is challenging due to both the cross‐modality variations caused by the heterogeneous RGB and infrared images, and the intra‐modality variations caused by differing human poses, camera positions, lighting conditions, etc. To meet these challenges, a novel feature learning framework, the hard pentaplet and identity loss network (HPILN), is proposed. In this framework, existing single‐modality re‐identification models are modified to fit the cross‐modality scenario, after which a specifically designed hard pentaplet loss and an identity loss are used to increase the accuracy of the modified cross‐modality re‐identification models. Extensive experiments on the SYSU‐MM01 benchmark dataset show that the authors' method outperforms all existing ones in terms of the cumulative match characteristic curve and mean average precision.
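The abstract describes a hard pentaplet loss (an anchor plus four hard-mined samples: the hardest intra-modality positive/negative and the hardest cross-modality positive/negative) combined with an identity loss. Below is a minimal, hedged numpy sketch of this idea; the function names, margin values, and the exact hard-mining scheme are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def pairwise_dist(a, b):
    # Euclidean distance matrix between rows of a and rows of b.
    return np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))

def hard_pentaplet_loss(feats, labels, modalities, margin_cross=0.3, margin_intra=0.3):
    """Sketch of a pentaplet-style loss (assumed formulation, not the paper's
    exact one): for each anchor, mine the hardest cross-modality positive and
    negative, and the hardest intra-modality positive and negative, then sum
    two hinge (triplet-style) terms over the batch."""
    d = pairwise_dist(feats, feats)
    n = len(labels)
    idx = np.arange(n)
    loss = 0.0
    for i in range(n):
        same_id = labels == labels[i]
        same_mod = modalities == modalities[i]
        # Cross-modality hard mining: hardest (farthest) positive vs
        # hardest (closest) negative from the other modality.
        cross_pos = d[i][same_id & ~same_mod]
        cross_neg = d[i][~same_id & ~same_mod]
        if len(cross_pos) and len(cross_neg):
            loss += max(0.0, cross_pos.max() - cross_neg.min() + margin_cross)
        # Intra-modality hard mining within the anchor's own modality.
        intra_pos = d[i][same_id & same_mod & (idx != i)]
        intra_neg = d[i][~same_id & same_mod]
        if len(intra_pos) and len(intra_neg):
            loss += max(0.0, intra_pos.max() - intra_neg.min() + margin_intra)
    return loss / n

def identity_loss(logits, labels):
    # Standard softmax cross-entropy over identity classes, which is what
    # "identity loss" conventionally denotes in re-identification work.
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

# Toy batch: 2 identities, 2 modalities (0 = RGB, 1 = infrared).
feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = np.array([0, 1, 0, 1])
mods = np.array([0, 0, 1, 1])
total = hard_pentaplet_loss(feats, labels, mods) + identity_loss(feats, labels)
```

In training, the two terms would be summed (possibly with a weighting factor, which the abstract does not specify) and minimized jointly over the feature extractor.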
