
STCMH with minimal semantic loss
Author(s) - Du Jianing, Chen Zhikui, Zhong Fangming, Qiu Xiru
Publication year - 2019
Publication title - IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2018.5034
Subject(s) - discriminative model , hash function , computer science , artificial intelligence , binary number , binary code , pattern recognition (psychology) , modalities , modal , modality (human–computer interaction) , theoretical computer science , machine learning , mathematics , arithmetic , social science , chemistry , computer security , sociology , polymer chemistry
Cross‐modal hashing (CMH) has received widespread attention due to its high retrieval efficiency and plays an extremely important role in cross‐modal retrieval. Recently, many CMH methods have been proposed to establish semantic connections between different modalities. However, most of these methods use only a simple quantisation strategy, resulting in large quantisation errors and inferior hash codes. To address this issue, the authors propose a novel self‐taught CMH (STCMH) method to minimise the semantic encoding loss. In particular, common semantic representations across the different modalities are first learnt via collective matrix factorisation. Then, a quantisation procedure based on orthogonal transformation is integrated to encode the semantic representations into discriminative binary codes. Moreover, similarity preservation is imposed to further boost the discriminative power. Finally, hash function learning is formulated as a binary classification problem under a self‐taught scheme. Experimental results on three public datasets demonstrate that STCMH significantly outperforms most state‐of‐the‐art CMH methods.
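To make the pipeline described in the abstract concrete, the sketch below illustrates two of its steps under simplifying assumptions: a collective matrix factorisation that learns a shared latent representation for two modalities, followed by an ITQ-style orthogonal rotation used as the quantisation step. This is not the authors' exact formulation; the similarity-preservation term and the self-taught hash-function learning stage are omitted, and all names (collective_mf, orthogonal_quantise, X1, X2, lam, mu) are hypothetical and chosen for illustration only.

```python
# Illustrative sketch only; not the STCMH optimisation from the paper.
import numpy as np

def collective_mf(X1, X2, k, n_iter=50, lam=0.5, mu=1e-2, seed=0):
    """Alternating least squares for X1 ~ V @ U1.T and X2 ~ V @ U2.T with a
    shared latent representation V (n x k). X1, X2 are n x d1 and n x d2."""
    rng = np.random.default_rng(seed)
    n = X1.shape[0]
    V = rng.standard_normal((n, k))
    I = np.eye(k)
    for _ in range(n_iter):
        # Modality-specific bases with V fixed (ridge-regularised least squares).
        U1 = X1.T @ V @ np.linalg.inv(V.T @ V + mu * I)
        U2 = X2.T @ V @ np.linalg.inv(V.T @ V + mu * I)
        # Shared representation with U1, U2 fixed; lam weights the two modalities.
        A = lam * (U1.T @ U1) + (1 - lam) * (U2.T @ U2) + mu * I
        V = (lam * (X1 @ U1) + (1 - lam) * (X2 @ U2)) @ np.linalg.inv(A)
    return V, U1, U2

def orthogonal_quantise(V, n_iter=30, seed=0):
    """ITQ-style alternation: minimise ||B - V @ R||_F with B in {-1, +1}
    and R orthogonal (orthogonal Procrustes update for R)."""
    rng = np.random.default_rng(seed)
    k = V.shape[1]
    R, _ = np.linalg.qr(rng.standard_normal((k, k)))
    for _ in range(n_iter):
        B = np.sign(V @ R)                  # fix R, update the binary codes
        U, _, Wt = np.linalg.svd(V.T @ B)   # fix B, update the rotation
        R = U @ Wt
    return np.sign(V @ R), R

# Example usage with random stand-in features for two modalities, 32-bit codes.
X_img, X_txt = np.random.randn(500, 128), np.random.randn(500, 64)
V, _, _ = collective_mf(X_img, X_txt, k=32)
codes, R = orthogonal_quantise(V)           # codes: 500 x 32 matrix of +/-1
```

The rotation step is the usual argument for orthogonal-transformation quantisation: rotating the real-valued representation before taking signs can reduce the quantisation error without changing pairwise Euclidean structure, which is consistent with the abstract's claim of producing more discriminative binary codes.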