Cross‐modal retrieval with dual multi‐angle self‐attention
Author(s) -
Li Wenjie,
Zheng Yi,
Zhang Yuejie,
Feng Rui,
Zhang Tao,
Fan Weiguo
Publication year - 2021
Publication title -
Journal of the Association for Information Science and Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.903
H-Index - 145
eISSN - 2330-1643
pISSN - 2330-1635
DOI - 10.1002/asi.24373
Subject(s) - computer science , artificial intelligence , information retrieval , natural language processing , pattern recognition , cross‐modal retrieval , self‐attention , embedding , similarity , modality , benchmark , image
In recent years, cross‐modal retrieval has been a popular research topic in both computer vision and natural language processing. Because of the heterogeneous properties of different modalities, a large semantic gap separates them, and establishing correlations across modality data remains highly challenging. In this work, we propose a novel end‐to‐end framework named Dual Multi‐Angle Self‐Attention (DMASA) for cross‐modal retrieval. Multiple self‐attention mechanisms are applied to extract fine‐grained features for both images and texts from different angles. We then integrate coarse‐grained and fine‐grained features into a multimodal embedding space, in which the similarity between images and texts can be compared directly. Moreover, we propose a multistage training strategy in which each stage provides a good initialization for the next, improving the performance of the framework. Experiments on three benchmark datasets, Flickr8k, Flickr30k, and MSCOCO, show very promising results compared with state‐of‐the‐art methods.
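
The abstract outlines a common pipeline: self‐attention extracts fine‐grained image and text features, which are fused with coarse‐grained features into a shared embedding space where similarity can be scored directly. Below is a minimal PyTorch sketch of that general idea; all module names, feature dimensions, and the mean‐pooling fusion are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of a joint image-text embedding with self-attention and
# cosine-similarity scoring. Module names, dimensions, and the coarse/fine
# fusion scheme are illustrative assumptions, not the DMASA implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttentionEncoder(nn.Module):
    """Encodes a sequence of region/word features into one embedding."""

    def __init__(self, feat_dim: int, embed_dim: int, num_heads: int = 4):
        super().__init__()
        self.proj = nn.Linear(feat_dim, embed_dim)
        # Multi-head self-attention provides the attended, fine-grained view.
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        x = self.proj(feats)               # (B, N, D) projected features
        fine, _ = self.attn(x, x, x)       # fine-grained attended features
        coarse = x.mean(dim=1)             # coarse global feature
        fused = coarse + fine.mean(dim=1)  # integrate coarse + fine (assumed fusion)
        return F.normalize(fused, dim=-1)  # unit norm so dot product = cosine


# One encoder per modality maps both into the shared embedding space.
image_encoder = SelfAttentionEncoder(feat_dim=2048, embed_dim=512)  # e.g. CNN region features
text_encoder = SelfAttentionEncoder(feat_dim=300, embed_dim=512)    # e.g. word embeddings

image_regions = torch.randn(8, 36, 2048)  # batch of 8 images, 36 regions each
word_vectors = torch.randn(8, 20, 300)    # batch of 8 sentences, 20 words each

img_emb = image_encoder(image_regions)    # (8, 512)
txt_emb = text_encoder(word_vectors)      # (8, 512)

# Cosine-similarity matrix: entry (i, j) scores image i against text j.
similarity = img_emb @ txt_emb.t()
print(similarity.shape)                   # torch.Size([8, 8])
```

In such a shared space, image‐to‐text and text‐to‐image retrieval both reduce to ranking the rows or columns of the similarity matrix.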
