
Multimodal deep network learning‐based image annotation
Author(s) -
Zhu Songhao,
Li Xiangxiang,
Shen Shuhan
Publication year - 2015
Publication title -
Electronics Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.375
H-Index - 146
eISSN - 1350-911X
pISSN - 0013-5194
DOI - 10.1049/el.2015.0258
Subject(s) - artificial intelligence , deep learning , computer science , convolutional neural network , modality (human–computer interaction) , modalities , artificial neural network , machine learning , annotation , pattern recognition (psychology)
Multilabel image annotation is one of the most important open problems in the computer vision field. Unlike existing works, which usually use conventional visual features to annotate images, features based on deep learning have shown the potential to achieve outstanding performance. A multimodal deep learning framework is proposed that aims to optimally integrate multiple deep neural networks pretrained with convolutional neural networks. In particular, the proposed framework explores a unified two‐stage learning scheme that consists of (i) learning to fine‐tune the parameters of the deep neural network with respect to each individual modality and (ii) learning to find the optimal combination of the diverse modalities simultaneously in a coherent process. Experiments were conducted on a variety of public datasets.
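The two‐stage scheme described in the abstract can be sketched in a toy form: stage (i) is stood in for by fixed per‐label logits from two already fine‐tuned modality networks, and stage (ii) learns per‐modality fusion weights by gradient descent on a multilabel binary cross‐entropy. All names, the toy data, and the linear late‐fusion rule are illustrative assumptions, not the letter's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_labels = 5

# Stage (i) stand-in: per-label logits that two separately fine-tuned
# modality networks might emit for one image (toy values, not real CNN outputs).
logits = rng.normal(size=(2, n_labels))  # row 0: visual modality, row 1: textual modality
y = np.array([1.0, 0.0, 1.0, 1.0, 0.0])  # ground-truth multilabel vector

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p):
    # Multilabel binary cross-entropy of fused predictions against y.
    return -np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

# Stage (ii) stand-in: learn fusion weights (one scalar per modality) by
# gradient descent; the fused logit is a weighted sum of modality logits.
w = np.array([0.5, 0.5])  # initial equal weighting
lr = 0.1
initial_loss = bce(sigmoid(w @ logits))
for _ in range(500):
    p = sigmoid(w @ logits)       # fused per-label probabilities
    grad = (p - y) @ logits.T     # dBCE/dw for the linear fusion
    w -= lr * grad

final_loss = bce(sigmoid(w @ logits))
print(w, initial_loss, final_loss)
```

Because the fused model is linear in the weights, this toy objective is convex in `w`, so plain gradient descent suffices; the letter's joint optimisation over full network parameters would of course be far richer.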