Open Access
LSTM for Image Annotation with Relative Visual Importance
Author(s) -
Geng Yan,
Yang Wang,
Zicheng Liao
Publication year - 2016
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.30.78
Subject(s) - computer science , annotation , artificial intelligence , automatic image annotation , computer vision , image (mathematics) , image retrieval
We consider the problem of image annotation that takes into account the relative visual importance of tags. Previous work usually treats the tags associated with an image as an unordered set of object names. In contrast, we exploit implicit cues about the relative importance of the objects mentioned by the tags; for example, important objects tend to be mentioned first in a list of tags. We propose a recurrent neural network with long short-term memory (LSTM) to model this. Given an image, our model produces a ranked list of tags in which tags for objects of higher visual importance appear earlier. Experimental results demonstrate that our model achieves better performance on several benchmark datasets.
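The abstract describes conditioning an LSTM on an image and decoding tags one at a time, so that earlier emissions correspond to more visually important objects. The paper's trained model is not reproduced here; the following is a minimal sketch under assumed details (a toy tag vocabulary, randomly initialized weights standing in for trained ones, the image feature initializing the hidden state, and greedy decoding with a stop token):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tag vocabulary; index 0 is an end-of-sequence token.
TAGS = ["<eos>", "person", "dog", "car", "tree", "sky"]
V, H, D = len(TAGS), 16, 32  # vocab size, hidden size, image-feature size

# Randomly initialized parameters stand in for trained weights.
W = rng.normal(0, 0.1, (4 * H, V + H))   # LSTM gate weights over [input; hidden]
b = np.zeros(4 * H)
W_img = rng.normal(0, 0.1, (H, D))       # projects the image feature to h0
W_out = rng.normal(0, 0.1, (V, H))       # hidden state -> tag scores

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    """One LSTM step: x is a one-hot tag vector, (h, c) the recurrent state."""
    z = W @ np.concatenate([x, h]) + b
    i = sigmoid(z[:H])          # input gate
    f = sigmoid(z[H:2 * H])     # forget gate
    o = sigmoid(z[2 * H:3 * H]) # output gate
    g = np.tanh(z[3 * H:])      # candidate cell update
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def annotate(image_feat, max_len=5):
    """Greedily emit a ranked tag list; earlier tags = higher importance."""
    h = np.tanh(W_img @ image_feat)  # image feature initializes the state
    c = np.zeros(H)
    x = np.zeros(V)                  # all-zeros input stands in for a start token
    ranked, used = [], set()
    for _ in range(max_len):
        h, c = lstm_step(x, h, c)
        scores = W_out @ h
        scores[list(used)] = -np.inf  # forbid re-emitting a tag
        t = int(np.argmax(scores))
        if t == 0:                    # <eos>: stop decoding
            break
        ranked.append(TAGS[t])
        used.add(t)
        x = np.zeros(V)
        x[t] = 1.0                    # feed the emitted tag back in
    return ranked

tags = annotate(rng.normal(size=D))
print(tags)
```

With trained weights, the position of each tag in `ranked` would encode its relative visual importance, which is the ordering signal the paper exploits at training time.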
