Open Access
Image Captioning with Sentiment Terms via Weakly-Supervised Sentiment Dataset
Author(s) - Andrew Shin, Yoshitaka Ushiku, Tatsuya Harada
Publication year - 2016
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.30.53
Subject(s) - closed captioning, computer science, artificial intelligence, sentiment analysis, image (mathematics), natural language processing, pattern recognition (psychology)
The image captioning task has become a highly competitive research area thanks to the successful application of convolutional and recurrent neural networks, especially with the advent of the long short-term memory (LSTM) architecture. However, its primary focus has been the factual description of images: the objects, movements, and their relations. While such a focus has demonstrated competence, describing images along with non-factual elements, namely the sentiments of the images expressed via adjectives, has mostly been neglected. We address this issue by fine-tuning an additional convolutional neural network devoted solely to sentiment, where the sentiment dataset is built through a weakly-supervised, data-driven, multi-label approach. Our experimental results show that our method can generate image captions with sentiment terms that are more compatible with the images than those produced by relying solely on features devoted to object classification, while still preserving the semantics.
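To make the pipeline described in the abstract concrete, below is a minimal sketch, not the authors' released code, of the architecture it outlines: a frozen object-classification CNN, a second CNN fine-tuned with a multi-label sentiment objective, and an LSTM decoder conditioned on both feature streams. All names and choices here (the class SentimentCaptioner, the num_sentiments parameter, the sentiment_loss helper, the ResNet-18 backbones, and the concatenation-based feature fusion) are illustrative assumptions, since the paper does not specify them in the abstract.

```python
import torch
import torch.nn as nn
from torchvision import models

class SentimentCaptioner(nn.Module):
    """Hypothetical sketch: object CNN + sentiment CNN feeding an LSTM decoder."""

    def __init__(self, vocab_size, num_sentiments, embed_dim=512, hidden_dim=512):
        super().__init__()
        # Frozen CNN trained for object classification supplies factual features.
        self.obj_cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.obj_cnn.fc = nn.Identity()  # expose the 512-d feature vector
        for p in self.obj_cnn.parameters():
            p.requires_grad = False
        # Additional CNN fine-tuned solely for sentiment, with a multi-label
        # head: one logit per sentiment term in the weakly-supervised dataset.
        self.sent_cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.sent_cnn.fc = nn.Linear(512, num_sentiments)
        # Fuse both streams into the initial "image token" fed to the LSTM.
        self.fuse = nn.Linear(512 + num_sentiments, embed_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def sentiment_loss(self, images, labels):
        # Weakly-supervised multi-label training: binary cross-entropy against
        # automatically harvested sentiment labels, one target per term.
        return nn.functional.binary_cross_entropy_with_logits(
            self.sent_cnn(images), labels)

    def forward(self, images, captions):
        obj = self.obj_cnn(images)                    # (B, 512) object features
        sent = torch.sigmoid(self.sent_cnn(images))   # (B, num_sentiments)
        img_token = self.fuse(torch.cat([obj, sent], dim=1)).unsqueeze(1)
        seq = torch.cat([img_token, self.embed(captions)], dim=1)
        hidden, _ = self.lstm(seq)
        return self.out(hidden)                       # next-word logits
```

In such a setup, sentiment_loss would be minimized on the weakly-labeled sentiment images before (or alongside) the usual cross-entropy caption loss on the decoder outputs, so that the sentiment CNN contributes adjective-relevant evidence without disturbing the frozen object features.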
