Open Access
Trans Media Relevance Feedback for Image Autoannotation
Author(s) -
Thomas Mensink,
Jakob Verbeek,
Gabriela Csurka
Publication year - 2010
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.24.20
Subject(s) - computer science, automatic image annotation, relevance feedback, image retrieval, information retrieval, artificial intelligence, visual words, pattern recognition
Automatic image annotation is an important tool for keyword-based image retrieval, providing a textual index for non-annotated images. Many image auto-annotation methods are based on visual similarity between the images to be annotated and images in a training corpus: the annotations of the most similar training images are transferred to the image to be annotated. In this paper we also consider similarities among the training images, both visual and textual, to derive pseudo-relevance models as well as cross-media relevance models. We extend a recent state-of-the-art image annotation model to incorporate this information. On two widely used datasets (COREL and IAPR) we show experimentally that the pseudo-relevance models improve the annotation accuracy.
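The baseline annotation-transfer idea described in the abstract can be sketched as follows. This is an illustrative nearest-neighbour example, not the paper's actual model (which additionally exploits pseudo-relevance and cross-media relevance among training images); the feature vectors, tags, and function names are hypothetical.

```python
import math
from collections import defaultdict

def transfer_annotations(query, train_images, k=3, n_tags=2):
    """Annotate `query` by transferring the tags of its k visually
    most similar training images (illustrative sketch only)."""
    # Euclidean distance between visual feature vectors.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Rank training images by visual similarity to the query.
    neighbours = sorted(train_images, key=lambda im: dist(query, im["feat"]))[:k]

    # Accumulate tag scores, weighting each neighbour by inverse distance.
    scores = defaultdict(float)
    for im in neighbours:
        w = 1.0 / (1e-6 + dist(query, im["feat"]))
        for tag in im["tags"]:
            scores[tag] += w

    # Return the n_tags highest-scoring keywords.
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:n_tags]]

# Toy training corpus (hypothetical 2-D features and tags).
train = [
    {"feat": [0.0, 0.0], "tags": ["sky", "sea"]},
    {"feat": [0.1, 0.0], "tags": ["sky", "beach"]},
    {"feat": [5.0, 5.0], "tags": ["car"]},
]
print(transfer_annotations([0.05, 0.0], train, k=2))
```

In this toy example the two nearest neighbours both carry "sky", so that tag accumulates the largest score and is proposed first; the paper's contribution is to refine such scores using relevance models computed over the training set itself.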
