Open Access
Semantic combination of textual and visual information in multimedia retrieval
Author(s) -
Stéphane Clinchant,
Julien Ah-Pine,
Gabriela Csurka
Publication year - 2011
Publication title -
hal (le centre pour la communication scientifique directe)
Language(s) - English
Resource type - Conference proceedings
DOI - 10.1145/1991996.1992040
Subject(s) - computer science , information retrieval , image retrieval , multimedia information retrieval , semantics (computer science) , image (mathematics) , artificial intelligence
Audience - International
Abstract - The goal of this paper is to introduce a set of techniques we call semantic combination in order to efficiently fuse text and image retrieval systems in the context of multimedia information access. These techniques emerge from the observation that image and textual queries are expressed at different semantic levels and that a single image query is often ambiguous. Overall, the semantic combination techniques overcome a conceptual barrier rather than a technical one: these methods can be seen as a combination of late fusion and image reranking. Albeit simple, this approach has not been used yet. We assess the proposed techniques against late and cross-media fusion using 4 different ImageCLEF datasets. Compared to late fusion, performance increases significantly on two datasets and remains similar on the other two.
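The abstract describes semantic combination as late fusion restricted by image reranking: the textual query, being more semantic, filters the candidate documents, while the image score contributes to the final ranking. A minimal sketch of this reading is below; the weighted-sum fusion, the score dictionaries, and all document IDs and weights are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: late fusion vs. a semantic-combination-style fusion
# (late fusion applied only to documents retrieved by the text system).
# All scores, IDs, and the alpha weight are illustrative assumptions.

def late_fusion(text_scores, image_scores, alpha=0.5):
    """Classic late fusion: weighted sum over the union of documents."""
    docs = set(text_scores) | set(image_scores)
    return {d: alpha * text_scores.get(d, 0.0)
               + (1 - alpha) * image_scores.get(d, 0.0)
            for d in docs}

def semantic_combination(text_scores, image_scores, alpha=0.5):
    """Fuse scores only for documents the text system retrieved,
    so the text query filters and the image score reranks."""
    return {d: alpha * s + (1 - alpha) * image_scores.get(d, 0.0)
            for d, s in text_scores.items()}

text = {"doc1": 0.9, "doc2": 0.4}
image = {"doc2": 0.8, "doc3": 0.95}  # doc3: visually similar, off-topic

fused = semantic_combination(text, image)
ranking = sorted(fused, key=fused.get, reverse=True)
# doc3 is kept by plain late fusion but filtered out here
```

In this toy example the ambiguous, visually similar `doc3` survives plain late fusion but is excluded by semantic combination, which illustrates why the textual query acts as the semantic filter.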
