Unsupervised Visual and Textual Information Fusion in CBMIR Using Graph-Based Methods
Author(s) -
Julien Ah-Pine,
Gabriela Csurka,
Stéphane Clinchant
Publication year - 2015
Publication title -
ACM Transactions on Office Information Systems
Language(s) - English
Resource type - Journals
eISSN - 1558-1152
pISSN - 0734-2047
DOI - 10.1145/2699668
Subject(s) - computer science , information retrieval , graph , scalability , random walk , multimedia , theoretical computer science , database
Multimedia collections are more than ever growing in size and diversity. Effective multimedia retrieval systems are thus critical to access these datasets from the end-user perspective and in a scalable way. We are interested in repositories of image/text multimedia objects, and we study multimodal information fusion techniques in the context of content-based multimedia information retrieval. We focus on graph-based methods, which have been shown to provide state-of-the-art performance. We particularly examine two such methods: cross-media similarities and random walk based scores. From a theoretical viewpoint, we propose a unifying graph-based framework that encompasses the two aforementioned approaches. Our proposal allows us to highlight the core features one should consider when using a graph-based technique for the combination of visual and textual information. We compare cross-media and random walk based results using three different real-world datasets. From a practical standpoint, our extended empirical analyses allow us to provide insights and guidelines about the use of graph-based methods for multimodal information fusion in content-based multimedia information retrieval.
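The random walk based scoring mentioned in the abstract can be illustrated with a minimal sketch: fuse a visual and a textual similarity graph, then rank objects by a random walk with restart from the query node. The convex-combination fusion rule, the parameter values, and all matrices below are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def random_walk_scores(S_visual, S_textual, query, alpha=0.5,
                       restart=0.3, n_iter=50):
    """Score objects by a random walk with restart on a fused
    similarity graph (illustrative sketch, not the paper's method)."""
    # Convex combination of the two modality graphs (an assumption).
    S = alpha * S_visual + (1 - alpha) * S_textual
    # Row-normalize to obtain a stochastic transition matrix.
    P = S / S.sum(axis=1, keepdims=True)
    # Restart distribution concentrated on the query node.
    r = np.zeros(S.shape[0])
    r[query] = 1.0
    # Power iteration toward the stationary score vector.
    scores = r.copy()
    for _ in range(n_iter):
        scores = (1 - restart) * scores @ P + restart * r
    return scores

# Toy example: 4 image/text objects with made-up symmetric similarities.
rng = np.random.default_rng(0)
Sv = rng.random((4, 4)); Sv = (Sv + Sv.T) / 2
St = rng.random((4, 4)); St = (St + St.T) / 2
scores = random_walk_scores(Sv, St, query=0)
ranking = np.argsort(-scores)  # objects ranked by fused relevance
```

Because the transition matrix is row-stochastic and the restart vector sums to one, the scores remain a probability distribution at every iteration, which makes rankings from different queries directly comparable.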