Open Access
RELEVANCE FEEDBACK AS AN INDICATOR TO SELECT THE BEST SEARCH ENGINE - Evaluation on TREC Data
Author(s) - Gilles Hubert, Josiane Mothe
Publication year - 2007
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5220/0002361301840189
Subject(s) - relevance feedback , computer science , relevance (law) , search engine , information retrieval , metasearch engine , data mining , artificial intelligence , web search query , image retrieval , political science , law , image (mathematics)
This paper explores information retrieval system variability and takes advantage of the fact that two systems can retrieve different documents for a given query. More precisely, our approach is based on data fusion (fusion of system results) while taking into account the local performance of each system. Our method considers the relevance of the very first documents retrieved by different systems and, from this information, selects the system that will perform the retrieval for the user. We found that this principle improves performance by about 9%. Evaluation is based on different years of the TREC evaluation program (TREC 3, 5, 6 and 7), TREC ad-hoc tracks. It considers the two and the five best systems that participated in TREC in the corresponding year.
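
To illustrate the selection principle described above, here is a minimal sketch: given each system's ranked results for a query and relevance judgments on the first few documents, the query is routed to the system that performs best locally. The function names and the use of precision over the top k documents as the selection criterion are assumptions for illustration, not the authors' exact method.

# Hypothetical sketch: route a query to the system whose first k retrieved
# documents contain the most judged-relevant ones (precision@k is an assumed
# criterion, not necessarily the measure used in the paper).

def precision_at_k(ranked_doc_ids, relevant_doc_ids, k=5):
    """Fraction of the first k retrieved documents that are judged relevant."""
    top_k = ranked_doc_ids[:k]
    if not top_k:
        return 0.0
    return sum(1 for doc in top_k if doc in relevant_doc_ids) / len(top_k)

def select_system(results_by_system, relevant_doc_ids, k=5):
    """Pick the system with the best local performance on the first k documents."""
    return max(
        results_by_system,
        key=lambda system: precision_at_k(results_by_system[system], relevant_doc_ids, k),
    )

# Toy example with two systems and one query (hypothetical document ids):
results = {
    "system_A": ["d3", "d7", "d1", "d9", "d2"],
    "system_B": ["d5", "d3", "d8", "d4", "d6"],
}
judged_relevant = {"d1", "d2", "d3"}
print(select_system(results, judged_relevant, k=5))  # -> "system_A"

In this toy run, system_A places three judged-relevant documents in its top five against one for system_B, so the query would be handled by system_A.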
