Open Access
Evaluating multi-query sessions
Author(s) - Evangelos Kanoulas, Ben Carterette, Paul Clough, Mark Sanderson
Publication year - 2011
Publication title - Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '11)
Language(s) - English
Resource type - Conference proceedings
DOI - 10.1145/2009916.2010056
Subject(s) - Computer science, session (web analytics), query expansion, information retrieval, query optimization, query language, search engine, web search query, web query classification, data mining, world wide web
The standard system-based evaluation paradigm has focused on assessing the performance of retrieval systems in serving the best results for a single query. Real users, however, often begin an interaction with a search engine with a sufficiently under-specified query that they will need to reformulate before they find what they are looking for. In this work we consider the problem of evaluating retrieval systems over test collections of multi-query sessions. We propose two families of measures: a model-free family that makes no assumption about the user's behavior over a session, and a model-based family with a simple model of user interactions over the session. In both cases we generalize traditional evaluation metrics such as average precision to multi-query session evaluation. We demonstrate the behavior of the proposed metrics by using the new TREC 2010 Session track collection and simulations over the TREC-9 Query track collection.
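To make the two families concrete, here is an illustrative sketch (not the paper's exact formulations) of how traditional per-query measures can be aggregated over a multi-query session. `session_ap` follows the model-free idea of combining per-query average precision without assuming which reformulation the user reaches; `session_dcg` follows the model-based idea, in the spirit of Järvelin et al.'s sDCG, of discounting gains both by rank within a query and by the query's position in the session. The discount bases `bq` and `b` and the binary-relevance simplification are assumptions for illustration.

```python
import math

def average_precision(rels):
    """AP for one ranked list; rels[i] is 1 if the document at rank i+1 is relevant."""
    hits, total = 0, 0.0
    for i, r in enumerate(rels):
        if r:
            hits += 1
            total += hits / (i + 1)
    return total / sum(rels) if any(rels) else 0.0

def session_ap(session, weights=None):
    """Model-free style aggregate: a weighted mean of per-query AP over the
    session's ranked lists. Uniform weights encode no assumption about how
    far into the session the user goes."""
    if weights is None:
        weights = [1.0 / len(session)] * len(session)
    return sum(w * average_precision(rels) for w, rels in zip(weights, session))

def session_dcg(session, bq=4, b=2):
    """Model-based style aggregate (simplified, sDCG-like): each query's
    gains are rank-discounted, and whole queries are further discounted by
    their reformulation position, so later queries contribute less."""
    total = 0.0
    for j, rels in enumerate(session, start=1):           # j = query position
        q_discount = 1.0 / (1.0 + math.log(j, bq))
        for i, r in enumerate(rels, start=1):             # i = rank in list
            total += q_discount * r / (1.0 + math.log(i, b))
    return total
```

For example, a two-query session in which the first ranking has relevant documents at ranks 1 and 3 and the second at rank 2 gives `session_ap([[1, 0, 1], [0, 1]])` = (5/6 + 1/2) / 2 = 2/3, while `session_dcg` rewards systems that surface relevant results in earlier reformulations.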
