Open Access
Learning more powerful test statistics for click-based retrieval evaluation
Author(s) -
Yisong Yue,
Yue Gao,
Olivier Chapelle,
Ya Zhang,
Thorsten Joachims
Publication year - 2010
Publication title -
Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '10)
Language(s) - English
Resource type - Conference proceedings
DOI - 10.1145/1835449.1835534
Subject(s) - interleaving , computer science , ranking (information retrieval) , test statistic , learning to rank , information retrieval , machine learning , artificial intelligence , data mining , statistics , mathematics
Interleaving experiments are an attractive methodology for evaluating retrieval functions through implicit feedback. Designed as a blind and unbiased test for eliciting a preference between two retrieval functions, an interleaving experiment presents users with an interleaved ranking of the results of the two functions and observes whether users click more on results from one function or the other. While interleaving experiments have been shown to reliably identify the better of two retrieval functions, the naive approach of counting all clicks equally leads to a suboptimal test. We present new methods for learning how to score different types of clicks so that the resulting test statistic optimizes the statistical power of the experiment. This can lead to substantial savings in the amount of data required to reach a target confidence level. Our methods are evaluated on an operational search engine over a collection of scientific articles.
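
As a rough illustration of the idea (a minimal sketch, not the authors' code), suppose each query contributes a vector of signed click counts, one entry per click type, with positive sign crediting one ranker and negative sign the other, and the two rankers are compared through a weighted sum of these counts. Maximizing the z-statistic of the resulting paired test over the weight vector is a generalized Rayleigh quotient problem with the closed-form solution w proportional to the inverse covariance times the mean. All variable names below are hypothetical:

```python
import numpy as np

def learn_click_weights(X):
    """Learn per-click-type weights that maximize the z-statistic
    of a paired interleaving test.

    X : (n_queries, n_click_types) array where X[i, k] is the signed
        count of type-k clicks on query i (positive sign crediting
        ranker A, negative crediting ranker B).

    Maximizing z(w) = (w @ mu) / sqrt(w @ Sigma @ w / n) over w is a
    generalized Rayleigh quotient, solved by w proportional to
    Sigma^{-1} @ mu; the scale of w does not affect z.
    """
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)      # (k, k) sample covariance
    w = np.linalg.solve(Sigma, mu)       # w proportional to Sigma^{-1} @ mu
    return w / np.linalg.norm(w)

def z_statistic(X, w):
    """Paired z-test on the per-query weighted scores s_i = w @ x_i;
    the null hypothesis is that neither ranker is preferred (mean 0)."""
    s = X @ w
    return s.mean() / (s.std(ddof=1) / np.sqrt(len(s)))

# Synthetic example: learn weights on one split, test on a held-out
# split so the reported confidence is not inflated by overfitting.
rng = np.random.default_rng(0)
X_train = rng.normal(0.1, 1.0, size=(500, 3))
X_test = rng.normal(0.1, 1.0, size=(500, 3))
w = learn_click_weights(X_train)
print(z_statistic(X_test, w))
```

Whatever the exact form of the learned statistic, the weights must be fitted on one sample of queries and the test applied to held-out queries, since a statistic tuned and evaluated on the same data overstates the achieved confidence level.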
