Published July 2010 | Version: public
Book Section - Chapter

Learning More Powerful Test Statistics for Click-Based Retrieval Evaluation

Abstract

Interleaving experiments are an attractive methodology for evaluating retrieval functions through implicit feedback. Designed as a blind and unbiased test for eliciting a preference between two retrieval functions, an interleaving experiment presents users with a single ranking that interleaves the results of the two functions, and then observes whether users click more on results from one function or the other. While such interleaving experiments have been shown to reliably identify the better of the two retrieval functions, the naive approach of counting all clicks equally leads to a suboptimal test. We present new methods for learning how to score different types of clicks so that the resulting test statistic maximizes the statistical power of the experiment. This can lead to substantial savings in the amount of data required to reach a target confidence level. Our methods are evaluated on an operational search engine over a collection of scientific articles.
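To make the idea in the abstract concrete, here is a minimal sketch, not the authors' exact formulation: represent each query by a vector of signed credits for each click type (positive entries crediting one retrieval function, negative the other), score queries with a learned weight vector, and choose the weights that maximize a one-sample t-statistic on training data, which admits a closed form analogous to Fisher's discriminant. The function names, the choice of three click types, and the synthetic data below are illustrative assumptions.

```python
import numpy as np

def t_statistic(w, X):
    """One-sample t-statistic of the weighted per-query click scores.

    X: (n_queries, n_click_types) matrix; row i holds signed click
       credits for query i (positive favors ranker A, negative ranker B).
    """
    s = X @ w                                   # per-query preference score
    return np.sqrt(len(s)) * s.mean() / s.std(ddof=1)

def fit_weights(X, ridge=1e-6):
    """Weights that maximize the t-statistic on the training data.

    Maximizing mean(s) / std(s) over w has the closed-form solution
    w ∝ Cov(X)^{-1} mean(X), as in Fisher's linear discriminant; a
    small ridge term keeps the covariance matrix invertible.
    """
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + ridge * np.eye(X.shape[1])
    w = np.linalg.solve(cov, mu)
    return w / np.linalg.norm(w)

# Hypothetical usage on synthetic data with 3 click types
# (e.g., clicks at top, middle, and bottom ranks).
rng = np.random.default_rng(0)
X_train = rng.normal(loc=[0.3, 0.1, 0.0], scale=1.0, size=(500, 3))
X_test = rng.normal(loc=[0.3, 0.1, 0.0], scale=1.0, size=(200, 3))

print("t (uniform weights):", t_statistic(np.ones(3) / np.sqrt(3), X_test))
print("t (learned weights):", t_statistic(fit_weights(X_train), X_test))
```

On data like this, the learned weights down-weight uninformative click types and yield a larger test statistic than counting all clicks equally, which is the sense in which a learned scoring can reach a target confidence level with less data.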

Additional Information

© 2010 ACM. This work was funded by NSF Awards IIS-0812091 and IIS-0905467. The first author was also supported in part by a Microsoft Research Graduate Fellowship and a Yahoo! Key Scientific Challenges Award.

Additional details

Identifiers

Eprint ID: 49556
Resolver ID: CaltechAUTHORS:20140910-140102637

Funding

NSF: IIS-0812091
NSF: IIS-0905467
Microsoft Research (Graduate Fellowship)
Yahoo! (Key Scientific Challenges Award)

Dates

Created: 2014-09-10 (from the EPrints datestamp field)
Updated: 2021-11-10 (from the EPrints last_modified field)