When to Stop Making Relevance Judgments? A Study of Stopping Methods for Building Information Retrieval Test Collections

Title: When to Stop Making Relevance Judgments? A Study of Stopping Methods for Building Information Retrieval Test Collections
Authors: David E. Losada, Javier Parapar, Álvaro Barreiro
Type: Journal article
Source: Journal of the Association for Information Science and Technology, Wiley, Vol. 70, No. 1, pp. 49-60, 2019.
ISSN: 2330-1635
DOI: 10.1002/asi.24077
Abstract: In Information Retrieval evaluation, pooling is a well-known technique to extract a sample of documents to be assessed for relevance. Given the pooled documents, a number of studies have proposed different prioritization methods to adjudicate documents for judgment. These methods follow different strategies to reduce the assessment effort. However, there is no clear guidance on how many relevance judgments are required for creating a reliable test collection. In this paper, we investigate and further develop methods to determine when to stop making relevance judgments. We propose a highly diversified set of stopping methods and provide a comprehensive analysis of the usefulness of the resulting test collections. Some of the stopping methods introduced here combine innovative estimates of recall with time series models used in financial trading. Experimental results on several representative collections show that some stopping methods can reduce the assessment effort by up to 95% and still produce a robust test collection. We demonstrate that the reduced set of judgments can be reliably employed to compare search systems using disparate effectiveness metrics such as Average Precision, NDCG, P@100, and Rank-Biased Precision. With all these measures, the correlations found between full-pool rankings and reduced-pool rankings are very high.
Keywords: Information Retrieval, Evaluation, Pooling, Relevance Judgments, Stopping methods
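The abstract reports high correlations between system rankings produced with the full judgment pool and with the reduced pool. A standard way to quantify such agreement is a rank correlation coefficient such as Kendall's tau. The sketch below is purely illustrative and not taken from the paper: the system names and rankings are invented, and a simple tau-a is computed by counting concordant and discordant pairs.

```python
# Illustrative only: comparing a system ranking derived from a full
# judgment pool against one derived from a reduced pool, using
# Kendall's tau-a. All system names and orderings are hypothetical.

def kendall_tau(rank_a, rank_b):
    """Kendall's tau-a between two rankings of the same set of systems."""
    assert set(rank_a) == set(rank_b)
    pos_b = {sys: i for i, sys in enumerate(rank_b)}
    n = len(rank_a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            # The pair (rank_a[i], rank_a[j]) is in this order in rank_a;
            # check whether rank_b orders it the same way.
            if pos_b[rank_a[i]] < pos_b[rank_a[j]]:
                concordant += 1
            else:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical rankings of five systems by some effectiveness metric.
full_pool = ["S1", "S2", "S3", "S4", "S5"]
reduced_pool = ["S1", "S3", "S2", "S4", "S5"]  # one swapped pair

print(kendall_tau(full_pool, reduced_pool))  # -> 0.8
```

One discordant pair out of ten yields tau = (9 - 1) / 10 = 0.8; identical rankings yield 1.0. High tau values like these are the kind of evidence the abstract cites for the reliability of the reduced judgment sets.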