Open Access
Evaluation in Contextual Information Retrieval
Author(s) - Lynda Tamine, Mariam Daoud
Publication year - 2018
Publication title - ACM Computing Surveys
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.079
H-Index - 163
eISSN - 1557-7341
pISSN - 0360-0300
DOI - 10.1145/3204940
Subject(s) - computer science , information retrieval , data science , human–computer information retrieval , cognitive models of information retrieval , world wide web , search engine
Context, such as the user’s search history, demographics, devices, and surroundings, has become prevalent in various domains of information seeking and retrieval, such as mobile search, task-based search, and social search. While evaluation is central to and has a long history in information retrieval, it faces the major challenge of designing an appropriate methodology that embeds context into the evaluation setting. In this article, we present a unified summary of a broad range of major and recent progress in contextual information retrieval evaluation that leverages diverse context dimensions and uses different principles, methodologies, and levels of measurement. More specifically, this survey aims to fill two main gaps in the literature: first, it provides a critical summary and comparison of existing contextual information retrieval evaluation methodologies and metrics according to a simple stratification model; second, it points out the impact of context dynamicity and data privacy on evaluation design. Finally, we recommend promising research directions for future investigation.
