Open Access
Progress in recommender systems research: Crisis? What crisis?
Author(s) - Paolo Cremonesi, Dietmar Jannach
Publication year - 2021
Publication title - AI Magazine
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.597
H-Index - 79
eISSN - 2371-9621
pISSN - 0738-4602
DOI - 10.1609/aimag.v42i3.18145
Subject(s) - recommender system, computer science, scholarship, focus (optics), set (abstract data type), data science, work (physics), code (set theory), information retrieval, political science, engineering, mechanical engineering, physics, law, programming language, optics
Scholars in algorithmic recommender systems research have developed a largely standardized scientific method, where progress is claimed by showing that a new algorithm outperforms existing ones on one or more accuracy measures. In theory, reproducing and thereby verifying such improvements is easy, as it merely involves the execution of the experiment code on the same data. However, as recent work shows, the reported progress is often only virtual, because of a number of issues related to (i) a lack of reproducibility, (ii) technical and theoretical flaws, and (iii) scholarship practices that are strongly prone to researcher biases. As a result, several recent works have shown that the latest published algorithms actually do not outperform existing methods when evaluated independently. Despite these issues, we currently see no signs of a crisis in which researchers rethink their scientific method, but rather a situation of stagnation in which researchers continue to focus on the same topics. In this paper, we discuss these issues, analyze their potential underlying reasons, and outline a set of guidelines to ensure progress in recommender systems research.
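The "standardized scientific method" the abstract refers to is the offline evaluation loop: split user-item interactions into train and test data, generate top-k recommendations, and compare algorithms on an accuracy metric such as precision@k. A minimal sketch of that loop, using entirely hypothetical toy data and a popularity baseline (not any method from the paper), might look like:

```python
# Minimal sketch of the standardized offline evaluation described in the
# abstract: rank unseen items per user, then score the ranking against
# held-out test interactions with precision@k. Data and the baseline
# recommender are illustrative assumptions, not taken from the paper.
from collections import Counter

# Toy interaction data: user -> items consumed (train) and held out (test).
train = {
    "u1": ["a", "b", "c"],
    "u2": ["b", "c"],
    "u3": ["a", "d"],
}
test = {
    "u1": {"d"},
    "u2": {"a"},
    "u3": {"b"},
}

def popularity_recommender(user, k):
    """Baseline: recommend the globally most popular items the user has not seen."""
    counts = Counter(item for items in train.values() for item in items)
    seen = set(train[user])
    ranked = [item for item, _ in counts.most_common() if item not in seen]
    return ranked[:k]

def precision_at_k(recommend, k):
    """Average precision@k over all test users."""
    scores = []
    for user, relevant in test.items():
        recs = recommend(user, k)
        hits = len(set(recs) & relevant)
        scores.append(hits / k)
    return sum(scores) / len(scores)

print(precision_at_k(popularity_recommender, k=2))
```

A claimed improvement then amounts to a second recommender scoring higher in this loop, which is exactly why the abstract's concerns about reproducibility and weak baselines matter: the comparison is only as sound as the baselines, splits, and metrics chosen.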
