Consensus‐based journal rankings: A complementary tool for bibliometric evaluation
Author(s) -
Aledo Juan A.,
Gámez Jose A.,
Molina David,
Rosete Alejandro
Publication year - 2018
Publication title -
Journal of the Association for Information Science and Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.903
H-Index - 145
eISSN - 2330-1643
pISSN - 2330-1635
DOI - 10.1002/asi.24040
Subject(s) - ranking (information retrieval) , citation , quartile , computer science , context (archaeology) , impact factor , data science , information retrieval , statistics , library science , mathematics , political science , confidence interval , paleontology , biology , law
Annual journal rankings are usually considered a tool for the evaluation of research and researchers. Although they are an objective resource for such evaluation, they also present drawbacks: (a) when selecting a journal, its definitive position in the corresponding annual ranking is not yet known, and (b) even when the difference in score (for instance, impact factor) between consecutive journals is not significant, the journals are strictly ranked and may end up in different terciles/quartiles, which can have a significant influence on the subsequent evaluation. In this article we present several proposals to obtain an aggregated consensus ranking as an alternative/complementary tool to standardize annual rankings. To illustrate the proposed methodology we use the Journal Citation Reports as a case study, and in particular the category of Computer Science: Artificial Intelligence (CS:AI). In the context of the consensus rankings obtained by the different methods, we discuss which procedure is more convenient depending on the evaluation framework. In particular, our proposals allow us to obtain consensus rankings that avoid crisp frontiers between similarly ranked journals and that take into account the longitudinal/temporal evolution of the journals.
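The abstract describes aggregating several annual rankings into a single consensus ranking, but the record does not detail the specific aggregation procedures used. As a minimal illustration of the general idea, the sketch below applies a simple Borda-style rank aggregation to a few hypothetical annual rankings; the journal names, the number of years, and the Borda scheme itself are assumptions for demonstration only, not the authors' method.

```python
# Illustrative sketch only: a generic Borda-style consensus over several
# hypothetical annual rankings. Journal names and positions are made up.

from collections import defaultdict

# Hypothetical annual rankings (best journal first) for three consecutive years.
annual_rankings = [
    ["J_A", "J_B", "J_C", "J_D", "J_E"],   # year 1
    ["J_B", "J_A", "J_C", "J_E", "J_D"],   # year 2
    ["J_A", "J_C", "J_B", "J_D", "J_E"],   # year 3
]

def borda_consensus(rankings):
    """Aggregate several rankings by summing Borda points (n - position)."""
    n = len(rankings[0])
    points = defaultdict(float)
    for ranking in rankings:
        for pos, journal in enumerate(ranking):
            points[journal] += n - pos
    # Sort by total points, highest first; break ties alphabetically.
    ordered = sorted(points, key=lambda j: (-points[j], j))
    return ordered, dict(points)

consensus, scores = borda_consensus(annual_rankings)
print("Consensus ranking:", consensus)
print("Total points:", scores)
```

Journals whose aggregated scores are nearly identical could then be placed in the same band rather than split across a tercile/quartile frontier, which is the kind of non-crisp grouping the abstract argues for.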