
Scopus's source normalized impact per paper (SNIP) versus a journal impact factor based on fractional counting of citations
Author(s) -
Leydesdorff, Loet;
Opthof, Tobias
Publication year - 2010
Publication title -
Journal of the American Society for Information Science and Technology
Language(s) - English
Resource type - Journals
eISSN - 1532-2890
pISSN - 1532-2882
DOI - 10.1002/asi.21371
Subject(s) - impact factor , scopus , normalization (sociology) , citation impact , citation , annals , statistics , citation analysis , computer science , mathematics , econometrics , library science , social science , political science , history , sociology , medline , law , ancient history
Impact factors (and similar measures such as the Scimago Journal Rankings) suffer from two problems: (a) citation behavior varies among fields of science and therefore leads to systematic differences, and (b) there are no statistics to inform us whether differences are significant. The recently introduced “source normalized impact per paper” (SNIP) indicator of Scopus tries to remedy the first of these two problems, but a number of normalization decisions are involved, which makes it impossible to test for significance. Using fractional counting of citations—based on the assumption that impact is proportionate to the number of references in the citing documents—citations can be contextualized at the paper level, and the aggregated impacts of sets can be tested for significance. It can be shown that the weighted impact of Annals of Mathematics (0.247) is not so much lower than that of Molecular Cell (0.386), despite a fivefold difference between their impact factors (2.793 and 13.156, respectively).
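The fractional-counting idea in the abstract can be illustrated with a minimal sketch: each citing document contributes 1/(its number of cited references) per citation, so a citation from a reference-sparse field (e.g., mathematics) weighs more than one from a reference-dense field (e.g., biomedicine). The data structure and function name below are hypothetical, not from the paper.

```python
# Minimal sketch of fractional citation counting (hypothetical data model).
# Each citing document contributes 1 / len(its reference list) per citation,
# normalizing citation weight at the level of the citing paper.

def fractional_impact(citing_docs, target_journal):
    """Sum the fractional citation weights received by target_journal.

    citing_docs: list of dicts, each with a 'references' list naming the
    journals cited by that document (an assumed, illustrative structure).
    """
    total = 0.0
    for doc in citing_docs:
        refs = doc["references"]
        if not refs:
            continue
        weight = 1.0 / len(refs)  # each reference counts fractionally
        total += weight * refs.count(target_journal)
    return total

# Illustrative only: a math paper with 2 references gives each cited item
# weight 0.5; a paper with 40 references gives each cited item weight 0.025.
docs = [
    {"references": ["Ann. Math.", "Ann. Math."]},    # 2 references
    {"references": ["Mol. Cell"] + ["Other"] * 39},  # 40 references
]
print(fractional_impact(docs, "Ann. Math."))  # → 1.0
print(fractional_impact(docs, "Mol. Cell"))   # → 0.025
```

Under whole counting both journals would receive integer citation counts regardless of the citing documents' reference lengths; fractional counting is what allows the paper-level contextualization the abstract describes.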