In This Issue
Publication year - 2015
Publication title - Proceedings of the National Academy of Sciences
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 5.011
H-Index - 771
eISSN - 1091-6490
pISSN - 0027-8424
DOI - 10.1073/iti0215112
Subject(s) - Chemistry, Computational Biology, Biology
Little systematic evidence exists regarding the effectiveness of scientific peer review. Kyle Siler et al. (pp. 360–365) used a dataset of 1,008 manuscripts submitted to three leading medical journals—Annals of Internal Medicine, The BMJ, and The Lancet—in 2003 and 2004 to evaluate the effectiveness of peer review. The authors analyzed differences in citation outcomes for articles that received different appraisals from editors and peer reviewers. The authors found that desk-rejected manuscripts—those deemed unworthy of peer review by editors—received fewer citations when eventually published than those sent out for peer review prior to rejection. In addition, among all accepted and rejected manuscripts, those with low scores from peer reviewers received relatively few citations when eventually published. However, the authors found that the three medical journals rejected many highly cited manuscripts, including the 14 most highly cited; 12 of those 14 manuscripts were desk-rejected. These rejected manuscripts ranked among roughly the top 2% of the most frequently cited published articles. According to the authors, the study suggests that while peer review is effective at predicting "good" articles, it may simultaneously have difficulty identifying outstanding and/or breakthrough work. — S.R.