Reliability of reviewers' ratings when using public peer review: a case study
Author(s) - BORNMANN L., DANIEL H.D.
Publication year - 2010
Publication title - Learned Publishing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.06
H-Index - 34
eISSN - 1741-4857
pISSN - 0953-1513
DOI - 10.1087/20100207
Subject(s) - reliability , inter-rater reliability , intraclass correlation , peer review , peer evaluation , psychometrics , rating scale , statistics , psychology , higher education
If a manuscript meets scientific standards and contributes to the advancement of science, two or more reviewers can be expected to agree on its value. Manuscripts are rated reliably when there is a high level of agreement between independent reviewers. This study investigates for the first time whether inter-rater reliability, which is low under the traditional model of closed peer review, is also low under the new system of public peer review, or whether public peer review yields higher coefficients. To investigate this question, we examined the peer-review process of the interactive open access journal Atmospheric Chemistry and Physics, based on 465 manuscripts submitted between 2004 and 2006 that received 1,058 reviews in total. The results show that inter-rater reliability in public peer review is low when measured by the kappa coefficient and reasonable when measured by the intraclass correlation coefficient.
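The two agreement statistics named in the abstract can be illustrated with a short computation. The following Python sketch uses hypothetical ratings, not the study's data, and assumes NumPy and scikit-learn are available; since the abstract does not specify which ICC form the authors used, the one-way random-effects ICC(1,1) shown here is an assumption chosen for simplicity. It computes Cohen's kappa over two reviewers' categorical ratings and the ICC over the same ratings treated as numeric.

# Illustrative sketch (hypothetical data): inter-rater agreement for paired
# manuscript reviews, via Cohen's kappa (categorical agreement) and a
# one-way random-effects ICC(1,1) (the ICC form is an assumption here).
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings: two independent reviewers score 10 manuscripts
# on an ordinal scale (e.g., 1 = reject ... 4 = accept).
reviewer_a = np.array([4, 3, 2, 4, 1, 3, 3, 2, 4, 2])
reviewer_b = np.array([4, 2, 2, 3, 1, 3, 4, 2, 4, 1])

# Cohen's kappa: observed agreement corrected for chance agreement.
kappa = cohen_kappa_score(reviewer_a, reviewer_b)

# ICC(1,1): each manuscript (target) is rated by k raters; reliability is
# the share of total variance attributable to differences between targets.
ratings = np.column_stack([reviewer_a, reviewer_b])  # shape (n_targets, k)
n, k = ratings.shape
grand_mean = ratings.mean()
target_means = ratings.mean(axis=1)
# Between-target and within-target mean squares from a one-way ANOVA.
ms_between = k * np.sum((target_means - grand_mean) ** 2) / (n - 1)
ms_within = np.sum((ratings - target_means[:, None]) ** 2) / (n * (k - 1))
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

print(f"Cohen's kappa: {kappa:.2f}")
print(f"ICC(1,1):      {icc:.2f}")

Because kappa treats the scale as purely categorical while the ICC credits near-misses (a 3 versus a 4 counts as partial agreement), the two coefficients can diverge on the same data, which is consistent with the study reporting low kappa but reasonable ICC values.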
