Open Access
How best to quantify replication success? A simulation study on the comparison of replication success metrics
Author(s) -
Jasmine Muradchanian,
Rink Hoekstra,
Henk A. L. Kiers,
Don van Ravenzwaaij
Publication year - 2021
Publication title -
Royal Society Open Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.84
H-Index - 51
ISSN - 2054-5703
DOI - 10.1098/rsos.201697
Subject(s) - frequentist inference , replication (statistics) , computer science , bayesian probability , inference , statistical inference , frequentist probability , publication bias , econometrics , artificial intelligence , bayesian inference , statistics , confidence interval , mathematics
To overcome the frequently debated crisis of confidence, replicating studies is becoming increasingly common. Multiple frequentist and Bayesian measures have been proposed to evaluate whether a replication is successful, but little is known about which method best captures replication success. This study is one of the first attempts to compare a number of quantitative measures of replication success with respect to their ability to draw the correct inference when the underlying truth is known, while taking publication bias into account. Our results show that Bayesian metrics slightly outperform frequentist metrics across the board. Generally, meta-analytic approaches slightly outperform metrics that evaluate single studies, except under extreme publication bias, where this pattern reverses.
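To make the comparison concrete, the contrast between a single-study frequentist metric and a single-study Bayesian metric can be sketched in a small simulation. This is a hypothetical illustration, not the authors' code: the frequentist criterion used here (replication significant with the same sign as the original) and the BIC-approximated Bayes factor are common choices in this literature, but the paper's own set of metrics and simulation design may differ.

```python
# Hypothetical sketch (not the paper's simulation): evaluate two
# single-study replication-success metrics on simulated one-sample data.
import math
import random

def simulate_study(true_effect, n, rng):
    """Draw n observations from N(true_effect, 1)."""
    return [rng.gauss(true_effect, 1.0) for _ in range(n)]

def mean_and_z(x):
    """Sample mean and z-statistic against a null mean of zero."""
    m = sum(x) / len(x)
    sd = math.sqrt(sum((v - m) ** 2 for v in x) / (len(x) - 1))
    return m, m / (sd / math.sqrt(len(x)))

def significant_same_sign(orig, rep, alpha=0.05):
    """Frequentist metric: replication significant (normal approximation)
    with the same sign as the original estimate."""
    m_o, _ = mean_and_z(orig)
    m_r, z_r = mean_and_z(rep)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z_r) / math.sqrt(2))))
    return p < alpha and (m_o * m_r > 0)

def bic_bayes_factor(x):
    """Bayesian metric: BIC-approximated Bayes factor BF10 for mean != 0
    (Wagenmakers-style approximation; an assumption for this sketch)."""
    n = len(x)
    m = sum(x) / n
    rss1 = sum((v - m) ** 2 for v in x)          # alternative: mean free
    rss0 = sum(v ** 2 for v in x)                # null: mean fixed at 0
    bic1 = n * math.log(rss1 / n) + math.log(n)  # one extra parameter
    bic0 = n * math.log(rss0 / n)
    return math.exp((bic0 - bic1) / 2)

rng = random.Random(1)
true_effect = 0.5                                # effect truly present
trials = 200
hits_freq = hits_bayes = 0
for _ in range(trials):
    orig = simulate_study(true_effect, 50, rng)
    rep = simulate_study(true_effect, 50, rng)
    hits_freq += significant_same_sign(orig, rep)
    hits_bayes += bic_bayes_factor(rep) > 3      # BF10 > 3 as "success"

print("frequentist success rate:", hits_freq / trials)
print("Bayesian success rate:   ", hits_bayes / trials)
```

When the true effect is set to zero instead, the same two rates estimate each metric's false-alarm behaviour, which is the kind of "correct inference when the underlying truth is known" comparison the abstract describes. The success thresholds (alpha = 0.05, BF10 > 3) are illustrative choices, not values taken from the paper.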
