Why and how we should join the shift from significance testing to estimation
Author(s) - Daniel Berner, Valentin Amrhein
Publication year - 2022
Publication title - Journal of Evolutionary Biology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.289
H-Index - 128
eISSN - 1420-9101
pISSN - 1010-061X
DOI - 10.1111/jeb.14009
Subject(s) - null hypothesis , statistical power , statistical hypothesis testing , statistical significance , type i and type ii errors , statistical inference , inference , statistics , biology , alternative hypothesis , econometrics
Abstract - A paradigm shift away from null hypothesis significance testing seems in progress. Based on simulations, we illustrate some of the underlying motivations. First, p-values vary strongly from study to study, hence dichotomous inference using significance thresholds is usually unjustified. Second, 'statistically significant' results have overestimated effect sizes, a bias that declines with increasing statistical power. Third, 'statistically non-significant' results have underestimated effect sizes, and this bias gets stronger with higher statistical power. Fourth, the tested statistical hypotheses usually lack biological justification and are often uninformative. Despite these problems, a screen of 48 papers from the 2020 volume of the Journal of Evolutionary Biology exemplifies that significance testing is still used almost universally in evolutionary biology. All screened studies tested default null hypotheses of zero effect with the default significance threshold of p = 0.05, and none presented a pre-specified alternative hypothesis, a pre-study power calculation, or the probability of 'false negatives' (beta error rate). The results sections of the papers presented 49 significance tests on average (median 23, range 0-390). Of 41 studies that contained verbal descriptions of a 'statistically non-significant' result, 26 (63%) falsely claimed the absence of an effect. We conclude that studies in ecology and evolutionary biology are mostly exploratory and descriptive. We should thus shift from claiming to 'test' specific hypotheses statistically to describing and discussing the many hypotheses (possible true effect sizes) that are most compatible with our data, given our statistical model. We already have the means for doing so, because we routinely present compatibility ('confidence') intervals covering these hypotheses.
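The abstract's first three simulation-based points (p-values vary strongly across replicate studies; 'significant' results overestimate the true effect; 'non-significant' results underestimate it) can be reproduced with a minimal sketch. The scenario below is an assumption for illustration, not the authors' actual simulation: a two-sample z-test with known unit variance, a true standardized effect of 0.5, and n = 30 per group (moderate power).

```python
import math
import random

random.seed(1)

def z_test(x, y):
    """Two-sample z-test assuming known unit variance; returns
    (observed mean difference, two-sided p-value)."""
    n, m = len(x), len(y)
    diff = sum(x) / n - sum(y) / m
    se = math.sqrt(1 / n + 1 / m)
    z = diff / se
    # two-sided p = 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2))
    return diff, math.erfc(abs(z) / math.sqrt(2))

true_effect = 0.5  # true standardized mean difference (assumed)
n = 30             # per-group sample size (assumed)

pvals, effects = [], []
for _ in range(10_000):  # 10,000 replicate "studies"
    x = [random.gauss(true_effect, 1) for _ in range(n)]
    y = [random.gauss(0, 1) for _ in range(n)]
    d, p = z_test(x, y)
    effects.append(d)
    pvals.append(p)

sig = [d for d, p in zip(effects, pvals) if p < 0.05]
nonsig = [d for d, p in zip(effects, pvals) if p >= 0.05]
mean = lambda v: sum(v) / len(v)

print(f"empirical power:               {len(sig) / len(pvals):.2f}")
print(f"p-value range across studies:  {min(pvals):.2g} to {max(pvals):.2g}")
print(f"mean effect | significant:     {mean(sig):.2f}  (true = {true_effect})")
print(f"mean effect | non-significant: {mean(nonsig):.2f}")
```

Under this setup the p-values span essentially the whole unit interval even though every study samples the same true effect, the average estimate among 'significant' studies exceeds 0.5, and the average among 'non-significant' studies falls below it, illustrating the selection bias that thresholding at p = 0.05 introduces.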
