Ecologists should not use statistical significance tests to interpret simulation model results
Author(s) - White, J. Wilson; Rassweiler, Andrew; Samhouri, Jameal F.; Stier, Adrian C.; White, Crow
Publication year - 2014
Publication title - Oikos
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.672
H-Index - 179
eISSN - 1600-0706
pISSN - 0030-1299
DOI - 10.1111/j.1600-0706.2013.01073.x
Subject(s) - frequentist inference, null hypothesis, statistical hypothesis testing, statistical power, statistics, context (archaeology), econometrics, computer science, field (mathematics), sample size determination, focus (optics), null model, ecology, mathematics, bayesian probability, bayesian inference, biology, paleontology, physics, pure mathematics, optics
Simulation models are widely used to represent the dynamics of ecological systems. A common question with such models is how changes to a parameter value or functional form in the model alter the results. Some authors have chosen to answer that question using frequentist statistical hypothesis tests (e.g. ANOVA). This is inappropriate for two reasons. First, p‐values are determined by statistical power (i.e. replication), which can be arbitrarily high in a simulation context, producing minuscule p‐values regardless of the effect size. Second, the null hypothesis of no difference between treatments (e.g. parameter values) is known a priori to be false, invalidating the premise of the test. Use of p‐values is troublesome (rather than simply irrelevant) because small p‐values lend a false sense of importance to observed differences. We argue that modelers should abandon this practice and focus on evaluating the magnitude of differences between simulations.

Synthesis: Researchers analyzing field or lab data often test ecological hypotheses using frequentist statistics (t‐tests, ANOVA, etc.) that focus on p‐values. Field and lab data usually have limited sample sizes, and p‐values are valuable for quantifying the probability of making incorrect inferences in that situation. However, modern ecologists increasingly rely on simulation models to address complex questions, and those who were trained in frequentist statistics often apply the hypothesis‐testing approach inappropriately to their simulation results. Our paper explains why p‐values are not informative for interpreting simulation models, and suggests better ways to evaluate the ecological significance of model results.
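The first point can be illustrated with a minimal sketch, not taken from the paper: all variable names, parameter values, and the choice of a two-sample t-test are hypothetical. With a large (and, in a simulation study, arbitrary) number of replicate runs, the test returns a vanishingly small p-value even though the difference between the two "treatments" is ecologically trivial.

```python
# Minimal sketch (hypothetical values, not from the paper): two simulated
# "treatments" whose mean response differs by an ecologically trivial amount.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Replication in a simulation study can be made arbitrarily large.
n_reps = 100_000

# Hypothetical response variable (e.g. equilibrium biomass) under two parameter values.
treatment_a = rng.normal(loc=100.0, scale=10.0, size=n_reps)
treatment_b = rng.normal(loc=100.5, scale=10.0, size=n_reps)  # mean shifted by 0.5%

t_stat, p_value = stats.ttest_ind(treatment_a, treatment_b)
mean_diff = treatment_b.mean() - treatment_a.mean()

# The p-value is minuscule simply because n_reps is huge; the difference itself
# (about 0.05 standard deviations) is what needs ecological interpretation.
print(f"difference in means: {mean_diff:.2f}")
print(f"p-value from t-test: {p_value:.1e}")
```

Reporting the magnitude of the difference, and how it changes as the parameter is varied, rather than the p-value, is the kind of interpretation the authors recommend.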
