A graphical framework for model selection criteria and significance tests: refutation, confirmation and ecology
Author(s) - Ken Aho, Dewayne Derryberry, Teri Peterson
Publication year - 2017
Publication title - Methods in Ecology and Evolution
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 3.425
H-Index - 105
ISSN - 2041-210X
DOI - 10.1111/2041-210X.12648
Subject(s) - Bayes factor, statistical hypothesis testing, model selection, information criteria, Bayes' theorem, consistency (knowledge bases), Akaike information criterion, econometrics, selection (genetic algorithm), Bayesian information criterion, statistical power, computer science, machine learning, mathematics, statistics, Bayesian probability, artificial intelligence
Summary - In this study, we use a novel graphical heuristic to compare how four methods evaluate the merit of competing hypotheses, for example H0 and HA: significance testing, two popular information-theoretic approaches (AIC and BIC), and Good's Bayes/non-Bayes compromise, an underutilized hypothesis-testing approach whose demarcation criterion adjusts for n. A primary goal of our work is to clarify the concept of strong consistency in model selection. Explicit treatments of this principle (including the strong consistency of BIC) are currently limited to technical derivations, inaccessible to most ecologists. We use our graphical framework to demonstrate, in simple terms, the strong consistency of both BIC and Good's compromise. Our framework also locates the evaluated metrics (and ICs in general) along a conceptual continuum of hypothesis refutation/confirmation that considers n, parameter number and effect size. Along this continuum, significance testing and particularly AIC are refutative for H0, whereas Good's compromise and particularly BIC are confirmatory for the true hypothesis. Our work graphically demonstrates the well-known asymptotic bias of significance tests towards HA, and the incorrectness of using statistically non-consistent methods for point hypothesis testing. To address these issues, we recommend: (i) dedicated confirmatory methods with strong consistency, such as BIC, for point hypothesis testing and confirmatory model selection; (ii) significance tests for exploratory/refutative hypothesis testing, particularly when conjoined with rational approaches (e.g. Good's compromise, power analyses) that account for the effect of n on P-values; and (iii) asymptotically efficient methods like AIC for exploratory model selection.
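As an illustrative aside (our own sketch, not code from the paper, and written in Python rather than the R more typical of ecology), the snippet below simulates Gaussian data under a true null H0: mean = 0 and records how often each method selects the larger model HA as n grows. An information criterion selects HA exactly when the likelihood-ratio statistic exceeds its per-parameter penalty (2 for AIC, ln n for BIC); the "Good" line assumes one common form of his compromise, the n-standardized P-value p*sqrt(n/100) compared against a fixed alpha, which is an assumption about the specific demarcation rule rather than a detail taken from the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def lr_stat(x):
    """Likelihood-ratio statistic for H0: mean = 0 vs HA: mean free,
    for Gaussian data with the variance profiled out; approximately
    chi-square with 1 df when H0 is true."""
    n = len(x)
    rss0 = np.sum(x ** 2)               # residual sum of squares under H0
    rss1 = np.sum((x - x.mean()) ** 2)  # residual sum of squares under HA
    return n * np.log(rss0 / rss1)

for n in (50, 500, 5000):
    sims = rng.normal(0.0, 1.0, size=(2000, n))   # truth: H0 holds
    lr = np.array([lr_stat(row) for row in sims])
    p = stats.chi2.sf(lr, df=1)
    # An IC prefers HA when the LR statistic beats its per-parameter penalty.
    aic = np.mean(lr > 2.0)                       # AIC: penalty fixed at 2
    bic = np.mean(lr > np.log(n))                 # BIC: penalty grows as ln(n)
    sig = np.mean(p < 0.05)                       # fixed-alpha significance test
    good = np.mean(p * np.sqrt(n / 100) < 0.05)   # assumed Good standardization
    print(f"n={n:5d}  AIC={aic:.3f}  BIC={bic:.3f}  "
          f"alpha=.05={sig:.3f}  Good={good:.3f}")
```

Under H0 the false-selection rate of AIC should hover near P(chi-square(1) > 2), about 0.157, and that of the fixed-alpha test near 0.05 regardless of n, while the BIC and Good rates shrink toward zero as n grows; that vanishing error under the true point hypothesis is the strong-consistency behaviour the summary describes.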