Spending degrees of freedom in a poor economy: A case study of building a sightability model for moose in northeastern Minnesota
Author(s) -
Giudice, John H.,
Fieberg, John R.,
Lenarz, Mark S.
Publication year - 2012
Publication title -
The Journal of Wildlife Management
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.94
H-Index - 111
eISSN - 1937-2817
pISSN - 0022-541X
DOI - 10.1002/jwmg.213
Subject(s) - overfitting , akaike information criterion , model selection , statistics , degrees of freedom (physics and chemistry) , econometrics , selection (genetic algorithm) , logistic regression , population , inference , mathematics , computer science , machine learning , artificial intelligence , demography , physics , quantum mechanics , sociology , artificial neural network
Sightability models are binary logistic-regression models used to estimate and adjust for visibility bias in wildlife-population surveys. Like many models in wildlife and ecology, sightability models are typically developed from small observational datasets with many candidate predictors. Aggressive model-selection methods are often employed to choose a best model for prediction and effect estimation, despite evidence that such methods can lead to overfitting (i.e., selected models may describe random error or noise rather than true predictor-response curves) and poor predictive ability. We used moose (Alces alces) sightability data from northeastern Minnesota (2005–2007) as a case study to illustrate an alternative approach, which we refer to as degrees-of-freedom (df) spending: sample-size guidelines are used to determine an acceptable level of model complexity, and then a pre-specified model is fit to the data and used for inference. For comparison, we also constructed sightability models using Akaike's Information Criterion (AIC) step-down procedures and model averaging (based on a small set of models developed using df-spending guidelines). We used bootstrap procedures to mimic the process of model fitting and prediction, and to compute an index of overfitting, expected predictive accuracy, and model-selection uncertainty. The index of overfitting increased 13% when the number of candidate predictors was increased from three to eight and a best model was selected using step-down procedures. Likewise, model-selection uncertainty increased with the number of candidate predictors. Model averaging (based on R = 30 models with 1–3 predictors) effectively shrunk regression coefficients toward zero and produced precision estimates similar to those from our 3-df pre-specified model. As such, model averaging may help guard against overfitting when too many predictors are considered relative to the available sample size. The set of candidate models will influence the extent to which coefficients are shrunk toward zero, which has implications for how one might apply model averaging to problems traditionally approached using variable-selection methods. We often recommend the df-spending approach in our consulting work because it is easy to implement and it naturally forces investigators to think carefully about their models and predictors. Nonetheless, similar concepts should apply whether one is fitting a single model or using multi-model inference. For example, model-building decisions should consider the effective sample size, and potential predictors should be screened (without looking at their relationship to the response) for missing data, narrow distributions, collinearity, potentially overly influential observations, and measurement errors (e.g., via logical error checks). © 2011 The Wildlife Society.
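
To make the contrast concrete, the sketch below illustrates in Python the two strategies the abstract compares: fitting a pre-specified 3-df logistic sightability model once, versus selecting a model by backward AIC step-down from a larger candidate pool, with a generic bootstrap "optimism" quantity standing in for an index of overfitting. Everything here is a hypothetical illustration: the simulated data, the predictor names, and the specific optimism metric are not the authors' dataset, variables, or exact procedure.

```python
# Hypothetical illustration only: simulated data, invented predictor names, and a
# generic bootstrap optimism index (not the authors' data or exact metric).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# --- Simulated sightability-trial data ---------------------------------------
n = 150                                    # simulated test flights
visual_cover = rng.uniform(0, 100, n)      # % visual obstruction (hypothetical)
group_size = rng.poisson(1.5, n) + 1       # moose per group (hypothetical)
snow_cover = rng.uniform(0, 100, n)        # % snow cover (hypothetical)
noise = rng.normal(size=(n, 5))            # five uninformative extra candidates
eta = 2.0 - 0.04 * visual_cover + 0.3 * (group_size - 1)
detected = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

X_small = sm.add_constant(np.column_stack([visual_cover, group_size, snow_cover]))
X_big = np.column_stack([X_small, noise])  # 8 candidate predictors in total


def fit(X, y):
    """Binary logistic-regression fit (sightability model)."""
    return sm.Logit(y, X).fit(disp=0)


def aic_stepdown(X, y):
    """Backward elimination by AIC; returns indices of retained columns."""
    keep = list(range(X.shape[1]))
    best_aic = fit(X[:, keep], y).aic
    while len(keep) > 1:
        candidates = []
        for j in keep[1:]:                           # never drop the intercept
            trial = [k for k in keep if k != j]
            candidates.append((fit(X[:, trial], y).aic, trial))
        aic, trial = min(candidates)
        if aic < best_aic:
            best_aic, keep = aic, trial
        else:
            break
    return keep


def mean_loglik(model, X, y):
    """Per-observation Bernoulli log-likelihood of fitted probabilities."""
    p = model.predict(X)
    return float(np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))


def stepdown_optimism(X, y, B=100):
    """Average drop in log-likelihood from bootstrap fit to the original data."""
    opt = []
    for _ in range(B):
        idx = rng.integers(0, len(y), len(y))
        keep = aic_stepdown(X[idx], y[idx])
        m = fit(X[idx][:, keep], y[idx])
        opt.append(mean_loglik(m, X[idx][:, keep], y[idx]) -
                   mean_loglik(m, X[:, keep], y))
    return float(np.mean(opt))


# Strategy 1: spend 3 df on a pre-specified model, fit it once, use it as-is.
prespecified = fit(X_small, detected)
print(prespecified.summary())

# Strategy 2: AIC step-down; overfitting index grows with the candidate pool.
print("optimism, 3 candidate predictors:", round(stepdown_optimism(X_small, detected), 4))
print("optimism, 8 candidate predictors:", round(stepdown_optimism(X_big, detected), 4))
```

In simulations of this kind, the bootstrap optimism for the step-down strategy typically grows as uninformative predictors are added to the candidate set, mirroring the abstract's finding that the index of overfitting increased when the pool of candidate predictors expanded from three to eight.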
