Open Access
Testing whether ensemble modelling is advantageous for maximising predictive performance of species distribution models
Author(s) -
Hao Tianxiao,
Elith Jane,
Lahoz-Monfort José J.,
Guillera-Arroita Gurutzeta
Publication year - 2020
Publication title -
Ecography
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.973
H-Index - 128
eISSN - 1600-0587
pISSN - 0906-7590
DOI - 10.1111/ecog.04890
Subject(s) - computer science , random forest , predictive modelling , calibration , cross validation , tree (set theory) , ensemble forecasting , component (thermodynamics) , machine learning , statistics , mathematics , mathematical analysis , physics , thermodynamics
Predictive performance is important to many applications of species distribution models (SDMs). The SDM 'ensemble' approach, which combines predictions across different modelling methods, is believed to improve predictive performance, and is used in many recent SDM studies. Here, we compare the predictive performance of ensemble species distribution models to that of individual models, using a large presence–absence dataset of eucalypt tree species. To test model performance, we divided our dataset into calibration and evaluation folds using two spatial blocking strategies (checkerboard pattern and latitudinal slicing). We calibrated and cross-validated all models within the calibration folds, using both repeated random division of data (a common approach) and spatial blocking. Ensembles were built using the software package 'biomod2', with standard ('untuned') settings. Boosted regression tree (BRT) models were also fitted to the same data, tuned according to published procedures. We then used the evaluation folds to compare ensembles against both their component untuned individual models and the tuned BRTs. We used the area under the receiver-operating characteristic curve (AUC) and log-likelihood to assess model performance. In all our tests, ensemble models performed well, but they did not consistently outperform their component untuned individual models or the tuned BRTs. Moreover, choosing untuned individual models with the best cross-validation performance also yielded good external performance; blocked cross-validation proved better suited to this choice in this study than repeated random cross-validation. The latitudinal slice test was only possible for four species; it showed some individual models, particularly the tuned BRT, performing better than the ensembles. This study shows no particular benefit to using ensembles over individual tuned models. It also suggests that further robust testing of performance is required for situations where models are used to predict to distant places or environments.
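For readers who want a concrete picture of the evaluation scheme described in the abstract (spatial blocking into calibration and evaluation folds, then AUC and log-likelihood on the held-out fold), the sketch below is a minimal Python/scikit-learn illustration. The file name, column names, block size and boosting settings are assumptions made for illustration only; the study itself was carried out in R, with 'biomod2' ensembles and BRTs tuned following published procedures, not with this code.

```python
# Illustrative sketch only: checkerboard spatial blocking plus AUC and
# log-likelihood evaluation on the held-out fold. The dataset, column names
# ('x', 'y', 'presence', 'bio*') and all settings are hypothetical; a
# scikit-learn gradient-boosted classifier stands in for the paper's tuned BRT.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, log_loss

def checkerboard_fold(x, y, block_size):
    """Assign each site to fold 0 or 1 in a checkerboard pattern over space."""
    col = np.floor(x / block_size).astype(int)
    row = np.floor(y / block_size).astype(int)
    return (col + row) % 2

# Hypothetical presence-absence data: site coordinates, predictors, 0/1 response.
data = pd.read_csv("eucalypt_presence_absence.csv")            # assumed file
predictors = [c for c in data.columns if c.startswith("bio")]  # assumed names
data["fold"] = checkerboard_fold(data["x"].values, data["y"].values,
                                 block_size=50_000)            # e.g. 50 km blocks

calib = data[data["fold"] == 0]   # calibration fold
evalu = data[data["fold"] == 1]   # external evaluation fold

# Stand-in for a tuned boosted regression tree (settings are illustrative).
model = GradientBoostingClassifier(n_estimators=1000, learning_rate=0.01,
                                   max_depth=3, subsample=0.75)
model.fit(calib[predictors], calib["presence"])

p = model.predict_proba(evalu[predictors])[:, 1]
auc = roc_auc_score(evalu["presence"], p)
# Unnormalised log loss is the negative Bernoulli log-likelihood of the data.
loglik = -log_loss(evalu["presence"], p, normalize=False)
print(f"External AUC: {auc:.3f}, log-likelihood: {loglik:.1f}")
```

A latitudinal-slice split, the second blocking strategy mentioned above, could be illustrated the same way by assigning folds from the y coordinate alone rather than from the checkerboard over both coordinates.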
