Calibration of probability predictions from machine‐learning and statistical models
Author(s) - Dormann, Carsten F.
Publication year - 2020
Publication title - Global Ecology and Biogeography
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 3.164
H-Index - 152
eISSN - 1466-8238
pISSN - 1466-822X
DOI - 10.1111/geb.13070
Subject(s) - calibration , simple (philosophy) , probabilistic logic , probability distribution , statistical model , computer science , statistics , artificial intelligence , posterior probability , machine learning , mathematics , bayesian probability , philosophy , epistemology
Aim - Predictions from statistical models may be uncalibrated, meaning that the predicted values do not have the nominal coverage probability. This is most easily seen with probability predictions in machine-learning classification, including the common case of species occurrence probabilities. Here, a predicted probability of, say, 0.7 should indicate that out of 100 cases with these environmental conditions, and hence the same predicted probability, the species should be present in 70 and absent in 30.
Innovation - A simple calibration plot shows that this is not necessarily the case, particularly for overfitted models or for algorithms that use non-likelihood target functions. As a consequence, 'raw' predictions from such a model can easily be off by 0.2, are unsuitable for averaging across model types, and the resulting maps may hence be substantially distorted. The solution, a flexible calibration regression, is simple and can be applied whenever deviations are observed.
Main conclusions - 'Raw', uncalibrated probability predictions should be calibrated before interpreting or averaging them in a probabilistic way.
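To make the two ideas in the abstract concrete, the sketch below (not from the paper) shows a calibration plot computed by binning predictions, followed by a calibration regression. The simulated data, variable names, and the use of scikit-learn are assumptions for illustration; the recalibration here is a simple Platt-style logistic fit on the logit of the raw predictions, whereas the paper proposes a more flexible calibration regression.

```python
# Minimal sketch (illustrative, not the paper's code): check calibration of
# 'raw' probability predictions and recalibrate them. Assumes numpy and
# scikit-learn; all names and the simulated data are hypothetical.
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Simulate true occurrence probabilities and observed presence/absence.
p_true = rng.uniform(0.05, 0.95, size=2000)
y = rng.binomial(1, p_true)

# Deliberately miscalibrated 'raw' predictions: overconfident, i.e. pushed
# towards 0 and 1 by inflating the slope on the logit scale.
logit_true = np.log(p_true / (1 - p_true))
p_raw = 1 / (1 + np.exp(-2.0 * logit_true))

# Calibration plot data: observed frequency vs. mean prediction per bin.
# A calibrated model would show obs_freq ~= mean_pred in every bin.
obs_freq, mean_pred = calibration_curve(y, p_raw, n_bins=10)
for m, o in zip(mean_pred, obs_freq):
    print(f"predicted {m:.2f} -> observed {o:.2f}")

# Calibration regression (Platt-style): regress the observed outcomes on the
# logit of the raw predictions; a GAM would give a more flexible version.
X = np.log(p_raw / (1 - p_raw)).reshape(-1, 1)
recal = LogisticRegression().fit(X, y)
p_cal = recal.predict_proba(X)[:, 1]  # recalibrated probabilities
```

In this toy setup, the binned observed frequencies drift away from the diagonal for the raw predictions and move back towards it after recalibration; in practice the recalibration model should be fitted on held-out data, not on the data used to train the original model.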