What is the role of the observational dataset in the evaluation and scoring of climate models?
Author(s) -
Gómez-Navarro J. J.,
Montávez J. P.,
Jerez S.,
Jiménez-Guerrero P.,
Zorita E.
Publication year - 2012
Publication title -
Geophysical Research Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.007
H-Index - 273
eISSN - 1944-8007
pISSN - 0094-8276
DOI - 10.1029/2012GL054206
Subject(s) - observational study, climate model, weighting, ranking (information retrieval), climatology, set (abstract data type), environmental science, computer science, downscaling, climate change, data set, econometrics, quality (philosophy), meteorology, statistics, precipitation, machine learning, mathematics, geography, artificial intelligence, geology, medicine, philosophy, oceanography, epistemology, radiology, programming language
Climate models are usually assessed through their capacity to reproduce present climate conditions, which in turn is established by comparing the output of climate simulations with observational data sets, including gridded products. However, owing to the nature of the procedures used to obtain observations and the statistical techniques employed to interpolate this information onto reference gridded databases, these products contain important uncertainties that may compromise the evaluation process. This paper examines to what extent the evaluation and ranking of an ensemble of regional climate models, according to their ability to reproduce the observed climatologies, is sensitive to the choice of the reference observational data set. Results show that even in areas covered by dense monitoring networks, such as Spain, uncertainties in the observations are comparable to the uncertainties within state-of-the-art Regional Climate Models, at least when the models are driven by nominally perfect boundary conditions such as reanalysis. These findings indicate that model evaluation should take observational uncertainties into account. In particular, weighting models according to how well they perform with respect to a single observational data set, without acknowledging the uncertainties in that data set, might reduce the quality of the weighted ensemble average.
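To make the abstract's point concrete, the following is a minimal, purely illustrative sketch (not the authors' method): an ensemble of models is scored against several observational reference datasets, each carrying its own error, and both the resulting rankings and the skill-based weights change depending on which reference is chosen. All array names, sizes, and the inverse-error weighting scheme are hypothetical assumptions for demonstration only.

    # Illustrative sketch: rank and weight an ensemble of climate models
    # against several observational reference datasets.  Data are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    n_models, n_obs, n_grid = 8, 3, 500       # ensemble size, obs datasets, grid points
    truth = rng.normal(15.0, 5.0, n_grid)     # hypothetical "true" climatology

    # Each observational dataset carries its own error; each model has its own bias and noise.
    obs = truth + rng.normal(0.0, 1.0, (n_obs, n_grid))
    models = truth + rng.normal(0.0, 1.5, (n_models, n_grid)) + rng.normal(0.0, 1.0, (n_models, 1))

    def rmse(a, b):
        """Root-mean-square error between gridded climatologies (last axis = grid)."""
        return np.sqrt(np.mean((a - b) ** 2, axis=-1))

    # RMSE of every model against every observational dataset: shape (n_obs, n_models).
    errors = np.stack([rmse(models, o) for o in obs])

    # Rank the models separately for each reference dataset (0 = best).
    rankings = np.argsort(np.argsort(errors, axis=1), axis=1)
    print("Rank of each model per observational dataset:\n", rankings)

    # Inverse-error weights, one common skill-weighting choice, used here only for illustration.
    weights = 1.0 / errors
    weights /= weights.sum(axis=1, keepdims=True)
    print("Weights depend on the chosen reference:\n", np.round(weights, 3))

Because each observational dataset perturbs the "truth" differently, the rows of `rankings` and `weights` generally disagree, which is the situation the abstract warns about when a single reference dataset is treated as error-free.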