Addressing model uncertainty in seasonal and annual dynamical ensemble forecasts
Author(s) -
Doblas-Reyes F. J.,
Weisheimer A.,
Déqué M.,
Keenlyside N.,
McVean M.,
Murphy J. M.,
Rogel P.,
Smith D.,
Palmer T. N.
Publication year - 2009
Publication title -
Quarterly Journal of the Royal Meteorological Society
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.744
H-Index - 143
eISSN - 1477-870X
pISSN - 0035-9009
DOI - 10.1002/qj.464
Subject(s) - parametrization (atmospheric modeling) , ensemble forecasting , ensemble average , range (aeronautics) , stochastic modelling , forecast skill , sampling (signal processing) , econometrics , environmental science , statistics , statistical physics , mathematics , computer science , meteorology , climatology , physics , materials science , filter (signal processing) , quantum mechanics , geology , composite material , computer vision , radiative transfer
Abstract The relative merits of three forecast systems that address the impact of model uncertainty on seasonal and annual forecasts are described. One system is a multi‐model ensemble, whereas the other two sample uncertainties by perturbing the parametrizations of a reference model, using perturbed‐parameter and stochastic‐physics techniques. Ensemble re‐forecasts over the period 1991–2001 were performed with coupled climate models started from realistic initial conditions. Forecast quality varies not only because of the different strategies for sampling model uncertainty, but also because of differences in initialisation methods and in the reference forecast systems. Both the stochastic‐physics and perturbed‐parameter ensembles improve reliability with respect to their reference forecast systems, but not discrimination ability. Although the multi‐model experiment has a larger ensemble size than the other two experiments, most of the assessment was carried out with equally sized ensembles. The three ensembles show similar levels of skill: significant differences in performance typically range between 5 and 20%. However, a nine‐member multi‐model ensemble gives better results for seasonal predictions with lead times shorter than five months, followed by the stochastic‐physics and perturbed‐parameter ensembles. Conversely, for seasonal predictions with lead times longer than four months, the perturbed‐parameter ensemble more often gives the better results. In all three systems, spread cannot be considered a useful predictor of skill. Annual‐mean predictions show lower forecast quality than seasonal predictions, with only small differences between the systems. The full multi‐model ensemble improves on all the other systems, mainly because of its larger ensemble size, for lead times longer than four months and for annual predictions. Copyright © 2009 Royal Meteorological Society and Crown Copyright
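The spread–skill statement in the abstract refers to a standard diagnostic: correlating the ensemble spread with the ensemble-mean error across start dates. The sketch below illustrates that diagnostic on synthetic data; it is not the paper's verification code, and all sizes and variables (n_starts, n_members, the noise model) are illustrative assumptions. Note that the synthetic data are deliberately built so that spread does track error amplitude, whereas the paper finds no such useful relationship in the real re-forecasts.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 44 start dates (e.g. 4 per year over 1991-2001),
# 9 members, matching the nine-member comparisons mentioned in the abstract.
n_starts, n_members = 44, 9
truth = rng.normal(size=n_starts)  # stand-in for verifying observations

# Hypothetical ensemble: truth plus noise whose amplitude varies by start
# date, so spread and expected error are related by construction here.
noise_amp = rng.uniform(0.5, 1.5, size=n_starts)
ens = truth[:, None] + noise_amp[:, None] * rng.normal(size=(n_starts, n_members))

ens_mean = ens.mean(axis=1)
spread = ens.std(axis=1, ddof=1)      # ensemble spread per start date
error = np.abs(ens_mean - truth)      # ensemble-mean absolute error

# If spread were a useful predictor of skill, this correlation would be
# clearly positive across start dates.
r = np.corrcoef(spread, error)[0, 1]
print(f"spread-error correlation: {r:.2f}")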