Open Access
Operational statistical postprocessing of temperature ensemble forecasts with station‐specific predictors
Author(s) -
Ylinen Kaisa,
Räty Olle,
Laine Marko
Publication year - 2020
Publication title -
Meteorological Applications
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.672
H-Index - 59
eISSN - 1469-8080
pISSN - 1350-4827
DOI - 10.1002/met.1971
Subject(s) - calibration , environmental science , latitude , meteorology , elevation , lead time , climatology , ensemble forecasting , ensemble average , forecast skill , computer science , statistics , mathematics , geography , geodesy , geology
Abstract A proper account of forecast uncertainty is crucial in operational weather services and weather‐related decision‐making. Ensemble forecasts provide such information; however, they may be biased and tend to be under‐dispersive. Therefore, ensemble forecasts need to be post‐processed before they are used in operational weather products. The present study post‐processes European Centre for Medium‐Range Weather Forecasts (ECMWF) ensemble prediction system temperature forecasts over Europe, with lead times up to 240 hr, using the statistical calibration method currently implemented in the operational workflow at the Finnish Meteorological Institute (FMI). The calibration coefficients are estimated simultaneously for all stations using a 30 day rolling training period. Station‐specific characteristics are accounted for by using elevation, latitude and land–sea mask as additional predictors in the calibration. On average, the calibration improved the ensemble spread over Europe, although the improvements varied between verification months. In March, the calibration improved the ensemble forecasts the most, while in January the performance depended strongly on location. A comparison between three versions with different sets of station‐specific predictors showed that elevation was the most important predictor, while latitude and land–sea mask improved the forecasts mainly at shorter lead times. The calibration for the Finnish stations was also tested using training domains of three different sizes in order to find the optimal training area. The results showed that smaller training domains had a significant effect on calibration performance only at lead times up to a few days. At longer lead times, the calibrated forecasts were better when all available stations were included.
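To make the described setup concrete, the sketch below illustrates one way such a calibration could be formulated: an EMOS / nonhomogeneous Gaussian regression in which the predictive mean depends on the ensemble mean plus the station-specific predictors (elevation, latitude, land–sea mask), with coefficients fitted jointly over all stations in a rolling training window. This is a minimal illustrative assumption only; the abstract does not specify the exact model form, estimation criterion, or variable names used in the FMI operational workflow, and all identifiers here are hypothetical.

```python
# Illustrative EMOS-style calibration with station-specific predictors.
# Assumed Gaussian predictive distribution; NOT the confirmed FMI implementation.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, ens_mean, ens_std, elev, lat, lsm, obs):
    """Gaussian negative log-likelihood: linear mean model, log-linear spread model."""
    a0, a1, a2, a3, a4, b0, b1 = params
    mu = a0 + a1 * ens_mean + a2 * elev + a3 * lat + a4 * lsm
    sigma = np.exp(b0 + b1 * np.log(ens_std + 1e-6))  # keeps the spread positive
    return -np.sum(norm.logpdf(obs, loc=mu, scale=sigma))

def fit_calibration(ens_mean, ens_std, elev, lat, lsm, obs):
    """Fit coefficients jointly over all stations pooled in the 30 day training window."""
    x0 = np.array([0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0])
    res = minimize(neg_log_likelihood, x0,
                   args=(ens_mean, ens_std, elev, lat, lsm, obs),
                   method="Nelder-Mead")
    return res.x
```

In this sketch the input arrays would be pooled over all stations and dates in the rolling 30 day training period, one fit per lead time; the fitted coefficients then give a calibrated mean and spread at each station. Experiments such as dropping latitude or the land–sea mask, or restricting the pooled stations to a smaller training domain, correspond to the comparisons described in the abstract.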