Open Access
Short-Range Precipitation Forecasts from Time-Lagged Multimodel Ensembles during the HMT-West-2006 Campaign
Author(s) - Huiling Yuan, John McGinley, Paul Schultz, Christopher J. Anderson, Chungu Lu
Publication year - 2008
Publication title - Journal of Hydrometeorology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.733
H-Index - 123
eISSN - 1525-755X
pISSN - 1525-7541
DOI - 10.1175/2007jhm879.1
Subject(s) - mesoscale meteorology, quantitative precipitation forecast, MM5, environmental science, meteorology, forecast skill, North American Mesoscale Model, precipitation, precipitable water, hydrometeorology, Weather Research and Forecasting model, data assimilation, climatology, hindcast, initialization, probabilistic logic, weather forecasting, computer science, Global Forecast System, artificial intelligence, geography, geology, programming language
High-resolution (3 km) time-lagged (initialized every 3 h) multimodel ensembles were produced in support of the Hydrometeorological Testbed (HMT)-West-2006 campaign in northern California, covering the American River basin (ARB). Multiple mesoscale models were used, including the Weather Research and Forecasting (WRF) model, the Regional Atmospheric Modeling System (RAMS), and the fifth-generation Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model (MM5). Short-range (6 h) quantitative precipitation forecasts (QPFs) and probabilistic QPFs (PQPFs) were compared to the 4-km NCEP stage IV precipitation analyses for archived intensive operation periods (IOPs). The two sets of ensemble runs (operational and rerun forecasts) were examined to evaluate the quality of high-resolution QPFs produced by time-lagged multimodel ensembles and to investigate the impacts of ensemble configurations on forecast skill. Uncertainties in precipitation forecasts were associated with different models, model physics, and initial and boundary conditions. Diabatic initialization by the Local Analysis and Prediction System (LAPS) improved the precipitation forecasts, while the choice of microphysics scheme was critical in ensemble design. Probability biases in the ensemble products were addressed by calibrating the PQPFs. Bias correction of the PQPFs using artificial neural network (ANN) and linear regression (LR) methods, together with a cross-validation procedure, was applied to three operational IOPs and four rerun IOPs. Both the ANN and LR methods effectively improved PQPFs, especially at lower thresholds. The LR method outperformed the ANN method in bias correction, particularly for smaller training data sizes. More training data (e.g., a full season of forecasts) are desirable to test the robustness of both calibration methods.
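To make the calibration workflow described in the abstract concrete, the following is a minimal Python sketch of one way such a procedure could look: raw PQPFs are derived as the fraction of time-lagged multimodel ensemble members exceeding a precipitation threshold, and these probabilities are then bias corrected with linear regression and a small neural network under leave-one-IOP-out cross-validation. The synthetic data, the use of the raw PQPF as the sole predictor, the scikit-learn model choices, and all variable names are illustrative assumptions, not the paper's implementation.

# Hypothetical sketch: PQPFs from a time-lagged multimodel ensemble, bias corrected
# with linear regression (LR) and a small neural network (ANN) under
# leave-one-IOP-out cross-validation. All settings here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in data: 7 IOPs, each with 500 grid points and 12 ensemble
# members (time-lagged runs from several models); 6-h accumulations in mm.
n_iops, n_points, n_members = 7, 500, 12
ens = rng.gamma(shape=0.8, scale=5.0, size=(n_iops, n_points, n_members))
obs = rng.gamma(shape=0.8, scale=5.0, size=(n_iops, n_points))  # stand-in for stage IV

threshold = 2.5  # mm / 6 h exceedance threshold

# Raw PQPF: fraction of members exceeding the threshold at each grid point.
raw_pqpf = (ens > threshold).mean(axis=2)       # shape (n_iops, n_points)
obs_event = (obs > threshold).astype(float)     # observed exceedance (0/1)

def brier_score(p, o):
    # Mean squared difference between forecast probability and observed event.
    return np.mean((p - o) ** 2)

# Leave-one-IOP-out cross-validation: train the calibration on the other IOPs,
# apply it to the held-out IOP, and compare Brier scores with the raw PQPF.
for held_out in range(n_iops):
    train = [i for i in range(n_iops) if i != held_out]
    x_train = raw_pqpf[train].reshape(-1, 1)
    y_train = obs_event[train].reshape(-1)
    x_test = raw_pqpf[held_out].reshape(-1, 1)
    y_test = obs_event[held_out].reshape(-1)

    lr = LinearRegression().fit(x_train, y_train)
    ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                       random_state=0).fit(x_train, y_train)

    # Clip regression output to valid probabilities.
    lr_pqpf = np.clip(lr.predict(x_test), 0.0, 1.0)
    ann_pqpf = np.clip(ann.predict(x_test), 0.0, 1.0)

    print(f"IOP {held_out}: raw BS={brier_score(x_test.ravel(), y_test):.3f}  "
          f"LR BS={brier_score(lr_pqpf, y_test):.3f}  "
          f"ANN BS={brier_score(ann_pqpf, y_test):.3f}")

In the study itself, the calibration was trained and verified on HMT IOP forecasts paired with stage IV analyses; the random data above only stand in for that pairing to keep the sketch self-contained.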
