Stochastic downscaling of precipitation with neural network conditional mixture models
Author(s) - Julie Carreau, Mathieu Vrac
Publication year - 2011
Publication title - Water Resources Research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.863
H-Index - 217
eISSN - 1944-7973
pISSN - 0043-1397
DOI - 10.1029/2010WR010128
Subject(s) - downscaling , artificial neural network , conditional probability distribution , generalized Pareto distribution , extreme value theory , precipitation , meteorology , statistics , mathematical optimization , computer science
We present a new class of stochastic downscaling models, the conditional mixture models (CMMs), which build on neural network models. CMMs are mixture models whose parameters are functions of predictor variables. These functions are implemented with a one‐layer feed‐forward neural network. By combining the approximation capabilities of mixtures and neural networks, CMMs can, in principle, represent arbitrary conditional distributions. We evaluate the CMMs on downscaling precipitation data at three stations in the French Mediterranean region. A discrete (Dirac) component is included in the mixture to handle the “no‐rain” events. Positive rainfall is modeled with a mixture of continuous densities, which can be either Gaussian, log‐normal, or hybrid Pareto (an extension of the generalized Pareto). CMMs are stochastic weather generators in the sense that they provide a model for the conditional density of local variables given large‐scale information. In this study, we did not look for the most appropriate set of predictors, and we settled for a decent set as the basis to compare the downscaling models. The set of predictors includes the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis sea level pressure fields on a 6 × 6 grid cell region surrounding the stations plus three date variables. We compare the three distribution families of CMMs with a simpler benchmark model, which is more common in the downscaling community. The difference between the benchmark model and the CMMs is that in the benchmark, positive rainfall is modeled with a single Gamma distribution. The results show that the CMM with hybrid Pareto components outperforms both the CMM with Gaussian components and the benchmark model in terms of log‐likelihood. However, there is no significant difference from the log‐normal CMM.
In general, the additional flexibility of mixture models, as opposed to using a single distribution, allows us to better represent the distribution of rainfall, both in the central part and in the upper tail.
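To make the model structure concrete, the following is a minimal sketch of a CMM likelihood evaluation: a one-hidden-layer feed-forward network maps the predictors to a Dirac ("no-rain") weight plus the weights, locations, and scales of a continuous mixture, and the conditional log-likelihood sums the Dirac mass on dry days and the mixture density on wet days. Gaussian components are used here purely for illustration (the paper's best-performing variant uses hybrid Pareto components instead); all function and variable names are hypothetical and this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax along the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cmm_params(x, W1, b1, W2, b2, n_comp):
    """One-hidden-layer network mapping predictors x (n, d) to the CMM
    parameters: rain probability, mixture weights, means, scales."""
    h = np.tanh(x @ W1 + b1)                  # hidden layer
    out = h @ W2 + b2                         # raw parameter outputs
    p_rain = 1.0 / (1.0 + np.exp(-out[:, 0]))             # 1 - Dirac weight
    weights = softmax(out[:, 1:1 + n_comp])               # mixture weights
    means = out[:, 1 + n_comp:1 + 2 * n_comp]             # component means
    sigmas = np.exp(out[:, 1 + 2 * n_comp:1 + 3 * n_comp])  # positive scales
    return p_rain, weights, means, sigmas

def cmm_loglik(y, x, params, n_comp):
    """Conditional log-likelihood of rainfall y given predictors x:
    y == 0 hits the Dirac component, y > 0 the continuous mixture."""
    p_rain, w, mu, sigma = cmm_params(x, *params, n_comp)
    ll = np.where(y == 0, np.log(1.0 - p_rain), 0.0)      # dry days
    dens = (w / (sigma * np.sqrt(2 * np.pi)) *
            np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2)).sum(axis=1)
    wet = y > 0
    ll = np.where(wet, np.log(p_rain) + np.log(np.maximum(dens, 1e-300)), ll)
    return ll.sum()

# Toy usage with random weights and synthetic intermittent rainfall.
d, hdim, K, n = 5, 8, 3, 100
W1 = rng.normal(0.0, 0.1, (d, hdim)); b1 = np.zeros(hdim)
W2 = rng.normal(0.0, 0.1, (hdim, 1 + 3 * K)); b2 = np.zeros(1 + 3 * K)
x = rng.normal(size=(n, d))
y = np.where(rng.random(n) < 0.4, 0.0, rng.gamma(2.0, 2.0, n))
ll = cmm_loglik(y, x, (W1, b1, W2, b2), K)
```

In practice the network weights would be fitted by maximizing this log-likelihood over the training data (e.g., by gradient ascent), and swapping the Gaussian density for a log-normal or hybrid Pareto density changes only the component-density expression inside `cmm_loglik`.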
