Efficient prediction uncertainty approximation in the calibration of environmental simulation models
Author(s) -
Tolson Bryan A.,
Shoemaker Christine A.
Publication year - 2008
Publication title -
Water Resources Research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.863
H-Index - 217
eISSN - 1944-7973
pISSN - 0043-1397
DOI - 10.1029/2007WR005869
Subject(s) - calibration, environmental science, computer science, econometrics, mathematics, statistics
This paper aims to improve the efficiency of model uncertainty analyses that are conditioned on measured calibration data. Specifically, the focus is on developing an alternative to the generalized likelihood uncertainty estimation (GLUE) technique when pseudolikelihood functions are used in place of a traditional statistical likelihood function. We demonstrate for multiple calibration case studies that the sampling approach most commonly used in GLUE applications, uniform random sampling, is far too inefficient and can generate misleading estimates of prediction uncertainty. We show how the new dynamically dimensioned search (DDS) optimization algorithm can be used to independently identify multiple acceptable or behavioral model parameter sets in two ways. First, DDS can replace random sampling in typical applications of GLUE. Second, and more importantly, we present a new, practical, and efficient uncertainty analysis methodology called DDS approximation of uncertainty (DDS-AU) that quantifies prediction uncertainty using prediction bounds rather than prediction limits. Results for 13-, 14-, 26-, and 30-parameter calibration problems show that DDS-AU can be hundreds or thousands of times more efficient than GLUE with random sampling at finding behavioral parameter sets. Results for one example show that, for the same limited computational effort, DDS-AU prediction bounds can simultaneously be smaller and contain more of the measured data than GLUE prediction bounds. We also argue, and then demonstrate, that within the GLUE framework, when behavioral parameter sets are sampled too infrequently, Latin hypercube sampling offers no improvement over simple random sampling.
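To make the contrast with uniform random sampling concrete, the core DDS search loop can be sketched as follows. This is an illustrative reimplementation based on the published description of DDS (greedy acceptance, a perturbed-dimension subset that shrinks probabilistically as the evaluation budget is spent, Gaussian perturbations reflected at the bounds), not the authors' own code; the function and parameter names (`dds`, `r`, `max_evals`) and the toy objective are assumptions for the sketch.

```python
import math
import random


def dds(objective, lo, hi, max_evals=1000, r=0.2, seed=0):
    """Minimal sketch of dynamically dimensioned search (DDS).

    Single-solution greedy search: each iteration perturbs a randomly
    chosen subset of decision variables, and the expected size of that
    subset shrinks as the evaluation budget is used up, shifting the
    search from global to local.
    """
    rng = random.Random(seed)
    n = len(lo)
    # Start from a uniform random point in the bounded parameter space.
    best = [rng.uniform(lo[j], hi[j]) for j in range(n)]
    best_f = objective(best)
    for i in range(1, max_evals):
        # Probability of perturbing each dimension decays with iteration i.
        p = 1.0 - math.log(i) / math.log(max_evals)
        dims = [j for j in range(n) if rng.random() < p]
        if not dims:
            dims = [rng.randrange(n)]  # always perturb at least one dimension
        cand = best[:]
        for j in dims:
            sigma = r * (hi[j] - lo[j])  # neighborhood size scales with range
            x = cand[j] + rng.gauss(0.0, sigma)
            # Reflect perturbations that leave the bounds back inside.
            if x < lo[j]:
                x = lo[j] + (lo[j] - x)
                if x > hi[j]:
                    x = lo[j]
            elif x > hi[j]:
                x = hi[j] - (x - hi[j])
                if x < lo[j]:
                    x = hi[j]
            cand[j] = x
        f = objective(cand)
        if f <= best_f:  # greedy acceptance of non-worsening candidates
            best, best_f = cand, f
    return best, best_f


# Toy usage: minimize a 4-D sphere function over [-5, 5]^4.
best, best_f = dds(lambda x: sum(v * v for v in x),
                   lo=[-5.0] * 4, hi=[5.0] * 4, max_evals=2000)
```

In a GLUE-style analysis, `objective` would be the (pseudo)likelihood-based calibration criterion, and each independent DDS run (different `seed`) would contribute one candidate behavioral parameter set, rather than relying on uniform random draws to land in the behavioral region.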