On the Robustness of Conceptual Rainfall‐Runoff Models to Calibration and Evaluation Data Set Splits Selection: A Large Sample Investigation
Author(s) -
Guo Danlu,
Zheng Feifei,
Gupta Hoshin,
Maier Holger R.
Publication year - 2020
Publication title -
Water Resources Research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.863
H-Index - 217
eISSN - 1944-7973
pISSN - 0043-1397
DOI - 10.1029/2019wr026752
Subject(s) - robustness (evolution) , calibration , surface runoff , environmental science , baseflow , computer science , skewness , hydrological modelling , drainage basin , data mining , statistics , mathematics , streamflow , climatology , geography , geology , cartography , ecology , biochemistry , chemistry , biology , gene
Conceptual rainfall‐runoff (CRR) models are widely used for runoff simulation and for prediction under a changing climate. The models are often calibrated with only a portion of all available data at a location and then evaluated independently with another part of the data for reliability assessment. Previous studies report a persistent decrease in CRR model performance when applying the calibrated model to the evaluation data. However, there remains a lack of comprehensive understanding about the nature of this “low transferability” problem and why it occurs. In this study we employ a large sample approach to investigate the robustness of CRR models across calibration/evaluation data splits. Specifically, we investigate (1) how robust CRR model performance is across calibration/evaluation data splits at catchments with a wide range of hydroclimatic conditions, and (2) whether the robustness of model performance is related to the hydroclimatic characteristics of a catchment. We apply three widely used CRR models, GR4J, AWBM, and IHACRES_CMD, to 163 Australian catchments with long‐term historical data. Each model is calibrated and evaluated at each catchment using a large number of data splits, resulting in a total of 929,160 calibrated models. Results show that (1) model performance generally exhibits poor robustness across calibration/evaluation data splits and (2) lower model robustness is correlated with specific catchment characteristics, such as higher runoff skewness and aridity, highly variable baseflow contribution, and lower rainfall‐runoff ratio. These results provide a valuable benchmark for future model robustness assessments and useful guidance for model calibration and evaluation.
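
The split-sample design summarized in the abstract lends itself to a simple illustration. The sketch below is not the authors' code: the toy one-parameter runoff model, synthetic data, sliding 10-year block splits, and use of the Nash-Sutcliffe efficiency (NSE) as the skill score are all assumptions made purely for demonstration of how calibration and evaluation performance can be compared across many data splits.

# Minimal sketch of the split-sample idea: calibrate a placeholder
# rainfall-runoff model on one data segment, evaluate it on another,
# and repeat over many calibration/evaluation splits.
# NOT the authors' code; GR4J/AWBM/IHACRES_CMD are not implemented here.
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than the mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def calibrate(rain, flow):
    """Fit a single runoff coefficient by least squares (placeholder model)."""
    return np.dot(rain, flow) / np.dot(rain, rain)

def simulate(coef, rain):
    return coef * rain

rng = np.random.default_rng(0)
n_years, days = 30, 365
rain = rng.gamma(0.5, 8.0, size=n_years * days)            # synthetic daily rainfall
flow = 0.3 * rain + rng.normal(0, 1.0, rain.size).clip(0)  # synthetic daily runoff

# Enumerate contiguous splits: calibrate on one 10-year block, evaluate on another.
block = 10 * days
starts = range(0, rain.size - block + 1, days)  # slide the block by one year
results = []
for cal_start in starts:
    cal = slice(cal_start, cal_start + block)
    coef = calibrate(rain[cal], flow[cal])
    for eval_start in starts:
        if eval_start == cal_start:
            continue  # evaluation data must differ from the calibration data
        ev = slice(eval_start, eval_start + block)
        results.append((nse(flow[cal], simulate(coef, rain[cal])),
                        nse(flow[ev], simulate(coef, rain[ev]))))

cal_nse, eval_nse = np.array(results).T
print(f"{len(results)} splits: median calibration NSE {np.median(cal_nse):.2f}, "
      f"median evaluation NSE {np.median(eval_nse):.2f}")

In the paper's setting, the placeholder model would be replaced by GR4J, AWBM, or IHACRES_CMD, and the spread of evaluation-period skill across splits would indicate how robust the calibrated model is to the choice of calibration/evaluation data.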
