Simulating runoff under changing climatic conditions: Revisiting an apparent deficiency of conceptual rainfall‐runoff models
Author(s) -
Fowler Keirnan J. A.,
Peel Murray C.,
Western Andrew W.,
Zhang Lu,
Peterson Tim J.
Publication year - 2016
Publication title -
Water Resources Research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.863
H-Index - 217
eISSN - 1944-7973
pISSN - 0043-1397
DOI - 10.1002/2015wr018068
Subject(s) - surface runoff , generalized pareto distribution , pareto principle , calibration , environmental science , sample (material) , conceptual model , climate change , econometrics , hydrological modelling , computer science , hydrology (agriculture) , statistics , mathematics , climatology , extreme value theory , geology , ecology , chemistry , geotechnical engineering , chromatography , database , biology , oceanography
Hydrologic models have the potential to be useful tools in planning for future climate variability. However, recent literature suggests that the current generation of conceptual rainfall‐runoff models tends to underestimate the sensitivity of runoff to a given change in rainfall, leading to poor performance when evaluated over multiyear droughts. This research revisited that conclusion, investigating whether the observed poor performance could instead be due to insufficient model calibration and evaluation techniques. We applied an approach based on Pareto optimality to explore trade‐offs between model performance under different climatic conditions. Five conceptual rainfall‐runoff model structures were tested in 86 catchments in Australia, for a total of 430 Pareto analyses. The Pareto results were then compared with results from a commonly used model calibration and evaluation method, the Differential Split Sample Test. We found that the latter often missed potentially promising parameter sets within a given model structure, giving a falsely negative impression of the capabilities of the model. This suggests that models may be more capable under changing climatic conditions than previously thought. Of the 282 [347] cases of apparent model failure under the split sample test using the lower [higher] of two model performance criteria trialed, 155 [120] were false negatives. We discuss potential causes of the remaining model failures, including the role of data errors. Although the Pareto approach proved useful, our aim was not to propose an alternative calibration strategy but to critically assess existing methods of model calibration and evaluation. We recommend caution when interpreting split sample results.
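The Pareto analysis described in the abstract amounts to finding parameter sets that are not dominated in both climatic periods at once: a set is kept if no other set scores at least as well in both periods and strictly better in one. A minimal sketch of that non-domination test is below; the function name, the use of two periods, and the example scores are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: identify Pareto-optimal (non-dominated) parameter sets
# when model performance is scored separately in two climatic periods, e.g. a
# wetter calibration period and a drier evaluation period. Higher is better
# for both objectives (as with Nash-Sutcliffe efficiency).

def pareto_front(scores):
    """Return indices of non-dominated points, maximizing both objectives.

    scores: list of (period1_score, period2_score) tuples.
    """
    front = []
    for i, (a1, a2) in enumerate(scores):
        dominated = any(
            b1 >= a1 and b2 >= a2 and (b1 > a1 or b2 > a2)
            for j, (b1, b2) in enumerate(scores)
            if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Hypothetical scores for five parameter sets:
# (score in wetter period, score in drier period)
scores = [(0.90, 0.40), (0.85, 0.60), (0.70, 0.75), (0.60, 0.50), (0.95, 0.20)]
print(pareto_front(scores))  # -> [0, 1, 2, 4]: set 3 is dominated by set 1
```

A split-sample test that fixes a single calibration objective would pick only one point from such a front; examining the whole front reveals parameter sets that trade a little calibration-period skill for much better performance in the contrasting climate, which is the kind of "false negative" the study reports.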