Ensemble Streamflow Forecast: A GLUE‐Based Neural Network Approach
Author(s) - Asefa Tirusew
Publication year - 2009
Publication title - JAWRA Journal of the American Water Resources Association
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.957
H-Index - 105
eISSN - 1752-1688
pISSN - 1093-474X
DOI - 10.1111/j.1752-1688.2009.00351.x
Subject(s) - lag , generalization , artificial neural network , streamflow , computer science , range (aeronautics) , sample (material) , glue , a priori and a posteriori , network model , population , measure (data warehouse) , process (computing) , sample space , data mining , mathematical optimization , mathematics , artificial intelligence , engineering , philosophy , aerospace engineering , mathematical analysis , computer network , chemistry , sociology , operating system , epistemology , chromatography , mechanical engineering , drainage basin , demography , cartography , geography
While training a Neural Network to model a rainfall‐runoff process, two aspects are generally considered: its capability to describe the complex nature of the processes being modeled, and its ability to generalize so that novel samples can be mapped correctly. The general conclusion is that the smallest network capable of representing the sample distribution is the best choice as far as generalization is concerned. Oftentimes input variables are selected a priori, in what is called an exploratory data analysis stage, and are not part of the actual network training and testing procedures. Even when they are, the final model has only a "fixed" set of inputs, lag‐space, and/or network structure; if one of these constituents were to change, one would obtain another, equally "optimal" Neural Network. Following Beven and others' generalized likelihood uncertainty estimation (GLUE) approach, a methodology is introduced here that accounts for uncertainties in network structures, types of inputs, and their lag‐space relationships by considering a population of Neural Networks rather than targeting a single "optimal" network. It is shown that a wide array of networks provides "similar" results, as judged by a likelihood measure, across different combinations of input types, lag‐space, and network size. These equally optimal networks expose the range of uncertainty in streamflow predictions, and their expected value performs better than any single network's predictions.
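The GLUE-style procedure the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: a synthetic AR(2) series stands in for observed streamflow, the candidate lag sets and hidden-layer sizes, the Nash-Sutcliffe efficiency used as the likelihood measure, and the behavioral cutoff are all illustrative assumptions. The idea is to train a population of small networks with randomly varied inputs, lags, and sizes, retain the "behavioral" ones, and form a likelihood-weighted ensemble prediction with an uncertainty band.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "streamflow" record: an AR(2) process stands in for real data.
n = 400
q = np.zeros(n)
for t in range(2, n):
    q[t] = 0.6 * q[t - 1] + 0.3 * q[t - 2] + rng.normal(0.0, 0.3)

MAX_LAG = 5  # all candidate lag sets are drawn from 1..MAX_LAG

def make_inputs(series, lags):
    """Stack lagged values as network inputs, aligned to a common target."""
    return np.column_stack(
        [series[MAX_LAG - l: len(series) - l] for l in lags])

def train_net(X, y, hidden, epochs=600, lr=0.3):
    """Tiny one-hidden-layer tanh network, full-batch gradient descent."""
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, hidden)
    b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        err = h @ W2 + b2 - y                     # residuals
        dh = np.outer(err, W2) * (1.0 - h ** 2)   # backprop through tanh
        W2 -= lr * (h.T @ err) / len(y)
        b2 -= lr * err.mean()
        W1 -= lr * (X.T @ dh) / len(y)
        b1 -= lr * dh.mean(axis=0)
    return lambda Z: np.tanh(Z @ W1 + b1) @ W2 + b2

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, used here as the GLUE likelihood measure."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

targets = q[MAX_LAG:]
split = int(0.75 * len(targets))          # calibration / evaluation split
y_cal, y_val = targets[:split], targets[split:]

# GLUE-style population: sample input lag sets and hidden-layer sizes.
behavioral = []
for _ in range(25):
    lags = sorted(rng.choice(np.arange(1, MAX_LAG + 1),
                             size=rng.integers(1, 4), replace=False))
    hidden = int(rng.integers(2, 9))
    X = make_inputs(q, lags)
    model = train_net(X[:split], y_cal, hidden)
    pred = model(X[split:])
    like = nse(y_val, pred)
    if like > 0.0:                        # behavioral cutoff (an assumption)
        behavioral.append((like, pred))

# Likelihood-weighted ensemble mean and a simple 5-95% member band
# that exposes the range of uncertainty in the predictions.
weights = np.array([l for l, _ in behavioral])
weights /= weights.sum()
member_preds = np.array([p for _, p in behavioral])
ensemble = weights @ member_preds
band_lo, band_hi = np.percentile(member_preds, [5, 95], axis=0)
ens_nse = nse(y_val, ensemble)
```

Retaining every behavioral network rather than the single best one mirrors the paper's point: many input/lag/size combinations are "equally optimal" under the likelihood measure, and their weighted expectation tends to outperform any individual member.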
