Model Uncertainty and Choices Made by Modelers: Lessons Learned from the International Atomic Energy Agency Model Intercomparisons †
Author(s) - Igor Linkov, Dmitriy Burmistrov
Publication year - 2003
Publication title - Risk Analysis
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.972
H-Index - 130
eISSN - 1539-6924
pISSN - 0272-4332
DOI - 10.1111/j.0272-4332.2003.00402.x
Subject(s) - uncertainty analysis, monte carlo method, computer science, probabilistic logic, range (aeronautics), uncertainty quantification, atomic energy, operations research, sensitivity analysis, agency (philosophy), statistics, simulation, engineering, mathematics, machine learning, artificial intelligence, philosophy, epistemology, aerospace engineering
The treatment of uncertainties associated with modeling and risk assessment has recently attracted significant attention. The methodology and guidance for dealing with parameter uncertainty have been fairly well developed, and quantitative tools such as Monte Carlo modeling are often recommended. However, the issue of model uncertainty is still rarely addressed in practical applications of risk assessment. The use of several alternative models to derive a range of model outputs or risks is one of the few available techniques. This article addresses the often‐overlooked issue of what we call “modeler uncertainty,” i.e., differences in problem formulation, model implementation, and parameter selection originating from subjective interpretation of the problem at hand. This study uses results from the Fruit Working Group, which was created under the International Atomic Energy Agency (IAEA) BIOMASS program (BIOsphere Modeling and ASSessment). Model‐model and model‐data intercomparisons reviewed in this study were conducted by the working group for a total of three different scenarios. The greatest uncertainty was found to result from modelers' interpretation of scenarios and approximations made by modelers. In scenarios that were unclear to modelers, the initial differences in model predictions were as high as seven orders of magnitude. Only after several meetings and discussions about specific assumptions did the predictions by various models converge. Our study shows that parameter uncertainty (as evaluated by a probabilistic Monte Carlo assessment) may have contributed over one order of magnitude to the overall modeling uncertainty. The final model predictions ranged between one and three orders of magnitude, depending on the specific scenario. This study illustrates the importance of problem formulation and implementation of an analytic‐deliberative process in risk characterization.
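The kind of probabilistic Monte Carlo assessment of parameter uncertainty mentioned above can be sketched as follows. The one-compartment transfer model and all parameter distributions here are illustrative assumptions for exposition only, not the models or values used by the Fruit Working Group; the point is simply that propagating lognormal parameter distributions through a model yields a multiplicative spread in the output, which can itself exceed an order of magnitude.

```python
import random

# Hypothetical one-compartment model (an assumption, not the IAEA BIOMASS model):
# activity concentration in fruit ~ deposition * transfer_factor / weathering_rate.
# All distribution parameters below are illustrative, not measured values.
random.seed(42)

def sample_concentration():
    deposition = random.lognormvariate(0.0, 0.5)        # illustrative, Bq/m^2
    transfer_factor = random.lognormvariate(-2.0, 0.8)  # illustrative, m^2/kg
    weathering_rate = random.lognormvariate(-3.0, 0.3)  # illustrative, 1/d
    return deposition * transfer_factor / weathering_rate

# Monte Carlo propagation: sample parameters many times, collect outputs.
samples = sorted(sample_concentration() for _ in range(10_000))
p5, p95 = samples[500], samples[9500]   # empirical 5th / 95th percentiles
spread = p95 / p5                        # multiplicative width of the 90% interval

print(f"5th percentile: {p5:.3g}, 95th percentile: {p95:.3g}")
print(f"90% interval spans a factor of {spread:.1f}")
```

Because the parameters enter multiplicatively, their lognormal uncertainties combine in the exponent, so even modest per-parameter spreads produce an output interval spanning more than a factor of ten, consistent with the order-of-magnitude contribution of parameter uncertainty reported above.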
