When Is a Model Good Enough? Deriving the Expected Value of Model Improvement via Specifying Internal Model Discrepancies
Author(s) -
Mark Strong,
Jeremy E. Oakley
Publication year - 2014
Publication title -
SIAM/ASA Journal on Uncertainty Quantification
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.094
H-Index - 29
ISSN - 2166-2525
DOI - 10.1137/120889563
Subject(s) - value of information , Bayesian probability , Bayesian inference , computer science , econometrics , mathematical optimization , mathematics , machine learning
A “law-driven” or “mechanistic” computer model is a representation of judgments about the functional relationship between one set of quantities (the model inputs) and another set of target quantities (the model outputs). We recognize that we can rarely define with certainty a “true” model for a particular problem. Building an “incorrect” model will result in an uncertain prediction error, which we denote “structural uncertainty.” Structural uncertainty can be quantified within a Bayesian framework via the specification of a series of internal discrepancy terms, each representing, at the level of a subfunction within the model, the difference between the subfunction output and the true value of the intermediate quantity implied by that subfunction. Using value of information analysis we can then determine the expected value of learning the discrepancy terms, which we loosely interpret as an upper bound on the “expected value of model improvement.” We illustrate the method using a case study model drawn from the ...
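The calculation the abstract describes can be illustrated with a minimal Monte Carlo sketch. The toy two-stage model, the decision problem, and all distributions below are hypothetical choices (not taken from the paper): an input `x`, a subfunction `g1`, an internal discrepancy `delta` on the intermediate quantity, a second subfunction `g2`, and a binary adopt/don't-adopt decision whose net benefit is the model output. The expected value of learning `delta` is the gain from deciding after observing the discrepancy rather than before.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-stage model: z = g1(x), y = g2(z). Structural uncertainty enters
# as an internal discrepancy delta on the intermediate quantity z.
def g1(x):
    return np.log1p(x)

def g2(z):
    return 3.0 * (z - 1.0)        # linear, so E[g2(Z)] = g2(E[Z])

n = 200_000
x = rng.gamma(2.0, 1.0, size=n)   # input uncertainty (hypothetical prior)
delta = rng.normal(0.0, 0.2, n)   # prior on the internal discrepancy

mean_z = g1(x).mean()

# Net benefit of "adopt" is y; of "don't adopt" is 0.
# Because g2 is linear, E_x[y | delta] = g2(mean_z + delta).
nb_adopt_given_delta = g2(mean_z + delta)

# Baseline: commit to one decision under current (joint) uncertainty.
value_now = max(nb_adopt_given_delta.mean(), 0.0)

# With delta learned: pick the best decision for each discrepancy value.
value_learn_delta = np.maximum(nb_adopt_given_delta, 0.0).mean()

# Expected value of learning the discrepancy term, read loosely as an
# upper bound on the expected value of improving the model.
evmi = value_learn_delta - value_now
print(f"Upper bound on expected value of model improvement: {evmi:.4f}")
```

By Jensen's inequality the estimate is nonnegative by construction: averaging the per-realization maximum can never fall below the maximum of the averages, which mirrors the general property that the value of (perfect) information is never negative.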