A Unifying Framework for Probabilistic Validation Metrics
Author(s) -
Paul Gardner,
Charles Lord,
R. J. Barthorpe
Publication year - 2019
Publication title -
Journal of Verification, Validation and Uncertainty Quantification
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.218
H-Index - 4
eISSN - 2377-2166
pISSN - 2377-2158
DOI - 10.1115/1.4045296
Subject(s) - probabilistic logic , metric (unit) , computer science , statistical model , range (aeronautics) , divergence (linguistics) , set (abstract data type) , probability distribution , task (project management) , data mining , statistical power , machine learning , artificial intelligence , statistics , mathematics , engineering , systems engineering , linguistics , operations management , philosophy , programming language , aerospace engineering
Probabilistic modeling methods are increasingly being employed in engineering applications. These approaches make inferences about the distributions of output quantities of interest. A challenge in applying probabilistic computer models (simulators) is validating output distributions against samples of observational data. An ideal validation metric is one that intuitively provides information on key differences between the simulator output and observational distributions, such as statistical distances/divergences. Within the literature, only a small set of statistical distances/divergences have been utilized for this task, often selected based on user experience and without reference to the wider variety available. To address this, this paper offers a unifying framework of statistical distances/divergences, categorizing those implemented within the literature, providing a greater understanding of their benefits, and offering new potential measures as validation metrics. Two families of measures for quantifying differences between distributions, which encompass the existing statistical distances/divergences within the literature, are analyzed: f-divergences and integral probability metrics (IPMs). Specific measures from these families are highlighted, providing an assessment of current and new validation metrics, with a discussion of their merits in determining simulator adequacy, offering validation metrics with greater sensitivity in quantifying differences across the range of probability mass.
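The two families named in the abstract can be illustrated with a minimal sketch: a histogram-based Kullback-Leibler divergence (a member of the f-divergence family) and the 1-Wasserstein distance (a member of the IPM family), each computed between simulator-output samples and observational samples. The Gaussian sample sets and the `kl_from_samples` helper below are illustrative assumptions, not the paper's own data or implementation.

```python
import numpy as np
from scipy.stats import entropy, wasserstein_distance

def kl_from_samples(p_samples, q_samples, bins=30):
    """Histogram estimate of the f-divergence D_KL(P || Q) from two sample sets."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p_hist, _ = np.histogram(p_samples, bins=bins, range=(lo, hi), density=True)
    q_hist, _ = np.histogram(q_samples, bins=bins, range=(lo, hi), density=True)
    eps = 1e-12  # small floor avoids log(0) / division by zero in empty bins
    return entropy(p_hist + eps, q_hist + eps)  # scipy normalizes and sums p*log(p/q)

rng = np.random.default_rng(0)
obs = rng.normal(0.0, 1.0, 5000)   # stand-in for observational data
sim = rng.normal(0.5, 1.0, 5000)   # stand-in for simulator output (mean shifted)

kl = kl_from_samples(obs, sim)      # f-divergence example
w1 = wasserstein_distance(obs, sim) # IPM example: 1-Wasserstein (earth mover's) distance
```

For two equal-variance Gaussians whose means differ by 0.5, the 1-Wasserstein distance is exactly the mean shift (0.5), which the sample estimate recovers approximately; this transparency of units is one reason IPMs are attractive as validation metrics.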