Comments on “A Foundational Justification for a Weighted Likelihood Approach to Inference”
Author(s) - Glenn Shafer
Publication year - 2004
Publication title - International Statistical Review
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.051
H-Index - 54
eISSN - 1751-5823
pISSN - 0306-7734
DOI - 10.1111/j.1751-5823.2004.tb00239.x
Non-additive probability goes back to the very beginning of probability theory: the work of Jacob Bernoulli. Bernoulli’s calculus for combining arguments allowed both sides of a question to attain only small or zero probability, and he also thought the probabilities for the two sides might sometimes add to more than one (Shafer 1978).

Twentieth-century non-additive probability has roots in both mathematics and statistics. On the mathematical side, it is natural to generalize measure-theoretic probability by interpreting upper and lower bounds on the measure of a non-measurable set as the set’s non-additive “upper and lower probabilities”. On the statistical side, it is natural to try to use the greater flexibility of upper and lower probabilities in an effort to find better solutions to problems of inference. A. P. Dempster (1968) and Peter Walley (1991), perhaps the most influential innovators in this domain, both proposed generalizations of Bayesian inference.

In my work on the “Dempster-Shafer theory” in the 1970s and 1980s (Shafer 1976), I called the lower probability ($\underline{P}$ or $P_*$ or Bel) a degree of support or belief. It measures the strength of the evidence for an event but does not necessarily have a betting interpretation. The upper probability ($\overline{P}$ or $P^*$ or Pl) I called “plausibility”: an event or proposition is plausible to the extent its denial is not supported by the evidence.
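To make the belief/plausibility duality concrete, here is a minimal Python sketch, not from the paper, that computes Bel and Pl from a basic probability assignment (mass function) using the standard Dempster-Shafer definitions Bel(A) = sum of m(B) over B a subset of A, and Pl(A) = 1 - Bel(complement of A). The frame and mass values are invented purely for illustration.

```python
def belief(mass, a):
    """Bel(A): total mass committed to subsets of A."""
    return sum(m for b, m in mass.items() if b <= a)

def plausibility(mass, a, frame):
    """Pl(A) = 1 - Bel(A's complement): mass not committed against A."""
    return 1.0 - belief(mass, frame - a)

# Hypothetical example: a three-element frame; some mass stays on the
# whole frame, representing evidence that commits to nothing specific.
frame = frozenset({"x", "y", "z"})
mass = {
    frozenset({"x"}): 0.4,
    frozenset({"x", "y"}): 0.3,
    frame: 0.3,  # uncommitted (ignorance) mass
}

a = frozenset({"x", "y"})
print(belief(mass, a))               # 0.7: mass on {x} and {x, y}
print(plausibility(mass, a, frame))  # 1.0: no mass supports the denial {z}
```

In this example Bel(A) = 0.7 while Bel of the complement {z} is 0, so the support for an event and for its denial need not sum to one; this is exactly the non-additivity described above.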