Bayesian Hypothesis Testing: a Reference Approach
Author(s) - José M. Bernardo, Raúl Rueda
Publication year - 2002
Publication title - International Statistical Review
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.051
H-Index - 54
eISSN - 1751-5823
pISSN - 0306-7734
DOI - 10.1111/j.1751-5823.2002.tb00175.x
Subject(s) - Mathematics, Statistics, Bayesian information criterion, Bayesian probability, Null hypothesis, Curse of dimensionality
Summary - For any probability model M = {p(x | θ, ω), θ ∈ Θ, ω ∈ Ω} assumed to describe the probabilistic behaviour of data x ∈ X, it is argued that testing whether or not the available data are compatible with the hypothesis H₀ = {θ = θ₀} is best considered as a formal decision problem on whether to use (a₀), or not to use (a₁), the simpler probability model (or null model) M₀ = {p(x | θ₀, ω), ω ∈ Ω}, where the loss difference L(a₀, θ, ω) − L(a₁, θ, ω) is proportional to the amount of information δ(θ₀, θ, ω) which would be lost if the simplified model M₀ were used as a proxy for the assumed model M. For any prior distribution π(θ, ω), the appropriate normative solution is obtained by rejecting the null model M₀ whenever the corresponding posterior expectation ∫∫ δ(θ₀, θ, ω) π(θ, ω | x) dθ dω is sufficiently large. Specification of a subjective prior is always difficult, and often polemical, in scientific communication. Information theory may be used to specify a prior, the reference prior, which depends only on the assumed model M and mathematically describes a situation where no prior information is available about the quantity of interest. The reference posterior expectation, d(θ₀, x) = ∫ δ π(δ | x) dδ, of the amount of information δ(θ₀, θ, ω) which could be lost if the null model were used provides an attractive non-negative test function, the intrinsic statistic, which is invariant under reparametrization. The intrinsic statistic d(θ₀, x) is measured in units of information and is easily calibrated (for any sample size and any dimensionality) in terms of some average log-likelihood ratios. The corresponding Bayes decision rule, the Bayesian reference criterion (BRC), indicates that the null model M₀ should only be rejected if the posterior expected loss of information from using the simplified model M₀ is too large or, equivalently, if the associated expected average log-likelihood ratio is large enough. The BRC criterion provides a general reference Bayesian solution to hypothesis testing which does not assume a probability mass concentrated on M₀ and, hence, is immune to Lindley's paradox. The theory is illustrated within the context of multivariate normal data, where it is shown to avoid Rao's paradox on the inconsistency between univariate and multivariate frequentist hypothesis testing.
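
The abstract does not spell out any computations, so the following is only a minimal sketch of the idea in the simplest textbook setting, not the paper's own code: a normal mean with known standard deviation, where the intrinsic discrepancy loss reduces to δ(θ₀, θ) = n(θ − θ₀)²/(2σ²), the reference posterior is θ | x ~ N(x̄, σ²/n), and the intrinsic statistic has the closed form d(θ₀, x) = (1 + z²)/2 with z = √n(x̄ − θ₀)/σ. The data, θ₀, σ and n below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setting (assumed, not taken from the abstract): normal data with
# known standard deviation sigma, testing H0: theta = theta0.
theta0, sigma, n = 0.0, 1.0, 25
x = rng.normal(loc=0.5, scale=sigma, size=n)   # simulated data
xbar = x.mean()

# Intrinsic discrepancy loss for this model: the smaller of the two directed
# Kullback-Leibler divergences between the n-sample models, which here
# reduces to delta(theta0, theta) = n * (theta - theta0)^2 / (2 * sigma^2).
def intrinsic_discrepancy(theta, theta0=theta0, sigma=sigma, n=n):
    return n * (theta - theta0) ** 2 / (2.0 * sigma ** 2)

# Reference posterior for a location parameter: theta | x ~ N(xbar, sigma^2 / n).
post_draws = rng.normal(loc=xbar, scale=sigma / np.sqrt(n), size=200_000)

# Intrinsic statistic: posterior expectation of the discrepancy loss,
# approximated here by Monte Carlo.
d_mc = intrinsic_discrepancy(post_draws).mean()

# Closed form for this case: d(theta0, x) = (1 + z^2) / 2.
z = np.sqrt(n) * (xbar - theta0) / sigma
d_exact = 0.5 * (1.0 + z ** 2)

print(f"z = {z:.3f}, d (Monte Carlo) = {d_mc:.3f}, d (closed form) = {d_exact:.3f}")
```

Under BRC, M₀ would be rejected when d(θ₀, x) exceeds a chosen threshold in information units; thresholds of roughly 2.5 (an average likelihood ratio of about 12 against θ₀) and 5 (about 150) are the kind of calibrations discussed in the reference-analysis literature, though the precise values used in the paper should be taken from the paper itself.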
