Approximate Monte Carlo Conditional Inference in Exponential Families
Author(s) - Kolassa John E., Tanner Martin A.
Publication year - 1999
Publication title - Biometrics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.298
H-Index - 130
eISSN - 1541-0420
pISSN - 0006-341X
DOI - 10.1111/j.0006-341x.1999.00246.x
Subject(s) - mathematics , conditional probability distribution , exponential family , markov chain monte carlo , statistics , gibbs sampling , frequentist inference , joint probability distribution , sufficient statistic , monte carlo method , bayesian inference , bayesian probability
Summary. This article presents an algorithm for approximate frequentist conditional inference on two or more parameters for any regression model in the Generalized Linear Model (GLIM) family. We thereby extend highly accurate inference beyond the cases of logistic regression and contingency tables implemented in commercially available software. The method makes use of the double saddlepoint approximations of Skovgaard (1987, Journal of Applied Probability 24, 875–887) and Jensen (1992, Biometrika 79, 693–703) to the conditional cumulative distribution function of a sufficient statistic given the remaining sufficient statistics. This approximation is then used in conjunction with noniterative Monte Carlo methods to generate a sample from a distribution that approximates the joint distribution of the sufficient statistics associated with the parameters of interest conditional on the observed values of the sufficient statistics associated with the nuisance parameters. This algorithm is an alternative approach to that presented by Kolassa and Tanner (1994, Journal of the American Statistical Association 89, 697–702), in which a Markov chain is generated whose equilibrium distribution, under certain regularity conditions, approximates the joint distribution of interest. In Kolassa and Tanner (1994), the Gibbs sampler was used in conjunction with these univariate conditional distribution function approximations. The method of this paper does not require the construction and simulation of a Markov chain, thus avoiding the need to develop regularity conditions under which the algorithm converges and the need for the data analyst to check convergence of the particular chain. Examples involving logistic and truncated Poisson regression are presented.
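To make the summary concrete, the sketch below illustrates the kind of double saddlepoint approximation to a conditional CDF P(Y2 <= y2 | Y1 = y1) that the method builds on (in the spirit of Skovgaard, 1987). It is not the authors' implementation: the gamma-sum example, the parameter values a1 and a2, and all function and variable names are illustrative assumptions, chosen because the exact conditional law (a scaled Beta distribution) is available for comparison. The paper's algorithm goes further, combining such approximations with noniterative Monte Carlo sampling over the sufficient statistics of interest.

# Minimal sketch (assumed example, not the paper's code) of a Skovgaard-type
# double saddlepoint approximation to P(Y2 <= y2 | Y1 = y1).
import numpy as np
from scipy.stats import norm, beta

a1, a2 = 3.0, 5.0  # shapes of two independent Gamma(a_i, rate=1) variables (assumed values)
# Sufficient statistics: Y1 = X1 + X2 (conditioned on), Y2 = X2 (of interest).
# Joint cumulant generating function K(s, t) = log E exp(s*Y1 + t*Y2).
K    = lambda s, t: -a1 * np.log(1 - s) - a2 * np.log(1 - s - t)
Kss  = lambda s, t: a1 / (1 - s) ** 2 + a2 / (1 - s - t) ** 2      # d^2 K / ds^2
detK = lambda s, t: a1 * a2 / ((1 - s) ** 2 * (1 - s - t) ** 2)    # det of full Hessian of K

def conditional_cdf(y2, y1):
    """Double saddlepoint approximation to P(Y2 <= y2 | Y1 = y1)."""
    # Full saddlepoint (s_hat, t_hat): solves grad K = (y1, y2); closed form in this toy model.
    s_hat = 1 - a1 / (y1 - y2)
    t_hat = (1 - a2 / y2) - s_hat
    # Constrained saddlepoint s_til: solves dK/ds (s, 0) = y1.
    s_til = 1 - (a1 + a2) / y1
    # Likelihood-ratio and standardized-score quantities in the Lugannani-Rice form.
    w = np.sign(t_hat) * np.sqrt(
        2 * ((s_hat * y1 + t_hat * y2 - K(s_hat, t_hat)) - (s_til * y1 - K(s_til, 0)))
    )
    u = t_hat * np.sqrt(detK(s_hat, t_hat) / Kss(s_til, 0))
    return norm.cdf(w) + norm.pdf(w) * (1 / w - 1 / u)

y1 = 10.0
for y2 in (2.0, 5.0, 8.0):
    approx = conditional_cdf(y2, y1)
    exact = beta.cdf(y2 / y1, a2, a1)  # exact: Y2/y1 | Y1=y1 ~ Beta(a2, a1)
    print(f"y2={y2}: saddlepoint {approx:.4f}, exact {exact:.4f}")

The continuous-case formula is used here; lattice sufficient statistics (as in logistic or truncated Poisson regression) require the discrete adjustments discussed in the cited saddlepoint literature.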
