Hamiltonian Monte Carlo sampling in Bayesian empirical likelihood computation
Author(s) - Sanjay Chaudhuri, Debashis Mondal, Teng Yin
Publication year - 2017
Publication title - Journal of the Royal Statistical Society: Series B (Statistical Methodology)
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 6.523
H-Index - 137
eISSN - 1467-9868
pISSN - 1369-7412
DOI - 10.1111/rssb.12164
Subject(s) - Markov chain Monte Carlo, mathematics, Monte Carlo method, likelihood function, Gibbs sampling, hybrid Monte Carlo, slice sampling, marginal likelihood, empirical likelihood, Metropolis–Hastings algorithm, approximate Bayesian computation, Bayes factor, importance sampling, statistical inference, Bayesian inference, parallel tempering, computer science, Bayesian probability, estimation theory, algorithm, inference, statistics, artificial intelligence, estimator
Summary We consider Bayesian empirical likelihood estimation and develop an efficient Hamiltonian Monte Carlo method for sampling from the posterior distribution of the parameters of interest. The proposed method uses hitherto unknown properties of the gradient of the underlying log-empirical-likelihood function. We use results from convex analysis to show that these properties hold under minimal assumptions on the parameter space, the prior density and the functions used in the estimating equations determining the empirical likelihood. Our method employs a finite number of estimating equations and observations but produces valid semiparametric inference for a large class of statistical models, including mixed effects models, generalized linear models and hierarchical Bayes models. We overcome major challenges posed by the complex, non-convex boundaries of the support routinely observed for empirical likelihood, which prevent efficient implementation of traditional Markov chain Monte Carlo methods such as random-walk Metropolis–Hastings sampling, with or without parallel tempering. A simulation study confirms that our method converges quickly and draws samples from the posterior support efficiently. We further illustrate its utility through an analysis of a discrete data set in small area estimation.
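The abstract describes the approach only at a high level. As a rough illustration of the general Bayesian empirical likelihood HMC recipe (not the authors' implementation), the sketch below samples the posterior of a single mean parameter: the inner convex dual of the empirical likelihood is solved by damped Newton, the theta-gradient of the log-empirical-likelihood follows from the envelope theorem, and leapfrog HMC uses that gradient. The one-parameter model g(x, theta) = x - theta, the normal prior, the tolerances and the step sizes are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_el_and_grad(theta, x, newton_steps=50):
    """Log empirical likelihood and its theta-gradient for g(x, theta) = x - theta.

    Solves the convex dual: find lam with sum_i g_i / (1 + lam * g_i) = 0,
    then log EL(theta) = -sum_i log(n * (1 + lam * g_i)).  By the envelope
    theorem, d(lam)/d(theta) drops out of the theta-gradient.
    """
    n = len(x)
    # EL is undefined unless 0 lies in the convex hull of the g_i.
    if not (x.min() < theta < x.max()):
        return -np.inf, 0.0
    g = x - theta                         # g_i(theta); dg_i/dtheta = -1
    lam = 0.0
    for _ in range(newton_steps):
        r = 1.0 + lam * g
        grad = np.sum(g / r)              # dual stationarity residual
        if abs(grad) < 1e-10:
            break
        hess = -np.sum((g / r) ** 2)      # dual is concave in lam
        step = -grad / hess
        t = 1.0                           # damp so every 1 + lam * g_i stays > 0
        while np.any(1.0 + (lam + t * step) * g <= 1e-10):
            t *= 0.5
        lam += t * step
    r = 1.0 + lam * g
    log_el = -np.sum(np.log(n * r))
    grad_theta = np.sum(lam / r)          # -sum_i (dg_i/dtheta) * lam / r_i
    return log_el, grad_theta

def log_post_and_grad(theta, x, prior_sd=10.0):
    """Log posterior (up to a constant) with a hypothetical N(0, prior_sd^2) prior."""
    ll, gl = log_el_and_grad(theta, x)
    return ll - 0.5 * theta**2 / prior_sd**2, gl - theta / prior_sd**2

def hmc_step(theta, x, eps=0.05, n_leap=20):
    """One leapfrog HMC transition; leaving the EL support is treated as a rejection."""
    p = rng.standard_normal()
    lp0, g = log_post_and_grad(theta, x)
    th, pm = theta, p + 0.5 * eps * g     # initial half-step for the momentum
    for i in range(n_leap):
        th = th + eps * pm
        lp, g = log_post_and_grad(th, x)
        if not np.isfinite(lp):           # stepped outside the support: reject
            return theta, lp0
        pm = pm + (eps if i < n_leap - 1 else 0.5 * eps) * g
    log_acc = (lp - 0.5 * pm**2) - (lp0 - 0.5 * p**2)
    if np.log(rng.uniform()) < log_acc:
        return th, lp
    return theta, lp0

x = rng.normal(1.0, 2.0, size=100)        # synthetic data for the example
theta, draws = float(np.mean(x)), []
for it in range(2000):
    theta, _ = hmc_step(theta, x)
    if it >= 500:                         # discard a short burn-in
        draws.append(theta)
print("posterior mean approx.", np.mean(draws))
```

Two design points in this sketch echo the abstract: the theta-gradient of the log-empirical-likelihood is cheap once the inner dual is solved, which is what makes gradient-based HMC attractive here, and a leapfrog trajectory that exits the (possibly awkwardly shaped) support of the empirical likelihood is simply rejected; one simple boundary treatment among several possible ones.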
