Open Access
Extended stochastic gradient Markov chain Monte Carlo for large-scale Bayesian variable selection
Author(s) -
Qifan Song,
Yan Sun,
Mao Ye,
Faming Liang
Publication year - 2020
Publication title -
Biometrika
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 3.307
H-Index - 122
eISSN - 1464-3510
pISSN - 0006-3444
DOI - 10.1093/biomet/asaa029
Subject(s) - Markov chain Monte Carlo, hybrid Monte Carlo, mathematics, Markov chain, Monte Carlo method, Markov chain mixing time, mathematical optimization, parallel tempering, algorithm, variable-order Markov model, Markov model, computer science, statistics
Stochastic gradient Markov chain Monte Carlo (MCMC) algorithms have received much attention in Bayesian computing for big data problems, but they are applicable only to a small class of problems in which the parameter space has a fixed dimension and the log-posterior density is differentiable with respect to the parameters. This paper proposes an extended stochastic gradient MCMC algorithm which, by introducing appropriate latent variables, can be applied to more general large-scale Bayesian computing problems, such as those involving dimension jumping and missing data. Numerical studies show that the proposed algorithm is highly scalable and much more efficient than traditional MCMC algorithms, substantially alleviating the computational burden of Bayesian methods in big-data settings.
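To make the abstract concrete, the following is a minimal sketch of stochastic gradient Langevin dynamics (SGLD), the base class of algorithms the paper extends. The toy model (Bayesian linear regression with a Gaussian prior), the step size, and all variable names are illustrative assumptions, not the paper's actual setup; in particular, the paper's extension with latent variables for variable selection is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = X @ beta_true + noise, with n much larger than the minibatch.
n, p = 10_000, 5
beta_true = np.array([2.0, -1.0, 0.0, 0.5, 0.0])
X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(scale=0.5, size=n)

def grad_log_posterior(beta, idx, tau2=10.0, sigma2=0.25):
    """Unbiased minibatch estimate of the log-posterior gradient."""
    Xb, yb = X[idx], y[idx]
    # Likelihood gradient, rescaled by n/|minibatch| to remain unbiased.
    grad_lik = (n / len(idx)) * Xb.T @ (yb - Xb @ beta) / sigma2
    grad_prior = -beta / tau2          # Gaussian prior N(0, tau2 * I)
    return grad_lik + grad_prior

beta = np.zeros(p)
eps, batch = 1e-5, 100                 # step size and minibatch size (assumed)
samples = []
for t in range(3000):
    idx = rng.choice(n, size=batch, replace=False)
    g = grad_log_posterior(beta, idx)
    # Langevin update: half-gradient step plus Gaussian noise of matched scale.
    beta = beta + 0.5 * eps * g + np.sqrt(eps) * rng.normal(size=p)
    if t >= 1000:                      # discard burn-in
        samples.append(beta.copy())

post_mean = np.mean(samples, axis=0)
```

Because each update uses only a minibatch gradient, the per-iteration cost is independent of n, which is the scalability property the abstract refers to; the paper's contribution is to retain this property for problems (such as variable selection) where the log-posterior is not differentiable over a fixed-dimensional space.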
