Following a moving target—Monte Carlo inference for dynamic Bayesian models
Author(s) - Walter R. Gilks, Carlo Berzuini
Publication year - 2001
Publication title - Journal of the Royal Statistical Society: Series B (Statistical Methodology)
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 6.523
H-Index - 137
eISSN - 1467-9868
pISSN - 1369-7412
DOI - 10.1111/1467-9868.00280
Subject(s) - markov chain monte carlo , particle filter , computer science , bayesian inference , resampling , sampling (signal processing) , monte carlo method , importance sampling , gibbs sampling , bayesian probability , inference , metropolis–hastings algorithm , rejection sampling , artificial intelligence , algorithm , machine learning , hybrid monte carlo , statistics , mathematics , kalman filter , filter (signal processing) , computer vision
Markov chain Monte Carlo (MCMC) sampling is a numerically intensive simulation technique which has greatly improved the practicality of Bayesian inference and prediction. However, MCMC sampling is too slow to be of practical use in problems involving a large number of posterior (target) distributions, as in dynamic modelling and predictive model selection. Alternative simulation techniques for tracking moving target distributions, known as particle filters, which combine importance sampling, importance resampling and MCMC sampling, tend to suffer from a progressive degeneration as the target sequence evolves. We propose a new technique, based on these same simulation methodologies, which does not suffer from this progressive degeneration.
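To make the combination of importance sampling, importance resampling and MCMC concrete, the following is a minimal sketch of a particle filter with an MCMC "move" step applied after resampling to restore particle diversity. It is not the authors' exact algorithm: the one-dimensional linear-Gaussian random-walk model, the random-walk Metropolis proposal, and all tuning constants are illustrative assumptions.

# Minimal sketch (assumed model and tuning, not the paper's construction) of a
# particle filter combining importance sampling, resampling, and an MCMC move.
import numpy as np

rng = np.random.default_rng(0)

# Assumed state-space model: x_t = x_{t-1} + N(0, sigma_x^2), y_t = x_t + N(0, sigma_y^2)
sigma_x, sigma_y = 1.0, 0.5
T, N = 50, 1000          # number of time steps and particles

x_true = np.cumsum(rng.normal(0.0, sigma_x, size=T))   # synthetic latent path
y = x_true + rng.normal(0.0, sigma_y, size=T)          # synthetic observations

def log_lik(x, y_t):
    # Gaussian observation log-density, up to an additive constant
    return -0.5 * ((y_t - x) / sigma_y) ** 2

def log_joint_t(x_t, x_prev, y_t):
    # log p(x_t | x_{t-1}) + log p(y_t | x_t): the terms that involve x_t
    return -0.5 * ((x_t - x_prev) / sigma_x) ** 2 + log_lik(x_t, y_t)

particles = np.zeros(N)   # particles for the current state; the chain starts at 0
means = []

for t in range(T):
    # 1. Importance sampling: propagate through the transition density and
    #    weight each particle by the likelihood of the new observation.
    prev = particles
    particles = prev + rng.normal(0.0, sigma_x, size=N)
    logw = log_lik(particles, y[t])
    w = np.exp(logw - logw.max())
    w /= w.sum()

    # 2. Importance resampling: multinomial resampling proportional to the
    #    weights (the duplication introduced here is what causes degeneracy).
    idx = rng.choice(N, size=N, p=w)
    particles, prev = particles[idx], prev[idx]

    # 3. MCMC move: one symmetric random-walk Metropolis step per particle,
    #    updating x_t with x_{t-1} held fixed; this leaves the current target
    #    distribution invariant while restoring diversity among the particles.
    prop = particles + rng.normal(0.0, 0.5, size=N)
    log_accept = log_joint_t(prop, prev, y[t]) - log_joint_t(particles, prev, y[t])
    accept = np.log(rng.uniform(size=N)) < log_accept
    particles = np.where(accept, prop, particles)

    means.append(particles.mean())

print(f"filtering mean at final step: {means[-1]:.3f}  (true state: {x_true[-1]:.3f})")

The move step is the ingredient aimed at the degeneration problem mentioned above: resampling alone collapses the particle set onto a few repeated values, whereas an MCMC kernel that preserves the current target distribution can spread the duplicated particles apart again without biasing the approximation.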