Subsample distribution distance and McMC convergence
Author(s) - Hjorth, Urban; Vadeby, Anna
Publication year - 2005
Publication title - Scandinavian Journal of Statistics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.359
H-Index - 65
eISSN - 1467-9469
pISSN - 0303-6898
DOI - 10.1111/j.1467-9469.2005.00424.x
Subject(s) - mathematics , statistics , Markov chain Monte Carlo , Markov chain , Monte Carlo method , Kullback–Leibler divergence , convergence , stability , computer science
Abstract - A new measure, based on comparing the empirical distributions of subsequences or parallel runs with that of the full sequence of Markov chain Monte Carlo simulations, is proposed as a criterion of stability or convergence. The measure is also put forward as a loss function when the design of a Markov chain is optimized. The comparison is based on a Kullback–Leibler (KL) type distance over value sets defined by the output data. The leading term in a series expansion gives an interpretation in terms of the relative uncertainty of cell frequencies. The validity of this term is studied by simulation in two analytically tractable cases with Markov dependency. The agreement between the leading term and the KL measure is close, in particular when the simulations are extensive enough for stable results. Comparisons with established criteria turn out favourably in the examples studied.
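The abstract describes the measure only at a high level. The following is a minimal Python sketch of the general idea under stated assumptions: equal-probability cells ("value sets") defined from the pooled output, contiguous subsequences as subsamples, and an unnormalized KL sum averaged over subsamples. The function name subsample_kl_distance, the binning rule, and all tuning constants are illustrative choices, not the paper's exact construction; in particular, the series expansion and its leading term are not reproduced here.

```python
import numpy as np


def subsample_kl_distance(chain, n_sub=10, n_bins=20):
    """Hypothetical KL-type distance between subsample and full-chain distributions.

    Cells are defined from the pooled output (equal-probability quantile bins,
    an assumption); each contiguous subsequence's empirical cell distribution is
    compared with the full-sequence distribution via a KL-style sum, and the
    results are averaged. Small values suggest stability across subsamples.
    """
    chain = np.asarray(chain, dtype=float)

    # Cells defined by the output data; assumes few ties so edges are distinct.
    edges = np.quantile(chain, np.linspace(0.0, 1.0, n_bins + 1))
    full_counts, _ = np.histogram(chain, bins=edges)
    p_full = full_counts / full_counts.sum()

    distances = []
    for sub in np.array_split(chain, n_sub):  # contiguous subsequences
        counts, _ = np.histogram(sub, bins=edges)
        q_sub = counts / counts.sum()
        mask = q_sub > 0  # empty subsample cells contribute zero to the sum
        distances.append(np.sum(q_sub[mask] * np.log(q_sub[mask] / p_full[mask])))
    return float(np.mean(distances))


# Illustration: a slowly mixing AR(1) sequence typically yields a larger
# distance than independent draws of the same length.
rng = np.random.default_rng(0)
x = np.zeros(20000)
for t in range(1, x.size):
    x[t] = 0.95 * x[t - 1] + rng.normal()
print(subsample_kl_distance(x))
print(subsample_kl_distance(rng.normal(size=20000)))
```

Parallel runs could be handled the same way by passing each run in place of the contiguous subsequences; the averaging over subsamples and the choice of cell boundaries are the main design decisions this sketch leaves open.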