Bayesian Model Selection for Incomplete Data Using the Posterior Predictive Distribution
Author(s) - Michael J. Daniels, Arkendu S. Chatterjee, Chenguang Wang
Publication year - 2012
Publication title -
Biometrics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.298
H-Index - 130
eISSN - 1541-0420
pISSN - 0006-341X
DOI - 10.1111/j.1541-0420.2012.01766.x
Subject(s) - deviance information criterion, Bayesian information criterion, model selection, deviance (statistics), posterior predictive distribution, posterior probability, missing data, goodness of fit, Bayesian inference, statistics
Summary - We explore the use of a posterior predictive loss criterion for model selection with incomplete longitudinal data. We begin by identifying a property that most model selection criteria for incomplete data should satisfy. We then show that a straightforward extension of the Gelfand and Ghosh (1998, Biometrika, 85, 1–11) criterion to incomplete data has two problems. First, it introduces an extra term (in addition to the goodness-of-fit and penalty terms) that compromises the criterion. Second, it does not satisfy the aforementioned property. We propose an alternative, explore its properties via simulations and a real dataset, and compare it to the deviance information criterion (DIC). In general, the DIC outperforms the posterior predictive criterion, but the latter works well overall and, unlike the DIC, is very easy to compute for certain classes of models for missing data.
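For readers unfamiliar with the baseline criterion the paper extends, the Gelfand and Ghosh (1998) posterior predictive loss under squared-error loss decomposes into a goodness-of-fit term plus a penalty term computed from posterior predictive replicates. The sketch below illustrates that decomposition for complete data only; it is not the paper's modified criterion for incomplete data, and the function name and array layout are illustrative assumptions.

```python
import numpy as np

def gelfand_ghosh(y_obs, y_rep):
    """Gelfand-Ghosh posterior predictive loss (squared-error version).

    y_obs : (n_obs,) observed responses.
    y_rep : (n_draws, n_obs) posterior predictive replicates, one row per
            posterior draw. (Illustrative layout, not from the paper.)
    Returns G + P, where G measures fit and P penalizes predictive variance.
    """
    mu = y_rep.mean(axis=0)              # posterior predictive mean per observation
    G = np.sum((y_obs - mu) ** 2)        # goodness-of-fit term
    P = np.sum(y_rep.var(axis=0))        # penalty: sum of predictive variances
    return G + P
```

Smaller values indicate a better model; overly complex models are penalized because their posterior predictive variance (the P term) inflates even when the fit term G is small.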
