Bayesian inference on the patient population size given list mismatches
Author(s) - Wang Xiaoyin, He Chong Z., Sun Dongchu
Publication year - 2005
Publication title - Statistics in Medicine
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.996
H-Index - 183
eISSN - 1097-0258
pISSN - 0277-6715
DOI - 10.1002/sim.1933
Subject(s) - inference , computer science , bayesian probability , bayesian inference , statistics , econometrics , population , population size , data mining , artificial intelligence , mathematics , medicine , environmental health
Abstract In applying capture–recapture methods for closed populations to epidemiology, one needs to estimate the total number of people with a certain disease in a given research area by using several lists containing patient information. Problems of list errors often arise due to mistyping or misinformation. Adopting the concept of tag‐loss methodology for animal populations, Seber et al. (Biometrics 2000; 56:1227–1232) proposed solutions to a two‐list problem. This article reports a simulation study in which Bayesian point estimates based on the improper constant and Jeffreys priors for the unknown population size N can have smaller frequentist standard errors and MSEs than the estimates proposed by Seber et al. (2000). The Bayesian credible intervals based on the same priors also have superior frequentist coverage probabilities, while some of the frequentist confidence interval procedures have drastically poor coverage. Seber's real data set on gestational diabetes is analysed with the proposed new methods. Copyright © 2004 John Wiley & Sons, Ltd.
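As a rough illustration of the baseline setting the paper builds on (two lists, no list errors), the sketch below computes a posterior for the unknown population size N under an improper 1/N (Jeffreys‐type) prior, using the standard hypergeometric likelihood for the number of records matched on both lists. The counts n1, n2, m are hypothetical, and this is not the authors' list‐mismatch model, which additionally accounts for mistyped or misreported identifiers.

```python
import numpy as np
from scipy.stats import hypergeom

# Hypothetical two-list counts (illustrative only, not the paper's data):
n1, n2, m = 120, 95, 40   # list-1 size, list-2 size, records matched on both lists

# Candidate population sizes N; N must be at least n1 + n2 - m.
N_min = n1 + n2 - m
N_grid = np.arange(N_min, 5 * N_min)

# Hypergeometric likelihood for m matches given N, combined with an
# improper 1/N (Jeffreys-type) prior on the population size.
likelihood = hypergeom.pmf(m, N_grid, n1, n2)
posterior = likelihood / N_grid
posterior /= posterior.sum()

# Posterior summaries: mean and an equal-tailed 95% credible interval.
post_mean = np.sum(N_grid * posterior)
cdf = np.cumsum(posterior)
lower = N_grid[np.searchsorted(cdf, 0.025)]
upper = N_grid[np.searchsorted(cdf, 0.975)]
print(f"Posterior mean N ~ {post_mean:.1f}, 95% credible interval [{lower}, {upper}]")
```

The grid-based posterior here is only a convenience; the paper's comparison concerns frequentist standard errors, MSEs, and coverage of the resulting Bayesian point and interval estimates under list errors.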
