Exploring consequences of simulation design for apparent performance of methods of meta-analysis
Author(s) -
Elena Kulinskaya,
David C. Hoaglin,
Ilyas Bakbergenuly
Publication year - 2021
Publication title -
Statistical Methods in Medical Research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.952
H-Index - 85
eISSN - 1477-0334
pISSN - 0962-2802
DOI - 10.1177/09622802211013065
Subject(s) - sample size determination , statistics , random effects model , meta analysis , context (archaeology) , variance (accounting) , computer science , contrast (vision) , econometrics , mathematics , medicine , artificial intelligence , paleontology , accounting , business , biology
Contemporary statistical publications rely on simulation to evaluate performance of new methods and compare them with established methods. In the context of random-effects meta-analysis of log-odds-ratios, we investigate how choices in generating data affect such conclusions. The choices we study include the overall log-odds-ratio, the distribution of probabilities in the control arm, and the distribution of study-level sample sizes. We retain the customary normal distribution of study-level effects. To examine the impact of the components of simulations, we assess the performance of the best available inverse-variance-weighted two-stage method, a two-stage method with constant sample-size-based weights, and two generalized linear mixed models. The results show no important differences between fixed and random sample sizes. In contrast, we found differences among data-generation models in estimation of heterogeneity variance and overall log-odds-ratio. This sensitivity to design poses challenges for use of simulation in choosing methods of meta-analysis.
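The data-generation scheme the abstract describes can be illustrated with a minimal sketch: draw study-level log-odds-ratios from the customary normal random-effects model, generate binomial counts in each arm, and run a two-stage analysis with inverse-variance weights. All numeric settings below (number of studies, heterogeneity variance, sample-size and control-probability ranges) are illustrative assumptions, not the paper's actual design, and the DerSimonian-Laird heterogeneity estimator is used here only as a familiar stand-in for the two-stage methods the authors compare.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative simulation settings (assumptions, not the paper's design)
K = 20                                    # number of studies
theta = 0.5                               # overall log-odds-ratio
tau2 = 0.1                                # heterogeneity variance
n = rng.integers(50, 201, size=K)         # per-arm sample sizes ("random" design)
p_ctrl = rng.uniform(0.2, 0.5, size=K)    # control-arm event probabilities

# Customary normal distribution of study-level effects
theta_i = rng.normal(theta, np.sqrt(tau2), size=K)
logit_ctrl = np.log(p_ctrl / (1 - p_ctrl))
p_trt = 1 / (1 + np.exp(-(logit_ctrl + theta_i)))

# Binomial counts in each arm
x_ctrl = rng.binomial(n, p_ctrl)
x_trt = rng.binomial(n, p_trt)

# Stage 1: study log-odds-ratios with a 0.5 continuity correction
a, b = x_trt + 0.5, n - x_trt + 0.5
c, d = x_ctrl + 0.5, n - x_ctrl + 0.5
y = np.log(a * d / (b * c))               # estimated log-odds-ratios
v = 1 / a + 1 / b + 1 / c + 1 / d         # within-study variances

# Stage 2: DerSimonian-Laird tau^2, then inverse-variance weighting
w = 1 / v
Q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)
tau2_hat = max(0.0, (Q - (K - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_star = 1 / (v + tau2_hat)
theta_hat = np.sum(w_star * y) / np.sum(w_star)
```

Repeating this loop over many simulated meta-analyses, while varying the choices the abstract lists (overall log-odds-ratio, control-arm probability distribution, fixed versus random sample sizes), is how bias and coverage of competing estimators are typically compared.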
