A Robust Method for Large‐Scale Multiple Hypotheses Testing
Author(s) -
Han Seungbong,
Andrei Adin‐Cristian,
Tsui Kam‐Wah
Publication year - 2010
Publication title -
Biometrical Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.108
H-Index - 63
eISSN - 1521-4036
pISSN - 0323-3847
DOI - 10.1002/bimj.200900177
Subject(s) - inference , bayesian probability , estimator , computer science , statistical hypothesis testing , null hypothesis , statistics , econometrics , scale (ratio) , type i and type ii errors , multiple comparisons problem , mathematics , algorithm , artificial intelligence , physics , quantum mechanics
When drawing large‐scale simultaneous inference, as in genomics and imaging problems, multiplicity adjustments should be made, since otherwise one faces an inflated type I error. Numerous methods are available to estimate the proportion of true null hypotheses, π0, among a large number of hypotheses tested. Many methods implicitly assume that π0 is large, that is, close to 1. In practice, however, mid‐range π0 values are frequently encountered, and many of the widely used methods then produce highly variable or biased estimates of π0. As a remedy in such situations, we propose a hierarchical Bayesian model that produces an estimator of π0 that exhibits considerably less bias and is more stable. Simulation studies indicate good method performance even when low‐to‐moderate correlation exists among the test statistics. Method performance is assessed in simulated settings, and practical usefulness is illustrated in an application to a type II diabetes study.
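To make the quantity under discussion concrete, the sketch below illustrates π0 estimation with a conventional λ-threshold (Storey-type) estimator applied to simulated p-values. This is not the hierarchical Bayesian model proposed in the paper; the choice of λ = 0.5, the simulated mixture of null and non-null p-values, and the function name are illustrative assumptions only, meant to show why mid-range π0 settings are the challenging case.

```python
# Minimal sketch (NOT the paper's hierarchical Bayesian estimator): a Storey-type
# lambda-threshold estimate of pi0, the proportion of true null hypotheses.
# The tuning parameter `lam` and the simulated data are illustrative assumptions.
import numpy as np

def estimate_pi0(p_values, lam=0.5):
    """Fraction of p-values exceeding lam, rescaled by (1 - lam), capped at 1."""
    p = np.asarray(p_values)
    pi0 = np.mean(p > lam) / (1.0 - lam)
    return min(pi0, 1.0)  # pi0 is a proportion, so it cannot exceed 1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, pi0_true = 5000, 0.6                       # mid-range pi0: the hard case
    n_null = int(m * pi0_true)
    null_p = rng.uniform(size=n_null)             # true nulls: uniform p-values
    alt_p = rng.beta(0.3, 4.0, size=m - n_null)   # non-nulls: p-values skewed toward 0
    p_values = np.concatenate([null_p, alt_p])
    print(f"true pi0 = {pi0_true}, estimated pi0 = {estimate_pi0(p_values):.3f}")
```

Estimators of this simple form tend to become variable or biased precisely when π0 is not close to 1, which is the situation the paper's hierarchical Bayesian approach is designed to handle.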
