Assessing Equivalence Tests with Respect to their Expected p ‐Value
Author(s) -
Pflüger Rafael,
Hothorn Torsten
Publication year - 2002
Publication title -
Biometrical Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.108
H-Index - 63
eISSN - 1521-4036
pISSN - 0323-3847
DOI - 10.1002/bimj.200290001
Subject(s) - equivalence (formal languages), test statistic, statistical hypothesis testing, nonparametric statistics, mathematics, null hypothesis, goodness of fit, univariate, statistics, null distribution, monte carlo method, statistic, econometrics, computer science, multivariate statistics
Monte Carlo simulation methods are commonly used to assess the performance of statistical tests in finite sample scenarios. They help to verify that tests with an approximate level, e.g. asymptotic tests, hold the nominal level. Additionally, a simulation can assess the quality of a test under the alternative. The latter can be used to compare new and established tests under certain assumptions in order to determine the preferable test given the characteristics of the data. The key problem for such investigations is the choice of a goodness criterion. We extend the expected p-value as considered by Sackrowitz and Samuel-Cahn (1999) to the context of univariate equivalence tests. This provides an effective tool to evaluate new proposals for equivalence testing because it does not depend on the distribution of the test statistic under the null hypothesis. It helps to avoid the often tedious search for the null distribution of test statistics that offer no considerable advantage over already available methods. To demonstrate its usefulness in biometry, a comparison of established equivalence tests with a nonparametric approach is conducted in a simulation study under three distributional assumptions.
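As a rough illustration of the criterion (not the authors' implementation), the expected p-value of an equivalence test can be estimated by Monte Carlo: simulate data under a fixed alternative, compute the test's p-value for each sample, and average. The sketch below assumes normally distributed data and a TOST-type (two one-sided tests) equivalence test with a normal approximation to the test statistic; the function names and parameter choices are hypothetical, chosen only for illustration.

```python
import random
import statistics
from statistics import NormalDist

def tost_pvalue(x, delta):
    """Equivalence p-value via two one-sided tests (TOST) for
    H0: |mu| >= delta vs. H1: |mu| < delta, using a normal
    approximation to the t statistic (adequate for moderate n)."""
    n = len(x)
    m = statistics.fmean(x)
    se = statistics.stdev(x) / n ** 0.5
    z = NormalDist()
    p_lower = 1.0 - z.cdf((m + delta) / se)  # tests H0: mu <= -delta
    p_upper = z.cdf((m - delta) / se)        # tests H0: mu >= +delta
    return max(p_lower, p_upper)             # both must reject

def expected_p_value(mu, sigma, n, delta, n_sim=2000, seed=1):
    """Monte Carlo estimate of the expected p-value under the
    alternative N(mu, sigma^2): average the p-value over repeated
    samples of size n."""
    rng = random.Random(seed)
    ps = []
    for _ in range(n_sim):
        x = [rng.gauss(mu, sigma) for _ in range(n)]
        ps.append(tost_pvalue(x, delta))
    return statistics.fmean(ps)

# A test with small expected p-value under the alternative is
# preferable; mu = 0 lies well inside the equivalence region, so the
# expected p-value is much smaller than at the boundary mu = delta.
epv_center = expected_p_value(0.0, 1.0, 50, 0.5, n_sim=500)
epv_boundary = expected_p_value(0.5, 1.0, 50, 0.5, n_sim=500)
```

The appeal noted in the abstract is visible here: the estimate needs only the p-value function and samples from the alternative, not the null distribution of the test statistic.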