Detecting potential hucksterism in meta‐analysis using a follow‐up fail‐safe test
Author(s) -
Brown, Jonathan R.
Publication year - 1992
Publication title -
Psychology in the Schools
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.738
H-Index - 75
eISSN - 1520-6807
pISSN - 0033-3085
DOI - 10.1002/1520-6807(199204)29:2<179::aid-pits2310290213>3.0.co;2-1
Subject(s) - meta-analysis , psychology , publication bias , empirical research , statistics , applied psychology , clinical psychology , econometrics
Meta‐analysis is an analysis of analyses. It is a technique widely used by researchers and practitioners to statistically aggregate and summarize reported empirical educational research. Over a 10‐year period, meta‐analysis appeared more than 600 times in research journals and dissertation abstracts. Although most meta‐analyses were reported as significant, few reports determined how many unpublished “no‐effect” studies, if sampled, would have invalidated significance. If significant meta‐analysis results are overrepresented through selective sampling, hucksterism in the form of sampling bias exists. An explanation of a follow‐up test called the fail‐safe N is provided, with tables constructed to help researchers and practitioners estimate, without calculation, the relative stability of meta‐analysis results. The implication is that the fail‐safe N should routinely be computed and reported in meta‐analysis research.
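The abstract names the fail‐safe N but not its formula. The standard version is Rosenthal's (1979): given k studies combined by Stouffer's method, the number X of unpublished zero‐effect studies needed to push the combined result past one‐tailed significance solves sum(Z) / sqrt(k + X) = z_crit, giving X = (sum Z)^2 / z_crit^2 − k. The sketch below assumes this is the test the article's tables are built on; the function name and example values are illustrative, not taken from the paper.

from statistics import NormalDist

def fail_safe_n(z_scores, alpha=0.05):
    """Rosenthal's fail-safe N (assumed formula, not verbatim from the
    article): the number of unpublished zero-effect (z = 0) studies that
    would drop a Stouffer-combined result below one-tailed significance."""
    k = len(z_scores)
    z_crit = NormalDist().inv_cdf(1 - alpha)   # ~1.645 one-tailed at .05
    z_sum = sum(z_scores)
    # Solve z_sum / sqrt(k + X) = z_crit for X:
    #   X = z_sum**2 / z_crit**2 - k
    return max(0.0, z_sum ** 2 / z_crit ** 2 - k)

# Hypothetical example: five studies, each with z = 2.0.
# Roughly 32 unreported null studies would be needed to erase significance,
# so the pooled result is fairly stable against a "file drawer" of nulls.
print(fail_safe_n([2.0, 2.0, 2.0, 2.0, 2.0]))  # ~31.96

A small fail‐safe N relative to k is the warning sign the article calls hucksterism: it means only a handful of unsampled no‐effect studies would overturn the reported significance.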