History and development of the Schmidt–Hunter meta‐analysis methods
Author(s) - Schmidt, Frank L.
Publication year - 2015
Publication title - Research Synthesis Methods
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 3.376
H-Index - 35
eISSN - 1759-2887
pISSN - 1759-2879
DOI - 10.1002/jrsm.1134
Subject(s) - meta-analysis , personnel selection , validity generalization , test validity , statistical power , psychometrics , applied psychology , psychology , statistics
In this article, I provide answers to the questions posed by Will Shadish about the history and development of the Schmidt–Hunter methods of meta‐analysis. In the 1970s, I headed a research program on personnel selection at the US Office of Personnel Management (OPM). After our research showed that validity studies have low statistical power, OPM needed a better way to demonstrate test validity, especially in light of court cases challenging selection methods. In response, we created our method of meta‐analysis (initially called validity generalization). Results showed that most of the variability in validity estimates from study to study was due to sampling error and other research artifacts, such as variations in range restriction and measurement error. Corrections for these artifacts in our research and in replications by others showed that the predictive validity of most tests was high and generalizable. This conclusion challenged long‐standing beliefs and so provoked resistance, which was overcome over time. The 1982 book that we published extending these methods to research areas beyond personnel selection was positively received and was followed by expanded books in 1990, 2004, and 2014. Today, these methods are being applied in a wide variety of areas. Copyright © 2015 John Wiley & Sons, Ltd.
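To make the variance-decomposition claim in the abstract concrete, the following is a minimal Python sketch of the "bare-bones" Schmidt–Hunter calculation, assuming only per-study observed correlations and sample sizes. The function name bare_bones_meta and the example data are hypothetical illustrations, not the authors' published code, and the full method additionally corrects for range restriction and measurement error using artifact distributions.

    # Bare-bones Schmidt-Hunter decomposition: how much of the between-study
    # spread in validity coefficients is attributable to sampling error alone?
    def bare_bones_meta(rs, ns):
        """rs: observed validity correlations; ns: per-study sample sizes."""
        assert len(rs) == len(ns) and len(rs) > 1
        total_n = sum(ns)
        # Sample-size-weighted mean correlation.
        r_bar = sum(n * r for r, n in zip(rs, ns)) / total_n
        # Observed (weighted) variance of correlations across studies.
        var_obs = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / total_n
        # Expected sampling-error variance, using the average sample size.
        n_bar = total_n / len(ns)
        var_err = (1 - r_bar ** 2) ** 2 / (n_bar - 1)
        # Residual variance attributable to real differences (floored at 0).
        var_res = max(var_obs - var_err, 0.0)
        pct_sampling = min(var_err / var_obs, 1.0) if var_obs > 0 else 1.0
        return {"mean_r": r_bar, "var_observed": var_obs,
                "var_sampling_error": var_err, "var_residual": var_res,
                "pct_from_sampling_error": pct_sampling}

    # Hypothetical example: five small validity studies with scattered r values.
    print(bare_bones_meta(rs=[0.15, 0.32, 0.08, 0.40, 0.22],
                          ns=[68, 120, 45, 90, 75]))

When pct_from_sampling_error is large, the apparent study-to-study disagreement is mostly a small-sample artifact, which is the core observation behind validity generalization described above.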