STATISTICAL POWER OF TRAINING EVALUATION DESIGNS
Author(s) - Arvey, Richard D.; Cole, David A.; Hazucha, Joy Fisher; Hartanto, Frans M.
Publication year - 1985
Publication title - Personnel Psychology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 6.076
H-Index - 142
eISSN - 1744-6570
pISSN - 0031-5826
DOI - 10.1111/j.1744-6570.1985.tb00556.x
Subject(s) - sample size determination , statistical power , sample (material) , type i and type ii errors , statistics , psychology , power (physics) , analysis of covariance , statistical analysis , covariance , design of experiments , statistical hypothesis testing , correlation , research design , reliability engineering , computer science , mathematics , engineering , chemistry , physics , chromatography , quantum mechanics , geometry
Sample size requirements for achieving various levels of statistical power with posttest‐only, gain‐score, and analysis of covariance (ANCOVA) designs in evaluating training interventions are developed. Results indicate that the power to detect true effects depends on the type of design, the correlation between the pre‐ and posttest, and the size of the effect due to the training program. The type of design and the pre‐posttest correlation jointly determine the shape of the power curve. Finally, an estimate of typical sample sizes used in training evaluation is derived and used to assess how much power each design provides to detect true effects at that sample size. Recommendations for choice of design are offered based on sample size and the projected correlation between pre‐ and posttest scores.
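The abstract's central point, that power depends jointly on the design and the pre-posttest correlation, can be illustrated with a short sketch. The snippet below is not the authors' computation; it is a standard normal-approximation power calculation under simplifying assumptions (one-tailed two-group z test, equal group sizes, unit variances): gain scores have variance 2(1 − ρ), and ANCOVA leaves residual variance 1 − ρ², so each design rescales the standardized effect size before the same power formula applies. The function names and parameter values are illustrative.

```python
from math import sqrt
from statistics import NormalDist

def power(d, n, alpha=0.05):
    """Approximate power of a one-tailed two-group z test with
    n subjects per group and standardized effect size d."""
    z_crit = NormalDist().inv_cdf(1 - alpha)
    return NormalDist().cdf(d * sqrt(n / 2) - z_crit)

def effective_d(d, rho, design):
    """Rescale the raw effect size d for the pre-posttest
    correlation rho under each design (illustrative labels)."""
    if design == "posttest":   # posttest-only: raw effect size
        return d
    if design == "gain":       # gain scores: Var(post - pre) = 2*(1 - rho)
        return d / sqrt(2 * (1 - rho))
    if design == "ancova":     # ANCOVA: residual variance = 1 - rho**2
        return d / sqrt(1 - rho ** 2)
    raise ValueError(design)

# Example: rho = 0.7, raw effect size d = 0.5, n = 40 per group.
for design in ("posttest", "gain", "ancova"):
    print(design, round(power(effective_d(0.5, 0.7, design), 40), 3))
```

With a high pre-posttest correlation such as ρ = 0.7, the gain-score and ANCOVA designs both beat the posttest-only design; with ρ below 0.5, 2(1 − ρ) exceeds 1 and the gain-score design falls behind it, which is why the recommendations in the article hinge on the projected correlation.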
