Meta‐Analytic Methodology and Inferences About the Efficacy of Formative Assessment
Author(s) -
Briggs, Derek C.,
Ruiz-Primo, Maria Araceli,
Furtak, Erin,
Shepard, Lorrie,
Yin, Yue
Publication year - 2012
Publication title -
Educational Measurement: Issues and Practice
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.158
H-Index - 52
eISSN - 1745-3992
pISSN - 0731-1745
DOI - 10.1111/j.1745-3992.2012.00251.x
Subject(s) - formative assessment, meta-analysis, psychology, mathematics education
In a recent article published in EM:IP, Kingston and Nash report the results of a meta‐analysis on the efficacy of formative assessment. They conclude that the average effect of formative assessment on student achievement is about .20 SD units, which would seem to dispel the myth that effects between .40 and .70 SD units can be attributed to formative assessment. They also find considerable variability in effect sizes across studies, and that only the content area in which the treatment is situated explains a significant proportion of that variability. However, there are issues in the meta‐analytic methodology employed by the authors that make their findings somewhat equivocal. This commentary focuses on four methodological concerns about the Kingston and Nash meta‐analysis: (1) the approach taken to select studies for inclusion, (2) the application of study inclusion criteria, (3) the extent to which the effect sizes being combined are biased, and (4) the relationship between effect size magnitude and characteristics of outcome measures. After examining these issues in the context of the Kingston and Nash review, it appears that considerable uncertainty remains about the effect that formative assessment practices have on student achievement.