Using Examination Results as Indicators of School and College Performance
Author(s) - Goldstein Harvey, Thomas Sally
Publication year - 1996
Publication title - Journal of the Royal Statistical Society: Series A (Statistics in Society)
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.103
H-Index - 84
eISSN - 1467-985X
pISSN - 0964-1998
DOI - 10.2307/2983475
Subject(s) - mathematics education, advanced placement, psychology, computer science, statistics, mathematics
SUMMARY A current requirement for secondary schools in England and Wales, associated with the so‐called ‘Parents' Charter', is that each school must publish its average General Certificate of Secondary Education examination results. Every institution with A‐level General Certificate of Education candidates is also required to do this. Additionally, the government arranges every autumn for a national ‘league table' of all these results to be published in the national press. A principal official justification for this policy is that it will help parents to choose schools for their children on the basis of how well each school is seen to be performing. The paper argues that institutional comparisons based on average, unadjusted examination results are inadequate and potentially misleading for several reasons: aggregate data obscure important information; the failure to take account of prior achievement leads to inaccurate and misleading inferences about school differences; and the published results are always out of date because they refer to a cohort that began attending the institutions several years earlier. An alternative ‘value‐added' analysis of A‐level results is presented, based on individual student data and adjusted for intake achievement. The results illustrate the inadequacies of the current procedure, but they also demonstrate that any attempt to use examination results to judge the comparative ‘effectiveness' of schools and other educational institutions has inherent problems which severely limit the usefulness of such a system for accountability. These results suggest that value‐added comparisons may be better used as screening instruments to identify institutions for further investigation.
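For readers unfamiliar with the approach, the following is a minimal sketch of what a value‐added adjustment of this kind might look like: a random‐intercept multilevel model in which each student's A‐level outcome is adjusted for prior (intake) attainment, and each school's estimated intercept is read as its value‐added estimate. This is an illustrative assumption, not the paper's actual model specification; the column names (alevel_score, gcse_score, school) are hypothetical.

```python
# Hedged sketch of a value-added comparison using a random-intercept
# multilevel model. Column names are illustrative, not the study's data.
import pandas as pd
import statsmodels.formula.api as smf


def value_added_estimates(df: pd.DataFrame) -> pd.Series:
    """Fit outcome ~ prior attainment with a random intercept per school.

    Returns each school's estimated random intercept, i.e. its departure
    from the outcome expected given its intake achievement.
    """
    model = smf.mixedlm("alevel_score ~ gcse_score", data=df, groups=df["school"])
    result = model.fit()
    effects = {school: re["Group"] for school, re in result.random_effects.items()}
    return pd.Series(effects).sort_values(ascending=False)
```

A ranking built from such estimates typically carries wide uncertainty intervals that overlap for most institutions, which is consistent with the paper's conclusion that value‐added comparisons serve better as screening instruments than as definitive league tables.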