Measuring Comparability of Standards between Subjects: why our statistical techniques do not make the grade
Author(s) - Newton, Paul E.
Publication year - 1997
Publication title - British Educational Research Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.171
H-Index - 89
eISSN - 1469-3518
pISSN - 0141-1926
DOI - 10.1080/0141192970230404
Subject(s) - comparability , grading (engineering) , subject (documents) , statistical analysis , representation (politics) , psychology , computer science , statistics , econometrics , mathematics education , mathematics , political science , engineering , law , civil engineering , combinatorics , politics , library science
Abstract - In the past few years the examination boards in Britain have witnessed renewed interest from external bodies in the notion of comparability of grading standards between different subjects. This interest has stemmed from concern over findings, produced by statistical comparison techniques, which suggest that public examinations in different subjects are not comparable. This article focuses on one of these techniques, the Subject‐Pair Analysis, in an attempt to demonstrate that reliance on the statistical comparison of standards between subjects is misplaced. Fundamental assumptions underlying the Subject‐Pair Analysis, and related analyses, are made explicit and then challenged both in principle and with operational data. These techniques cannot be assumed even to approximate a valid representation of ‘the problem’ of between‐subject comparability, because they are inappropriate for the kind of data that our examinations generate.
