Consistency of Angoff‐Based Predictions of Item Performance: Evidence of Technical Quality of Results From the Angoff Standard Setting Method
Author(s) -
Plake Barbara S.,
Impara James C.,
Irwin Patrick M.
Publication year - 2000
Publication title -
Journal of Educational Measurement
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.917
H-Index - 47
eISSN - 1745-3984
pISSN - 0022-0655
DOI - 10.1111/j.1745-3984.2000.tb01091.x
Subject(s) - consistency (knowledge bases) , reliability (semiconductor) , quality (philosophy) , computer science , statistics , psychology , econometrics , mathematics , artificial intelligence , power (physics) , philosophy , physics , epistemology , quantum mechanics
Judgmental standard‐setting methods, such as the Angoff (1971) method, use item performance estimates as the basis for determining the minimum passing score (MPS). Therefore, the accuracy of these item performance estimates is crucial to the validity of the resulting MPS. Recent researchers (Shepard, 1995; Impara & Plake, 1998; National Research Council, 1999) have called into question the ability of judges to make accurate item performance estimates for target subgroups of candidates, such as minimally competent candidates. The purpose of this study was to examine the intra‐ and inter‐rater consistency of item performance estimates from an Angoff standard setting. Results provide evidence that item performance estimates were consistent within and across panels, within and across years. Factors that might have influenced this high degree of reliability in the item performance estimates in a standard setting study are discussed.
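For readers unfamiliar with the mechanics the abstract assumes, the sketch below illustrates how an Angoff MPS is typically derived from judges' item performance estimates and how a simple inter‑rater consistency check might look. The ratings, judge labels, and the particular consistency index (each judge's estimates correlated with the mean of the other judges) are hypothetical illustrations, not the study's data or its analysis.

```python
# Hypothetical sketch of an Angoff MPS computation and a simple
# inter-rater consistency check. All values below are invented.

import statistics

# judge -> item performance estimates: the probability that a minimally
# competent candidate answers each item correctly (one value per item)
ratings = {
    "judge_1": [0.60, 0.75, 0.40, 0.85, 0.55],
    "judge_2": [0.65, 0.70, 0.45, 0.80, 0.50],
    "judge_3": [0.55, 0.80, 0.35, 0.90, 0.60],
}

# Angoff MPS: average the judges' estimates item by item, then sum over items.
n_items = len(next(iter(ratings.values())))
item_means = [statistics.mean(r[i] for r in ratings.values()) for i in range(n_items)]
mps = sum(item_means)
print(f"Minimum passing score (out of {n_items} items): {mps:.2f}")

# One possible inter-rater consistency index: correlate each judge's item
# estimates with the mean of the remaining judges' estimates for those items.
for judge, estimates in ratings.items():
    others = [statistics.mean(r[i] for j, r in ratings.items() if j != judge)
              for i in range(n_items)]
    corr = statistics.correlation(estimates, others)  # requires Python 3.10+
    print(f"{judge}: r = {corr:.2f}")
```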