Comparison of Methods for Combining the Minimum Passing Levels for Individual Items into a Passing Score for a Test
Author(s) - Barbara S. Plake, Michael T. Kane
Publication year - 1991
Publication title - Journal of Educational Measurement
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.917
H-Index - 47
eISSN - 1745-3984
pISSN - 0022-0655
DOI - 10.1111/j.1745-3984.1991.tb00357.x
Subject(s) - statistics, weighting, test score, mean squared error, item response theory, psychometrics, standardized test, mathematics, computer science
The purpose of this study was to compare three methods for determining a passing score on an examination from individual raters' estimates of minimum pass levels for the items. The methods investigated differ in the weighting that the estimates for each item receive in the aggregation process. An IRT-based simulation method was used to model a variety of error components of minimum pass levels. The results indicate little difference in estimated passing scores across the three methods. Less error was present when the ability level of the minimally competent candidates matched the expected difficulty level of the test. No meaningful improvement in passing-score estimation was achieved for a 50-item test as opposed to a 25-item test; however, the RMSE values for estimates with 10 raters were smaller than those with 5 raters. The results suggest that the simplest method for aggregating minimum pass levels across the items in a test, simply adding them up, is the preferred method.
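The simplest aggregation the abstract recommends, summing the items' minimum pass levels, is easy to express in code, as is the RMSE criterion used to compare estimates. The Python sketch below is illustrative only: the rater-noise model, the array shapes, and the function names (passing_score_simple, rmse) are assumptions for demonstration, not the authors' simulation design.

```python
import numpy as np

def passing_score_simple(mpl):
    """Sum-of-means aggregation: average each item's minimum pass
    level across raters, then add the item means.

    mpl : array of shape (n_raters, n_items); each entry is one
          rater's estimated minimum pass level for one item
          (e.g., Angoff-style probabilities in [0, 1]).
    """
    return mpl.mean(axis=0).sum()

def rmse(estimates, true_cut):
    """Root mean squared error of passing-score estimates over
    simulation replications, relative to a known true cut score."""
    estimates = np.asarray(estimates, dtype=float)
    return float(np.sqrt(np.mean((estimates - true_cut) ** 2)))

# Hypothetical demonstration: 10 raters, 25 items, raters' estimates
# scattered around a true item-level minimum pass level of 0.6.
rng = np.random.default_rng(42)
true_cut = 0.6 * 25
estimates = [
    passing_score_simple(
        np.clip(0.6 + rng.normal(0.0, 0.1, size=(10, 25)), 0.0, 1.0)
    )
    for _ in range(1000)
]
print(f"mean estimate: {np.mean(estimates):.2f} (true cut {true_cut})")
print(f"RMSE: {rmse(estimates, true_cut):.3f}")
```

Under this toy noise model, averaging over more raters shrinks the RMSE (e.g., rerunning with 5 raters yields a larger value), which is consistent with the abstract's finding that 10 raters produced smaller RMSE values than 5.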
