Open Access
Unmixing Rasch scales: How to score an educational test
Author(s) -
Maria Bolsinova,
Gunter Maris,
Herbert Hoijtink
Publication year - 2016
Publication title -
The Annals of Applied Statistics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.674
H-Index - 75
eISSN - 1941-7330
pISSN - 1932-6157
DOI - 10.1214/16-aoas919
Subject(s) - rasch model , test (biology) , computer science , polytomous rasch model , set (abstract data type) , artificial intelligence , natural language processing , active listening , simple (philosophy) , statistics , machine learning , item response theory , psychology , psychometrics , mathematics , paleontology , communication , biology , programming language , philosophy , epistemology
One of the important questions in the practice of educational testing is how a particular test should be scored. In this paper we consider what an appropriate simple scoring rule is for a Dutch-as-a-second-language test consisting of listening and reading items. As in many other applications, the Rasch model, which allows the test to be scored with a simple sumscore, is too restrictive to adequately represent the data. We propose an exploratory algorithm that clusters the items into subscales, each fitting a Rasch model, and thus provides a scoring rule based on the observed data. The scoring rule produces either a weighted sumscore with equal weights within each subscale or a set of sumscores (one for each of the subscales). An MCMC algorithm that determines the number of Rasch scales constituting the test and unmixes these scales is introduced and evaluated in simulations. Using the results of the unmixing, we conclude that the Dutch language test can be scored with a weighted sumscore using three different weights.
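To make the two scoring rules concrete, here is a minimal Python sketch (not the authors' code) of the scores described in the abstract: a set of per-subscale sumscores, and a single weighted sumscore with equal weights within each subscale. The response matrix, the item-to-subscale assignment and the subscale weights below are illustrative assumptions; in the paper the assignment and the weights are obtained from the MCMC unmixing procedure.

```python
import numpy as np

# Illustrative data: responses of 4 persons to 6 dichotomous items (1 = correct).
# The item-to-subscale assignment and the subscale weights are assumed known here
# purely for illustration; in the paper they come from the unmixing algorithm.
X = np.array([
    [1, 0, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 0],
    [0, 1, 0, 1, 1, 1],
])
item_subscale = np.array([0, 0, 1, 1, 2, 2])   # Rasch subscale of each item (hypothetical)
subscale_weights = np.array([1.0, 2.0, 3.0])   # one weight per subscale, equal within a subscale

# Scoring rule 1: a set of sumscores, one per subscale.
n_scales = subscale_weights.size
subscale_scores = np.column_stack(
    [X[:, item_subscale == s].sum(axis=1) for s in range(n_scales)]
)

# Scoring rule 2: a single weighted sumscore, each item weighted by its subscale's weight.
item_weights = subscale_weights[item_subscale]
weighted_sumscore = X @ item_weights

print("per-subscale sumscores:\n", subscale_scores)
print("weighted sumscores:", weighted_sumscore)
```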
