Obtaining Test Blueprint Weights From Job Analysis Surveys
Author(s) - Spray, Judith A.; Huang, Chi-Yu
Publication year - 2000
Publication title - Journal of Educational Measurement
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.917
H-Index - 47
eISSN - 1745-3984
pISSN - 0022-0655
DOI - 10.1111/j.1745-3984.2000.tb01082.x
Subject(s) - rasch model, ranking (information retrieval), computer science, task (project management), polytomous rasch model, scale (ratio), rating scale, test (biology), blueprint, bayesian probability, item response theory, statistics, machine learning, artificial intelligence, mathematics, psychometrics, engineering, mechanical engineering, paleontology, physics, systems engineering, quantum mechanics, biology
A method for combining multiple scale responses from job or task surveys based on a hierarchical ranking scheme is presented. A rationale is also provided for placing the resulting ordinal information onto an interval scale of measurement using the Rasch Rating Scale Model. After a simple linear transformation, the item or task parameter estimates can be used to obtain item weights for constructing test blueprints. Prior weights can then be used to modify the item weights after data collection, based either on content-balancing requirements or on Bayesian prior content weights from subject matter experts (SMEs). Finally, a method is suggested for linking two or more surveys, again using the Rasch Rating Scale Model and the computer program Bigsteps, when it is desirable to shorten the length of the typical job or task survey.
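As a rough illustration of the weighting step summarized in the abstract, the sketch below converts Rasch task calibrations into blueprint weights and optionally blends them with SME prior content weights. The abstract does not specify the exact linear transformation or prior-modification formula, so the shift-to-positive rescaling, the normalization, and the convex blend with priors used here are assumptions for demonstration only, not the authors' method; the task logits and prior weights in the example are hypothetical.

```python
import numpy as np


def blueprint_weights(task_logits, prior_weights=None, prior_strength=0.0):
    """Convert Rasch Rating Scale Model task calibrations (logits) into
    test blueprint weights, optionally blended with SME prior weights.

    task_logits    : Rasch task parameter estimates; here, lower values are
                     taken to mean tasks rated as more important/frequent.
    prior_weights  : optional prior content weights for the same tasks.
    prior_strength : 0.0 = survey weights only, 1.0 = prior weights only.
    """
    logits = np.asarray(task_logits, dtype=float)

    # Assumed linear transformation: reverse the sign so more important
    # tasks get larger values, then shift so all values are positive.
    raw = -logits
    raw = raw - raw.min() + 1.0

    # Normalize to proportions usable as blueprint weights.
    weights = raw / raw.sum()

    # Optionally modify the survey-based weights with prior content
    # weights (content-balancing requirements or SME Bayesian priors).
    if prior_weights is not None:
        prior = np.asarray(prior_weights, dtype=float)
        prior = prior / prior.sum()
        weights = (1.0 - prior_strength) * weights + prior_strength * prior
        weights = weights / weights.sum()

    return weights


if __name__ == "__main__":
    # Hypothetical Rasch calibrations for five job-task content areas.
    logits = [-1.2, -0.4, 0.0, 0.6, 1.5]
    # Hypothetical SME prior content weights for the same five areas.
    sme_prior = [0.30, 0.25, 0.20, 0.15, 0.10]

    print("Survey-only weights:", np.round(blueprint_weights(logits), 3))
    print("Blended with SME prior:",
          np.round(blueprint_weights(logits, sme_prior, prior_strength=0.5), 3))
```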
