Using Natural Language Processing to Predict Item Response Times and Improve Test Construction
Author(s) -
Peter Baldwin,
Victoria Yaneva,
Janet Mee,
Brian E. Clauser,
Le An Ha
Publication year - 2020
Publication title -
Journal of Educational Measurement
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.917
H-Index - 47
eISSN - 1745-3984
pISSN - 0022-0655
DOI - 10.1111/jedm.12264
Subject(s) - test (biology) , computer science , natural language processing , word (group theory) , affect (linguistics) , item response theory , artificial intelligence , question answering , information retrieval , statistics , psychology , linguistics , psychometrics , mathematics , communication , paleontology , biology , philosophy
Abstract In this article, we show how item text can be represented by (a) 113 features quantifying the text's linguistic characteristics, (b) 16 measures of the extent to which an information‐retrieval‐based automatic question‐answering system finds an item challenging, and (c) dense word representations (word embeddings). Using a random forests algorithm, these data are then used to train a prediction model for item response times, and the predicted response times are then used to assemble test forms. Using empirical data from the United States Medical Licensing Examination, we show that timing demands are more consistent across these specially assembled forms than across forms comprising randomly selected items. Because an exam's timing conditions affect examinee performance, this result has implications for exam fairness whenever examinees are compared with each other or against a common standard.
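The pipeline the abstract describes (numeric text features in, random forest response-time predictions out, forms balanced on predicted timing) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature matrix, timing values, number of forms, and the stride-based assembly rule are all stand-in assumptions.

```python
# Illustrative sketch only: synthetic stand-ins for the paper's data.
# 200 hypothetical items, 129 features (113 linguistic + 16 QA-derived).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 129))                       # item text features
y = 60 + 10 * X[:, 0] + rng.normal(scale=5, size=200) # response time (s), assumed

# Train a random forest to predict per-item response times.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)
predicted_times = model.predict(X)

# Assemble 4 equal-length forms by striding over items sorted by predicted
# time, so each form draws one item from every consecutive group of 4 and
# total predicted timing demands stay similar across forms.
order = np.argsort(predicted_times)
forms = [order[i::4] for i in range(4)]
form_totals = [predicted_times[f].sum() for f in forms]
print([round(t, 1) for t in form_totals])
```

In practice the model would be fit on one item pool and evaluated on held-out items, and form assembly would also respect content constraints; the stride trick above only illustrates the timing-balance idea.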
