Open Access
A Baseline for Multiple-Choice Testing in the University Classroom
Author(s) -
Aaron D. Slepkov,
Melissa L. Van Bussel,
Kara M. Fitze,
Wesley S. Burr
Publication year - 2021
Publication title -
SAGE Open
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.357
H-Index - 32
ISSN - 2158-2440
DOI - 10.1177/21582440211016838
Subject(s) - multiple choice , reliability , psychometrics , item response theory , item analysis , test validity , standardized test , psychology , applied psychology , mathematics education , medical education , statistics
There is a broad literature on multiple-choice test development, both in terms of item-writing guidelines and psychometric functionality as a measurement tool. However, most of the published literature concerns multiple-choice testing in the context of expert-designed, high-stakes standardized assessments, with little attention paid to the use of the technique within non-expert, instructor-created classroom examinations. In this work, we present a quantitative analysis of a large corpus of multiple-choice tests deployed in the classrooms of a primarily undergraduate university in Canada. Our report aims to establish three related things. First, we report on the functional and psychometric operation of 182 multiple-choice tests deployed in a variety of courses at all undergraduate levels, establishing a much-needed baseline for actual as-deployed classroom tests. Second, we motivate and present modified statistical measures—such as item-excluded correlation measures of discrimination and length-normalized measures of reliability—that should serve as useful parameters for future comparisons of classroom test psychometrics. Finally, we use the broad empirical data from our survey of tests to update widely used item-quality guidelines.
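The item-excluded correlation measure of discrimination mentioned in the abstract is commonly computed as an item-rest correlation: each item's score is correlated with the total score on the remaining items, so the item does not inflate its own discrimination index. A minimal sketch in Python with NumPy (the function name and sample data are illustrative, not the authors' code or dataset):

```python
import numpy as np

def item_rest_correlation(responses):
    """Item-excluded (item-rest) discrimination for a dichotomously
    scored test.

    responses: (n_examinees, n_items) array of 0/1 item scores.
    Returns one value per item: the correlation between that item's
    scores and the rest-of-test total (the item itself excluded).
    """
    responses = np.asarray(responses, dtype=float)
    total = responses.sum(axis=1)  # each examinee's full test score
    corrs = []
    for j in range(responses.shape[1]):
        rest = total - responses[:, j]  # exclude item j from the total
        corrs.append(np.corrcoef(responses[:, j], rest)[0, 1])
    return np.array(corrs)

# Illustrative data: 5 examinees, 4 items (1 = correct, 0 = incorrect)
responses = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
])
corrs = item_rest_correlation(responses)
```

Excluding the item from the criterion total matters most for short classroom tests, where a single item can contribute a noticeable fraction of the total score and an item-included correlation would be biased upward.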
