Evaluation parameters for computer‐adaptive testing
Author(s) -
Georgiadou Elisabeth,
Triantafillou Evangelos,
Economides Anastasios A.
Publication year - 2006
Publication title -
British Journal of Educational Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.79
H-Index - 95
eISSN - 1467-8535
pISSN - 0007-1013
DOI - 10.1111/j.1467-8535.2005.00525.x
Subject(s) - computerized adaptive testing , computer science , usability , reliability , item response theory , test validity , psychometrics , human–computer interaction , computer security , psychology
With the proliferation of computers in test delivery, adaptive testing has become popular, especially when examinees must be classified into two categories (pass/fail, master/nonmaster). Several well‐established organisations have published standards and guidelines for the design and evaluation of educational and psychological testing. The purpose of this paper is not to repeat the guidelines and standards that already exist in the literature but to identify and discuss the main evaluation parameters for a computer‐adaptive test (CAT). A number of parameters should be taken into account when evaluating a CAT. Key parameters include utility, validity, reliability, satisfaction, usability, reporting, administration, security, and those associated with adaptivity, the item pool, and the underlying psychometric theory. These parameters are presented and discussed below and form a proposed evaluation model, the Evaluation Model of Computer‐Adaptive Testing.
