The risk–return trade‐off: Performance assessments and cognitive validation of inferences
Author(s) - Leighton, Jacqueline P.
Publication year - 2019
Publication title - British Journal of Educational Psychology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.557
H-Index - 95
eISSN - 2044-8279
pISSN - 0007-0998
DOI - 10.1111/bjep.12271
Subject(s) - cognition , psychology , cognitive psychology , cognitive interview , think aloud protocol , cognitive test , applied psychology , social psychology , empirical evidence , epistemology
Background and Aims
In educational measurement, performance assessments occupy a niche by offering a true-to-life format that affords the measurement of high-level cognitive competencies and the evidence needed to draw inferences about intellectual capital. However, true-to-life formats also introduce myriad complexities and can skew, if not outright distort, the accuracy of inferences. For validating claims about test-takers from performance assessments, the collection of evidence about response processes is of sufficient import that the validation process should be labelled a cognitive validation, to ensure that the cognitive is not forgotten in the logic of the validation process.

Analysis and Example
Cognitive validation is described as a three-pronged process of (1) identifying the knowledge, skills, and attributes associated with the intellectual capital of interest, (2) selecting and/or developing tasks to elicit that intellectual capital, and (3) collecting substantive empirical evidence of examinee response processes as part of the overall validity argument. This three-pronged process is illustrated using the American Institute of CPAs' (2018) practice analysis, task-based simulations (TBSs), and think-aloud interviews to evaluate claims.

Conclusions
Although cognitive laboratories and think-alouds are used to measure distinct types of response processes as test-takers interact with performance assessments, both methods are among the best for obtaining direct but differential evidence from test-takers. Because of the labour and cost involved, the collection of this evidence is often not done, or not done well, by many testing programmes. However, for performance assessments to succeed in measuring what they purport to measure, the investment in cognitive validation must be made.