COGNITIVE INTERVIEWS AS A TOOL FOR INVESTIGATING THE VALIDITY OF CONTENT KNOWLEDGE FOR TEACHING ASSESSMENTS
Author(s) - Howell Heather, Phelps Geoffrey, Croft Andrew J., Kirui David, Gitomer Drew
Publication year - 2013
Publication title - ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/j.2333-8504.2013.tb02326.x
Subject(s) - psychology , respondent , cognition , argument (complex analysis) , inference , mathematics education , internal validity , cognitive interview , generalizability theory , content validity , cognitive psychology , social psychology , applied psychology , psychometrics , computer science , statistics , developmental psychology , artificial intelligence , mathematics , biochemistry , chemistry , neuroscience , political science , law
This report describes a cognitive interview study investigating the validity of assessments designed to measure content knowledge for teaching (CKT). The report is intended both to provide information on the validity of the CKT measures and to offer guidance to researchers interested in replicating the design. The study takes an argument-based approach to investigating validity by first articulating interpretive arguments central to the CKT measurement theory and then using the cognitive interview data to evaluate these arguments (Kane, 2006). The study is based on 30 interviews of elementary mathematics teachers and 30 interviews of elementary English language arts teachers. Teachers were selected using previous CKT assessment scores to represent high- and low-scoring groups in each subject. The cognitive interviews were conducted separately for each subject, and responses were coded and then analyzed to evaluate the scoring and extrapolation inferences of the validity argument. Findings strongly support the scoring inference, providing evidence that the items are keyed correctly. Results also indicate that participants reasoned about the items in ways that conformed to the reasoning outlined in the task design rationale (TDR) for each item. Each TDR describes how a respondent drawing on the desired CKT knowledge should reason about that item. As such, conformity with the TDRs supports the extrapolation inference, providing evidence that the reasoning used by participants reflects the underlying knowledge and skill domain the CKT assessments are intended to measure. The study design, instruments, methods, and results are described in detail, with discussion included to support researchers interested in replicating or building on the study design.
