The Generalizability of Content Validity Ratings
Author(s) - Linda Crocker, Maria Llabre, M. David Miller
Publication year - 1988
Publication title - Journal of Educational Measurement
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.917
H-Index - 47
eISSN - 1745-3984
pISSN - 0022-0655
DOI - 10.1111/j.1745-3984.1988.tb00309.x
Subject(s) - generalizability theory , content validity , psychology , achievement test , test (biology) , relevance (law) , test validity , variance (accounting) , curriculum , item response theory , mathematics education , content (measure theory) , criterion referenced test , selection (genetic algorithm) , statistics , psychometrics , standardized test , computer science , mathematics , pedagogy , clinical psychology , artificial intelligence , developmental psychology , paleontology , mathematical analysis , accounting , political science , law , business , biology
The problem of assessing the content validity (or relevance) of standardized achievement tests is considered within the framework of generalizability theory. Four illustrative designs are described that may be used to assess test-item fit to a curriculum. For each design, appropriate variance components are identified for making relative and absolute item (or test) selection decisions. Special consideration is given to the use of these procedures for determining the number of raters and/or schools needed in a content-validation decision-making study. Application of these procedures is illustrated using data from an international assessment of mathematics achievement.
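
For readers unfamiliar with the mechanics, the sketch below illustrates the general kind of computation the abstract refers to: a single-facet generalizability study in which items are crossed with raters who judge item relevance, variance components are estimated from ANOVA mean squares, and relative and absolute coefficients are then projected for different numbers of raters (a decision study). The data, the reduction to a single rater facet, and all function names are illustrative assumptions, not the authors' designs or results.

import numpy as np

def g_study(ratings):
    """Estimate variance components for a fully crossed items x raters design
    (one relevance rating per cell) from two-way ANOVA mean squares."""
    ratings = np.asarray(ratings, dtype=float)
    n_i, n_r = ratings.shape
    grand = ratings.mean()
    item_means = ratings.mean(axis=1)
    rater_means = ratings.mean(axis=0)

    ms_items = n_r * np.sum((item_means - grand) ** 2) / (n_i - 1)
    ms_raters = n_i * np.sum((rater_means - grand) ** 2) / (n_r - 1)
    resid = ratings - item_means[:, None] - rater_means[None, :] + grand
    ms_ir = np.sum(resid ** 2) / ((n_i - 1) * (n_r - 1))

    var_ir = ms_ir                               # item x rater interaction + error
    var_r = max((ms_raters - ms_ir) / n_i, 0.0)  # rater severity/leniency
    var_i = max((ms_items - ms_ir) / n_r, 0.0)   # item (object of measurement)
    return var_i, var_r, var_ir

def d_study(var_i, var_r, var_ir, n_raters):
    """Project the relative (E-rho^2) and absolute (Phi) coefficients
    for a decision study that averages over n_raters raters."""
    relative = var_i / (var_i + var_ir / n_raters)
    absolute = var_i / (var_i + (var_r + var_ir) / n_raters)
    return relative, absolute

# Hypothetical data: 6 items rated for curriculum relevance by 4 raters (1-5 scale).
ratings = [[5, 4, 5, 4],
           [4, 4, 5, 5],
           [2, 3, 2, 3],
           [5, 5, 4, 5],
           [3, 2, 3, 2],
           [4, 5, 4, 4]]

var_i, var_r, var_ir = g_study(ratings)
for n in (2, 4, 8, 12):
    rel, ab = d_study(var_i, var_r, var_ir, n)
    print(f"n_raters={n:2d}  E-rho^2={rel:.2f}  Phi={ab:.2f}")

The relative coefficient treats only the item-by-rater interaction as error (rank-ordering items), whereas the absolute coefficient also counts rater main-effect variance (judging items against a fixed relevance standard); increasing the number of raters shrinks both error terms, which is the logic behind choosing how many raters a content-validation study needs.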