The Quality of Content Analyses of State Student Achievement Tests and Content Standards
Author(s) - Andrew C. Porter, Morgan S. Polikoff, Tim Zeidner, John Smithson
Publication year - 2008
Publication title - Educational Measurement: Issues and Practice
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.158
H-Index - 52
eISSN - 1745-3992
pISSN - 0731-1745
DOI - 10.1111/j.1745-3992.2008.00134.x
Subject(s) - generalizability theory, inter-rater reliability, content analysis, achievement test, learning standards, academic achievement, standardized test, mathematics education, curriculum, statistics, psychology
This article examines the reliability of content analyses of state student achievement tests and state content standards. We use data from two states, in three grades, in mathematics and in English language arts and reading, to explore differences by state, content area, grade level, and document type. Using a generalizability framework, we find that reliabilities for four coders are generally greater than .80. The two problematic reliabilities are partly explained by an odd rater out. We conclude that the content analysis procedures, when used with at least five raters, provide reliable information to researchers, policymakers, and practitioners about the content of assessments and standards.
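For readers unfamiliar with the framework, the reliabilities reported above are generalizability (G) coefficients. As an illustrative sketch only, the standard one-facet G coefficient for a fully crossed objects-by-raters design is given below; the variance-component labels sigma^2_p and sigma^2_{pr,e} are generic notation, and the paper's actual design may include additional facets beyond raters:

% G coefficient for a one-facet crossed p x r design, averaging over n_r raters.
% sigma^2_p      : variance among objects of measurement (e.g., content cells)
% sigma^2_{pr,e} : object-by-rater interaction confounded with residual error
\[
  E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{pr,e}/n_r}
\]
% Worked example with assumed (hypothetical) values sigma^2_p = 1, sigma^2_{pr,e} = 1:
%   n_r = 4:  E rho^2 = 1 / (1 + 1/4) = .80
%   n_r = 5:  E rho^2 = 1 / (1 + 1/5) ~ .83
% This shows the general pattern behind the abstract's conclusion: averaging
% over more raters shrinks the error term, so five raters give a margin above .80.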
