Validating Automated Measures of Text Complexity
Author(s) - Sheehan, Kathleen M.
Publication year - 2017
Publication title - Educational Measurement: Issues and Practice
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.158
H-Index - 52
eISSN - 1745-3992
pISSN - 0731-1745
DOI - 10.1111/emip.12155
Subject(s) - readability, assertion, computer science, reading (process), reading comprehension, comprehension, test (biology), information retrieval, natural language processing, linguistics, programming language, paleontology, philosophy, biology
Automated text complexity measurement tools (also called readability metrics) have been proposed as a way to help teachers, textbook publishers, and assessment developers select texts that are closely aligned with the new, more demanding text complexity expectations specified in the Common Core State Standards. This article examines a critical element of the validity arguments presented in support of proposed metrics: the claim that criterion text complexity scores derived from students' responses to reading comprehension test items reflect the difficulties students actually experience while reading. Evidence that fails to support this assertion is examined, and implications for obtaining valid, unbiased evidence about the measurement properties of proposed readability metrics are discussed.
