How Can We Improve the Accuracy of Screening Instruments?
Author(s) -
Evelyn S. Johnson,
Joseph R. Jenkins,
Yaacov Petscher,
Hugh W. Catts
Publication year - 2009
Publication title -
Learning Disabilities Research & Practice
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.018
H-Index - 21
eISSN - 1540-5826
pISSN - 0938-8982
DOI - 10.1111/j.1540-5826.2009.00291.x
Subject(s) - fluency , reading (process) , psychology , literacy , curriculum-based measurement , at-risk students , dyslexia , response to intervention , developmental psychology , special education
Screening for early reading problems is a critical step in early intervention and prevention of later reading difficulties. Evaluative frameworks for determining the utility of a screening process are presented in the literature but have not been applied to many screening measures currently in use in schools across the nation. In this study, the accuracy of several Dynamic Indicators of Basic Early Literacy Skills (DIBELS) subtests in predicting which students were at risk for reading failure in first grade was examined in a sample of 12,055 students in Florida. Findings indicate that the DIBELS Nonsense Word Fluency, Initial Sound Fluency, and Phoneme Segmentation Fluency measures show poor diagnostic utility in predicting end-of-Grade-1 reading performance. DIBELS Oral Reading Fluency in fall of Grade 1 had higher classification accuracy than the other DIBELS measures, but a comparison with the classification accuracy obtained by simply assuming that no student had a disability suggests that classification accuracy should not be used to evaluate screening measures without consideration of base rates. Additionally, when cut scores on the screening tools were set to capture 90 percent of all students at risk for reading problems, a high number of false positives was identified. Finally, different cut scores were needed for different subgroups, such as English Language Learners. Implications for research and practice are discussed.
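The abstract's base-rate point can be made concrete with a small numerical sketch (hypothetical numbers, not the study's data): when few students are truly at risk, the trivial rule "classify no one as at risk" can score higher on overall accuracy than a real screen, even though it finds none of the students who need help.

```python
# Illustrative sketch with hypothetical counts: why raw classification
# accuracy misleads when the base rate of reading risk is low.

def accuracy(tp, fp, tn, fn):
    """Overall proportion of correct classifications."""
    return (tp + tn) / (tp + fp + tn + fn)

def sensitivity(tp, fn):
    """Proportion of truly at-risk students the screen flags."""
    return tp / (tp + fn)

# Suppose 1,000 students, 10% truly at risk (base rate = 0.10), and a
# screen with the cut score set to capture 90% of at-risk students
# (sensitivity = .90) at 70% specificity:
tp, fn = 90, 10      # 90 of the 100 at-risk students are flagged
tn, fp = 630, 270    # 630 of the 900 not-at-risk students pass correctly

screen_acc = accuracy(tp, fp, tn, fn)    # (90 + 630) / 1000 = 0.72

# The trivial rule "no student is at risk" is right about every
# not-at-risk student and wrong about every at-risk one:
trivial_acc = accuracy(0, 0, 900, 100)   # 900 / 1000 = 0.90

print(screen_acc, trivial_acc)
# The trivial rule "wins" on accuracy (0.90 vs. 0.72) yet has
# sensitivity 0, and the screen flags 270 false positives for every
# 90 true positives -- mirroring the high false-positive count the
# study reports at 90% sensitivity.
```

The comparison shows why the authors argue that classification accuracy is uninterpretable without the base rate: at a 10 percent base rate, any screen must beat 90 percent accuracy merely to outperform doing nothing.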