Discrepancies Between Score Trends from NAEP and State Tests: A Scale‐Invariant Perspective
Author(s) - Ho, Andrew D.
Publication year - 2007
Publication title - Educational Measurement: Issues and Practice
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.158
H-Index - 52
eISSN - 1745-3992
pISSN - 0731-1745
DOI - 10.1111/j.1745-3992.2007.00104.x
Subject(s) - test score , trend analysis , econometrics , statistics , scale (ratio) , mathematics education , psychology , mathematics , standardized test
State test score trends are widely interpreted as indicators of educational improvement. To validate these interpretations, state test score trends are often compared to trends on other tests such as the National Assessment of Educational Progress (NAEP). These comparisons raise serious technical and substantive concerns. Technically, the most commonly used trend statistics—for example, the change in the percent of proficient students—are misleading in the context of cross‐test comparisons. Substantively, it may not be reasonable to expect that NAEP and state test score trends should be similar. This paper motivates, and then applies, a “scale‐invariant” framework for cross‐test trend comparisons, comparing “high‐stakes” state test score trends from 2003 to 2005 to NAEP trends over the same period. Results show that state trends are significantly more positive than NAEP trends. The paper concludes with cautions against positioning trend discrepancies in a framework where only one trend is considered “true.”
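The abstract's technical point—that the change in percent proficient is a misleading trend statistic—can be illustrated with a hedged sketch (the numbers and distributions below are hypothetical, not the paper's data, and the scale-invariant statistic shown is one example in the spirit of the framework, not necessarily the paper's exact measure). The same uniform improvement of 0.2 standard deviations yields very different percentage-point gains depending on where the proficiency cut score sits, while a statistic such as P(random later-year score > random earlier-year score) does not depend on the cut score and is unchanged by any monotone rescaling of the score scale.

```python
from statistics import NormalDist

# Hypothetical illustration: identical 0.2-SD improvement, different cut scores.
year1 = NormalDist(mu=0.0, sigma=1.0)
year2 = NormalDist(mu=0.2, sigma=1.0)  # 0.2 SD gain from year 1 to year 2

for cut in (-1.0, 0.0, 1.5):
    # Percentage-point change in percent "proficient" (above the cut score).
    gain = (1 - year2.cdf(cut)) - (1 - year1.cdf(cut))
    print(f"cut={cut:+.1f}: percent-proficient gain = {100 * gain:.1f} pts")

# A scale-invariant alternative: P(year-2 score > year-1 score).
# For two normals this is Phi(delta_mu / sqrt(sigma1^2 + sigma2^2));
# it involves no cut score and survives any monotone transformation
# of the reporting scale.
p_superiority = NormalDist().cdf((0.2 - 0.0) / (1.0**2 + 1.0**2) ** 0.5)
print(f"P(year2 > year1) = {p_superiority:.3f}")
```

Running this shows gains of roughly 4.4, 7.9, and 3.0 percentage points for the three cut scores, despite a single underlying distributional shift—exactly the kind of cut-score dependence that makes percent-proficient trends hazardous for cross-test comparisons.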
