Open Access
A Comparative Study of Curriculum Effects on the Stability of IRT and Conventional Item Parameter Estimates
Author(s) - Cook, Linda L.; Eignor, Daniel R.; Taft, Hessy L.
Publication year - 1985
Publication title - ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/j.2330-8516.1985.tb00123.x
Subject(s) - coursework , test (biology) , item response theory , mathematics education , curriculum , psychology , population , achievement test , statistics , standardized test , mathematics , psychometrics , pedagogy , demography , paleontology , sociology , biology
One very practical problem facing practitioners in the area of item response theory (IRT) is how to define the population from which samples will be drawn for parameter estimation. A major concern for those who wish to apply IRT to achievement test data is that such tests are specifically designed to reflect course content, so students taking the tests at different points in their coursework may not constitute samples from the same population. This situation is most likely to arise in large-scale admissions testing programs that offer achievement tests, used by colleges for admissions and placement purposes, at multiple administrations spanning the school year. Students who elect to take a test at a spring administration typically have recently completed, or are about to complete, a course of instruction in the content area measured by the test, whereas those electing to take it at a fall administration may have completed their formal instruction six to eighteen months before taking the test. Because of these curriculum effects, it is quite likely that the two groups of students are not members of the same population, and that item parameter estimates (either IRT or conventional) obtained from, say, students who took the test soon after completing their coursework may not be appropriate when applied to data from students who took it six months to a year after completing their coursework.

The purposes of this study were threefold: 1) to examine and compare the stability of IRT item difficulty parameter estimates and conventional item difficulty estimates for a set of items from an admissions testing program Biology achievement test, given both to a group of students who had recently completed a biology course and to another group who, for the most part, had received no formal instruction in the content area for six to eighteen months before taking the test; 2) to assess the impact of any instability in the item difficulty estimates on score equating, for both IRT and conventional equating methods; and 3) using confirmatory factor analytic techniques, to assess differences in the factor structures of the set of common items across the two groups, in an attempt to determine the specific curriculum effects leading to instability in the item difficulty estimates.
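The abstract does not state which IRT model was fitted; as an illustrative sketch, assuming the three-parameter logistic (3PL) model commonly used with multiple-choice achievement items in ETS work of this period, the two notions of item difficulty being compared can be written as:

\[ p_j \;=\; \frac{1}{N}\sum_{i=1}^{N} u_{ij} \qquad \text{(conventional difficulty: proportion of } N \text{ examinees answering item } j \text{ correctly)} \]

\[ P_j(\theta) \;=\; c_j + \frac{1-c_j}{1+e^{-1.7\,a_j(\theta-b_j)}} \qquad \text{(assumed 3PL item response function; } b_j \text{ is the IRT difficulty)} \]

Here \(u_{ij}\) is examinee \(i\)'s scored (0/1) response to item \(j\), \(a_j\) the discrimination, and \(c_j\) the lower asymptote. Unlike \(p_j\), which depends directly on the ability distribution of the group tested, \(b_j\) is located on the latent \(\theta\) scale and should, in principle, be invariant across samples from the same population; the study's stability comparison asks whether either estimate holds up across the curriculum-differentiated fall and spring groups.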
