An Empirical Investigation Demonstrating the Multidimensional DIF Paradigm: A Cognitive Explanation for DIF
Author(s) - Cindy M. Walker, S. Natasha Beretvas
Publication year - 2001
Publication title - Journal of Educational Measurement
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.917
H-Index - 47
eISSN - 1745-3984
pISSN - 0022-0655
DOI - 10.1111/j.1745-3984.2001.tb01120.x
Subject(s) - differential item functioning, construct validity, item response theory, psychometrics, cognition, cognitive psychology, achievement test, standardized test, mathematics education, developmental psychology, psychology
Differential Item Functioning (DIF) is traditionally used to identify different item performance patterns between intact groups, most commonly involving race or sex comparisons. This study advocates expanding the utility of DIF as a step in construct validation. Rather than grouping examinees based on cultural differences, the reference and focal groups are chosen from two extremes along a distinct cognitive dimension that is hypothesized to supplement the dominant latent trait being measured. Specifically, this study investigates DIF between proficient and non‐proficient fourth‐ and seventh‐grade writers on open‐ended mathematics test items that require students to communicate about mathematics. It is suggested that the occurrence of DIF in this situation actually enhances, rather than detracts from, the construct validity of the test because, according to the National Council of Teachers of Mathematics (NCTM), mathematical communication is an important component of mathematical ability, the dominant construct being assessed. However, the presence of DIF influences the validity of inferences that can be made from test scores and suggests that two scores should be reported, one for general mathematical ability and one for mathematical communication. The fact that currently only one test score is reported, a simple composite of scores on multiple‐choice and open‐ended items, may lead to incorrect decisions being made about examinees.
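The abstract does not state which DIF detection procedure the authors applied. As a hedged illustration only, the sketch below implements the Mantel-Haenszel procedure, one of the most common DIF screens: examinees are stratified by a matching score (a proxy for the dominant latent trait), and a common odds ratio compares item performance between the reference group (here, proficient writers) and the focal group (non-proficient writers) within each stratum. The function name and data layout are hypothetical, not taken from the study.

```python
from collections import defaultdict

def mantel_haenszel_dif(scores, groups, item_responses):
    """Mantel-Haenszel common odds ratio for one studied item.

    scores          -- matching variable per examinee (e.g., total test score)
    groups          -- 1 = reference group, 0 = focal group
    item_responses  -- 1 = correct, 0 = incorrect on the studied item

    Returns alpha_MH; values far from 1.0 suggest DIF, favoring the
    reference group (>1) or the focal group (<1).
    """
    # Per score level, tally the 2x2 table [A, B, C, D]:
    # A/B = reference correct/incorrect, C/D = focal correct/incorrect.
    strata = defaultdict(lambda: [0, 0, 0, 0])
    for s, g, y in zip(scores, groups, item_responses):
        cell = strata[s]
        if g == 1:
            cell[0 if y else 1] += 1
        else:
            cell[2 if y else 3] += 1

    # alpha_MH = sum_k(A_k * D_k / N_k) / sum_k(B_k * C_k / N_k)
    num = den = 0.0
    for a, b, c, d in strata.values():
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    return num / den if den else float("inf")


# Toy example (fabricated data): one score stratum, reference examinees
# answer 8/10 correctly while matched focal examinees answer 5/10.
scores = [1] * 20
groups = [1] * 10 + [0] * 10
responses = [1] * 8 + [0] * 2 + [1] * 5 + [0] * 5
alpha = mantel_haenszel_dif(scores, groups, responses)
```

In the study's reinterpretation, a large alpha for open-ended items that demand written communication would not signal bias to be removed but evidence that those items tap the secondary writing dimension, supporting the case for reporting a separate communication score.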