Open Access
GRE® Automated-Editing-Task Phase II Report: Item Analyses, Revisions, Validity Study, and Taxonomy Development
Author(s) - Hunter Breland, Karen Kukich, Lisa Hemat
Publication year - 2001
Publication title - ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/j.2333-8504.2001.tb01854.x
Subject(s) - construct validity , writing assessment , taxonomy , psychometrics , natural language processing , mathematics education , psychology , computer science
Two automated editing tasks developed in a Phase I study were subjected to item analyses, revised, and then used in a computer‐based test administration at a local college. The data collected in the administration were compared with questionnaire data obtained from students to examine the construct validity of the tasks. In a second approach to construct validation, a taxonomy of writing skills was developed and compared to the skills assessed by the editing tasks. Data analyses indicate that total editing score correlates more strongly with self‐reported English grades than with self‐reported mathematics grades, and that total editing score correlates positively with student self‐assessments of their writing skill, recent grades on writing assignments, and college grade point average. A review of the task elements against the taxonomy indicates that the editing tasks assess important writing skills not assessed by free‐response essays. For this and other reasons, it was concluded that automated editing tasks would serve as a useful complement to free‐response writing assessments.
