
Process Features in Writing: Internal Structure and Incremental Value Over Product Features
Author(s) - Mo Zhang, Paul Deane
Publication year - 2015
Publication title -
ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/ets2.12075
Subject(s) - keystroke logging , rubric , computer science , product (mathematics) , formative assessment , fluency , quality (philosophy) , process (computing) , variance (accounting) , writing process , writing assessment , natural language processing , psychology , artificial intelligence , mathematics education , mathematics , philosophy , geometry , accounting , epistemology , business , operating system
In educational measurement contexts, essays have typically been evaluated, and formative feedback given, based on the end product alone. In this study, we used a large sample collected from middle school students in the United States to investigate the factor structure of writing process features gathered from keystroke logs and the association of that latent structure with the quality of the final product (i.e., the essay text). We also examined the extent to which those process factors had incremental value over product features. We extracted 29 process features using the keystroke logging engine developed at Educational Testing Service (ETS) and identified 4 factors, representing the extent of writing fluency, local word-level editing, phrasal/chunk-level editing, and planning and deliberation during writing. We found that 2 of the 4 factors (writing fluency, and planning and deliberation) were significantly related to the quality of the final text, whereas the 4 factors together accounted for only limited variance in human scores. In 1 of the 2 samples studied, the keystroke-logging fluency factor added incrementally, but only marginally, to the prediction of human ratings of text-production skills beyond product features. The limited power of the process features for predicting human scores, and the lack of clear additional predictive value over product features, are not surprising given that human raters have no knowledge of the writing process leading to the final text and that the product features measure the basic text quality specified in the human scoring rubric. Study limitations and recommendations for future research are also provided.
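The analysis pipeline the abstract describes has two steps: reduce many keystroke process features to a few latent factors, then test whether those factors add predictive value for human scores beyond product features. A minimal sketch of that pipeline, using simulated placeholder data (not the actual 29 ETS features or study data) and scikit-learn's `FactorAnalysis` and `LinearRegression`, might look like:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 300  # simulated writers

# Hypothetical product features (qualities of the finished essay) and
# process features (keystroke-log measures such as pause durations or
# deletion counts); all values here are simulated, not ETS data.
product = rng.normal(size=(n, 3))
latent = rng.normal(size=(n, 2))  # two stand-in latent process skills
process = latent @ rng.normal(size=(2, 8)) + rng.normal(scale=0.5, size=(n, 8))

# Simulated human scores driven mostly by product features and only
# weakly by the writing process, mirroring the study's finding.
score = product @ np.array([1.0, 0.8, 0.5]) \
    + 0.3 * latent[:, 0] + rng.normal(scale=0.5, size=n)

# Step 1: reduce the process features to a small number of factors
# (the study identified 4; we use 2 to match the simulated data).
fa = FactorAnalysis(n_components=2, random_state=0)
factors = fa.fit_transform(StandardScaler().fit_transform(process))

# Step 2: incremental-value check -- does adding the process factors
# raise in-sample R^2 beyond product features alone?
r2_product = LinearRegression().fit(product, score).score(product, score)
both = np.hstack([product, factors])
r2_both = LinearRegression().fit(both, score).score(both, score)
print(f"R^2, product features only:      {r2_product:.3f}")
print(f"R^2, product + process factors:  {r2_both:.3f}")
```

Because the models are nested, the in-sample R-squared can only increase when the process factors are added; the study's question is whether that increase is more than marginal, which for these simulated data (and in the study's results) it is not.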