Testing reading comprehension of theoretical discourse with cloze
Author(s) - Greene Benjamin
Publication year - 2001
Publication title - Journal of Research in Reading
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.077
H-Index - 51
eISSN - 1467-9817
pISSN - 0141-0423
DOI - 10.1111/1467-9817.00134
Subject(s) - readability, reading comprehension, comprehension, psychology, linguistics, cloze test, test (biology), cognitive psychology, sample (material), reading (process), natural language processing, computer science, paleontology, philosophy, chemistry, chromatography, biology
The ability of cloze tests containing frequent, every n‐th word deletions to measure comprehension of macropropositions has been challenged on both theoretical and empirical grounds, calling into question the validity of such tests for assessing comprehension of much of the discourse encountered by university‐level students. To evaluate the comprehension of a writer’s reasoning, it is recommended that cloze tests position gaps so as to target recognition of cohesive devices and the ability to draw inferences from other sentences. To test the validity of such a design, a large sample of scores on discourse cloze tests administered in introductory college economics is compared to scores on true–false comprehension tests designed to target recognition of connective propositions. The two distributions of scores do not differ significantly in terms of mean value, dispersion or frequency distribution, suggesting that appropriately designed cloze tests can provide a valid assessment of the reader’s integration of theoretical text. In addition, the usefulness of readability formulas based on surface characteristics of text is challenged when readability is defined in terms of the difficulty of constructing a coherent representation of theoretical text.
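The abstract's central empirical claim rests on showing that the two score distributions do not differ in mean, dispersion, or overall frequency distribution. As a minimal sketch of how such a three-way comparison might be run, the Python snippet below uses hypothetical, randomly generated score arrays and standard tests (Welch's t-test, Levene's test, and the two-sample Kolmogorov-Smirnov test); it is an illustration of the kind of analysis described, not a reproduction of the study's actual data or procedures.

```python
import numpy as np
from scipy import stats

# Hypothetical percent-correct scores for the two test formats;
# the study's actual data are not reproduced here.
rng = np.random.default_rng(0)
cloze_scores = rng.normal(loc=72, scale=12, size=200).clip(0, 100)
true_false_scores = rng.normal(loc=73, scale=11, size=200).clip(0, 100)

# 1. Compare means (Welch's t-test, no equal-variance assumption).
t_stat, t_p = stats.ttest_ind(cloze_scores, true_false_scores, equal_var=False)

# 2. Compare dispersion (Levene's test for equality of variances).
lev_stat, lev_p = stats.levene(cloze_scores, true_false_scores)

# 3. Compare the overall frequency distributions
#    (two-sample Kolmogorov-Smirnov test).
ks_stat, ks_p = stats.ks_2samp(cloze_scores, true_false_scores)

print(f"means:      t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"dispersion: W = {lev_stat:.2f}, p = {lev_p:.3f}")
print(f"shape:      D = {ks_stat:.2f}, p = {ks_p:.3f}")
```

Non-significant results on all three tests would be read, as in the abstract, as evidence that the discourse-cloze and true-false formats yield comparable assessments of comprehension.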