The Story Gestalt: A Model Of Knowledge‐Intensive Processes in Text Comprehension
Author(s) - St. John, Mark F.
Publication year - 1992
Publication title - Cognitive Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.498
H-Index - 114
eISSN - 1551-6709
pISSN - 0364-0213
DOI - 10.1207/s15516709cog1602_5
Subject(s) - computer science , comprehension , natural language processing , generalization , proposition , artificial intelligence , coherence (philosophical gambling strategy) , constraint satisfaction , constraint (computer aided design) , linguistics , mathematics , mathematical analysis , philosophy , statistics , geometry , probabilistic logic , programming language
How are knowledge‐intensive, text‐comprehension processes computed? Specifically, how are (1) explicit propositions remembered correctly, (2) pronouns resolved, (3) coherence and prediction inferences drawn, (4) on‐going interpretations revised as more information becomes available, and (5) information learned in specific contexts generalized to novel texts? A constraint‐satisfaction model is presented that offers several advantages over previous models: each of these processes can be seen as an instance of the same process of constraint satisfaction, constraints can have strengths to represent degrees of correlation among information, and the independence of constraints provides insight into generalization. In the model, propositions describing a simple event, such as going to the beach or a restaurant, are sequentially presented to a recurrent PDP network. The model is trained through practice at processing a large number of example texts and answering questions. Questions are predicates drawn from propositions that are explicit in, or inferable from, the text, and the model must answer with the proposition that fits each predicate. The model learns to perform well, though some processes require substantial training. A second simulation shows how the combinatorics of the training corpus can increase generalization. This effect is explained by introducing the concepts of identity and associative constraints, which are learned from the corpus. Overall, the model provides a number of insights into how a graded constraint‐satisfaction model can compute knowledge‐intensive processes in text comprehension.
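The abstract describes an architecture in which propositions are fed one at a time into a recurrent hidden layer (the "story gestalt"), which is then probed with a predicate query to produce an answering proposition. The following is a minimal, hypothetical sketch of that data flow in NumPy; the layer sizes, weight initialization, and combination functions are illustrative assumptions, not the paper's actual specification, and no training loop is shown.

```python
import numpy as np

# Hypothetical layer sizes (not from the paper): proposition input,
# recurrent "story gestalt" layer, query predicate, and answer output.
rng = np.random.default_rng(0)
n_prop, n_gestalt, n_query, n_answer = 20, 50, 10, 20

# Random untrained weights; in the model these would be learned by
# practice over many example texts and question-answer pairs.
W_in = rng.normal(scale=0.1, size=(n_prop, n_gestalt))
W_rec = rng.normal(scale=0.1, size=(n_gestalt, n_gestalt))
W_q = rng.normal(scale=0.1, size=(n_query, n_gestalt))
W_out = rng.normal(scale=0.1, size=(n_gestalt, n_answer))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def read_story(propositions):
    """Fold a sequence of proposition vectors into a single gestalt
    vector via a recurrent update, one proposition at a time."""
    g = np.zeros(n_gestalt)
    for p in propositions:
        g = sigmoid(p @ W_in + g @ W_rec)  # combine input with prior state
    return g

def answer(gestalt, query):
    """Probe the story gestalt with a predicate query and decode
    the resulting state into an answer proposition vector."""
    h = sigmoid(gestalt + query @ W_q)  # query modulates the gestalt
    return sigmoid(h @ W_out)

# Toy usage: three random "propositions" and one random "query".
story = [rng.random(n_prop) for _ in range(3)]
g = read_story(story)
out = answer(g, rng.random(n_query))
assert out.shape == (n_answer,)
```

With trained weights, `out` would be compared against the target proposition during learning; here the point is only the sequential fold into a gestalt followed by a query-driven readout.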