An interactive activation model of speech perception
Author(s) - Jeffrey L. Elman, James L. McClelland
Publication year - 1981
Publication title - The Journal of the Acoustical Society of America
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.619
H-Index - 187
eISSN - 1520-8524
pISSN - 0001-4966
DOI - 10.1121/1.2019147
Subject(s) - percept, perception, computer science, speech recognition, speech perception, neurocomputational speech processing, motor theory of speech perception, natural language processing, artificial intelligence, psychology, neuroscience
We describe a model of speech perception [based on the Interactive Activation Model of Visual Word Perception (cf. McClelland and Rumelhart, in press; Rumelhart and McClelland, in press)] in which excitatory and inhibitory interactions among nodes for phonetic features, phonemes, and words are used to account for aspects of the interaction of bottom‐up and top‐down processes in perception of speech. Results from a working computer simulation of this model are presented. Input to the program consists of specifications of distinctive features of speech as they unfold in time. Features, phonemes, and words consistent with the input are activated, missing specifications may be filled in, and slight errors may be corrected so that the “percept” formed by the simulation exhibits such phenomena as phonemic restoration and related perceptual effects. [Work supported by NSF.]
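The abstract describes the model's core mechanism: bottom-up excitation from features to phonemes to words, top-down feedback from words back to phonemes, and within-level inhibition, with word-level feedback filling in missing input (phonemic restoration). The sketch below illustrates those dynamics in miniature. It is an assumption-laden toy, not the 1981 simulation: the two-word lexicon, phoneme inventory, and all numeric parameters are invented for illustration, and the update rule follows the standard interactive-activation form from McClelland and Rumelhart's visual word perception model cited in the abstract.

```python
import numpy as np

# Toy interactive activation network: phoneme units (one slot per word
# position) and word units. All names and parameter values below are
# illustrative assumptions, not the 1981 simulation's actual values.
PHONEMES = ["k", "ae", "t", "d", "o", "g"]
WORDS = {"cat": ["k", "ae", "t"], "dog": ["d", "o", "g"]}
POSITIONS = 3

REST, A_MIN, A_MAX = -0.1, -0.2, 1.0   # resting/floor/ceiling activation
DECAY, EXC, INH = 0.1, 0.1, 0.1        # decay, excitation, inhibition rates

def step(a, net):
    # Standard interactive-activation update: positive net input drives a
    # unit toward its ceiling, negative net toward its floor, and decay
    # pulls it back toward rest.
    da = np.where(net > 0, net * (A_MAX - a), net * (a - A_MIN))
    return np.clip(a + da - DECAY * (a - REST), A_MIN, A_MAX)

phon = np.full((POSITIONS, len(PHONEMES)), REST)
word = np.full(len(WORDS), REST)
word_specs = [[PHONEMES.index(p) for p in spec] for spec in WORDS.values()]

# Bottom-up evidence: /k/ and /ae/ are heard clearly; the third position
# carries no phoneme evidence, as when a segment is replaced by noise.
evidence = np.zeros_like(phon)
evidence[0, PHONEMES.index("k")] = 0.3
evidence[1, PHONEMES.index("ae")] = 0.3

for _ in range(50):
    phon_out = phon.clip(min=0)   # only units above zero send output
    word_out = word.clip(min=0)

    # Word net input: excitation from consistent phonemes, inhibition
    # from competing words (within-level competition).
    w_net = np.array([EXC * sum(phon_out[pos, i] for pos, i in enumerate(spec))
                      for spec in word_specs])
    w_net -= INH * (word_out.sum() - word_out)

    # Phoneme net input: external evidence, plus top-down feedback from
    # words containing that phoneme at that position, minus competition
    # from the other phonemes in the same position slot.
    p_net = evidence.copy()
    for w, spec in enumerate(word_specs):
        for pos, i in enumerate(spec):
            p_net[pos, i] += EXC * word_out[w]
    p_net -= INH * (phon_out.sum(axis=1, keepdims=True) - phon_out)

    word = step(word, w_net)
    phon = step(phon, p_net)

print("word activations:", dict(zip(WORDS, word.round(3))))
print("position-3 phonemes:", dict(zip(PHONEMES, phon[2].round(3))))
```

Running this, "cat" wins at the word level on the strength of the first two phonemes alone, and its top-down feedback raises /t/ in the third position above its competitors despite the absence of any bottom-up evidence there, which is the restoration effect the abstract reports for the full simulation.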