Lexicality and pronunciation in a simulated neural net
Author(s) - Phillips W. A., Hay I. M., Smith L. S.
Publication year - 1993
Publication title - British Journal of Mathematical and Statistical Psychology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 3.157
H-Index - 51
eISSN - 2044-8317
pISSN - 0007-1102
DOI - 10.1111/j.2044-8317.1993.tb01011.x
Subject(s) - pronunciation, computer science, reading (process), artificial neural network, phonology, compression (physics), artificial intelligence, associative property, natural language processing, machine learning, linguistics, mathematics, philosophy, materials science, pure mathematics, composite material
Self‐supervised compressive neural nets can perform nonlinear multilevel latent structure analysis. They therefore have promise for cognitive theory. We study their use in the Seidenberg & McClelland (1989) model of reading. Analysis shows that self‐supervised compression in their model can make only a limited contribution to lexical decision, and simulation shows that it interferes with the associative mapping into phonology. Self‐supervised compression is therefore put to no good use in their model. This does not weaken the arguments for self‐supervised compression, however, and we suggest possible beneficial uses that merit further study.
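To make the architecture under discussion concrete, the sketch below illustrates the general idea of a shared hidden layer serving two tasks at once: a self-supervised compression head that reconstructs the orthographic input, and an associative head that maps the same hidden code to phonology. This is a minimal illustrative assumption, not the Seidenberg & McClelland (1989) network or the authors' simulation; the layer sizes, random toy data, and hyperparameters are placeholders.

```python
# Minimal sketch (illustrative only, not the authors' model): one hidden layer
# shared by a reconstruction head (self-supervised compression) and a
# phonology head (associative mapping). All sizes and data are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary "orthographic" inputs and "phonological" targets (random placeholders).
n_items, n_orth, n_hidden, n_phon = 50, 40, 10, 30
X = rng.integers(0, 2, size=(n_items, n_orth)).astype(float)
P = rng.integers(0, 2, size=(n_items, n_phon)).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Shared encoder weights and two sets of output weights.
W_enc = rng.normal(0, 0.1, (n_orth, n_hidden))
W_rec = rng.normal(0, 0.1, (n_hidden, n_orth))   # reconstruction head
W_phn = rng.normal(0, 0.1, (n_hidden, n_phon))   # phonology head

lr, epochs = 0.5, 2000
for epoch in range(epochs):
    # Forward pass.
    H = sigmoid(X @ W_enc)          # compressed hidden code
    X_hat = sigmoid(H @ W_rec)      # self-supervised reconstruction
    P_hat = sigmoid(H @ W_phn)      # associative mapping to phonology

    # Output deltas (sigmoid units with cross-entropy loss).
    d_rec = (X_hat - X) / n_items
    d_phn = (P_hat - P) / n_items

    # Both heads send error back into the same hidden layer, which is where
    # the two tasks can compete with (or interfere with) one another.
    g_rec = H.T @ d_rec
    g_phn = H.T @ d_phn
    d_hidden = (d_rec @ W_rec.T + d_phn @ W_phn.T) * H * (1 - H)
    g_enc = X.T @ d_hidden

    W_rec -= lr * g_rec
    W_phn -= lr * g_phn
    W_enc -= lr * g_enc

# Reconstruction error is one crude proxy for the familiarity signal that a
# compression-based account of lexical decision would rely on.
print("mean reconstruction error:", np.mean((X_hat - X) ** 2))
print("mean phonology error:", np.mean((P_hat - P) ** 2))
```

The point of the shared hidden layer in this sketch is only to show where the tension arises: gradients from the reconstruction and phonology heads both shape the same compressed code, so serving one objective can come at the expense of the other, which is the kind of interference the abstract reports for the associative mapping into phonology.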