Learning Representations of Wordforms With Recurrent Networks: Comment on Sibley, Kello, Plaut, & Elman (2008)
Author(s) - Bowers, Jeffrey S.; Davis, Colin J.
Publication year - 2009
Publication title - Cognitive Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.498
H-Index - 114
eISSN - 1551-6709
pISSN - 0364-0213
DOI - 10.1111/j.1551-6709.2009.01062.x
Subject(s) - encoder , computer science , coding (social sciences) , position (finance) , recurrent neural network , artificial intelligence , sequence (biology) , natural language processing , cognitive science , artificial neural network , psychology , mathematics , statistics , finance , biology , economics , genetics , operating system
Sibley et al. (2008) report a recurrent neural network model designed to learn wordform representations suitable for written and spoken word identification. The authors claim that their sequence encoder network overcomes a key limitation associated with models that code letters by position (e.g., CAT might be coded as C‐in‐position‐1, A‐in‐position‐2, T‐in‐position‐3). The problem with coding letters by position (slot‐coding) is that it is difficult to generalize knowledge across positions; for example, the overlap between CAT and TOMCAT is lost. Although we agree this is a critical problem with many slot‐coding schemes, we question whether the sequence encoder model addresses this limitation, and we highlight another deficiency of the model. We conclude that alternative theories are more promising.
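The slot-coding problem described in the abstract can be made concrete with a minimal sketch (not from the original paper; the representation below is an illustrative assumption): when letters are coded by absolute position, CAT and TOMCAT activate entirely different units, so their orthographic overlap is lost.

```python
# Illustrative sketch of slot-coding (letters coded by absolute position).
# The representational scheme here is a simplified stand-in, not the
# implementation used by Sibley et al. (2008) or by the commenters.

def slot_code(word):
    """Represent a word as a set of (letter, position) units, e.g. ('C', 1)."""
    return {(letter, pos) for pos, letter in enumerate(word, start=1)}

cat = slot_code("CAT")        # {('C', 1), ('A', 2), ('T', 3)}
tomcat = slot_code("TOMCAT")  # {('T', 1), ('O', 2), ('M', 3), ('C', 4), ('A', 5), ('T', 6)}

# Shared units between the two codes: none, even though CAT is a
# substring of TOMCAT -- the overlap the abstract says is lost.
print(cat & tomcat)  # set()
```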
