Sequence Encoders Enable Large‐Scale Lexical Modeling: Reply to Bowers and Davis (2009)
Author(s) - Daragh E. Sibley, Christopher T. Kello, David C. Plaut, Jeffrey L. Elman
Publication year - 2009
Publication title - Cognitive Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.498
H-Index - 114
eISSN - 1551-6709
pISSN - 0364-0213
DOI - 10.1111/j.1551-6709.2009.01064.x
Subject(s) - cognitive science, computer science, artificial intelligence, natural language processing, psychology, linguistics
Sibley, Kello, Plaut, and Elman (2008) proposed the sequence encoder as a model that learns fixed‐width distributed representations of variable‐length sequences. In doing so, the sequence encoder overcomes problems that have restricted models of word reading and recognition to processing only monosyllabic words. Bowers and Davis (2009) recently claimed that the sequence encoder does not actually overcome the relevant problems, and hence it is not a useful component of large‐scale word‐reading models. In this reply, it is noted that the sequence encoder has facilitated the creation of large‐scale word‐reading models. The reasons for this success are explained and stand as counterarguments to claims made by Bowers and Davis.
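To make the core idea concrete, the sketch below shows a recurrent autoencoder in the general spirit of a sequence encoder: a variable-length letter string is read in, the recurrent network's final hidden state serves as a fixed-width distributed code, and a decoder reconstructs the string from that code. This is an illustrative assumption-laden sketch, not Sibley et al.'s (2008) actual architecture or training regime; the class name, hidden size, and letter coding are all hypothetical choices made here for demonstration.

```python
# Minimal sketch of a sequence-encoder-style autoencoder (illustrative only;
# not the architecture or training procedure used by Sibley et al., 2008).
import torch
import torch.nn as nn

class SequenceEncoder(nn.Module):
    def __init__(self, n_letters=26, hidden=64):   # sizes are arbitrary assumptions
        super().__init__()
        self.embed = nn.Embedding(n_letters, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_letters)

    def encode(self, letters):
        # letters: (1, seq_len); the final hidden state is the
        # fixed-width code, regardless of input length.
        _, h = self.encoder(self.embed(letters))
        return h                                    # shape (1, 1, hidden)

    def forward(self, letters):
        h = self.encode(letters)
        # Condition every decoding step on the fixed-width code and
        # read out per-position letter logits for reconstruction.
        steps = h.transpose(0, 1).expand(-1, letters.size(1), -1)
        out, _ = self.decoder(steps, h)
        return self.readout(out)                    # (1, seq_len, n_letters)

# Usage: a short and a long word map to codes of identical shape,
# which is what lets downstream reading models treat monosyllabic
# and multisyllabic words uniformly.
model = SequenceEncoder()
for word in ([2, 0, 19], [2, 0, 19, 0, 11, 14, 6]):  # "cat", "catalog" as indices
    code = model.encode(torch.tensor([word]))
    print(code.shape)   # torch.Size([1, 1, 64]) in both cases
```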
