Fractal Analysis Illuminates the Form of Connectionist Structural Gradualness
Author(s) - Whitney Tabor, Pyeong Whan Cho, Emily Szkudlarek
Publication year - 2013
Publication title - Topics in Cognitive Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.191
H-Index - 56
eISSN - 1756-8765
pISSN - 1756-8757
DOI - 10.1111/tops.12036
Subject(s) - connectionism, recursion (computer science), computer science, syntax, artificial intelligence, fractal, fractal analysis, generalization, artificial neural network, theoretical computer science, symbol (formal), recurrent neural network, natural language processing, cognitive science, mathematics, algorithm, fractal dimension, programming language, psychology, mathematical analysis
We examine two connectionist networks, a fractal learning neural network (FLNN) and a Simple Recurrent Network (SRN), that are trained to process center-embedded symbol sequences. Previous work provides evidence that connectionist networks trained on infinite-state languages tend to form fractal encodings. Most such work focuses on simple counting recursion cases (e.g., aⁿbⁿ), which are not comparable to the complex recursive patterns seen in natural language syntax. Here, we consider exponential state growth cases (including mirror recursion), describe a new training scheme that seems to facilitate learning, and note that the connectionist learning of these cases has a continuous metamorphosis property that looks very different from what is achievable with symbolic encodings. We identify a property, ragged progressive generalization, which helps make this difference clearer. We suggest two conclusions. First, the fractal analysis of these more complex learning cases reveals the possibility of comparing connectionist networks and symbolic models of grammatical structure in a principled way; this helps remove the black box character of connectionist networks and indicates how the theory they support differs from symbolic approaches. Second, the findings indicate the value of future, linked mathematical and empirical work on these models, something that is more possible now than it was 10 years ago.
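To make the contrast in the abstract concrete, the following minimal Python sketch (illustrative only, not code from the paper) generates strings from the two language families mentioned above: counting recursion, where a single counter determines the legal continuation, and mirror recursion, where the entire sequence of openers must be remembered, so the number of distinct states a processor must track grows exponentially with embedding depth. The function names and the two-symbol-pair alphabet are arbitrary choices for this sketch.

import random

def counting_recursion(n):
    """Generate a^n b^n: only the count n must be tracked,
    so the required memory grows linearly with n."""
    return ["a"] * n + ["b"] * n

def mirror_recursion(depth, pairs=(("a", "A"), ("b", "B"))):
    """Generate a center-embedded string x1 x2 ... X2 X1, where each
    opener xi is matched by its own closer Xi in reverse order.
    With k symbol pairs there are k**depth distinct opener sequences,
    hence exponentially many states to distinguish at the midpoint."""
    openers, closers = [], []
    for _ in range(depth):
        o, c = random.choice(pairs)
        openers.append(o)
        closers.append(c)
    return openers + closers[::-1]

if __name__ == "__main__":
    print(counting_recursion(3))  # ['a', 'a', 'a', 'b', 'b', 'b']
    print(mirror_recursion(3))    # e.g. ['b', 'a', 'b', 'B', 'A', 'B']

An SRN trained on the mirror-recursion strings must encode the stack of pending closers in its hidden state, and that hidden-state geometry is the setting in which the fractal encodings discussed above arise.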