Generative grammar, neural networks, and the implementational mapping problem: Response to Pater
Author(s) - Ewan Dunbar
Publication year - 2019
Publication title - Language
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.115
H-Index - 76
eISSN - 1535-0665
pISSN - 0097-8507
DOI - 10.1353/lan.2019.0013
Subject(s) - generative grammar, grammar, linguistics, artificial neural network, artificial intelligence, natural language processing, computer science
The target article (Pater 2019) proposes to use neural networks to model learning within existing grammatical frameworks. This is easier said than done. There is a fundamental gap to be bridged that does not receive attention in the article: how can we use neural networks to examine whether it is possible to learn some linguistic representation (a tree, for example) when, after learning is finished, we cannot even tell if this is the type of representation that has been learned (all we see is a sequence of numbers)? Drawing a correspondence between an abstract linguistic representational system and an opaque parameter vector that can (or perhaps cannot) be seen as an instance of such a representation is an implementational mapping problem. Rather than relying on existing frameworks that propose partial solutions to this problem, such as harmonic grammar, I suggest that fusional research of the kind proposed needs to directly address how to ‘find’ linguistic representations in neural network representations.
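To make the implementational mapping problem concrete, the sketch below shows one common diagnostic strategy: training a linear "probe" to recover a tree-derived quantity (here, each token's depth in a parse tree) from a network's hidden vectors. This is a hypothetical illustration, not a method proposed in the article; the data are synthetic stand-ins, and all variable names are assumptions introduced for the example. In a real study the vectors would come from a trained network and the depths from gold-standard parses, and even a successful probe would leave open exactly the questions the abstract raises about what counts as "finding" a linguistic representation.

```python
# Hypothetical sketch: can a linear map recover a tree-derived quantity
# (token depth in a parse tree) from a network's hidden vectors?
# Synthetic data only; in practice, hidden states would come from a trained
# network and depths from gold parses.
import numpy as np

rng = np.random.default_rng(0)
n_tokens, dim = 2000, 64

# Synthetic "hidden states": one latent direction carries depth, plus noise.
true_direction = rng.normal(size=dim)
depths = rng.integers(0, 8, size=n_tokens).astype(float)  # stand-in for gold tree depths
hidden = np.outer(depths, true_direction) + rng.normal(scale=2.0, size=(n_tokens, dim))

# Train / held-out split.
split = n_tokens // 2
X_train, X_test = hidden[:split], hidden[split:]
y_train, y_test = depths[:split], depths[split:]

def with_bias(X):
    # Append a constant column so the probe has an intercept term.
    return np.hstack([X, np.ones((X.shape[0], 1))])

# Fit the linear probe by ordinary least squares.
w, *_ = np.linalg.lstsq(with_bias(X_train), y_train, rcond=None)
pred = with_bias(X_test) @ w

# Held-out R^2: high if depth is linearly decodable from the vectors.
# A low score does not settle the question; the representation might encode
# trees non-linearly, or not at all.
ss_res = np.sum((y_test - pred) ** 2)
ss_tot = np.sum((y_test - y_test.mean()) ** 2)
print(f"held-out R^2 of linear depth probe: {1 - ss_res / ss_tot:.3f}")
```

The restriction to a linear probe is itself a substantive choice: a more powerful probe can "find" structure that the network never uses, while a weaker one can miss structure that is genuinely there, which is one face of the mapping problem the abstract describes.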