Information‐theoretical Complexity Metrics
Author(s) - Hale, John
Publication year - 2016
Publication title - Language and Linguistics Compass
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.619
H-Index - 44
ISSN - 1749-818X
DOI - 10.1111/lnc3.12196
Subject(s) - parsing , computer science , rule based machine translation , artificial intelligence , natural language processing , grammar , entropy (arrow of time) , linguistics , cognitive grammar , value (mathematics) , information theory , cognition , cognitive science , psychology , mathematics , machine learning , philosophy , physics , statistics , quantum mechanics , neuroscience
Information‐theoretical complexity metrics are auxiliary hypotheses that link theories of parsing and grammar to potentially observable measurements such as reading times and neural signals. This review article considers two such metrics, Surprisal and Entropy Reduction, which are respectively built upon the two most natural notions of ‘information value’ for an observed event (Blachman 1968). This review sketches their conceptual background and touches on their relationship to other theories in cognitive science. It characterizes them as ‘lenses’ through which theorists ‘see’ the information‐processing consequences of linguistic grammars. While these metrics are not themselves parsing algorithms, the review identifies candidate mechanisms that have been proposed for both of them.
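As a point of reference, the standard formulations of the two metrics in the surprisal and entropy-reduction literature (not reproduced in this abstract, so the notation below is supplied here) can be sketched as follows. The surprisal of word $w_i$ is its negative log conditional probability given the preceding words:

\[
\mathrm{surprisal}(w_i) \;=\; -\log_2 P(w_i \mid w_1 \ldots w_{i-1})
\]

Entropy Reduction takes $H_i$ to be the entropy of the distribution over grammatical derivations (continuations) still compatible with the prefix $w_1 \ldots w_i$, and scores each word by the non-negative drop in that entropy:

\[
H_i \;=\; -\sum_{d} P(d \mid w_1 \ldots w_i)\,\log_2 P(d \mid w_1 \ldots w_i),
\qquad
\mathrm{ER}(w_i) \;=\; \max\bigl(0,\; H_{i-1} - H_i\bigr)
\]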
