The Role of Negative Information in Distributional Semantic Learning
Author(s) - Johns Brendan T., Mewhort Douglas J. K., Jones Michael N.
Publication year - 2019
Publication title - Cognitive Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.498
H-Index - 114
eISSN - 1551-6709
pISSN - 0364-0213
DOI - 10.1111/cogs.12730
Subject(s) - word2vec , computer science , word embedding , artificial intelligence , context , natural language processing , word , representation , semantics , machine learning , embedding , mathematics
Abstract Distributional models of semantics learn word meanings from contextual co-occurrence patterns across a large sample of natural language. Early models, such as LSA and HAL (Landauer & Dumais, 1997; Lund & Burgess, 1996), counted co-occurrence events; later models, such as BEAGLE (Jones & Mewhort, 2007), replaced counting co-occurrences with vector accumulation. All of these models learned from positive information only: Words that occur together within a context become related to each other. A recent class of distributional models, referred to as neural embedding models, is based on a prediction process embedded in the functioning of a neural network: Such models predict words that should surround a target word in a given context (e.g., word2vec; Mikolov, Sutskever, Chen, Corrado, & Dean, 2013). An error signal derived from the prediction is used to update each word's representation via backpropagation. However, another key difference in predictive models is their use of negative information, in addition to positive information, to develop a semantic representation: The models use negative examples to predict words that should not surround a word in a given context. As before, an error signal derived from the prediction prompts an update of the word's representation, a procedure referred to as negative sampling. Standard uses of word2vec recommend at least as much negative as positive sampling. The use of negative information in developing a semantic representation is often thought to be intimately associated with word2vec's prediction process. We assess the role of negative information in developing a semantic representation and show that its power does not reflect the use of a prediction mechanism. Finally, we show how negative information can be efficiently integrated into classic count-based semantic models using parameter-free analytical transformations.
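
Note - The abstract describes negative sampling only at a high level (predict words that should not occur in a given context and use the prediction error to update word vectors). The Python/NumPy listing below is a minimal, illustrative sketch of one skip-gram-with-negative-sampling update; the vocabulary size, dimensionality, learning rate, number of negative samples, and the uniform sampling distribution are assumptions made for the example, not details taken from the paper.

    # Illustrative sketch only; not the authors' implementation.
    import numpy as np

    rng = np.random.default_rng(0)
    vocab_size, dim, lr, k = 1000, 50, 0.025, 5   # k = negative samples per positive pair

    W_in = rng.normal(0, 0.1, (vocab_size, dim))   # target-word vectors
    W_out = rng.normal(0, 0.1, (vocab_size, dim))  # context-word vectors

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sgns_update(target, context, unigram_probs):
        """One update: pull the observed context word toward the target
        (positive information) and push k sampled words away from it
        (negative information)."""
        negatives = rng.choice(vocab_size, size=k, p=unigram_probs)
        v_t = W_in[target]
        grad_in = np.zeros(dim)
        for word, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
            score = sigmoid(v_t @ W_out[word])
            err = score - label                 # prediction-error signal
            grad_in += err * W_out[word]        # accumulate gradient for the target vector
            W_out[word] -= lr * err * v_t       # update the context/negative vector
        W_in[target] -= lr * grad_in            # update the target vector

    # Example call with a uniform sampling distribution (an assumption for brevity;
    # word2vec typically samples negatives from a smoothed unigram distribution).
    uniform = np.full(vocab_size, 1.0 / vocab_size)
    sgns_update(target=3, context=17, unigram_probs=uniform)

In a typical skip-gram setup this update is applied to every (target, context) pair produced by a sliding window over the corpus, so each word's vector is shaped jointly by the positive pairs it appears in and by the negatives sampled against it.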