Context Matters: Recovering Human Semantic Structure from Machine Learning Analysis of Large‐Scale Text Corpora
Author(s) - Marius Cătălin Iordan, Tyler Giallanza, Cameron T. Ellis, Nicole M. Beckage, Jonathan D. Cohen
Publication year - 2022
Publication title - Cognitive Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.498
H-Index - 114
eISSN - 1551-6709
pISSN - 0364-0213
DOI - 10.1111/cogs.13085
Subject(s) - computer science , artificial intelligence , natural language processing , machine learning , semantic similarity , context , embedding , empirical research , similarity , dimensionality reduction , representation , statistics
Applying machine learning algorithms to automatically infer relationships between concepts from large‐scale collections of documents presents a unique opportunity to investigate at scale how human semantic knowledge is organized, how people use it to make fundamental judgments (“How similar are cats and bears?”), and how these judgments depend on the features that describe concepts (e.g., size, furriness). However, efforts to date have exhibited a substantial discrepancy between algorithm predictions and human empirical judgments. Here, we introduce a novel approach to generating embeddings for this purpose motivated by the idea that semantic context plays a critical role in human judgment. We leverage this idea by constraining the topic or domain from which documents used for generating embeddings are drawn (e.g., referring to the natural world vs. transportation apparatus). Specifically, we trained state‐of‐the‐art machine learning algorithms using contextually‐constrained text corpora (domain‐specific subsets of Wikipedia articles, 50+ million words each) and showed that this procedure greatly improved predictions of empirical similarity judgments and feature ratings of contextually relevant concepts. Furthermore, we describe a novel, computationally tractable method for improving predictions of contextually‐unconstrained embedding models based on dimensionality reduction of their internal representation to a small number of contextually relevant semantic features. By improving the correspondence between predictions derived automatically by machine learning methods using vast amounts of data and more limited, but direct, empirical measurements of human judgments, our approach may help leverage the availability of online corpora to better understand the structure of human semantic representations and how people make judgments based on those representations.
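The abstract describes the two methods only at a high level, so the following is a minimal toy sketch of both ideas rather than the paper's implementation. It assumes word2vec (via gensim) as a stand-in for the paper's "state-of-the-art" embedding models, and the corpus, word pairs, human similarity ratings, and feature ratings in it are all hypothetical placeholders, not data from the study.

```python
"""Toy sketch: (1) train embeddings on a contextually constrained corpus,
(2) score predicted similarities against human judgments, and
(3) reduce embeddings to a few contextually relevant semantic features.
All data below are illustrative placeholders."""
import numpy as np
from gensim.models import Word2Vec
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge

# --- 1. Contextually constrained embeddings -------------------------------
# Train only on documents drawn from one semantic domain (here, a tiny
# stand-in for a "natural world" subset of Wikipedia).
nature_corpus = [
    ["cats", "and", "bears", "are", "furry", "animals"],
    ["bears", "are", "large", "wild", "animals"],
    ["cats", "are", "small", "domestic", "animals"],
] * 50  # repeat so the toy model sees enough co-occurrences

model = Word2Vec(nature_corpus, vector_size=32, window=3, min_count=1, seed=0)

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# --- 2. Evaluate against human judgments ----------------------------------
# Correlate model-predicted similarities with empirical similarity ratings
# (hypothetical numbers; the paper uses real behavioral data).
pairs = [("cats", "bears"), ("cats", "animals"), ("bears", "animals")]
human = [0.55, 0.70, 0.75]
preds = [cosine(model.wv[a], model.wv[b]) for a, b in pairs]
rho, _ = spearmanr(preds, human)
print(f"Spearman rho vs. human judgments: {rho:.2f}")

# --- 3. Reducing unconstrained embeddings to semantic features ------------
# One way to realize the dimensionality-reduction idea: learn a linear map
# from embedding space to a handful of contextually relevant features
# (e.g., size, furriness), then compare concepts in that low-dimensional
# feature space instead of the full embedding space.
words = ["cats", "bears", "animals"]
X = np.stack([model.wv[w] for w in words])
feature_ratings = np.array([[0.2, 0.9],   # hypothetical [size, furriness]
                            [0.9, 0.8],
                            [0.5, 0.5]])
reducer = Ridge(alpha=1.0).fit(X, feature_ratings)
reduced = reducer.predict(X)  # each concept as a short feature vector
print("cats vs. bears in feature space:", round(cosine(reduced[0], reduced[1]), 2))
```

In this sketch the constrained-corpus step and the feature-projection step are independent, so the same evaluation against human ratings can be run on either the raw embedding similarities or the reduced feature-space similarities.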
