Open Access
Large linguistic models: Investigating LLMs’ metalinguistic abilities
Author(s) -
Gašper Beguš,
Maksymilian Dąbkowski,
Ryan Rhodes
Publication year - 2025
Publication title -
IEEE Transactions on Artificial Intelligence
Language(s) - English
Resource type - Magazines
eISSN - 2691-4581
DOI - 10.1109/tai.2025.3575745
Subject(s) - computing and processing
The performance of large language models (LLMs) has recently improved to the point where they perform well on many language tasks. We show here that, for the first time, the models can also generate valid metalinguistic analyses of language data. We outline a research program in which the behavioral interpretability of LLMs on these tasks is tested via prompting. Because LLMs are trained primarily on text, evaluating their metalinguistic abilities improves our understanding of their general capabilities and sheds new light on theoretical models in linguistics. We show that OpenAI's (2024) o1 vastly outperforms other models on tasks involving drawing syntactic trees and phonological generalization. We speculate that OpenAI o1's unique advantage over other models may result from the model's chain-of-thought mechanism, which mimics the structure of human reasoning used in complex cognitive tasks, such as linguistic analysis.
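As a rough illustration of the kind of behavioral prompting probe the abstract describes, the sketch below asks a model for a labeled-bracket syntactic analysis of a single sentence via the OpenAI Python SDK. It is not the authors' evaluation code: the prompt wording, example sentence, and model identifier ("o1") are assumptions made for illustration only.

```python
# Minimal sketch of a metalinguistic prompting probe: ask a model for a
# labeled-bracket syntactic analysis of a sentence and print its reply.
# Requires the official `openai` package and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

sentence = "The cat that the dog chased slept."

# Illustrative prompt wording, not the authors' actual stimulus.
prompt = (
    "Provide a syntactic analysis of the following sentence as a "
    "labeled bracketing, e.g. [S [NP ...] [VP ...]]. "
    f"Sentence: {sentence}"
)

response = client.chat.completions.create(
    model="o1",  # model identifier assumed; substitute any available model
    messages=[{"role": "user", "content": prompt}],
)

# The returned bracketing could then be compared against a gold-standard
# parse, or against the output of other models on the same prompt.
print(response.choices[0].message.content)
```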
