
Artificial intelligence language models and the false fantasy of participatory language policies
Author(s) - Mandy Lau
Publication year - 2021
Publication title - Working Papers in Applied Linguistics and Linguistics at York
Language(s) - English
Resource type - Journals
ISSN - 2564-2855
DOI - 10.25071/2564-2855.5
Subject(s) - language policy, language industry, citizen journalism, constructed language, computer science, sociology, language model, language education, linguistics, artificial intelligence, pedagogy, world wide web, philosophy
Artificial intelligence neural language models learn from a corpus of online language data, often drawn directly from user-generated content through crowdsourcing or the gift economy, bypassing traditional keepers of language policy and planning (such as governments and institutions). Herein lies the dream that the languages of the digital world can bend towards individual needs and wants, and not the other way around. Through the participatory language work of users, linguistic diversity, accessibility, personalization, and inclusion can supposedly be increased. However, the promise of a more participatory, just, and emancipatory language policy emerging from neural language models is a false fantasy. I argue that neural language models represent a covert and oppressive form of language policy that benefits the privileged and harms the marginalized. Here, I examine the ideology underpinning neural language models and investigate the harms that result from these emerging, subversive regulatory bodies.