Open Access
Investigating cross-lingual training for offensive language detection
Author(s) - Andraž Pelicon, Ravi Shekhar, Blaž Škrlj, Matthew Purver, Senja Pollak
Publication year - 2021
Publication title - PeerJ Computer Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.806
H-Index - 24
ISSN - 2376-5992
DOI - 10.7717/peerj-cs.559
Subject(s) - computer science , artificial intelligence , natural language processing , offensive language detection , language model , transfer learning , machine learning , classifier , training set
Platforms that feature user-generated content (social media, online forums, newspaper comment sections, etc.) have to detect and filter offensive speech within large, fast-changing datasets. While many automatic methods have been proposed and achieve good accuracy, most of these focus on the English language and are hard to apply directly to languages in which few labeled datasets exist. Recent work has therefore investigated the use of cross-lingual transfer learning to solve this problem, training a model in a well-resourced language and transferring it to a less-resourced target language; but performance has so far been significantly less impressive. In this paper, we investigate the reasons for this performance drop via a systematic comparison of pre-trained models and intermediate training regimes on five different languages. We show that using a better pre-trained language model results in a large gain in overall performance and in zero-shot transfer, and that intermediate training on other languages is effective when little target-language data is available. We then use multiple analyses of classifier confidence and of language model vocabulary to show exactly where these gains come from and to identify the sources of the most typical mistakes.
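
The core setup the abstract describes, fine-tuning a multilingual pre-trained model on source-language data and evaluating it zero-shot on a target language, can be sketched as follows. This is a minimal illustration, not the authors' pipeline: it assumes the HuggingFace transformers library, uses bert-base-multilingual-cased as a stand-in for the pre-trained models compared in the paper, and replaces the real datasets with a handful of toy examples (the target-language sentences are hypothetical).

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: a generic multilingual encoder; this name is only an
# illustrative stand-in for the models compared in the paper.
MODEL_NAME = "bert-base-multilingual-cased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def encode(texts, labels):
    # Tokenize a batch and attach gold labels for the classification head.
    batch = tokenizer(texts, padding=True, truncation=True, max_length=128,
                      return_tensors="pt")
    batch["labels"] = torch.tensor(labels)
    return batch

# Toy stand-ins for real corpora: an English training set (the
# well-resourced source language) and a target-language test set.
train_texts = ["you are a complete idiot", "have a lovely day"]
train_labels = [1, 0]  # 1 = offensive, 0 = not offensive
target_texts = ["ti si popoln idiot", "želim ti lep dan"]  # hypothetical Slovene examples
target_labels = [1, 0]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Fine-tune on the source-language (English) data only.
model.train()
for _ in range(3):
    batch = encode(train_texts, train_labels)
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Zero-shot evaluation: the model has never seen a labeled
# target-language example.
model.eval()
with torch.no_grad():
    logits = model(**encode(target_texts, target_labels)).logits
    print("zero-shot predictions:", logits.argmax(dim=-1).tolist())

In this sketch, the intermediate training regime the abstract mentions would correspond to inserting an extra fine-tuning stage on labeled data from a second, non-target language between the English stage and the zero-shot evaluation.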