
Ethical and technical challenges of AI in tackling hate speech
Author(s) - Diogo Cortiz, Arkaitz Zubiaga
Publication year - 2021
Publication title - International Review of Information Ethics
Language(s) - English
Resource type - Journals
ISSN - 2563-5638
DOI - 10.29173/irie416
Subject(s) - social media , moderation , computer science , data science , world wide web , machine learning
In this paper, we discuss some of the ethical and technical challenges of using Artificial Intelligence for online content moderation. As a case study, we use an AI model developed to detect hate speech on social networks, a concept for which the scientific literature offers varying definitions and lacks consensus. We argue that while AI can play a central role in dealing with information overload on social media, it risks violating freedom of expression if the project is not carefully conducted. We present ethical and technical challenges that arise across the entire pipeline of an AI project - from data collection to model evaluation - and that hinder the large-scale use of hate speech detection algorithms. Finally, we argue that AI can assist with the detection of hate speech on social media, provided that the final judgment about the content is made through a process with human involvement.
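The human-in-the-loop arrangement the abstract argues for can be sketched as a confidence-based routing step: the classifier acts automatically only on near-certain predictions, and everything in the ambiguous middle band is escalated to human moderators. The function names, thresholds, and the keyword-based stand-in classifier below are all illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    post_id: int
    action: str   # "remove", "keep", or "human_review"
    score: float  # model's hate-speech probability

def moderate(posts, score_fn, remove_threshold=0.95, keep_threshold=0.05):
    """Route each post based on the classifier's confidence.

    Only near-certain predictions are handled automatically; the
    ambiguous middle band is escalated to human reviewers, keeping
    the final judgment on borderline content with people.
    """
    decisions = []
    for post_id, text in posts:
        score = score_fn(text)
        if score >= remove_threshold:
            action = "remove"
        elif score <= keep_threshold:
            action = "keep"
        else:
            action = "human_review"
        decisions.append(Decision(post_id, action, score))
    return decisions

# Toy stand-in for a trained classifier (a keyword heuristic,
# used here only so the sketch runs end to end).
def toy_score(text):
    if "slur" in text:
        return 0.99
    if "?" in text:
        return 0.50  # ambiguous phrasing -> uncertain score
    return 0.01

posts = [(1, "have a nice day"), (2, "borderline post?"), (3, "slur slur")]
results = moderate(posts, toy_score)
print([d.action for d in results])  # ['keep', 'human_review', 'remove']
```

The two thresholds make the trade-off the paper worries about explicit: widening the automatic "remove" band scales moderation but increases the risk of over-blocking legitimate speech, while widening the human-review band preserves due process at higher cost.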