Open Access
SAFE FROM “HARM”: THE GOVERNANCE OF VIOLENCE BY PLATFORMS
Author(s) - Julia R. DeCook, Kelley Cotter, Shaheen Kanthawala
Publication year - 2021
Publication title - Selected Papers of Internet Research
Language(s) - English
Resource type - Journals
ISSN - 2162-3317
DOI - 10.5210/spir.v2021i0.12160
Subject(s) - harm, normative, corporate governance, culpability, internet governance, ideology, political science, criminology, public relations, sociology, internet privacy, law and economics, law, business, politics, computer science, finance
Platforms have long been under fire for how they create and enforce policies around hate speech, harmful content, and violence. In this study, we examine how three major platforms (Facebook, Twitter, and YouTube) conceptualize and implement policies for moderating “harm,” “violence,” and “danger.” Through a feminist discourse analysis of public-facing policy documents from official blogs and help pages, we found that platforms often define harm and violence narrowly, in ways that perpetuate ideological hegemony around what violence is, how it manifests, and whom it affects. Through this governance, they continue to control normative notions of harm and violence, deny their culpability, effectively manage perceptions of their actions, and direct users’ understanding of what is “harmful” and what is not. Rather than changing the design mechanisms that enable harm, the platforms reconfigure intentionality and causality to try to stop users from being “harmful,” which, ironically, perpetuates harm.