Robots as Malevolent Moral Agents: Harmful Behavior Results in Dehumanization, Not Anthropomorphism
Author(s) - Swiderska Aleksandra, Küster Dennis
Publication year - 2020
Publication title - Cognitive Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.498
H-Index - 114
eISSN - 1551-6709
pISSN - 0364-0213
DOI - 10.1111/cogs.12872
Subject(s) - dehumanization, psychology, harm, attribution, denial, agency (philosophy), perception, social psychology, personhood, cognitive psychology, epistemology, sociology, psychoanalysis, philosophy, neuroscience, anthropology
A robot's decision to harm a person is sometimes considered the ultimate proof of its gaining a human‐like mind. Here, we contrasted predictions about the attribution of mental capacities derived from moral typecasting theory with the denial of agency described in the dehumanization literature. Experiments 1 and 2 investigated mind perception for intentionally and accidentally harmful robotic agents based on text and image vignettes. Experiment 3 disambiguated agent intention (malevolent vs. benevolent) and additionally varied the type of agent (robotic vs. human) using short computer‐generated animations. Harmful robotic agents were consistently imbued with mental states to a lower degree than benevolent agents, supporting the dehumanization account. Further results revealed that a human moral patient appeared to suffer less when depicted with a robotic agent than with another human. The findings suggest that future robots may become subject to human‐like dehumanization mechanisms, challenging established beliefs about anthropomorphism in the domain of moral interactions.