
Robot Criminals
Author(s) -
Ying Hu
Publication year - 2019
Publication title -
University of Michigan Journal of Law Reform
Language(s) - English
Resource type - Journals
eISSN - 2688-4933
pISSN - 0363-602X
DOI - 10.36646/mjlr.52.2.robot
Subject(s) - robot , harm , liability , criminal law , moral agency , misconduct , law , law and economics , political science , philosophy , artificial intelligence , computer science
When a robot harms humans, are there any grounds for holding it criminally liable for its misconduct? Yes, provided that the robot is capable of making, acting on, and communicating the reasons behind its moral decisions. If such a robot fails to observe the minimum moral standards that society requires of it, labeling it as a criminal can effectively fulfill criminal law’s function of censuring wrongful conduct and alleviating the emotional harm that may be inflicted on human victims. Imposing criminal liability on robots does not absolve robot manufacturers, trainers, or owners of their individual criminal liability; robot liability is not rendered redundant by human liability. It is possible that no human is sufficiently at fault in causing a robot to commit a particular morally wrongful action. Additionally, imposing criminal liability on robots might sometimes have significant instrumental value, such as helping to identify culpable individuals and serving as a self-policing device for individuals who interact with robots. Finally, treating robots that satisfy the above-mentioned conditions as moral agents appears much more plausible if we adopt a less human-centric account of moral agency.