
When Is a Robot a Moral Agent?
Author(s) -
John P. Sullins III
Publication year - 2006
Publication title -
international review of information ethics
Language(s) - English
Resource type - Journals
ISSN - 2563-5638
DOI - 10.29173/irie136
Subject(s) - harm, moral agency, robot, personhood, agency (philosophy), moral disengagement, computer science, psychology, social psychology, artificial intelligence, political science, epistemology, law, philosophy
In this paper I argue that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. The second is when one can analyze or explain the robot's behavior only by ascribing to it some predisposition or 'intention' to do good or harm. And finally, robot moral agency requires the robot to behave in a way that shows an understanding of responsibility to some other moral agent. Robots meeting all of these criteria will have moral rights as well as responsibilities regardless of their status as persons.