Killer Robots
Author(s) - Robert Sparrow
Publication year - 2007
Publication title - Journal of Applied Philosophy
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.339
H-Index - 30
eISSN - 1468-5930
pISSN - 0264-3758
DOI - 10.1111/j.1468-5930.2007.00346.x
The United States Army's Future Combat Systems Project, which aims to manufacture a ‘robot army’ to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of the decision to send artificially intelligent robots into war, by asking whom we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally be described as a war crime. A number of possible loci of responsibility for robot war crimes are canvassed: the persons who designed or programmed the system, the commanding officer who ordered its use, and the machine itself. I argue that in fact none of these is ultimately satisfactory. Yet it is a necessary condition for fighting a just war, under the principle of jus in bello, that someone can be justly held responsible for deaths that occur in the course of the war. As this condition cannot be met in relation to deaths caused by an autonomous weapon system, it would therefore be unethical to deploy such systems in warfare.
