Human Cooperation When Acting Through Autonomous Machines
Author(s) - Celso M. de Melo, Stacy Marsella, Jonathan Gratch
Publication year - 2019
Publication title - Proceedings of the National Academy of Sciences
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 5.011
H-Index - 771
eISSN - 1091-6490
pISSN - 0027-8424
DOI - 10.1073/pnas.1817656116
Subject(s) - computer science, autonomous agent, human–computer interaction, artificial intelligence, computer security
Significance - Autonomous machines that act on our behalf—such as robots, drones, and autonomous vehicles—are quickly becoming a reality. These machines will face situations where individual interest conflicts with collective interest, and it is critical that we understand whether people will cooperate when acting through them. Here we show, in the increasingly popular domain of autonomous vehicles, that people program their vehicles to be more cooperative than they would be if driving themselves. This happens because programming machines makes selfish short-term rewards less salient, which encourages cooperation. Our results further indicate that personal experience influences how machines are programmed. Finally, we show that this effect generalizes beyond the domain of autonomous vehicles, and we discuss theoretical and practical implications.