Analysis of Human–Robot Interaction at the DARPA Robotics Challenge Trials
Author(s) -
Holly A. Yanco,
Adam Norton,
Willard Ober,
David Shane,
Anna Skinner,
Jack Vice
Publication year - 2015
Publication title -
Journal of Field Robotics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.152
H-Index - 96
eISSN - 1556-4967
pISSN - 1556-4959
DOI - 10.1002/rob.21568
Subject(s) - robot, robotics, human–robot interaction, artificial intelligence, human–computer interaction, automation, task (project management), field (mathematics), humanoid robot, control (management), agency (philosophy), engineering, computer science, systems engineering, mechanical engineering, philosophy, mathematics, epistemology, pure mathematics
In December 2013, the Defense Advanced Research Projects Agency (DARPA) Robotics Challenge (DRC) Trials were held in Homestead, Florida. The DRC Trials were designed to test the capabilities of humanoid robots in disaster response scenarios with degraded communications. Each team created its own interaction method to control its robot, either the Boston Dynamics Atlas robot or a robot built by the team itself. Of the 15 competing teams, eight participated in our study of human–robot interaction. We observed the participating teams both in the field (with the robot) and in the control room (with the operators), recording performance metrics such as critical incidents and utterances, and categorizing each team's interaction methods by number of operators, control methods, and amount of interaction. We decomposed each task into a series of subtasks, distinct from the official DRC Trials point-scoring subtasks, to better understand each team's performance across varying complexities of mobility and manipulation. We compared each team's interaction methods to its performance and analyzed correlations to understand why some teams ranked higher than others. We discuss lessons learned from this study and find that, in general, the established guidelines for human–robot interaction with unmanned ground vehicles still hold true: more sensor fusion, fewer operators, and more automation lead to better performance.