Grounding English Commands to Reward Functions
Author(s) -
James MacGlashan,
Monica Babeş-Vroman,
Marie desJardins,
Michael L. Littman,
Smaranda Muresan,
Shawn Squire,
Stefanie Tellex,
Dilip Arumugam,
Lei Yang
Publication year - 2015
Language(s) - English
Resource type - Conference proceedings
DOI - 10.15607/rss.2015.xi.018
Subject(s) - computer science, ground, human–computer interaction, engineering, electrical engineering
Abstract - As intelligent robots become more prevalent, methods for making interaction with them more accessible are increasingly important. Communicating the tasks a person wants a robot to carry out via natural language, and training the robot to ground that language through demonstration, are especially appealing approaches to interaction because they require no technical background. However, existing approaches map natural language commands to robot command languages that directly express the sequence of actions the robot should execute. This sequence is often specific to a particular situation and does not generalize to new situations. To address this problem, we present a system that grounds natural language commands into reward functions, using demonstrations of different natural language commands being carried out in the environment. Because language is grounded to reward functions rather than to explicit actions the robot can perform, commands can be high-level, carried out autonomously in novel environments, and even transferred to other robots with different action spaces. We demonstrate that our learned model both generalizes to novel environments and transfers to a robot with an action space different from the one used during training.
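To make the central idea concrete, below is a minimal, hypothetical Python sketch, not the authors' implementation: a command is grounded to a reward function defined over state features rather than to a fixed action sequence, and a generic planner (here, simple value iteration on a toy corridor world) then produces behavior in whatever environment the robot is placed in. The function names, the feature set, and the toy environment are illustrative assumptions only; in the paper the grounding is learned from demonstrations paired with commands.

```python
# Hypothetical sketch: ground a command to a reward function, then plan with it.
# Not the paper's implementation; names, features, and the toy world are assumptions.

def ground_command(command):
    # Assumed toy grounding: return a reward over abstract state features.
    # The learned model would instead infer this mapping from demonstrations.
    if "red room" in command:
        return lambda features: 1.0 if features["in_red_room"] else 0.0
    raise ValueError("no grounding learned for this command")

def value_iteration(states, actions, transition, reward, gamma=0.95, iters=200):
    # Generic planner: works for any environment exposing states, actions,
    # and a (deterministic, for simplicity) transition function.
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        for s in states:
            V[s] = max(reward(s) + gamma * V[transition(s, a)] for a in actions)
    return V

def greedy_policy(states, actions, transition, reward, V, gamma=0.95):
    return {s: max(actions, key=lambda a: reward(s) + gamma * V[transition(s, a)])
            for s in states}

# Toy 1-D corridor with five cells; cells 3 and 4 count as the "red room".
states = list(range(5))
actions = ["left", "right"]

def transition(s, a):
    return max(0, s - 1) if a == "left" else min(4, s + 1)

def features(s):
    return {"in_red_room": s >= 3}

reward_fn = ground_command("go to the red room")
reward = lambda s: reward_fn(features(s))
V = value_iteration(states, actions, transition, reward)
policy = greedy_policy(states, actions, transition, reward, V)
print(policy)  # cells 0-2 choose "right", i.e. move toward the red room
```

Because only the planner depends on the environment's states, actions, and transitions, the same grounded reward could, in principle, be reused in a novel environment or on a robot with a different action space, which is the generalization and transfer property the abstract describes.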