
Vision-Based Imitation Learning in Heterogeneous Multi-Robot Systems: Varying Physiology and Skill
Author(s) -
Jeff Allen,
John Anderson,
Jacky Baltes
Publication year - 2012
Publication title -
International Journal of Automation and Smart Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.148
H-Index - 10
ISSN - 2223-9766
DOI - 10.5875/ausmt.v2i2.111
Subject(s) - imitation, task (project management), computer science, robot, artificial intelligence, set (abstract data type), human–computer interaction, machine learning, psychology, engineering, social psychology, systems engineering, programming language
Imitation learning enables a learner to improve its abilities by observing others. Most robotic imitation learning systems learn only from demonstrators that are physically similar and of similar skill. To employ imitation learning in a heterogeneous multi-agent environment, we must account for differences in both skill and physical makeup (physiology, size). This paper describes an approach to imitation learning from heterogeneous demonstrators using global vision. It supports learning from physiologically different demonstrators (wheeled and legged, of various sizes) and self-adapts to demonstrators of varying skill. The latter allows different parts of a task to be learned from different individuals; that is, worthwhile parts of a task can still be learned from a poorly performing demonstrator. We assume the imitator has no initial knowledge of the observable effects of its own actions, and we train a set of Hidden Markov Models to build a model of the imitator's own abilities. We then combine tracking of primitive sequences with forward models that predict future primitives from observed combinations, allowing abstract behaviors to be learned from demonstrations. This approach is evaluated using a group of heterogeneous robots previously used in RoboCup soccer competitions.
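To make the pipeline sketched in the abstract concrete, the following is a minimal illustrative sketch, not the authors' implementation: one Gaussian HMM per motor primitive is trained on globally tracked pose sequences gathered while the imitator executes that primitive (the self-modeling step), a new observed segment is labeled with the primitive whose HMM scores it highest, and a crude bigram table stands in for the paper's forward models that predict the next primitive. The library choice (hmmlearn), the (x, y, theta) feature layout, and the primitive names are all assumptions for illustration; the paper's actual features and models may differ.

```python
# Minimal sketch of HMM-based self-modeling and primitive prediction.
# NOT the authors' code: library, features, and primitive names are assumed.
from collections import Counter, defaultdict

import numpy as np
from hmmlearn import hmm

PRIMITIVES = ["forward", "turn_left", "turn_right"]  # hypothetical labels


def train_self_models(babbling_data, n_states=4):
    """Train one Gaussian HMM per primitive.

    babbling_data: {primitive_name: list of (T_i, 3) pose arrays (x, y, theta)}
    recorded by global vision while the imitator performs that primitive.
    """
    models = {}
    for name, trajectories in babbling_data.items():
        X = np.vstack(trajectories)               # stack all sequences
        lengths = [len(t) for t in trajectories]  # per-sequence lengths
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[name] = m
    return models


def classify_segment(models, segment):
    """Label an observed demonstrator segment with the imitator's own
    primitive whose HMM assigns the highest log-likelihood."""
    scores = {name: m.score(segment) for name, m in models.items()}
    return max(scores, key=scores.get)


def build_forward_model(primitive_sequences):
    """Crude stand-in for the paper's forward models: bigram counts over
    recognized primitive sequences, used to predict the next primitive."""
    counts = defaultdict(Counter)
    for seq in primitive_sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts


def predict_next(counts, current):
    """Return the most frequent successor of `current`, or None."""
    nxt = counts.get(current)
    return nxt.most_common(1)[0][0] if nxt else None
```

Under these assumptions, a demonstration is processed by slicing the tracked trajectory into segments, mapping each segment to a primitive with classify_segment, and feeding the resulting label sequences to build_forward_model; skill differences between demonstrators then show up only as differing label sequences, so useful sub-sequences can still be extracted from a poor demonstrator.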