Can state-of-the-art saliency systems model infant gazing behavior in tutoring situations?
Author(s) -
Britta Wrede
Publication year - 2011
Publication title -
Frontiers in Computational Neuroscience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.794
H-Index - 58
ISSN - 1662-5188
DOI - 10.3389/conf.fncom.2011.52.00035
Subject(s) - computer science, humanoid robot, human–computer interaction, artificial intelligence, robot
The behavior of a humanoid robot is often modeled on human behavior. Current research suggests that analyzing infant behavior as a basis for designing robot behavior can guide us toward a natural robot interface. Based on this idea, many researchers favor saliency systems as a bottom-up inspired way to simulate infant-like gazing behavior. In the field of saliency systems, many different approaches have been proposed and quantified in terms of speed, quality, and other technical criteria. But so far, no one has compared and quantified them in terms of natural infant–tutor interaction. The question we would like to address in this paper is: Can state-of-the-art saliency systems model infant gazing behavior in tutoring situations? By addressing this question, we want to take a step toward an autonomous robot system that could be used for more natural interaction experiments in the future.
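The bottom-up saliency systems the abstract refers to typically derive a saliency map from center-surround contrast in low-level image features. As a rough illustration of that idea (a deliberate simplification of Itti-Koch-style models; the function names and parameters below are illustrative, not taken from any system evaluated in the paper), the intensity channel can be compared at a fine and a coarse scale:

```python
import numpy as np

def box_blur(img: np.ndarray, radius: int) -> np.ndarray:
    """Box-filter blur via an explicit sliding-window mean (edge padding)."""
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    size = 2 * radius + 1
    for dy in range(size):
        for dx in range(size):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

def saliency_map(intensity: np.ndarray) -> np.ndarray:
    """Center-surround contrast: |fine scale - coarse scale|, scaled to [0, 1]."""
    center = box_blur(intensity, 1)    # "center" scale: slightly smoothed
    surround = box_blur(intensity, 5)  # "surround" scale: heavily smoothed
    s = np.abs(center - surround)
    return s / s.max() if s.max() > 0 else s

# A bright blob on a dark background should attract the model's "gaze":
img = np.zeros((32, 32))
img[14:18, 14:18] = 1.0
sal = saliency_map(img)
peak = np.unravel_index(np.argmax(sal), sal.shape)  # location of maximal saliency
```

The peak of the map falls on the isolated bright blob, which is the qualitative behavior such models share; full systems add color and orientation channels, multiple pyramid scales, and normalization before combining the feature maps.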