Extracting User Intent in Mixed Initiative Teleoperator Control
Author(s) -
Andrew H. Fagg,
Michael T. Rosenstein,
Robert W. Platt,
Roderic A. Grupen
Publication year - 2004
Publication title -
ScholarWorks@UMass Amherst (University of Massachusetts Amherst)
Language(s) - English
Resource type - Conference proceedings
DOI - 10.2514/6.2004-6309
Subject(s) - computer science , control (management) , human–computer interaction , artificial intelligence
User fatigue is common with robot teleoperation interfaces. Mixed-initiative control approaches attempt to reduce this fatigue by allowing control responsibility to be shared between the user and an intelligent control system. A critical challenge is how the user can communicate her intentions to the control system in as intuitive a manner as possible. In the context of control of a humanoid robot, we propose an interface that uses the movement currently commanded by the user to assess the intended outcome. Specifically, given the observation of the motion of the teleoperated robot over a given period of time, we would like to automatically generate an abstract explanation of that movement. Such an explanation should facilitate the execution of the same movement under the same or similar conditions in the future.

How do we translate these observations of teleoperator behavior into a deep representation of the teleoperator's intentions? Neurophysiological evidence suggests that in primates, the mechanisms for recognizing the actions of other agents are intertwined with the mechanisms for executing the same actions. For example, Rizzolatti et al. (1988) identified neurons within the ventral premotor cortex of the monkey that fired during execution of specific grasping movements. Although this area is traditionally regarded as a motor execution area, Rizzolatti et al. (1996) showed that neurons in a subarea were active not only when the monkey executed certain grasping actions, but also when the monkey observed others making similar movements. These and other results suggest that generators of action could also facilitate the recognition of motor actions taken by another entity (in our case, the teleoperator).

The foci of this study are teleoperated pick-and-place tasks using the UMass Torso robot.
This robot consists of an articulated, stereo BiSight head; two 7-DOF Whole Arm Manipulators (WAMs); two 3-fingered hands (each finger is equipped with a six-axis force/torque sensor); and a quadraphonic audio input system. The teleoperator interface consists of a red/blue stereo display and a P5 Essential Reality glove that senses the position and orientation of the user's hand, as well as the flexion of the user's fingers.
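The abstract's core idea — using generators of action to recognize the teleoperator's intent from observed robot motion — can be illustrated with a minimal sketch. The sketch below is an assumption-laden toy, not the paper's actual method: it matches an observed end-effector trajectory against a small library of candidate movement prototypes and reports the best-explaining label. The primitive names (`reach`, `retract`), the resampling scheme, and the Euclidean distance metric are all illustrative choices.

```python
# Hypothetical sketch (not the paper's method): classify teleoperator intent
# by comparing the observed trajectory against prototype action generators.
import numpy as np

def resample(traj, n=20):
    """Linearly resample a (T, d) trajectory to n points for comparison."""
    traj = np.asarray(traj, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(t_new, t_old, traj[:, i])
                     for i in range(traj.shape[1])], axis=1)

def classify_intent(observed, primitives, n=20):
    """Return (best label, per-label mean distances) for an observed motion."""
    obs = resample(observed, n)
    scores = {label: float(np.mean(np.linalg.norm(obs - resample(proto, n), axis=1)))
              for label, proto in primitives.items()}
    return min(scores, key=scores.get), scores

# Illustrative 2-D prototypes: a forward "reach" and a backward "retract".
primitives = {
    "reach":   np.linspace([0.0, 0.0], [1.0, 0.0], 10),
    "retract": np.linspace([1.0, 0.0], [0.0, 0.0], 10),
}

# A noisy forward motion should be explained best by the "reach" generator.
observed = np.linspace([0.05, 0.02], [0.95, -0.01], 7)
label, scores = classify_intent(observed, primitives)
print(label)  # → reach
```

A real system would replace the static prototypes with the same controllers that execute the movements, echoing the mirror-neuron motivation: the structures that generate an action also score how well it explains the observed behavior.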