Learning Spatial Object Localization from Vision on a Humanoid Robot
Author(s) - Jürgen Leitner, Simon Harding, Mikhail Frank, Alexander Förster, Jürgen Schmidhuber
Publication year - 2012
Publication title - International Journal of Advanced Robotic Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.394
H-Index - 46
eISSN - 1729-8814
pISSN - 1729-8806
DOI - 10.5772/54657
Subject(s) - iCub, computer science, artificial intelligence, computer vision, humanoid robot, workspace, robot, calibration, position, object
We present a combined machine learning and computer vision approach for robots to localize objects. It allows our iCub humanoid to quickly learn to provide accurate 3D position estimates (in the centimetre range) of the objects it sees.

Biologically inspired approaches, such as Artificial Neural Networks (ANN) and Genetic Programming (GP), are trained to provide these position estimates using the two cameras and the joint encoder readings. No camera calibration or explicit knowledge of the robot's kinematic model is needed.

We find that ANN and GP are not only faster and of lower complexity than traditional techniques, but also learn without the need for extensive calibration procedures. In addition, the approach localizes objects robustly when they are placed at arbitrary positions in the robot's workspace, even while the robot is moving its torso, head and eyes.
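The following is a minimal sketch (not the authors' code) of the kind of learned mapping the abstract describes: a small feed-forward network regresses an object's 3D position from its pixel coordinates in both camera images plus the joint encoder readings, with no camera calibration or kinematic model. The data arrays, network size, and joint count below are illustrative placeholders, not values from the paper.

```python
# Sketch: calibration-free 3D localization learned from examples.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical training set: each input is
# [u_left, v_left, u_right, v_right, joint_1, ..., joint_k]
# (object pixel coordinates in both cameras + torso/head/eye encoders);
# each target is the object's [x, y, z] in the robot's root frame (cm).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(2000, 4 + 6))   # placeholder samples
Y = rng.uniform(-50.0, 50.0, size=(2000, 3))     # placeholder 3D targets

X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, random_state=0)

# Small feed-forward network; the mapping from image/encoder space to
# Cartesian space is learned directly from labelled examples.
ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
ann.fit(X_train, Y_train)

pred = ann.predict(X_test)
rmse = np.sqrt(np.mean((pred - Y_test) ** 2))
print(f"RMSE on held-out samples: {rmse:.2f} cm")
```

With real data gathered on the robot (object detections in both camera images paired with ground-truth positions), the same pipeline could stand in for the ANN branch of the approach; the GP branch would instead evolve an explicit expression for the same input-output mapping.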