Open Access
Estimating the position and orientation of a mobile robot with respect to a trajectory using omnidirectional imaging and global appearance
Author(s) - Luis Payá, Luis M. Jiménez, Miguel Juliá
Publication year - 2017
Publication title - PLOS ONE
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.99
H-Index - 332
ISSN - 1932-6203
DOI - 10.1371/journal.pone.0175938
Subject(s) - mobile robot , robot , trajectory , computer science , orientation (vector space) , computer vision , artificial intelligence , omnidirectional antenna , omnidirectional camera , position (finance) , landmark , visual servoing , human–computer interaction , mathematics , telecommunications , physics , geometry , finance , astronomy , antenna (radio) , economics
Over the past years, mobile robots have proliferated both in domestic and in industrial environments to carry out tasks such as cleaning, assistance, or material transportation. One of their advantages is the ability to operate in wide areas without requiring changes to the existing infrastructure. Thanks to the sensors they can be equipped with and to their processing systems, mobile robots constitute a versatile platform for a wide range of applications. When designing the control system of a mobile robot so that it can carry out a task autonomously in an unknown environment, the robot is expected to make decisions about its localization within the environment and about the trajectory it has to follow to reach the target points. More precisely, the robot has to solve two crucial problems reasonably well: building a model of the environment, and estimating its position within this model. In this work, we propose a framework to solve these problems using only visual information. The mobile robot is equipped with a catadioptric vision sensor that provides omnidirectional images of the environment. First, the robot traverses the trajectories to be included in the model and uses the visual information captured along them to build this model. After that, the robot is able to estimate its position and orientation with respect to the trajectory. Among the possible approaches to these problems, global appearance techniques are used in this work; they have recently emerged as a robust and efficient alternative to landmark extraction techniques. A global description method based on the Radon transform is used to design the mapping and localization algorithms, and a set of images captured by a mobile robot in a real environment, under realistic operating conditions, is used to test the performance of these algorithms.
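The mapping and localization steps outlined in the abstract can be illustrated with a short sketch. The Python code below is a minimal, illustrative example and not the authors' implementation: it assumes the omnidirectional images are available as square grayscale NumPy arrays, uses scikit-image's radon function to build a fixed-size global-appearance descriptor for each image of the trajectory, and at localization time searches for the stored descriptor (and circular column shift) closest to the descriptor of the current image. The function names, descriptor size, and brute-force shift search are assumptions made for clarity.

```python
# Minimal illustrative sketch (not the paper's exact implementation) of
# Radon-transform-based global-appearance mapping and localization.
# Assumptions: omnidirectional images are square grayscale NumPy arrays with
# the mirror image roughly centred; NumPy and scikit-image are installed;
# descriptor size, distance metric, and the brute-force shift search are
# illustrative choices, not the authors' parameters.

import numpy as np
from skimage.transform import radon, resize


def radon_descriptor(omni_image, n_angles=180, out_rows=64):
    """Global-appearance descriptor: Radon transform (sinogram) over 0-360
    degrees, resized to a fixed (out_rows x n_angles) array and normalized."""
    theta = np.linspace(0.0, 360.0, n_angles, endpoint=False)
    sinogram = radon(omni_image, theta=theta, circle=False)
    desc = resize(sinogram, (out_rows, n_angles), anti_aliasing=True)
    return desc / (np.linalg.norm(desc) + 1e-12)


def build_map(trajectory_images):
    """Mapping step: one descriptor per image captured along the trajectory."""
    return [radon_descriptor(img) for img in trajectory_images]


def localize(current_image, map_descriptors):
    """Localization step: nearest trajectory point and relative orientation.

    A rotation of the robot about the camera axis rotates the omnidirectional
    image about its centre, which circularly shifts the columns of its 0-360
    degree sinogram; the shift that minimises the distance therefore gives an
    estimate of the relative orientation."""
    query = radon_descriptor(current_image)
    n_angles = query.shape[1]
    best_dist, best_idx, best_shift = np.inf, -1, 0
    for idx, ref in enumerate(map_descriptors):
        for shift in range(n_angles):  # brute force; FFT correlation is faster
            d = np.linalg.norm(np.roll(query, shift, axis=1) - ref)
            if d < best_dist:
                best_dist, best_idx, best_shift = d, idx, shift
    orientation_deg = best_shift * 360.0 / n_angles
    return best_idx, orientation_deg, best_dist
```

In practice, the exhaustive shift search could be replaced by circular correlation along the angle axis, and the distance would be thresholded to decide whether the robot is actually near the stored trajectory; these refinements are omitted here to keep the sketch short.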
