Open Access
Multi-Robot SLAM: A Vision-Based Approach
Author(s) - Hassan Hajjdiab, Robert Robert
Publication year - 2011
Publication title - InTech eBooks
Language(s) - English
Resource type - Book series
DOI - 10.5772/13389
Subject(s) - computer vision, artificial intelligence, computer science, robot, robot vision, mobile robot
A robot is said to be truly autonomous (Dissanayake et al. (2001)) if it can start at an unknown location in an unknown environment and then simultaneously build a map of the environment and localize itself within that map. The robot must therefore solve the simultaneous localization and mapping (SLAM) problem. The information used is a sequence of relative observations captured by the mobile robot. Many approaches have been proposed to solve the SLAM problem. Se et al. (2001) proposed an approach for a robot equipped with a trinocular stereo system (Murray & Little (1998)) and an odometer. The algorithm starts by detecting feature points (Lowe (1999)) in the three images. These feature points are then matched using the epipolar and disparity constraints that exist between the three cameras. Assuming known camera intrinsic parameters, the 3D position of each matched feature point is estimated. As the robot moves, the odometry readings provide a rough estimate of the motion, which can be used to match feature points across consecutive frames. The 3D position of each newly detected feature point is estimated, and the motion of the robot is computed using a least-squares minimization scheme (Lowe (1991)). Finally, a map of the 3D positions of the detected feature points is built and the location graph of the robot is computed.

Other SLAM methods are based on evaluating probabilistic models of the robot motion and of the data sensed from the environment. It is assumed that the robot can sense landmarks relative to its local coordinate frame. The landmarks may occur naturally in the environment, such as trees, or be artificially added, such as steel poles. Smith & Cheeseman (1986) use the extended Kalman filter (EKF) to estimate the posterior distribution over the robot pose.
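The least-squares motion step described above can be illustrated with a minimal sketch. This is a standard 2D rigid-alignment (Kabsch/Procrustes) solution for already-matched landmark points, not the exact minimization of Lowe (1991); the function name and 2D restriction are illustrative assumptions.

```python
import numpy as np

def estimate_motion_2d(prev_pts, curr_pts):
    """Least-squares rigid motion (R, t) mapping prev_pts onto curr_pts.

    A minimal 2D Kabsch/Procrustes sketch (illustrative, not the exact
    scheme of Lowe (1991)); assumes the feature points are already
    matched between the two frames.
    """
    prev_pts = np.asarray(prev_pts, dtype=float)
    curr_pts = np.asarray(curr_pts, dtype=float)
    cp, cc = prev_pts.mean(axis=0), curr_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (prev_pts - cp).T @ (curr_pts - cc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cc - R @ cp
    return R, t
```

The same SVD construction extends directly to the 3D feature points produced by the trinocular stereo stage.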
The problem Smith & Cheeseman solved can be stated as follows: given the measurements of the environment zt = [z1, z2, ..., zt] and the set of control inputs ut = [u1, u2, ..., ut], determine the robot pose st and the locations of all k landmarks m = [m1, m2, ..., mk]. In probabilistic terms, this can be expressed as the posterior p(st, m | zt, ut).
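The EKF recursion over this posterior alternates a prediction step (propagating the joint robot-and-landmark state through the motion model) with an update step (fusing a landmark measurement). The sketch below shows one such cycle in a simplified linear-Gaussian form; the matrix names and linear models are illustrative assumptions, with F, B, and H standing in for the Jacobians of the nonlinear case.

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, F, B, H, Q, R_cov):
    """One EKF predict+update cycle, in a simplified linear form.

    mu, Sigma : prior mean and covariance of the joint state
                (robot pose stacked with landmark positions)
    u, z      : control input and landmark measurement at time t
    F, B, H   : motion, control, and measurement matrices
                (Jacobians in the nonlinear case; linear here)
    Q, R_cov  : motion and measurement noise covariances
    """
    # Predict: propagate the state through the motion model
    mu_bar = F @ mu + B @ u
    Sigma_bar = F @ Sigma @ F.T + Q
    # Update: fuse the landmark observation
    S = H @ Sigma_bar @ H.T + R_cov          # innovation covariance
    K = Sigma_bar @ H.T @ np.linalg.inv(S)   # Kalman gain
    mu_new = mu_bar + K @ (z - H @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new
```

Because the landmark positions live in the same state vector as the pose, each update correlates the map with the robot's location, which is what lets the filter maintain the joint posterior p(st, m | zt, ut).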
