Open Access
Parallel Projection Based Self Localization Method for Mobile Navigation Applications
Author(s) -
Shung Han,
Yuntai Kyong,
Yunyoung Nam,
Sangjin Hong
Publication year - 2010
Publication title -
InTech ebooks
Language(s) - English
Resource type - Book series
DOI - 10.5772/9004
Subject(s) - computer science , projection (relational algebra) , computer vision , artificial intelligence , computer graphics (images) , human–computer interaction , algorithm
In mobile robot navigation applications, position is estimated from odometry and from measurements obtained by peripheral devices such as ultrasonic sensors, range finders, and visual sensors. Odometry is usually unreliable because of internal factors such as encoder error and external factors such as wheel slippage. These errors accumulate during navigation and continuously degrade estimation accuracy (Borenstein & Feng, 1996b; Martinelli et al., 2007). Range finders and sonar are often used in navigation (Borenstein et al., 1996a; Borenstein et al., 1997; Großmann & Poli, 2001; Demirli & Molhim, 2004). However, such measurements are not reliable in highly dynamic environments, where radar or sonar beams can be frequently blocked or confused by moving objects such as people. Nor are they applicable to localization over large areas, because of their limited range. In addition, passive sensors require active landmarks such as beacons, which demand modification of the environment and are impractical, especially outdoors. Moreover, interference among several mobile sensors prevents them from properly localizing themselves.

Visual-information-based self-localization has two advantages over the above methods. First, it does not require active landmarks (i.e., reference objects) such as beacons; natural objects can also serve as landmarks. Second, it is more effective and reliable in dynamic environments, as the sensing range is not limited to a single line of sight. These advantages have fostered considerable research effort in the navigation community (Jang et al., 2002; Hayet et al., 2007; Se et al., 2002). Reference objects can be either artificial or natural (Betke & Gurvits, 1997; Briggs et al., 2000; Thrun, 1998; Dao et al., 2004). We assume that reference objects can be uniquely identified by the mobile robot and that the global coordinates of the landmarks, or reference objects, are known to it.
There has been extensive research on robot localization with known reference objects. Conventional self-localization algorithms use the bearings of the landmarks relative to each other, i.e., the "visual angle" formed by the rays from the mobile robot to each reference point (Sutherland & Thompson, 1993). The location of the mobile robot is estimated by finding the intersection point of the circles passing through the reference points.
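The visual-angle method described above can be sketched in code. By the inscribed angle theorem, a measured visual angle between two landmarks constrains the robot to a circular arc through those landmarks; intersecting the circles from two landmark pairs recovers the position. The sketch below is illustrative only, assuming three uniquely identified landmarks with known global coordinates; the specific coordinates, candidate filtering, and pair choice are our assumptions, not the chapter's implementation.

```python
import math

def circles_from_chord(p1, p2, theta):
    """Candidate circles on which the chord p1-p2 subtends the visual
    angle theta (inscribed angle theorem). One center lies on each side
    of the chord; the robot is on one of the two circles."""
    d = math.dist(p1, p2)
    r = d / (2.0 * math.sin(theta))            # circumradius
    h = d / (2.0 * math.tan(theta))            # center offset from chord midpoint
    mx, my = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2
    nx, ny = -(p2[1] - p1[1]) / d, (p2[0] - p1[0]) / d   # unit normal to chord
    return [((mx + h * nx, my + h * ny), r), ((mx - h * nx, my - h * ny), r)]

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles (0, 1, or 2 points)."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.dist(c1, c2)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (r1 * r1 - r2 * r2 + d * d) / (2 * d)
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))
    xm, ym = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    ox, oy = -(y2 - y1) / d * h, (x2 - x1) / d * h
    return [(xm + ox, ym + oy), (xm - ox, ym - oy)]

def visual_angle(x, p1, p2):
    """Unsigned angle at point x between the rays toward p1 and p2."""
    a1 = math.atan2(p1[1] - x[1], p1[0] - x[0])
    a2 = math.atan2(p2[1] - x[1], p2[0] - x[0])
    da = abs(a1 - a2) % (2 * math.pi)
    return min(da, 2 * math.pi - da)

def localize(landmarks, angles):
    """Estimate position from three landmarks A, B, C and the measured
    visual angles for the pairs (A, B) and (B, C)."""
    A, B, C = landmarks
    t_ab, t_bc = angles
    best, best_err = None, float("inf")
    for c1, r1 in circles_from_chord(A, B, t_ab):
        for c2, r2 in circles_from_chord(B, C, t_bc):
            for p in circle_intersections(c1, r1, c2, r2):
                # Both circles pass through the shared landmark B; skip it.
                if math.dist(p, B) < 1e-6:
                    continue
                # Keep the candidate that best reproduces both measurements.
                err = (abs(visual_angle(p, A, B) - t_ab)
                       + abs(visual_angle(p, B, C) - t_bc))
                if err < best_err:
                    best, best_err = p, err
    return best

# Demo with assumed coordinates: simulate measurements, then localize.
true_pos = (1.0, 2.0)
lm = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0)]
angs = (visual_angle(true_pos, lm[0], lm[1]), visual_angle(true_pos, lm[1], lm[2]))
print(localize(lm, angs))
```

With unsigned angle measurements the circle side is ambiguous, so the sketch generates both candidate circles per pair and keeps the intersection most consistent with the measurements; a real system would also handle noise and degenerate (collinear) landmark configurations.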
