Simulation and Experimental Tests of Robot Using Feature-Based and Position-Based Visual Servoing Approaches
Author(s) -
Minghong Ma,
Fatemeh Heidari,
Hadi Aliakbarpour
Publication year - 2006
Language(s) - English
Resource type - Book series
DOI - 10.5772/4912
Subject(s) - visual servoing, artificial intelligence, computer vision, feature (linguistics), position (finance), computer science, robot, economics, linguistics, philosophy, finance
Visual control of robots has been discussed for many years. Related applications are extensive, encompassing manufacturing, teleoperation and missile-tracking cameras, as well as robotic ping-pong and juggling. Early work fails to meet the strict definition of visual servoing and would now be classed as look-then-move robot control (Corke, 1996). Gilbert describes an automatic rocket-tracking camera that keeps the target centered in the camera's image plane by means of pan/tilt controls (Gilbert et al., 1983). Weiss proposed the use of adaptive control for the non-linear, time-varying relationship between robot pose and image features in image-based servoing. Detailed simulations of image-based visual servoing are described for a variety of 3-DOF manipulator structures (Webber & Hollis, 1988). Mana Saedan and Marcelo H. Ang worked on relative target-object (rigid body) pose estimation for vision-based control of industrial robots; they developed and implemented a closed-form target pose estimation algorithm (Saedan & Marcelo, 2001). Skaar et al. use a 1-DOF robot to catch a ball. Lin et al. propose a two-stage algorithm for catching moving targets: coarse positioning to approach the target in near-minimum time, and "fine tuning" to match the robot's acceleration and velocity with the target. Image-based visual control of robots has been considered by many researchers, who use a closed loop to control the robot joints. Feddema uses an explicit feature-space trajectory generator and closed-loop joint control to overcome problems due to the low visual sampling rate. Experimental work demonstrates image-based visual servoing for 4-DOF (Kelly & Shirkey, 2001). Haushangi describes a similar approach using the task function method and shows experimental results for robot positioning using a target with four circle features (Haushangi, 1990). Hashimoto et al. present simulations to compare position-based and image-based approaches (Hashimoto et al., 1991). Much research has been done on simulating the behavior and environment of robots. Korayem et al. designed and simulated vision-based control and performance tests for a 3P robot in Visual C++. They minimized the positioning error of the end effector, analyzed the error using the ISO 9283 and ANSI/RIA R15.05-2 standards, and suggested ways to reduce it (Korayem et al., 2005, 2006). They used a camera installed on the end effector of the robot to find a target and, with feature-based visual servoing, controlled the end effector to reach the target. But the vision-based control in this work is
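For readers unfamiliar with the image-based (feature-based) approach referred to above, the sketch below illustrates the classic interaction-matrix control law v = -lambda * L^+ (s - s*) from the visual servoing literature. It is a minimal, generic textbook illustration in Python, not the specific controller of any work cited here; the point-feature depth Z, the gain, and the example feature values are assumptions chosen only for demonstration.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one normalized image point (x, y)
    observed at depth Z; maps 6-DOF camera velocity to image-feature velocity."""
    return np.array([
        [-1.0 / Z,  0.0,      x / Z,  x * y,        -(1.0 + x * x),  y],
        [ 0.0,     -1.0 / Z,  y / Z,  1.0 + y * y,  -x * y,         -x],
    ])

def ibvs_camera_velocity(s, s_star, L, gain=0.5):
    """Classic image-based visual servoing law: v = -gain * pinv(L) * (s - s*)."""
    error = s - s_star                          # feature error in image space
    return -gain * np.linalg.pinv(L) @ error    # commanded camera velocity (6-vector)

if __name__ == "__main__":
    # Hypothetical single point feature: current (x, y), desired (x*, y*), assumed depth Z = 1 m.
    s, s_star, Z = np.array([0.10, -0.05]), np.array([0.0, 0.0]), 1.0
    L = point_interaction_matrix(*s, Z)
    print(ibvs_camera_velocity(s, s_star, L))
```

In a closed image-based loop such as those surveyed above, this velocity command would be mapped through the robot Jacobian to joint rates at each visual sampling instant, whereas a position-based scheme would instead estimate the target pose and servo in Cartesian space.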