Open Access
The Virtual Vision Lab: A Simulated/Real Environment For Interactive Education
Author(s) -
Timothy Jones,
Peter K. Allen,
P. A. McCoog,
Jennie Crosby
Publication year - 2020
Language(s) - English
Resource type - Conference proceedings
DOI - 10.18260/1-2--6392
Subject(s) - computer science , robotics , artificial intelligence , computer vision , human–computer interaction , multimedia , educational robotics , engineering
The Virtual Vision Lab (VVL) is a project aimed at producing instructional laboratory modules for new and emerging techniques in robotic vision. VVL uses an integrated multimedia presentation format that allows the student to learn about robot vision techniques through textual sources, runnable algorithm code, live and canned digital imagery, interactive modification of program parameters, and insertion of student-developed code for certain parts of the tutorial. It aims to translate a research paper in robot vision into a usable, understandable laboratory exercise that highlights the important aspects of the research in a realistic environment combining simulated virtual components with real camera imagery. The task the tutorial uses to demonstrate basic principles of robotics and computer vision is the "pick-and-place" task, implemented with a movable, robot-mounted camera that produces stereo imagery inside a robotic workcell.
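The abstract mentions stereo imagery from a robot-mounted camera; the core geometric principle such a setup relies on is depth recovery from stereo disparity. The sketch below is a minimal illustration of that principle under assumed values, not a description of the VVL implementation: the focal length, baseline, and pixel coordinates are hypothetical.

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         x_left: float, x_right: float) -> float:
    """Triangulate depth (in metres) for a point seen in a rectified
    stereo pair: Z = f * B / d, where d = x_left - x_right is the
    horizontal disparity in pixels."""
    disparity = x_left - x_right
    if disparity <= 0:
        # A point in front of both cameras always has positive disparity.
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity

# Illustrative numbers only: 700 px focal length, 10 cm baseline,
# and a feature at x = 420 px (left image) and x = 385 px (right image).
z = depth_from_disparity(700.0, 0.10, 420.0, 385.0)
print(round(z, 3))  # → 2.0 (metres)
```

Larger disparities correspond to closer points, which is why a short-baseline stereo rig loses depth resolution at range.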
