Open Access
A Low-Cost Robot Positioning System for a First-Year Engineering Cornerstone Design Project
Author(s) -
David Frank,
Kevin Witt,
Chris Hartle,
Jacob Enders,
Veronica Beiring,
Richard Freuler
Publication year - 2016
Language(s) - English
Resource type - Conference proceedings
DOI - 10.18260/p.26355
Subject(s) - cornerstone , robot , computer science , global positioning system , engineering design process , simulation , systems engineering , engineering , artificial intelligence , mechanical engineering , telecommunications , art , visual arts
Researchers in autonomous robotic design have leveraged a variety of technologies to simulate the Global Positioning System (GPS) on a smaller laboratory or commercial scale. In the interest of cost and accuracy, a system was developed for The Ohio State University Fundamentals of Engineering for Honors (FEH) Program's "Cornerstone" Design Project. The system utilizes high-definition commercial web cameras to accurately simulate a GPS for the autonomous robots created by students.

For the past 21 years, The Ohio State University has provided a "Cornerstone" Design Project for first-year honors engineering students. In this course, teams of students compete in a robot design competition, designing a fully autonomous robot around a given microcontroller and within specified size and budget constraints. The robots are tasked with completing several objectives on an 18 square foot course within a two-minute time period.

High-definition Logitech C920 webcams were chosen for the project based on their commercial availability and their operating resolution of 1920 by 1080 pixels at 30 frames per second. Additionally, the cameras' wide viewing angle allowed them to be mounted six feet above each course. This provided sufficient coverage of each course and gave positional information accurate to within a quarter of an inch and orientation accurate to within one degree. The system detected micro Quick Response (QR) codes, which were printed on three-inch squares and mounted on each student's robot. The micro QR code data contained the name designation of each team.

The cameras were controlled by a National Instruments (NI) LabVIEW application. Via the user interface, three specific locations were selected on each course to calibrate the coordinate systems and to account for any rotation with respect to the camera. Based on the calibrated coordinate system, the detected location and orientation were transmitted over radio frequency to each robot.
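The three-point calibration described above amounts to fitting an affine map from camera pixel coordinates to course coordinates, which absorbs scale, translation, and any rotation of the camera. The paper's system was implemented in NI LabVIEW; the following is only a minimal Python sketch of the underlying math, with all names and the example coordinates being illustrative assumptions.

```python
# Hypothetical sketch of the three-point calibration: solve for an
# affine map course = A @ pixel + t from three point correspondences.
# The actual FEH system was written in LabVIEW; this is illustration only.

def solve_affine(pixel_pts, course_pts):
    """Given three (px, py) pixel points and their known (cx, cy)
    course coordinates, return a function mapping pixel -> course."""
    # Each correspondence gives two linear equations in the six
    # unknowns (a, b, tx, c, d, ty):  cx = a*px + b*py + tx,
    #                                 cy = c*px + d*py + ty.
    rows, rhs = [], []
    for (px, py), (cx, cy) in zip(pixel_pts, course_pts):
        rows.append([px, py, 1, 0, 0, 0]); rhs.append(cx)
        rows.append([0, 0, 0, px, py, 1]); rhs.append(cy)

    # Tiny Gaussian elimination with partial pivoting, so the sketch
    # needs no external dependencies.
    n = 6
    M = [row + [r] for row, r in zip(rows, rhs)]
    for i in range(n):
        pivot = max(range(i, n), key=lambda k: abs(M[k][i]))
        M[i], M[pivot] = M[pivot], M[i]
        for k in range(i + 1, n):
            f = M[k][i] / M[i][i]
            for j in range(i, n + 1):
                M[k][j] -= f * M[i][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]

    a, b, tx, c, d, ty = x
    return lambda px, py: (a * px + b * py + tx, c * px + d * py + ty)
```

With the map in hand, every QR detection in pixel space can be converted to course coordinates before being transmitted to the robot; three non-collinear points are exactly enough to determine the six affine parameters.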
Information relating to the progress of each robot in completing course tasks was overlaid, along with this positional information, onto a live video feed of each course. These feeds were displayed in sets of four in a global user interface, providing a large-scale, real-time visual representation of each competition round for observers during the final competition.

The use of a simulated GPS in the "Cornerstone" Design course gave the students the advantage of working with real-world concepts. The system introduced students to designing programs that interact with external systems in real time, and to navigating without physical interaction with obstacles. This added to the variety of tools available to students for navigation, which facilitated discussion between students on the best design strategies. It allowed students not only to design software that acted on a variety of inputs, but also to design software that transitioned seamlessly between them.

Background and Introduction

For the last twenty-one years, in each spring term, The Ohio State University FEH Program has incorporated an autonomous robot design project in which first-year honors engineering students design, build, and program autonomous vehicles to perform certain well-defined tasks within a two-minute time limit [1]. The tasks the robots must complete revolve around a central theme developed each year by the teaching assistants and faculty of the honors engineering classes. The theme for spring 2015 was "Arctic Storm", and the robot competition course is shown as a CAD model in Figure 1.

Figure 1. Diagram of 2015 Robot Competition Course

The project uses two robot courses, each divided into four identical course sections that each consist of an upper and a lower region connected by a ramp. Each course section contains a set of four or five tasks that each team must complete, all of which revolve around the course theme.
For instance, in 2015, teams had to push a set of three buttons in a specified order, pick up and move a small salt bag, rotate a pentagon-shaped crank either clockwise or counterclockwise, and push a lever horizontally. Above the course is a frame that holds the four cameras for the Robot Positioning System (RPS); this frame can be seen in Figure 1.

Students are divided into teams of three or four based on the class in which they are enrolled. In 2015, sixty-three teams participated in the project. Students are provided with a custom-built microcontroller and a list of project requirements, including size and budget constraints. After eight weeks, each team participates in the individual competition, where teams are given three attempts to complete all tasks on the course in under two minutes. The scores from the individual competition are used to seed teams for the final competition the following week. At the final competition, teams compete four at a time in three rounds of round-robin style play, and then in a single-elimination tournament, seeded by the individual competition results, to determine the robot champion for the year [1].

The students are encouraged to be creative so that each group produces a unique robot. To allow for a wide range of flexibility in design, the courses are built to enable different approaches. Some students may choose to navigate the course using line following, utilizing the lines placed on the floor of the course. Others may choose to track position using encoders to measure distance traveled and/or to align with walls using bump switches. The Robot Positioning System provides students with an additional navigation option. With a basis similar to the Global Positioning System technology that many students are familiar with, the RPS allows students to work with simulated real-world tools to aid in their design.
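To make the RPS navigation option concrete, a student robot receiving its position and heading over RF could close the loop with a simple proportional controller. The sketch below is hypothetical: the function names (`read_pose`, `set_motors`) are placeholders, not the actual FEH microcontroller API, and the gains are illustrative.

```python
# Hypothetical sketch of RPS-based navigation on a student robot.
# read_pose() is assumed to return (x, y, heading_deg) from the RPS;
# set_motors(left, right) is an assumed differential-drive interface.
import math

def heading_error(current_deg, target_deg):
    """Smallest signed angle (degrees) from current to target heading."""
    return (target_deg - current_deg + 180.0) % 360.0 - 180.0

def drive_to(target_x, target_y, read_pose, set_motors, tol=0.25):
    """Proportional drive toward a course waypoint, stopping within
    `tol` inches (the RPS is accurate to about a quarter inch)."""
    while True:
        x, y, heading = read_pose()
        dx, dy = target_x - x, target_y - y
        if math.hypot(dx, dy) < tol:
            set_motors(0, 0)          # close enough: stop
            return
        # Steer toward the bearing of the target point.
        err = heading_error(heading, math.degrees(math.atan2(dy, dx)))
        turn = max(-25, min(25, err))  # clamp turning effort
        set_motors(40 - turn, 40 + turn)
```

Unlike bump switches or encoders, this loop keeps working anywhere on the course, and it degrades gracefully: a robot could fall back to line following or wall alignment when an RPS update is stale.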
A large advantage is the ability to pinpoint robot location anywhere within the physical boundaries of the course. A bump switch may relay which wall or walls a robot is aligned with, but once the robot moves away from the wall, the bump switches can no longer provide positional information.

The first iteration of a robot positioning system used the infrared (IR) sensors found in Nintendo Wii Remotes to track a pair of IR LEDs mounted on the students' robots. That system used an RF transmitter to send data to a robot controller so the students could use the data to further manage the navigation of their robots [2]. Previous research on robotic tracking systems also exists: similar systems have utilized QR-code-like tracking symbols [3], and other systems have been developed that utilize ultrasonic pulses or Microsoft Xbox Kinect cameras [4,5].

Motivation for Development

There are several reasons why it was decided to overhaul the original system that utilized IR sensors. First, the limitations of the Wii Remote IR sensor continuously caused setbacks. The sensors themselves were exclusive to the Nintendo Wii Remote, and it was nearly impossible to obtain the sensor as a standalone component for less than the cost of a Wii Remote. The sensor therefore had to be unsoldered from the main circuit board of the Wii Remote, which at times damaged the delicate pins coming off the sensor. To mitigate this issue, an imaging device with an off-the-shelf replacement was desirable.

Also, all of the Wii Remotes communicated over the Inter-Integrated Circuit (I2C) protocol. Since the sensors were sufficiently far from the main processor, and since I2C does not inherently make use of differential signaling, corrupted data was experienced due to the length of the wire between the sensor and the processor. For the new system, it was determined that a protocol that makes use of differential signaling would be ideal.
Additionally, each of the sensors had the same nonconfigurable I2C address, meaning that they could not be daisy-chained [6]. Because of this, additional custom hardware (e.g., an I2C multiplexer) was needed to communicate correctly with the sensors. Lastly, the active equipment on the students' robots (i.e., the two IR LEDs) was difficult to maintain and was often inadvertently destroyed by students shorting out pins on the LED bar given to them. It was also difficult to determine whether the LEDs were working, because IR light is undetectable by the human eye.

Moreover, the system was fairly complex and difficult for the teaching assistant staff to maintain. In the first-year program, teaching assistants normally stay with the program only while obtaining their degrees. Because of this turnover of personnel, it is necessary to make a system that is easy to learn in order to ensure that it remains maintainable.
