Indoor Navigation of Unmanned Grounded Vehicle using CNN
Author(s) -
Arindam Jain,
Ayush Singh,
Deepanshu Bansal,
Prof. Madan Mohan Tripathi
Publication year - 2020
Publication title -
International Journal of Recent Technology and Engineering (IJRTE)
Language(s) - English
Resource type - Journals
ISSN - 2277-3878
DOI - 10.35940/ijrte.f7972.038620
Subject(s) - computer science, convolutional neural network, artificial intelligence, computer vision, real-time computing, simulation, microprocessor, computer hardware, software, training, process (computing)
This paper presents a hardware and software architecture for indoor navigation of unmanned ground vehicles. It describes the complete process from capturing camera input to steering the vehicle in the desired direction. Images taken from a single front-facing camera serve as input. We prepared our own dataset of the indoor environment to generate training data for the network. For training, each image is labelled with a steering direction: left, right, forward, or reverse. The pre-trained convolutional neural network (CNN) model then predicts the direction to steer in and passes this output to the microprocessor, which in turn controls the motors to traverse in that direction. With a minimal amount of training data and training time, very accurate results were obtained, both in simulation and in actual hardware testing. After training, the model learned on its own to stay within the boundary of the corridor and to identify any immediate obstruction that might appear. The system operates at 2 fps. A MacBook Air was used for training as well as for making real-time predictions.
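As a rough illustration of the pipeline the abstract describes (camera frame → CNN class scores → steering direction → motor commands), the prediction-to-actuation step can be sketched as follows. All names and the motor-speed mapping are hypothetical assumptions for illustration, not taken from the paper.

```python
# Hypothetical sketch of the per-frame decision loop described in the abstract.
# The CNN (not shown) is assumed to output four class scores, one per direction.
DIRECTIONS = ["left", "right", "forward", "reverse"]

def steer_from_scores(class_scores):
    """Map the CNN's 4-way output scores to a steering label via argmax."""
    best = max(range(len(class_scores)), key=lambda i: class_scores[i])
    return DIRECTIONS[best]

def motor_command(direction):
    """Translate a steering label into (left_motor, right_motor) speeds
    in [-1, 1]; the microprocessor would convert these to PWM signals."""
    table = {
        "forward": (1.0, 1.0),
        "reverse": (-1.0, -1.0),
        "left":    (-0.5, 0.5),   # pivot left in place
        "right":   (0.5, -0.5),   # pivot right in place
    }
    return table[direction]

# Example: scores for one frame where "forward" dominates.
scores = [0.10, 0.05, 0.80, 0.05]
direction = steer_from_scores(scores)
print(direction, motor_command(direction))  # forward (1.0, 1.0)
```

At 2 fps, this loop would run roughly every 500 ms per frame; the simple pivot-turn mapping above is one plausible choice for a differential-drive vehicle.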