
CICERONE - A Real Time Object Detection for Visually Impaired People
Author(s) -
Therese Yamuna Mahesh,
S S Parvathy,
Steve Thomas,
Shilpa Rachel Thomas,
Thomas Sebastian
Publication year - 2021
Publication title -
IOP Conference Series: Materials Science and Engineering
Language(s) - English
Resource type - Journals
eISSN - 1757-899X
pISSN - 1757-8981
DOI - 10.1088/1757-899x/1085/1/012006
Subject(s) - computer science, software portability, object (grammar), raspberry pi, headset, backup, computer vision, human–computer interaction, artificial intelligence, multimedia, computer security, internet of things, database, operating system, telecommunications
Our work provides a solution for people suffering from partial blindness caused by eye disease or accidents. Unlike people who are born blind, they depend heavily on others for their day-to-day activities. Our project is a boon to such people: the end product, a smart walking stick, detects the trained objects that the person uses daily and informs the person through audio messages. The project turns the visual world into an audio world by telling the visually impaired about the objects in their environment, allowing them to use the prototype to navigate on their own. A YOLO (You Only Look Once) algorithm is used in our project. Real-time objects in an image are detected, their names are labelled on bounding boxes, and these names are converted to speech signals. The conversion to audio signals is done using the eSpeak tool and Google's Text-to-Speech (gTTS) system. The prototype consists of several modules. The Raspberry Pi camera module captures the image and transfers it to the Raspberry Pi desktop. Real-time object detection is then carried out using the YOLO network. The detected objects are converted to speech using the gTTS module, and the audio result is provided to the user through a headset. For the required portability, a battery backup is used.
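The camera-to-headset pipeline described above (capture a frame, run YOLO detection, convert the detected object names to speech) can be sketched roughly as follows. This is a minimal illustration rather than the authors' code: the YOLOv3 model files (yolov3.cfg, yolov3.weights, coco.names), the use of OpenCV's DNN module for inference, the 0.5 confidence threshold, and the mpg321 audio player are assumptions made for the example.

```python
# Minimal sketch of the detection-to-speech pipeline, assuming YOLOv3
# model files, OpenCV's DNN module, and mpg321 for audio playback.
import cv2
import os
from gtts import gTTS

# Load the YOLO network and the class labels it was trained on.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
with open("coco.names") as f:
    classes = [line.strip() for line in f]

# Grab a single frame from the Pi camera (exposed as /dev/video0 here).
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    # Preprocess the frame into a blob and run a forward pass.
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    # Collect the names of objects detected with reasonable confidence.
    detected = set()
    for output in outputs:
        for detection in output:
            scores = detection[5:]
            class_id = scores.argmax()
            if scores[class_id] > 0.5:
                detected.add(classes[class_id])

    if detected:
        # Convert the detected object names to speech and play the
        # result through the user's headset.
        sentence = "I see " + ", ".join(detected)
        gTTS(text=sentence, lang="en").save("speech.mp3")
        os.system("mpg321 speech.mp3")
```

On the actual device this would run in a loop so that the user receives continuous audio feedback as the scene changes; the single-frame version above is kept deliberately short to show only the data flow from image to speech.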