Open Access
Emotion Recognition System for Visually Impaired
Author(s) - E. Kodhai, A. Pooveswari, P. Sharmila, N. Ramiya
Publication year - 2020
Publication title - International Journal of Engineering and Advanced Technology
Language(s) - English
Resource type - Journals
ISSN - 2249-8958
DOI - 10.35940/ijeat.d6733.049420
Subject(s) - computer science, convolutional neural network, facial recognition system, artificial intelligence, python (programming language), emotion recognition, headphones, facial expression, sketch recognition, task (project management), face detection, process (computing), histogram, human-computer interaction, computer vision, speech recognition, pattern recognition (psychology), gesture recognition, gesture, management, electrical engineering, economics, image (mathematics), engineering, operating system
Machine learning is one of the current technologies that uses computers to perform tasks similar to humans. It is adopted in many applications such as face recognition, chatbots, and self-driving cars. This work focuses on emotion recognition, which is part of computer vision technology. Emotion recognition is mainly used in cybersecurity, online shopping, police investigations, interview processes, and so on. In this paper, an emotion recognition system is built for visually impaired people. Blind people cannot perceive the facial expressions of the person interacting with them. They can be provided with a device that recognizes the emotions of people through a camera and conveys the detected emotion via headphones. The system is built around a Raspberry Pi computer that performs the entire task, making it portable for the user. The emotion recognition model is trained using a convolutional neural network (CNN) on the FER2013 dataset, which contains more than 30,000 images. The human face is detected using the OpenCV library, and features such as Histogram of Oriented Gradients (HOG) are passed along with the input images for better accuracy. The recognized emotion is then converted to speech using the Python library pyttsx3, which makes use of the eSpeak engine.
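
A minimal sketch of the capture-classify-speak loop described above, assuming a Keras-trained CNN saved as emotion_cnn.h5 (the file name is an assumption), the seven FER2013 emotion labels in their conventional order, and a camera at index 0. The HOG feature branch mentioned in the abstract is omitted for brevity; only standard OpenCV, Keras, and pyttsx3 calls are used.

```python
# Hypothetical sketch, not the authors' implementation.
import cv2
import numpy as np
import pyttsx3
from tensorflow.keras.models import load_model

# FER2013 emotion classes in their conventional order (assumption)
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# Haar cascade face detector shipped with OpenCV
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

model = load_model("emotion_cnn.h5")   # hypothetical file name for the trained CNN
engine = pyttsx3.init()                # uses the eSpeak engine on Raspberry Pi / Linux

camera = cv2.VideoCapture(0)
while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        # FER2013 images are 48x48 grayscale, so the face crop is resized to match
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        face = face.astype("float32") / 255.0
        face = face.reshape(1, 48, 48, 1)
        probabilities = model.predict(face, verbose=0)[0]
        emotion = EMOTIONS[int(np.argmax(probabilities))]
        # Announce the recognized emotion through the headphones
        engine.say(f"The person looks {emotion}")
        engine.runAndWait()
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
camera.release()
```

In practice the paper describes concatenating HOG descriptors with the image input for better accuracy; a model trained that way would take a second input branch, which this single-input sketch does not show.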
