Open Access
Mouth Gesture Interface for VLC Media Player
Author(s) - A. Rajaprabu, A. Geetha
Publication year - 2019
Publication title - International Journal of Innovative Technology and Exploring Engineering
Language(s) - English
Resource type - Journals
ISSN - 2278-3075
DOI - 10.35940/ijitee.f1002.0486s419
Subject(s) - gesture, gesture recognition, computer science, computer vision, artificial intelligence, convolutional neural network, classifier, speech recognition
This work presents a framework of mouth gesture recognition for a Human Computer Interface (HCI). It replaces traditional input devices such as the mouse and keyboard, allowing a user to operate a computer with mouth gestures. The work is aimed at helping severely disabled and paralyzed people. The full pipeline comprises mouth detection, region extraction, gesture classification, and interface creation with computer applications. First, the face and mouth regions are detected using a Haar cascade classifier. Second, gesture recognition is performed with a deep learning approach based on a Convolutional Neural Network (CNN). The mouth gestures are classified as mouth close, mouth open, tongue left, and tongue right. Finally, an HCI is created by mapping the mouth gestures to VLC Media Player operations: play, pause, forward jump, and backward jump. The performance of the proposed method is measured and compared with existing methods, and it is found to perform better.
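
As an illustration of the pipeline the abstract describes, the sketch below combines Haar cascade face and mouth detection, a CNN classifier over the four mouth gestures, and a mapping from each recognized gesture to a VLC Media Player hotkey. This is a minimal sketch only, not the authors' implementation: the model file name, the 64x64 input size, the gesture class ordering, and the pyautogui hotkeys are assumptions rather than details taken from the paper.

import cv2
import numpy as np
import pyautogui
from tensorflow.keras.models import load_model

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
# The mouth cascade is not bundled with stock OpenCV; assumed to be a local
# copy of haarcascade_mcs_mouth.xml from the OpenCV "extra" cascade set.
mouth_cascade = cv2.CascadeClassifier("haarcascade_mcs_mouth.xml")

# Assumed: a CNN trained separately on 64x64 grayscale mouth crops, with
# output probabilities in this class order.
GESTURES = ["mouth_close", "mouth_open", "tongue_left", "tongue_right"]
model = load_model("mouth_gesture_cnn.h5")  # hypothetical model file

# Assumed mapping to default VLC hotkeys: space toggles play/pause,
# Shift+Right / Shift+Left jump forward/backward a few seconds.
ACTIONS = {
    "mouth_open":   lambda: pyautogui.press("space"),            # play
    "mouth_close":  lambda: pyautogui.press("space"),            # pause
    "tongue_right": lambda: pyautogui.hotkey("shift", "right"),  # forward jump
    "tongue_left":  lambda: pyautogui.hotkey("shift", "left"),   # backward jump
}

cap = cv2.VideoCapture(0)
last_gesture = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Restrict the mouth search to the lower half of the detected face.
        lower_face = gray[y + h // 2 : y + h, x : x + w]
        mouths = mouth_cascade.detectMultiScale(lower_face, 1.5, 11)
        if len(mouths) == 0:
            continue
        mx, my, mw, mh = mouths[0]
        crop = cv2.resize(lower_face[my : my + mh, mx : mx + mw], (64, 64))
        probs = model.predict(crop.reshape(1, 64, 64, 1) / 255.0, verbose=0)
        gesture = GESTURES[int(np.argmax(probs))]
        # Fire the VLC hotkey only when the recognized gesture changes,
        # so a held gesture does not retrigger the action every frame.
        if gesture != last_gesture:
            ACTIONS[gesture]()
            last_gesture = gesture
    cv2.imshow("mouth gesture", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()

In practice the classifier output would also need temporal smoothing (for example, requiring the same prediction over several consecutive frames) so that a brief misclassification does not trigger an unintended player action; the last_gesture check above is only the simplest form of such debouncing.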
