Open Access
Improving User Identification Accuracy in Facial and Voice Based Mood Analytics using Fused Feature Extraction
Author(s) - Dolly Reney, Neeta Tripathi
Publication year - 2019
Publication title - International Journal of Innovative Technology and Exploring Engineering
Language(s) - English
Resource type - Journals
ISSN - 2278-3075
DOI - 10.35940/ijitee.b1118.1292s319
Subject(s) - computer science, mel frequency cepstrum, biometrics, feature extraction, classifier (uml), speech recognition, analytics, identification (biology), artificial intelligence, field (mathematics), usability, pattern recognition (psychology), machine learning, human–computer interaction, data mining, botany, biology, mathematics, pure mathematics
User identification involves a number of complex procedures, including image processing, voice processing, biometric data processing, and other user-specific parameters. It can be applied in many fields, including but not limited to smartphone authentication, bank transactions, location-based identity access, and various other areas. In this work, we present a novel approach for uniquely identifying users based on their facial and voice data. Our approach uses an intelligent and adaptive combination of facial geometry and mel-frequency analysis (via Mel Frequency Cepstral Coefficients, or MFCCs) of user voice data to generate a mood-based personality profile that is almost unique to each user. The combination of these features is fed to a machine-learning classifier, which achieves more than 90% accuracy with a false-positive rate of less than 7%. We have also compared the proposed approach with several other standard implementations and observed that ours produces better results than most of them under real-time conditions.
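At a high level, the pipeline the abstract describes is: extract an MFCC descriptor from the voice sample, extract a geometric descriptor from the face, concatenate the two into one fused vector, and hand that vector to a classifier. The sketch below is a minimal illustration of that feature-level fusion, not the authors' implementation: the paper names no libraries, so librosa, NumPy, and scikit-learn are assumptions here, and extract_face_geometry (operating on precomputed (x, y) facial landmarks) is a hypothetical stand-in for whatever facial-geometry step the authors used.

```python
# Minimal sketch of feature-level fusion of voice (MFCC) and face
# (landmark geometry) descriptors for user identification.
# Assumptions: librosa / scikit-learn are not named in the paper, and
# extract_face_geometry is a hypothetical stand-in for the authors'
# facial-geometry feature extractor.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier


def extract_mfcc_features(audio_path, n_mfcc=13):
    """Summarise an utterance as the mean MFCC vector over all frames."""
    y, sr = librosa.load(audio_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return mfcc.mean(axis=1)                                # (n_mfcc,)


def extract_face_geometry(landmarks):
    """Turn (x, y) facial landmarks into scale-normalised pairwise distances."""
    pts = np.asarray(landmarks, dtype=float)                # (n_points, 2)
    diffs = pts[:, None, :] - pts[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)                  # (n, n) distance matrix
    iu = np.triu_indices(len(pts), k=1)                     # unique pairs only
    vec = dists[iu]
    return vec / (vec.max() + 1e-9)                         # remove scale dependence


def fuse_features(audio_path, landmarks):
    """Feature-level fusion: concatenate voice and face descriptors."""
    return np.concatenate([extract_mfcc_features(audio_path),
                           extract_face_geometry(landmarks)])


def train_identifier(samples, labels):
    """Fit a classifier on fused vectors.

    samples: list of (audio_path, landmarks) pairs; labels: user IDs.
    Data loading is elided; any off-the-shelf classifier could stand in
    for the random forest used here.
    """
    X = np.stack([fuse_features(a, lm) for a, lm in samples])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf
```

Concatenation is the simplest fusion strategy; the "intelligent and adaptive combination" in the abstract suggests the authors weight or select features rather than concatenating them raw, but the abstract does not specify the scheme.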
