Open Access
Mood Perception Model for Social Robot Based on Facial and Bodily Expression Using a Hidden Markov Model
Author(s) -
Jiraphan Inthiam,
Abbe Mowshowitz,
Eiji Hayashi
Publication year - 2019
Publication title -
Journal of Robotics and Mechatronics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.257
H-Index - 19
eISSN - 1883-8049
pISSN - 0915-3942
DOI - 10.20965/jrm.2019.p0629
Subject(s) - hidden markov model, facial expression, robot, perception, human–robot interaction, mood, computer science, nonverbal communication, viterbi algorithm, flexibility (engineering), psychology, cognitive psychology, expression (computer science), speech recognition, artificial intelligence, social robot, conversation, communication, social psychology, mobile robot, robot control, mathematics, neuroscience, programming language, statistics
In the normal course of human interaction, people typically exchange more than spoken words: emotion is conveyed at the same time in the form of nonverbal messages. In this paper, we present a new perceptual model of mood detection designed to enhance a robot’s social skill. This model assumes 1) there are only two hidden states (positive or negative mood), and 2) these states can be recognized from certain facial and bodily expressions. A Viterbi algorithm has been adopted to predict the hidden state from its visible physical manifestations. We verified the model by comparing its estimates with those produced by human observers. The comparison shows that our model performs as well as human observers, so it could endow a robot with the flexibility to interact in a more human-oriented way.
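The decoding step the abstract describes can be sketched with a small two-state HMM and the Viterbi algorithm. The state names follow the abstract (positive/negative mood); the observation symbols and all probability values below are illustrative assumptions, not the authors' trained parameters:

```python
import numpy as np

states = ["positive", "negative"]            # hidden mood states (from the paper)
observations = ["smile", "frown", "slouch"]  # example expression cues (assumed)

# Assumed initial, transition, and emission probabilities
pi = np.array([0.6, 0.4])                    # P(initial state)
A = np.array([[0.7, 0.3],                    # P(next state | current state)
              [0.4, 0.6]])
B = np.array([[0.6, 0.1, 0.3],               # P(observation | state)
              [0.1, 0.5, 0.4]])

def viterbi(obs_idx):
    """Return the most likely hidden-state sequence for an observation sequence."""
    n, T = len(states), len(obs_idx)
    delta = np.zeros((T, n))                 # best path probability ending in each state
    psi = np.zeros((T, n), dtype=int)        # backpointers
    delta[0] = pi * B[:, obs_idx[0]]
    for t in range(1, T):
        for j in range(n):
            trans = delta[t - 1] * A[:, j]
            psi[t, j] = int(np.argmax(trans))
            delta[t, j] = trans[psi[t, j]] * B[j, obs_idx[t]]
    # Backtrack from the most probable final state
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return [states[s] for s in reversed(path)]

# Decode an example cue sequence: smile, smile, frown
print(viterbi([0, 0, 1]))  # → ['positive', 'positive', 'negative']
```

In the paper's setting, the emission matrix would instead be learned from labeled facial and bodily expression data, but the decoding logic is the same.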
