Automatic text‐to‐gesture rule generation for embodied conversational agents
Author(s) - Ali Ghazanfar, Lee Myungho, Hwang JaeIn
Publication year - 2020
Publication title - Computer Animation and Virtual Worlds
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.225
H-Index - 49
eISSN - 1546-427X
pISSN - 1546-4261
DOI - 10.1002/cav.1944
Subject(s) - gesture, computer science, embodied cognition, set (abstract data type), embedding, speech recognition, natural language processing, human–computer interaction, artificial intelligence, programming language
Interactions with embodied conversational agents can be enhanced using human-like co-speech gestures. Traditionally, rule-based co-speech gesture mapping has been used for this purpose, but creating such a mapping is laborious and typically requires human experts. Moreover, human-created mappings tend to be limited and are therefore prone to generating repetitive gestures. In this article, we present an approach that automates the generation of rule-based co-speech gesture mappings from a large, publicly available video dataset without the intervention of human experts. At run time, word embeddings are used for rule searching to retrieve semantically aware, meaningful, and accurate rules. The evaluation indicated that our method achieved performance comparable to a manual map created by human experts, while activating a greater variety of gestures. Moreover, synergy effects were observed in users' perception of the generated co-speech gestures when the automatic map was combined with the manual map.
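The abstract does not detail the run-time rule search, but the idea of using word embeddings to match an input word against rule trigger words can be illustrated with a minimal sketch. Everything below is hypothetical: the toy embedding vectors, the rule map, and the similarity threshold stand in for the pre-trained embeddings and the automatically generated text-to-gesture rules the paper describes.

```python
import numpy as np

# Hypothetical word embeddings (in practice, pre-trained vectors
# such as word2vec or GloVe would be used).
EMBEDDINGS = {
    "wave":     np.array([0.9, 0.1, 0.0]),
    "greet":    np.array([0.8, 0.2, 0.1]),
    "point":    np.array([0.1, 0.9, 0.0]),
    "indicate": np.array([0.2, 0.8, 0.1]),
}

# Hypothetical rule map: trigger word -> gesture clip identifier.
RULES = {"wave": "gesture_wave", "point": "gesture_point"}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_rule(word, threshold=0.7):
    """Return the gesture for `word`; if no exact rule exists, fall back
    to the semantically closest rule word found via embedding similarity."""
    if word in RULES:                       # exact rule hit
        return RULES[word]
    vec = EMBEDDINGS.get(word)
    if vec is None:                         # out-of-vocabulary word
        return None
    best_word, best_sim = None, threshold   # only accept matches above threshold
    for rule_word in RULES:
        sim = cosine(vec, EMBEDDINGS[rule_word])
        if sim > best_sim:
            best_word, best_sim = rule_word, sim
    return RULES.get(best_word)

print(find_rule("greet"))     # -> "gesture_wave"  (nearest rule word: "wave")
print(find_rule("indicate"))  # -> "gesture_point" (nearest rule word: "point")
```

This embedding-based lookup is what makes the search "semantic-aware": words that never appear as rule triggers can still activate a related gesture, which helps explain the greater gesture variety the evaluation reports.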