
A Gesture Recognition Approach Using Multimodal Neural Network
Author(s) - Xiaoyu Song, Hong Chen, Qing Wang
Publication year - 2020
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1544/1/012127
Subject(s) - gesture , computer science , gesture recognition , classifier , speech recognition , artificial intelligence , artificial neural network , pattern recognition
Gesture recognition based on the visual modality alone often suffers a reduced recognition rate in extreme environments, such as dim lighting or a background close to skin color. When human beings make judgments, they integrate information from multiple modalities, and human gestures and speech are naturally correlated. Based on this observation, we propose a multimodal gesture recognition network. We use a 3D CNN to extract visual features and a GRU to extract speech features, then fuse them at a late stage to make the final judgment. At the same time, we adopt a two-stage structure, with a shallow network as a detector and a deep network as a classifier, to reduce memory usage and energy consumption. We build a gesture dataset recorded in a dim environment, named DarkGesture, in which each person says a gesture's name while performing it. The proposed network is then compared with single-modal recognition networks on DarkGesture, and the results show that the proposed multimodal network achieves better recognition performance.
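To make the late-fusion design concrete, below is a minimal PyTorch sketch of the architecture the abstract describes: a 3D CNN branch over the video clip, a GRU branch over acoustic frames, and fusion by feature concatenation just before a linear classifier. All layer sizes, feature dimensions, and the MFCC-style speech input are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of a late-fusion multimodal gesture network.
# Layer widths, feat_dim=128, and 40-dim acoustic frames are assumptions.
import torch
import torch.nn as nn

class VisualBranch(nn.Module):
    """3D CNN over a video clip of shape (B, C, T, H, W)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # global spatiotemporal pooling
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, clip):
        x = self.conv(clip).flatten(1)        # (B, 32)
        return self.fc(x)                     # (B, feat_dim)

class SpeechBranch(nn.Module):
    """GRU over a sequence of acoustic frames, e.g. MFCCs of shape (B, T, F)."""
    def __init__(self, in_dim=40, feat_dim=128):
        super().__init__()
        self.gru = nn.GRU(in_dim, feat_dim, batch_first=True)

    def forward(self, frames):
        _, h = self.gru(frames)               # final hidden state: (1, B, feat_dim)
        return h.squeeze(0)                   # (B, feat_dim)

class LateFusionNet(nn.Module):
    """Concatenate per-modality features, then classify (late fusion)."""
    def __init__(self, num_classes, feat_dim=128):
        super().__init__()
        self.visual = VisualBranch(feat_dim)
        self.speech = SpeechBranch(feat_dim=feat_dim)
        self.head = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, clip, frames):
        fused = torch.cat([self.visual(clip), self.speech(frames)], dim=1)
        return self.head(fused)               # (B, num_classes) logits

# Example: an 8-frame 64x64 RGB clip plus 50 frames of 40-dim speech features.
net = LateFusionNet(num_classes=10)
logits = net(torch.randn(2, 3, 8, 64, 64), torch.randn(2, 50, 40))
print(logits.shape)                           # torch.Size([2, 10])
```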
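The two-stage structure can be sketched in the same vein: a shallow binary detector runs cheaply on every incoming clip, and the deep multimodal classifier is invoked only when the detector reports a gesture, which is what saves memory and energy. The detector layout and the 0.5 threshold are assumptions, and the sketch reuses the `LateFusionNet` class from above as the deep classifier.

```python
# Hedged sketch of the shallow-detector / deep-classifier cascade.
import torch
import torch.nn as nn

class ShallowDetector(nn.Module):
    """Lightweight binary 'gesture present?' detector over a clip."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(8, 1),
        )

    def forward(self, clip):
        return torch.sigmoid(self.net(clip))  # probability a gesture is present

def recognize(clip, frames, detector, classifier, threshold=0.5):
    """Run the deep classifier only when the shallow detector fires."""
    with torch.no_grad():
        if detector(clip).item() < threshold:
            return None                       # no gesture: skip the deep network
        return classifier(clip, frames).argmax(dim=1)

detector = ShallowDetector()
classifier = LateFusionNet(num_classes=10)    # reuses the sketch above
pred = recognize(torch.randn(1, 3, 8, 64, 64), torch.randn(1, 50, 40),
                 detector, classifier)
```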