
A Continuous Gesture Segmentation and Recognition Method for Human-Robot Interaction
Author(s) -
Jiangwen Fan,
Yue Yang,
Yu Wang,
Bei Wan,
Xudong Li,
Gengpai Hua
Publication year - 2022
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/2213/1/012039
Subject(s) - gesture, gesture recognition, computer science, sign language, artificial intelligence, speech recognition, computer vision, sketch recognition, segmentation, robot, pattern recognition (psychology), human-computer interaction, philosophy, linguistics
Human-robot cooperation through gesture recognition frees users from the limitations of traditional input devices such as the mouse and keyboard and allows artificial intelligence devices to be controlled more efficiently and naturally. As a new mode of human-robot interaction (HRI), gesture recognition has made notable progress, and it can be realized in several ways, including visual recognition, motion information acquisition, and EMG signals. Research on isolated gesture recognition is already quite mature, but an isolated gesture expresses only a single semantic unit; to improve interaction efficiency, continuous gesture recognition is essential. This paper studies continuous sign language sentence recognition based on an inertial sensor and a rule-matching recognition algorithm. The recognition rate for nine single HRI gestures reaches 92.7%, and HRI with combined gestures is realized.
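To make the pipeline described in the abstract concrete, the following is a minimal Python sketch of one possible continuous-gesture segmentation and rule-matching scheme over inertial (accelerometer) data. The function names, the energy-threshold segmentation, and the rule table (GESTURE_RULES, dominant-axis and duration rules) are illustrative assumptions for exposition, not the authors' actual algorithm or thresholds.

import numpy as np

# Hypothetical rule table: each single gesture is described by coarse rules on
# segment features (dominant motion axis and segment duration in samples).
# These labels and thresholds are illustrative, not the paper's actual rules.
GESTURE_RULES = {
    "wave": {"dominant_axis": 0, "min_duration": 20, "max_duration": 80},
    "push": {"dominant_axis": 1, "min_duration": 10, "max_duration": 60},
    "lift": {"dominant_axis": 2, "min_duration": 10, "max_duration": 60},
}

def segment_stream(accel, energy_threshold=0.5, min_gap=5):
    """Split a continuous accelerometer stream (N x 3 array) into motion segments.

    A sample is 'active' when its acceleration magnitude exceeds the threshold;
    active runs separated by fewer than `min_gap` rest samples are merged.
    """
    magnitude = np.linalg.norm(accel, axis=1)
    active = magnitude > energy_threshold
    segments, start, gap = [], None, 0
    for i, flag in enumerate(active):
        if flag:
            if start is None:
                start = i
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= min_gap:
                segments.append((start, i - gap + 1))
                start, gap = None, 0
    if start is not None:
        segments.append((start, len(active)))
    return segments

def classify_segment(accel_segment, rules=GESTURE_RULES):
    """Match one motion segment against the rule table; return a label or None."""
    duration = len(accel_segment)
    dominant_axis = int(np.argmax(np.abs(accel_segment).mean(axis=0)))
    for label, rule in rules.items():
        if (rule["dominant_axis"] == dominant_axis
                and rule["min_duration"] <= duration <= rule["max_duration"]):
            return label
    return None

def recognize_sentence(accel_stream):
    """Segment a continuous stream and map each segment to a gesture label,
    yielding the recognized gesture sequence (a 'sentence')."""
    labels = []
    for start, end in segment_stream(np.asarray(accel_stream)):
        label = classify_segment(np.asarray(accel_stream)[start:end])
        if label is not None:
            labels.append(label)
    return labels

In this sketch, segmentation by a rest/motion energy threshold separates the continuous stream into candidate gestures, and rule matching on simple segment features assigns each candidate a label; combining the resulting label sequence is what allows single gestures to be composed into continuous sign language sentences.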