Hand gesture recognition using multimodal data fusion and multiscale parallel convolutional neural network for human–robot interaction
Author(s) - Gao Qing, Liu Jinguo, Ju Zhaojie
Publication year - 2021
Publication title - Expert Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.365
H-Index - 38
eISSN - 1468-0394
pISSN - 0266-4720
DOI - 10.1111/exsy.12490
Subject(s) - computer science, gesture, convolutional neural network, artificial intelligence, gesture recognition, computer vision, reliability (semiconductor), upsampling, pattern recognition (psychology), speech recognition, image (mathematics), power (physics), physics, quantum mechanics
Hand gesture recognition plays an important role in human–robot interaction, and its accuracy and reliability are key to gesture-based human–robot interaction tasks. To improve both, this paper proposes a method based on multimodal data fusion and a multiscale parallel convolutional neural network (CNN). First, data fusion is performed on the sEMG signal, the RGB image, and the depth image of each hand gesture. The fused image is then downsampled to produce two images at different scales, which are respectively fed into the two subnetworks of the parallel CNN, yielding two recognition results. These two results are then combined to obtain the final hand gesture recognition result. Finally, experiments on a self-made database containing 10 common hand gestures verify the effectiveness and superiority of the proposed method. In addition, the method is applied to a seven-degree-of-freedom bionic manipulator to achieve robotic manipulation by hand gestures.
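
The abstract does not give implementation details, but a minimal sketch of the described pipeline might look as follows in PyTorch. The channel-wise concatenation used for fusion, the rendering of the sEMG signal as a 2-D feature map, the layer sizes, and the averaging of softmax scores to combine the two subnetwork outputs are all illustrative assumptions, not the paper's exact design.

# Sketch (assumed architecture): multimodal channel-level fusion of sEMG,
# RGB, and depth data, followed by a two-branch multiscale parallel CNN
# whose per-branch predictions are averaged into the final gesture label.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_GESTURES = 10  # the self-made database contains 10 common hand gestures


class ScaleBranch(nn.Module):
    """One subnetwork of the parallel CNN, operating at a single input scale."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # global pooling keeps the head scale-agnostic
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


class MultiscaleParallelCNN(nn.Module):
    def __init__(self, in_channels: int = 5, num_classes: int = NUM_GESTURES):
        super().__init__()
        self.branch_full = ScaleBranch(in_channels, num_classes)
        self.branch_half = ScaleBranch(in_channels, num_classes)

    def forward(self, fused: torch.Tensor) -> torch.Tensor:
        # Generate the second, smaller-scale input by downsampling.
        half = F.interpolate(fused, scale_factor=0.5, mode="bilinear",
                             align_corners=False)
        # Combine the two branch predictions (assumed: average of softmax scores).
        probs = (F.softmax(self.branch_full(fused), dim=1)
                 + F.softmax(self.branch_half(half), dim=1)) / 2
        return probs


# Channel-level fusion (assumed): RGB (3 ch) + depth (1 ch) + an sEMG signal
# rendered as a 2-D feature map (1 ch), giving a 5-channel fused image.
rgb = torch.rand(1, 3, 128, 128)
depth = torch.rand(1, 1, 128, 128)
semg_map = torch.rand(1, 1, 128, 128)
fused = torch.cat([rgb, depth, semg_map], dim=1)

model = MultiscaleParallelCNN()
print(model(fused).argmax(dim=1))  # predicted gesture class index

Averaging softmax scores is only one plausible way to combine the two subnetwork outputs; a learned fusion layer or weighted voting would fit the abstract's description equally well.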
