Open Access
Joint learning of convolution neural networks for RGB‐D‐based human action recognition
Author(s) -
Ren Ziliang,
Zhang Qieshi,
Qiao Piye,
Niu Maolong,
Gao Xiangyang,
Cheng Jun
Publication year - 2020
Publication title -
electronics letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.375
H-Index - 146
eISSN - 1350-911X
pISSN - 0013-5194
DOI - 10.1049/el.2020.2148
Subject(s) - rgb color model, artificial intelligence, computer science, convolutional neural network, benchmark (surveying), convolution (computer science), pattern recognition (psychology), joint (building), deep learning, artificial neural network, action recognition, modalities, modality (human–computer interaction), feature extraction, computer vision, machine learning, class (philosophy), engineering, architectural engineering, social science, geodesy, sociology, geography
RGB‐D‐based human action recognition aims to learn distinctive features from different modalities and has shown good progress in practice. However, it is difficult to improve recognition performance by directly training multiple individual convolutional networks (ConvNets) and fusing their features afterwards, because the complementary information between different modalities cannot be learned. To address this issue, this Letter proposes a single two‐stream ConvNets framework for multimodality learning that extracts features through RGB and depth streams. Specifically, the authors first represent RGB‐D sequences as motion images, which serve as the inputs of the proposed ConvNets for capturing spatial–temporal information. Then, a feature fusion and joint training strategy is adopted to learn RGB‐D complementary features simultaneously. Experimental results on the benchmark NTU RGB+D 120 dataset validate the effectiveness of the proposed framework and demonstrate that the two‐stream ConvNets outperform current state‐of‐the‐art approaches.
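
The Letter's abstract does not give implementation details beyond the description above, so the following PyTorch sketch only illustrates the general idea: a single framework with an RGB stream and a depth stream whose features are concatenated and optimised jointly under one classification loss. The class name TwoStreamFusionNet, the layer sizes, the concatenation fusion, and all hyper-parameters are illustrative assumptions, not the authors' architecture.

import torch
import torch.nn as nn

class TwoStreamFusionNet(nn.Module):
    # One ConvNet stream per modality; fused features feed a joint classifier.
    def __init__(self, num_classes=120):
        super().__init__()
        def make_stream():
            return nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.rgb_stream = make_stream()     # processes RGB motion images
        self.depth_stream = make_stream()   # processes depth motion images
        self.classifier = nn.Linear(64 * 2, num_classes)  # joint classifier over fused features

    def forward(self, rgb, depth):
        # Feature fusion: concatenate the two modality-specific feature vectors.
        fused = torch.cat([self.rgb_stream(rgb), self.depth_stream(depth)], dim=1)
        return self.classifier(fused)

# Joint training: a single loss back-propagates through both streams at once,
# so complementary RGB-D features are learned together rather than fused post hoc.
model = TwoStreamFusionNet(num_classes=120)
optimiser = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

rgb_batch = torch.randn(4, 3, 224, 224)    # dummy RGB motion images
depth_batch = torch.randn(4, 3, 224, 224)  # dummy depth motion images (assumed replicated to 3 channels)
labels = torch.randint(0, 120, (4,))

optimiser.zero_grad()
loss = criterion(model(rgb_batch, depth_batch), labels)
loss.backward()
optimiser.step()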
