Open Access
Multi‐dimensional data modelling of video image action recognition and motion capture in deep learning framework
Author(s) -
Gao Peijun,
Zhao Dan,
Chen Xuanang
Publication year - 2020
Publication title -
IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2019.0588
Subject(s) - artificial intelligence, computer science, deep learning, convolutional neural network, feature (linguistics), computer vision, RGB color model, pattern recognition (psychology), data set, histogram, optical flow, feature extraction, feature learning, image (mathematics), philosophy, linguistics
To improve the accuracy of small‐range human motion recognition in video and the computational efficiency on large‐scale data sets, a multi‐dimensional data model for motion recognition and motion capture in video images, based on a deep‐learning framework, was proposed. First, the moving foreground of the target is extracted with a Gaussian mixture model, and the human body is detected using a histogram of oriented gradients. Second, dense trajectory features and deep‐learning features are fused by combining a global encoding algorithm with a convolutional neural network; the deep‐learning feature itself is formed by fusing the deep video feature with the video's RGB colour feature. Finally, classification is performed with a deep‐learning network model. Simulation experiments on a large‐scale real data set and a small‐scale gesture data set show that the algorithm achieves high recognition accuracy in both settings. In addition, the Imperial Computer Vision & Learning Lab human behaviour data set is used for classification experiments, yielding an average classification accuracy of 85.79%. The algorithm runs at about 20 frames per second.
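The first stage of the pipeline above, Gaussian-mixture foreground extraction, can be illustrated with a minimal sketch. The sketch below simplifies the mixture to a single per-pixel Gaussian background model (the full method in the paper uses a mixture of Gaussians); the function name, learning rate, and threshold constant are illustrative choices, not taken from the paper.

```python
import numpy as np

def update_gaussian_bg(frame, mean, var, lr=0.05, k=2.5):
    """Single-Gaussian per-pixel background model (a hedged simplification
    of the Gaussian mixture model named in the abstract).

    frame      -- current grayscale frame, same shape as mean/var
    mean, var  -- per-pixel background mean and variance
    Returns (foreground_mask, updated_mean, updated_var).
    """
    diff = frame.astype(np.float64) - mean
    # a pixel is foreground if it lies more than k standard deviations
    # from the background model
    fg = (diff ** 2) > (k ** 2) * var
    # update background statistics only where the pixel matched the model
    upd = ~fg
    mean = np.where(upd, mean + lr * diff, mean)
    var = np.where(upd, var + lr * (diff ** 2 - var), var)
    var = np.maximum(var, 1.0)  # keep variance from collapsing to zero
    return fg, mean, var

# usage: feed grayscale frames sequentially, carrying (mean, var) along
h, w = 4, 4
mean = np.full((h, w), 128.0)
var = np.full((h, w), 25.0)
static_frame = np.full((h, w), 128.0)
fg, mean, var = update_gaussian_bg(static_frame, mean, var)
# a static scene yields an all-False mask; a moving object shows up as True
```

In a full implementation the mask would then be cleaned with morphological operations before the human-detection stage runs on the foreground regions.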
