Temporal sparse feature auto‐combination deep network for video action recognition
Author(s) -
Wang Qicong,
Gong Dingxi,
Qi Man,
Shen Yehu,
Lei Yunqi
Publication year - 2018
Publication title -
Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.4487
Subject(s) - computer science , constraint (computer aided design) , artificial intelligence , pattern recognition (psychology) , action recognition , feature (linguistics) , encoding (memory) , convolution (computer science) , artificial neural network , mathematics , linguistics , philosophy , geometry , class (philosophy)
Summary In order to deal with action recognition for large‐scale video data, we present a spatio‐temporal auto‐combination deep network, which extracts deep features from short video segments by making full use of the temporal contextual correlation of corresponding pixels among successive video frames. Building on conventional sparse encoding, we further consider the representative features in adjacent nodes of the hidden layers according to similarities in their activation states. A sparse auto‐combination strategy is applied to the multiple input maps in each convolution stage. An information constraint on the representative features of the hidden‐layer nodes is imposed to handle the adaptive sparse encoding of the topology. As a result, the learned features better represent the spatio‐temporal transition relationships, and the number of hidden nodes can be restricted to a certain range. We conduct a series of experiments on two public data sets. The experimental results show that our approach is more effective and robust in video action recognition compared with traditional methods.
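The abstract describes a sparse auto‐combination strategy in which each output map of a convolution stage combines only a sparse subset of the input feature maps. The following is a minimal illustrative sketch of that idea in NumPy; the top‐k weight selection rule, the function name, and the `sparsity` parameter are assumptions for illustration, not the paper's actual formulation (which learns the combination adaptively under an information constraint).

```python
import numpy as np

def sparse_auto_combination(input_maps, comb_weights, sparsity=2):
    """Sketch of sparsely combining multiple input feature maps.

    input_maps:   list of K feature maps, each a 2-D array of equal shape
    comb_weights: (n_out, K) array of combination weights
    sparsity:     how many input maps each output map may draw from
                  (a simplifying assumption; the paper adapts this)
    """
    n_in = len(input_maps)
    n_out = comb_weights.shape[0]
    outputs = []
    for j in range(n_out):
        w = comb_weights[j].copy()
        # Keep only the `sparsity` largest-magnitude weights,
        # zeroing the rest -> each output uses a sparse subset of inputs.
        keep = np.argsort(np.abs(w))[-sparsity:]
        mask = np.zeros(n_in)
        mask[keep] = 1.0
        w = w * mask
        combined = sum(w[i] * input_maps[i] for i in range(n_in))
        outputs.append(combined)
    return outputs
```

In a full network, the non‐zeroed combination weights would be learned jointly with the convolution filters, so that the sparsity pattern itself adapts during training rather than being fixed by a top‐k rule as in this sketch.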
