Open Access
Human Action Recognition in Videos using a Robust CNN LSTM Approach
Author(s) -
Carlos Ismael Orozco,
Eduardo Xamena,
María Elena Buemi,
Julio Jacobo Berllés
Publication year - 2020
Publication title -
Ciencia y Tecnología / Revista de Ciencia y Tecnología
Language(s) - English
Resource type - Journals
eISSN - 2344-9217
pISSN - 1850-0870
DOI - 10.18682/cyt.vi0.3288
Subject(s) - computer science , convolutional neural network , artificial intelligence , action recognition , pattern recognition (psychology) , machine learning , search engine indexing
Action recognition in videos is currently a topic of interest in computer vision, due to potential applications such as multimedia indexing and surveillance in public spaces, among others. In this paper we (1) implement a CNN–LSTM architecture: first, a pre-trained VGG16 convolutional neural network extracts features from the input video; then, an LSTM classifies the video into a particular class. (2) We study how the number of LSTM units affects the performance of the system. To carry out the training and test phases, we use the KTH, UCF-11 and HMDB-51 datasets. (3) We evaluate the performance of our system using accuracy as the evaluation metric, obtaining 93%, 91% and 47% accuracy on these datasets, respectively.
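
A minimal sketch of the CNN–LSTM pipeline described above, written in Keras. The framework choice, frame count, image size, number of LSTM units and training settings are assumptions for illustration only; the paper does not publish this code, and the actual hyperparameters may differ.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_FRAMES = 16            # assumed number of frames sampled per video
FRAME_SHAPE = (224, 224, 3)
NUM_CLASSES = 6            # e.g. KTH has 6 action classes
LSTM_UNITS = 256           # the quantity varied in point (2)

# Pre-trained VGG16 used as a frozen per-frame feature extractor.
vgg = VGG16(weights="imagenet", include_top=False,
            pooling="avg", input_shape=FRAME_SHAPE)
vgg.trainable = False

model = models.Sequential([
    # Apply VGG16 to each frame of the clip independently.
    layers.TimeDistributed(vgg, input_shape=(NUM_FRAMES,) + FRAME_SHAPE),
    # The LSTM aggregates the sequence of per-frame features over time.
    layers.LSTM(LSTM_UNITS),
    # Softmax over the action classes of the target dataset.
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Given video clips shaped `(batch, NUM_FRAMES, 224, 224, 3)` with one-hot labels, `model.fit(...)` trains the classifier and `model.evaluate(...)` reports the accuracy metric used in point (3).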
