Open Access
HUMAN ACTIVITY DETECTION AND ACTION RECOGNITION IN VIDEOS USING CONVOLUTIONAL NEURAL NETWORKS
Author(s) -
Jagadeesh Basavaiah,
Chandrashekar M Patil
Publication year - 2020
Publication title -
Journal of Information and Communication Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.217
H-Index - 10
eISSN - 2180-3862
pISSN - 1675-414X
DOI - 10.32890/jict2020.19.2.1
Subject(s) - computer science, activity recognition, artificial intelligence, convolutional neural network, optical flow, pattern recognition (psychology), feature extraction, field (mathematics), feature (linguistics), computation, action recognition, artificial neural network, orientation (vector space), computer vision, machine learning, image (mathematics), class (philosophy), linguistics, philosophy, geometry, mathematics, algorithm, pure mathematics
Human activity recognition from video scenes has become a significant area of research in the field of computer vision. Action recognition is one of the most challenging problems in video analysis, with applications in human-computer interaction, anomalous activity detection, crowd monitoring and patient monitoring. Several approaches to human activity recognition using machine learning techniques have been presented. The main aim of this work is to detect and track human activity and classify actions on two publicly available video databases. A novel feature extraction approach that combines the Scale Invariant Feature Transform with optical flow computation is used, where shape, gradient and orientation features are also incorporated for robust feature formulation. Tracking of human activity in the video is implemented using a Gaussian Mixture Model. A Convolutional Neural Network-based classification approach is used for training and testing on the databases. Activity recognition performance is evaluated on two public datasets, namely the Weizmann dataset and the Kungliga Tekniska Hogskolan (KTH) dataset, with action recognition accuracies of 98.43% and 94.96%, respectively. Experimental and comparative studies have shown that the proposed approach outperformed state-of-the-art techniques.
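The Gaussian Mixture Model tracking step mentioned in the abstract rests on per-pixel background modelling. As an illustrative sketch only (not the paper's implementation), the following simplified single-Gaussian-per-pixel variant shows the core idea: pixels far from the running background statistics are flagged as foreground, and statistics are updated only where the pixel matches the background. The full MOG approach maintains several Gaussians per pixel; all parameter values here are assumptions chosen for the toy example.

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.05, k=2.5):
    """One step of a per-pixel Gaussian background model.

    Pixels more than k standard deviations from the running mean are
    flagged as foreground; background statistics follow an exponential
    moving average with learning rate alpha.
    """
    diff = np.abs(frame - mean)
    foreground = diff > k * np.sqrt(var)
    bg = ~foreground
    # Update mean/variance only where the pixel matches the background,
    # so a moving object does not get absorbed into the model.
    mean[bg] = (1 - alpha) * mean[bg] + alpha * frame[bg]
    var[bg] = (1 - alpha) * var[bg] + alpha * (frame[bg] - mean[bg]) ** 2
    return foreground, mean, var

# Toy sequence: a static background near 100 with sensor noise, plus a
# bright patch (a stand-in for a moving person) sliding across the frame.
np.random.seed(0)
h, w = 32, 32
mean = np.full((h, w), 100.0)
var = np.full((h, w), 4.0)
for t in range(10):
    frame = np.full((h, w), 100.0) + np.random.randn(h, w)
    frame[10:15, t:t + 5] = 200.0          # moving object
    fg, mean, var = update_background(frame, mean, var)

print(fg[12, 9:14])  # object pixels flagged as foreground
```

The foreground mask produced this way is what a tracker would then group into blobs and follow across frames.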

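The abstract also lists gradient and orientation features among the robust features used. A common way to build such features is a magnitude-weighted histogram of gradient orientations over an image patch (HOG-style); the sketch below is an illustrative example of that general technique, not the paper's exact descriptor.

```python
import numpy as np

def orientation_histogram(patch, bins=9):
    """HOG-style cell descriptor: histogram of unsigned gradient
    orientations over a patch, weighted by gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)               # unsigned, [0, pi)
    idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    hist = np.zeros(bins)
    np.add.at(hist, idx.ravel(), mag.ravel())             # magnitude-weighted
    s = hist.sum()
    return hist / s if s > 0 else hist                    # L1-normalise

# A vertical step edge has a purely horizontal gradient, so all the
# weight lands in the first orientation bin.
patch = np.zeros((16, 16))
patch[:, 8:] = 1.0
h = orientation_histogram(patch)
print(h.round(3))
```

Descriptors like this, concatenated over a grid of cells, give the shape/orientation part of a combined feature vector alongside SIFT keypoints and optical flow.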