Open Access
Gaze-Assisted Multi-Stream Deep Neural Network for Action Recognition
Author(s) -
Yinan Liu,
Qingbo Wu,
Liangzhi Tang,
Hengcan Shi
Publication year - 2017
Publication title -
IEEE Access
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.587
H-Index - 127
ISSN - 2169-3536
DOI - 10.1109/access.2017.2753830
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
There are two important aspects in human action recognition. The first is how to locate the area that best indicates what the subjects in a video are doing. The second is how to exploit the appearance and motion information in the video data. In this paper, we propose a gaze-assisted deep neural network, which performs the action recognition task with the help of human visual attention. Motivated by these two considerations, we first collect a large amount of human gaze data by recording the eye movements of human subjects as they watch the videos. We then train a fully convolutional network to predict human gaze. To use the human gaze efficiently, and inspired by the rank pooling concept, which encodes a video into a single image, we design a novel video representation named dynamic gaze. The proposed dynamic gaze captures both the appearance and motion information of the video, and the human gaze data better localizes the area of interest. Based on the dynamic gaze, we build a dynamic gaze stream, and we combine it with the standard two-stream architecture to form our final multi-stream architecture. We have collected over 300k human gaze maps for the J-HMDB data set, and experiments show that the proposed multi-stream architecture achieves results comparable with the state of the art in action recognition, using both the collected and the predicted human gaze data.
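The rank pooling idea mentioned in the abstract can be illustrated with a small sketch. Rank pooling summarizes a time-ordered sequence of per-frame feature vectors by learning a linear scoring function whose scores increase with frame order; the learned parameter vector itself becomes the video representation (the basis of the "dynamic" encoding described above). The sketch below is an assumption for illustration only, not the authors' implementation: it uses a least-squares surrogate for the pairwise ranking objective, and the function name `rank_pool` and the toy features are hypothetical.

```python
import numpy as np

def rank_pool(frame_features):
    """Approximate rank pooling (least-squares surrogate, for illustration).

    frame_features: array of shape (T, D), one feature vector per frame,
    in temporal order. Returns a vector u of shape (D,) such that the
    score u . V[t] tends to increase with t, where V[t] is the running
    mean of the first t frames. The vector u encodes the video's
    temporal evolution in a single representation.
    """
    T, _ = frame_features.shape
    # Time-varying mean: smooths per-frame noise before ranking.
    V = np.cumsum(frame_features, axis=0) / np.arange(1, T + 1)[:, None]
    # Fit a linear scoring function so that scores track the frame index
    # (a least-squares stand-in for the pairwise ranking constraints).
    t = np.arange(1, T + 1, dtype=float)
    u, *_ = np.linalg.lstsq(V, t, rcond=None)
    return u
```

In a full pipeline, applying this per pixel (or per feature channel) over the frames of a clip yields a single image-like map, which is the sense in which rank pooling "encodes the video into one image"; weighting the frames by gaze maps before pooling would be one plausible way to obtain a gaze-focused variant.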
