Open Access
Detecting Parts for Action Localization
Author(s) -
Nicolas Chesneau,
Grégory Rogez,
Karteek Alahari,
Cordelia Schmid
Publication year - 2017
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.31.51
Subject(s) - bounding box , computer science , artificial intelligence , convolutional neural network , computer vision , frame , action recognition , tracking , pattern recognition , human body , detector , feature extraction
In this paper, we propose a new framework for action localization that tracks people in videos and extracts full-body human tubes, i.e., spatio-temporal regions localizing actions, even in the case of occlusions or truncations. This is achieved by training a novel human part detector that scores visible parts while regressing full-body bounding boxes. The core of our method is a convolutional neural network which learns part proposals specific to certain body parts. These are then combined to detect people robustly in each frame. Our tracking algorithm connects the image detections temporally to extract full-body human tubes. We apply our new tube extraction method to the problem of human action localization on the popular JHMDB dataset and on the recent, challenging DALY dataset (Daily Action Localization in YouTube), showing state-of-the-art results.
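The abstract outlines a two-stage pipeline: per-frame full-body detections from the part-based detector, followed by temporal linking of those detections into spatio-temporal tubes. As a rough illustration of the linking step only, the Python sketch below chains detections greedily by overlap with a tube's most recent box. This is not the authors' algorithm; the function names, the score-weighted matching, and the `iou_thresh` value are assumptions made purely for illustration.

```python
# Illustrative sketch (not the paper's exact linking algorithm): greedily
# extend spatio-temporal tubes by attaching, in each frame, the detection
# that best overlaps the tube's last box. Detections are assumed to be
# (x1, y1, x2, y2, score) tuples of full-body boxes per frame.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def link_tubes(frame_detections, iou_thresh=0.3):
    """frame_detections: list over frames; each entry is a list of
    (x1, y1, x2, y2, score) full-body boxes detected in that frame.
    Returns a list of tubes, each a list of (frame_idx, box, score)."""
    tubes = []
    for t, dets in enumerate(frame_detections):
        unused = list(dets)
        # Try to extend tubes that ended at the previous frame.
        for tube in tubes:
            last_frame, last_box, _ = tube[-1]
            if last_frame != t - 1 or not unused:
                continue
            # Pick the candidate with the best overlap-weighted score.
            best = max(unused, key=lambda d: iou(last_box, d[:4]) * d[4])
            if iou(last_box, best[:4]) >= iou_thresh:
                tube.append((t, best[:4], best[4]))
                unused.remove(best)
        # Any detection left unmatched starts a new tube.
        for d in unused:
            tubes.append([(t, d[:4], d[4])])
    return tubes
```

In this simplified form, tubes that cannot be extended simply stop growing, and new tubes may start at any frame; the paper's actual tracker and scoring are described in the full text rather than the abstract.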
