Open Access
Unlabelled 3D Motion Examples Improve Cross-View Action Recognition
Author(s) -
Ankur Gupta,
Alireza Shafaei,
James J. Little,
Robert J. Woodham
Publication year - 2014
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.28.46
Subject(s) - computer science, artificial intelligence, computer vision, machine learning, pattern recognition, action recognition, feature learning
We demonstrate a novel strategy for unsupervised cross-view action recognition using multi-view feature synthesis. Rather than relying on cross-view video annotations to transfer knowledge across views, we use local features generated from motion capture data to learn the feature transformation. Motion capture data allows us to build a feature-level correspondence between two synthesized views. We learn a feature mapping scheme for each view change by making the naive assumption that all features transform independently. This assumption, along with the exact feature correspondences, dramatically simplifies learning. With this learned mapping we are able to “hallucinate” action descriptors corresponding to different viewpoints. This simple approach effectively models the transformation of bag-of-words (BoW) action descriptors under viewpoint change and outperforms the state of the art on the INRIA IXMAS dataset.
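
As a rough sketch of the idea described above (not the authors' exact pipeline), the Python snippet below learns a single per-feature mapping between two views from exact descriptor correspondences and uses it to hallucinate target-view descriptors, which are then quantized into a BoW histogram. The synthetic data, descriptor dimensionality, ridge regression, and k-means codebook are all illustrative assumptions.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical stand-in data: N corresponding local motion descriptors
# rendered from the same mocap sequences under two camera views.
N, D = 5000, 64
feats_src = rng.normal(size=(N, D))                   # descriptors, source view
true_map = rng.normal(size=(D, D)) / np.sqrt(D)       # unknown view change
feats_tgt = feats_src @ true_map + 0.05 * rng.normal(size=(N, D))

# Assume every feature transforms independently under the view change,
# so one regressor fit on the exact correspondences suffices.
mapper = Ridge(alpha=1.0).fit(feats_src, feats_tgt)

# A shared visual codebook for quantizing descriptors into BoW histograms.
codebook = KMeans(n_clusters=128, n_init=4, random_state=0).fit(feats_tgt)

def bow(features, codebook, k=128):
    """Quantize features against the codebook; return a normalized histogram."""
    words = codebook.predict(features)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

# "Hallucinate" the target-view representation of a new source-view clip.
test_src = rng.normal(size=(300, D))
hallucinated = mapper.predict(test_src)
print(bow(hallucinated, codebook)[:8])

In this toy setup the hallucinated histogram can be compared directly against target-view histograms with any standard classifier, which mirrors how a mapped descriptor would stand in for an unseen viewpoint at recognition time.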
