Viewpoint Manifolds for Action Recognition
Author(s) - Richard Souvenir, Kyle Parrigan
Publication year - 2009
Publication title - EURASIP Journal on Image and Video Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.341
H-Index - 40
eISSN - 1687-5281
pISSN - 1687-5176
DOI - 10.1155/2009/738702
Subject(s) - viewpoints , action recognition , computer vision , pattern recognition , machine learning , artificial intelligence , view invariance , motion analysis , computer science
Action recognition from video is a problem with many important applications in human motion analysis. In real-world settings, the viewpoint of the camera cannot always be fixed relative to the subject, so view-invariant action recognition methods are needed. Previous view-invariant methods use multiple cameras in both the training and testing phases of action recognition, or require storing many examples of a single action from multiple viewpoints. In this paper, we present a framework for learning a compact representation of primitive actions (e.g., walk, punch, kick, sit) that can be used with video obtained from a single camera for simultaneous action recognition and viewpoint estimation. Using our method, which models the low-dimensional structure of these actions relative to viewpoint, we show recognition rates on a publicly available dataset that were previously achieved only with multiple simultaneous views.
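The core idea of the abstract, that features of one action observed from viewpoints around the subject trace out a low-dimensional manifold, can be sketched with a toy numpy example. This is not the paper's method; the synthetic "silhouette" features, the use of plain PCA as the embedding, and the nearest-neighbor viewpoint estimate are all illustrative assumptions. Features that vary smoothly with camera angle form a closed 1-D curve in high-dimensional feature space, and a query frame can be matched to its nearest viewpoint in the learned embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

# 60 camera viewpoints sampled around a circle (angles in radians).
angles = np.linspace(0, 2 * np.pi, 60, endpoint=False)

# Hypothetical high-dimensional "silhouette" features that vary smoothly
# with viewpoint (a stand-in for real action descriptors).
D = 50
basis = rng.standard_normal((2, D))
X = np.cos(angles)[:, None] * basis[0] + np.sin(angles)[:, None] * basis[1]
X += 0.01 * rng.standard_normal(X.shape)  # small observation noise

# Learn a compact 2-D representation via PCA (SVD of centered data).
mean = X.mean(axis=0)
Xc = X - mean
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
embedding = Xc @ Vt[:2].T  # (60, 2): the viewpoint manifold, a ring

# Viewpoint estimation for a query frame: project it into the embedding
# and take the nearest stored viewpoint.
query = np.cos(angles[17]) * basis[0] + np.sin(angles[17]) * basis[1]
q2 = (query - mean) @ Vt[:2].T
est = int(np.argmin(np.linalg.norm(embedding - q2, axis=1)))
print(est)  # index of the estimated viewpoint
```

With smooth synthetic features, two principal components suffice here; for real silhouette data a nonlinear manifold-learning step would be needed in place of PCA, which is why the paper models the manifold structure explicitly.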