Recognising human interaction from videos by a discriminative model
Author(s) -
Kong Yu,
Liang Wei,
Dong Zhen,
Jia Yunde
Publication year - 2014
Publication title -
iet computer vision
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.38
H-Index - 37
eISSN - 1751-9640
pISSN - 1751-9632
DOI - 10.1049/iet-cvi.2013.0042
Subject(s) - discriminative model, artificial intelligence, computer science, AdaBoost, feature (linguistics), ambiguity, action (physics), class (philosophy), motion (physics), interdependence, pattern recognition (psychology), machine learning, human interaction, human–computer interaction, support vector machine, philosophy, linguistics, physics, quantum mechanics, political science, law, programming language
This study addresses the problem of recognising human interactions between two people. The main difficulties lie in the partial occlusion of body parts and the motion ambiguity in interactions. The authors observed that the interdependencies existing at both the action level and the body part level can greatly help disambiguate similar individual movements and facilitate human interaction recognition. Accordingly, they proposed a novel discriminative method, which models the action of each person by a large‐scale global feature and local body part features, to capture such interdependencies for recognising the interaction between two people. A variant of the multi‐class AdaBoost method is proposed to automatically discover class‐specific discriminative three‐dimensional body parts. The proposed approach is tested on the authors' newly introduced BIT‐Interaction dataset and the UT‐Interaction dataset. The results show that the proposed model is quite effective in recognising human interactions.
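The abstract does not give the details of the authors' AdaBoost variant, but the general idea it relies on — multi-class boosting over weak learners tied to individual features, where the features chosen by boosting identify the discriminative "parts" — can be sketched with a SAMME-style loop. Everything below (the synthetic data, the stump weak learner, the feature names) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

# Synthetic stand-in for per-part features: 3 interaction classes, 6 "part"
# features, of which only features 0 and 1 carry class information.
rng = np.random.default_rng(0)
n, K, d = 300, 3, 6
y = rng.integers(0, K, n)
X = rng.normal(0.0, 1.0, (n, d))
X[:, 0] += y                    # feature 0 shifts with the class label
X[:, 1] += 2.0 * (y == 2)      # feature 1 separates class 2

def weighted_majority(labels, w, K):
    # Weighted most-frequent class; returns 0 for an empty side.
    return int(np.bincount(labels, weights=w, minlength=K).argmax())

def fit_stump(X, y, w, K):
    """Best single-feature threshold stump under sample weights w."""
    best_params, best_err = None, np.inf
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= t
            cl = weighted_majority(y[left], w[left], K)
            cr = weighted_majority(y[~left], w[~left], K)
            pred = np.where(left, cl, cr)
            err = w[pred != y].sum()
            if err < best_err:
                best_params, best_err = (j, t, cl, cr), err
    return best_params, best_err

# SAMME multi-class AdaBoost: reweight samples, record which feature
# ("part") each round selects.
w = np.ones(n) / n
models, chosen = [], []
for _ in range(20):
    (j, t, cl, cr), err = fit_stump(X, y, w, K)
    err = max(err, 1e-10)
    alpha = np.log((1.0 - err) / err) + np.log(K - 1)  # SAMME term
    if alpha <= 0:
        break
    pred = np.where(X[:, j] <= t, cl, cr)
    w *= np.exp(alpha * (pred != y))
    w /= w.sum()
    models.append((alpha, (j, t, cl, cr)))
    chosen.append(j)

# Ensemble prediction by weighted vote.
scores = np.zeros((n, K))
for alpha, (j, t, cl, cr) in models:
    pred = np.where(X[:, j] <= t, cl, cr)
    scores[np.arange(n), pred] += alpha
acc = float((scores.argmax(axis=1) == y).mean())
print("features selected by boosting:", sorted(set(chosen)))
print("training accuracy:", round(acc, 3))
```

In this sketch, the features that boosting keeps selecting play the role of the class-specific discriminative body parts: the first rounds pick the informative features (0 and 1) because stumps on the noise features cannot reduce the weighted error.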
