
Back‐dropout transfer learning for action recognition
Author(s) -
Ren Huamin,
Kanhabua Nattiya,
Møgelmose Andreas,
Liu Weifeng,
Kulkarni Kaustubh,
Escalera Sergio,
Baró Xavier,
Moeslund Thomas B.
Publication year - 2018
Publication title -
IET Computer Vision
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.38
H-Index - 37
eISSN - 1751-9640
pISSN - 1751-9632
DOI - 10.1049/iet-cvi.2016.0309
Subject(s) - dropout (neural networks) , computer science , transfer of learning , artificial intelligence , machine learning , action (physics) , pattern recognition (psychology) , transfer (computing) , physics , quantum mechanics , parallel computing
Transfer learning aims to adapt a model learned on a source dataset to a target dataset. It is a beneficial approach, especially when annotating the target dataset is expensive or infeasible, and it has demonstrated powerful learning capabilities in various vision tasks. Nevertheless, how best to adapt the model learned from the source dataset to the target dataset remains an open question. One major challenge is to prevent the impact of category bias on classification performance: dataset bias exists when two images from the same category, but from different datasets, are not classified the same way. To address this problem, a transfer learning algorithm called negative back‐dropout transfer learning (NB‐TL) has been proposed, which utilizes misclassified images and applies a back‐dropout strategy to them to penalize errors. Experimental results demonstrate the effectiveness of the proposed algorithm. In particular, the authors evaluate the performance of NB‐TL on the UCF‐101 action recognition dataset, achieving an 88.9% recognition rate.
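The abstract gives only a high-level description of NB‐TL, so the following is a minimal NumPy sketch of one plausible interpretation, not the authors' exact formulation: during a fine-tuning step of a softmax classifier, misclassified ("negative") samples receive an extra penalty weight, and a random dropout mask is applied to their gradient contribution on the backward pass ("back-dropout"). The function name `nb_tl_step` and the hyper-parameters `drop_p` and `neg_weight` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def nb_tl_step(W, X, y, lr=0.1, drop_p=0.5, neg_weight=2.0):
    """One fine-tuning step of a linear softmax classifier with an
    illustrative negative back-dropout update (hypothetical sketch,
    not the published NB-TL algorithm)."""
    logits = X @ W                                  # (n_samples, n_classes)
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    preds = probs.argmax(axis=1)

    # standard softmax cross-entropy gradient w.r.t. logits: p - one_hot(y)
    grad_out = probs.copy()
    grad_out[np.arange(len(y)), y] -= 1.0

    # "negative" samples = currently misclassified ones; weight their error
    # more heavily and drop a random subset of their gradient coordinates
    # on the backward pass (inverted-dropout scaling keeps expectation equal)
    neg = preds != y
    mask = rng.random(grad_out.shape) >= drop_p
    grad_out[neg] *= neg_weight * mask[neg] / (1.0 - drop_p)

    grad_W = X.T @ grad_out / len(y)
    return W - lr * grad_W

# toy target-domain data: two well-separated Gaussian blobs, two classes
X = np.vstack([rng.normal(-1.0, 0.5, (50, 4)),
               rng.normal(1.0, 0.5, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

W = np.zeros((4, 2))          # stand-in for a transferred final layer
for _ in range(100):
    W = nb_tl_step(W, X, y)

acc = ((X @ W).argmax(axis=1) == y).mean()
```

On this toy problem the penalized, dropout-masked updates still converge to a near-perfect separator; the sketch is only meant to show where the back-dropout mask enters the gradient, not to reproduce the paper's 88.9% UCF‐101 result.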