Joint Sparse Representation and Robust Feature-Level Fusion for Multi-Cue Visual Tracking
Author(s) -
Xiangyuan Lan,
Andy J. Ma,
Pong C. Yuen,
Rama Chellappa
Publication year - 2015
Publication title -
IEEE Transactions on Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.778
H-Index - 288
eISSN - 1941-0042
pISSN - 1057-7149
DOI - 10.1109/TIP.2015.2481325
Subject(s) - Signal Processing and Analysis; Communication, Networking and Broadcast Technologies; Computing and Processing
Visual tracking using multiple features has proved to be a robust approach because different features can complement each other. Since different types of variation, such as illumination, occlusion, and pose changes, may occur in a video sequence, especially in long sequences, properly selecting and fusing the appropriate features has become one of the key problems in this approach. To address this issue, this paper proposes a new joint sparse representation model for robust feature-level fusion. The proposed method exploits the advantages of sparse representation to dynamically remove unreliable features from the fusion process during tracking. To capture the non-linear similarity of features, we extend the proposed method into a general kernelized framework, which can perform feature fusion in various kernel spaces. As a result, robust tracking performance is obtained. Both qualitative and quantitative experimental results on publicly available videos show that the proposed method outperforms both sparse representation-based and fusion-based trackers.
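To make the feature-level fusion idea concrete, below is a minimal sketch of joint sparse coding across multiple feature cues, assuming an l2,1 mixed-norm penalty solved by proximal gradient descent (ISTA) with row-wise soft thresholding. The objective, function name, and solver choice are illustrative assumptions for exposition, not the authors' exact formulation, which additionally handles unreliable-feature removal and kernelization.

import numpy as np

def joint_sparse_codes(Y, D, lam=0.1, n_iter=200, step=None):
    """Illustrative sketch (not the paper's implementation):

    Solves  min_C  sum_k 0.5 * ||y_k - D_k c_k||^2  +  lam * ||C||_{2,1}
    by proximal gradient descent. The l2,1 norm (sum of the l2 norms of
    the rows of C) couples the cues: a template (row of C) is selected
    for all cues jointly or for none, which is the joint-sparsity idea.

    Y : list of K observation vectors, Y[k] has shape (d_k,)
    D : list of K template dictionaries, D[k] has shape (d_k, n),
        all sharing the same n templates
    Returns C of shape (n, K); column k holds the codes for cue k.
    """
    K = len(Y)
    n = D[0].shape[1]
    C = np.zeros((n, K))
    if step is None:
        # 1 / Lipschitz constant of the smooth data-fit term
        # (largest spectral norm squared over the cues)
        step = 1.0 / max(np.linalg.norm(Dk, 2) ** 2 for Dk in D)
    for _ in range(n_iter):
        # gradient of the smooth term, one column per cue
        G = np.column_stack(
            [D[k].T @ (D[k] @ C[:, k] - Y[k]) for k in range(K)]
        )
        Z = C - step * G
        # proximal operator of lam * ||.||_{2,1}:
        # shrink the l2 norm of each row toward zero
        row_norms = np.linalg.norm(Z, axis=1, keepdims=True)
        shrink = np.maximum(
            0.0, 1.0 - step * lam / np.maximum(row_norms, 1e-12)
        )
        C = shrink * Z
    return C

In a tracking loop, one would typically code each candidate region's cues against the template dictionaries and keep the candidate with the smallest joint reconstruction error; rows of C that stay zero indicate templates unused by all cues, which is how the mixed norm enforces agreement across features.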