
Dual‐scale weighted structural local sparse appearance model for object tracking
Author(s) -
Zeng Xianyou,
Xu Long,
Cen Yigang,
Zhao Ruizhen,
Feng Wanli
Publication year - 2019
Publication title -
IET Computer Vision
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.38
H-Index - 37
eISSN - 1751-9640
pISSN - 1751-9632
DOI - 10.1049/iet-cvi.2018.5158
Subject(s) - robustness , active appearance model , artificial intelligence , clutter , computer science , computer vision , benchmark , video tracking , pattern recognition , generative model , sparse approximation , object tracking
Developing an effective appearance model for robust visual tracking is a great challenge due to various interfering factors, such as pose change, occlusion, and background clutter. An increasing number of visual tracking methods exploit local appearance models to deal with these challenges. In this study, the authors present a simple yet effective weighted structural local sparse appearance model, which better describes the target appearance through patch‐based generative weights. To further improve tracking robustness, they implement this appearance model on patches at two scales. The two derived appearance models are then combined into a collaborative model that exploits their complementary advantages. Extensive experiments on the tracking benchmark dataset show that the proposed method performs favourably against several state‐of‐the‐art methods.
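The abstract's core idea, a weighted structural local sparse model, can be illustrated with a minimal sketch: each candidate region is split into patches, each patch is sparsely coded over a dictionary, and the per-patch reconstruction confidences are combined with weights; the two scale-specific scores are then fused collaboratively. The function names (`omp`, `candidate_score`, `dual_scale_score`), the greedy orthogonal-matching-pursuit coder, the exponential confidence mapping, and the multiplicative fusion below are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def omp(D, y, k=3):
    """Greedy orthogonal matching pursuit: sparse code y over dictionary D
    using at most k atoms (an illustrative stand-in for the paper's coder)."""
    residual = y.copy()
    idx = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def candidate_score(patches, dictionary, weights, k=3):
    """Weighted sum of per-patch confidences; lower reconstruction
    error maps to higher confidence via exp(-err)."""
    score = 0.0
    for w, y in zip(weights, patches):
        x = omp(dictionary, y, k)
        err = np.sum((y - dictionary @ x) ** 2)
        score += w * np.exp(-err)
    return score

def dual_scale_score(patches_s1, patches_s2, D1, D2, w1, w2):
    """Fuse the two scale-specific models into one collaborative score
    (multiplicative fusion is an assumed choice here)."""
    return candidate_score(patches_s1, D1, w1) * candidate_score(patches_s2, D2, w2)
```

In a tracker, the candidate with the highest collaborative score would be selected as the tracking result for the current frame; patches that match dictionary atoms well yield scores near the weight total, while background clutter yields low scores.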