
Robust part‐based visual tracking via adaptive collaborative modelling
Author(s) -
Kong Jun,
Wang Benxuan,
Jiang Min
Publication year - 2019
Publication title -
IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2018.6027
Subject(s) - computer science , artificial intelligence , computer vision , video tracking , discriminative model , filter (signal processing) , dimensionality reduction , machine learning , pattern recognition , benchmark
Discriminative correlation filter‐based tracking algorithms have recently shown impressive performance on benchmark data sets. However, visual tracking remains a challenging task under partial occlusions, irregular deformations and similar conditions. In this study, the authors address these issues by introducing an adaptive collaborative model into part‐based tracking. First, instead of a simple linear superposition, the proposed collaborative strategy combines the template model and the colour‐based model adaptively, drawing on the strengths of both to improve accuracy. Second, the authors use a voting strategy over reliable parts to determine the final object position, and motion information is incorporated when assessing part reliability so that the tracker remains robust in a variety of situations. Third, they employ a discriminative multi‐scale estimation method to handle scale variations. Finally, they introduce a dimensionality reduction method to limit the computational complexity of the tracker. Extensive experiments demonstrate that the tracker outperforms several advanced algorithms on both the Online Tracking Benchmark (OTB) 2013 and OTB2015 data sets while maintaining high frame rates.
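The adaptive fusion of a template (correlation-filter) response with a colour-based response can be illustrated with a minimal sketch. This is not the authors' exact formulation: here the per-frame weight is derived from the peak-to-sidelobe ratio (PSR), a standard confidence measure for correlation response maps; the function names (`psr`, `adaptive_fuse`) and the 11×11 sidelobe exclusion window are assumptions for illustration only.

```python
import numpy as np

def psr(response):
    """Peak-to-sidelobe ratio: a common confidence measure for a response map.

    Excludes an 11x11 window around the peak (an illustrative choice),
    then compares the peak to the mean/std of the remaining sidelobe.
    """
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, py - 5):py + 6, max(0, px - 5):px + 6] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-8)

def adaptive_fuse(template_resp, colour_resp):
    """Combine two response maps with weights proportional to their PSR
    confidences, rather than a fixed linear superposition."""
    c_t, c_c = psr(template_resp), psr(colour_resp)
    w = c_t / (c_t + c_c)  # more confident map receives more weight
    return w * template_resp + (1.0 - w) * colour_resp
```

When the template response has a sharp, reliable peak (e.g. no occlusion) its PSR dominates and the fused map follows it; when the template degrades, the colour model's relative weight grows automatically, which is the intended behaviour of an adaptive collaborative scheme.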