Robust Visual Tracking Based on Convolutional Sparse Coding
Author(s) -
Yun Liang,
Dong Wang,
Yijin Chen,
Lei Xiao,
Caixing Liu
Publication year - 2021
Publication title -
Wireless Communications and Mobile Computing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.42
H-Index - 64
eISSN - 1530-8677
pISSN - 1530-8669
DOI - 10.1155/2021/5531222
Subject(s) - computer science , artificial intelligence , computer vision , active appearance model , kernel (algebra) , neural coding , coding (social sciences) , tracking (education) , eye tracking , rectangle , pattern recognition (psychology) , benchmark (surveying) , image (mathematics) , mathematics , psychology , pedagogy , statistics , geometry , geodesy , combinatorics , geography
This paper proposes a new visual tracking method that constructs a robust appearance model of the target with convolutional sparse coding. First, the method uses convolutional sparse coding to decompose the region of interest around the target into a smooth image and four detail images with different fitting degrees. Second, an initial target region is computed by tracking the smooth image with kernel correlation filtering, and an appearance model describing the details of the target is defined from this initial region together with the combination of the four detail images. Third, a matching method based on overlap rate and Euclidean distance evaluates candidates against the appearance model to compute a tracking result from the detail images. Finally, the two tracking results, obtained separately from the smooth image and from the detail images, are combined to produce the final target rectangle. Extensive experiments on videos from Tracking Benchmark 2015 demonstrate that the proposed method outperforms most existing visual tracking methods.
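To make the candidate-matching and fusion steps of the abstract more concrete, the Python sketch below illustrates one plausible formulation: candidates are scored against the appearance-model rectangle by overlap rate (intersection over union) and centre Euclidean distance, and the smooth-image and detail-image rectangles are then combined. The weighting parameter alpha, the diagonal normalisation of the distance, and the fusion weight w_smooth are illustrative assumptions, not the paper's exact formulas.

import numpy as np

def overlap_rate(box_a, box_b):
    # Intersection-over-union of two boxes given as (x, y, w, h).
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def center_distance(box_a, box_b):
    # Euclidean distance between the centres of two boxes.
    acx, acy = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bcx, bcy = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    return np.hypot(acx - bcx, acy - bcy)

def match_candidates(candidates, model_box, alpha=0.5):
    # Score each candidate against the appearance-model box: high overlap
    # and small centre distance are rewarded. The distance is normalised
    # by the model box diagonal (an assumption made for this sketch).
    diag = np.hypot(model_box[2], model_box[3])
    scores = [alpha * overlap_rate(c, model_box)
              + (1 - alpha) * (1 - min(center_distance(c, model_box) / diag, 1.0))
              for c in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]

def fuse(smooth_box, detail_box, w_smooth=0.5):
    # Combine the kernel-correlation-filter (smooth image) rectangle and the
    # detail-image rectangle by a weighted average of their coordinates.
    return tuple(w_smooth * s + (1 - w_smooth) * d
                 for s, d in zip(smooth_box, detail_box))

For example, with candidates sampled around the previous target position, match_candidates picks the rectangle that best agrees with the detail-based appearance model, and fuse merges it with the smooth-image tracking result to yield the final target rectangle.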