Open Access
Robust video tracking algorithm: a multi‐feature fusion approach
Author(s) -
Wang Howard,
Nguang Sing Kiong,
Wen Jiwei
Publication year - 2018
Publication title -
IET Computer Vision
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.38
H-Index - 37
eISSN - 1751-9640
pISSN - 1751-9632
DOI - 10.1049/iet-cvi.2017.0404
Subject(s) - computer science , artificial intelligence , computer vision , pixel , robustness , mean shift , kernel , algorithm , pattern recognition , video tracking , video processing , mathematics
This study proposes a novel robust video tracking algorithm consisting of target detection, multi‐feature fusion, and an extended Camshift. Firstly, a novel target detection method that integrates the Canny edge operator, three‐frame difference, and improved Gaussian mixture model (IGMM)‐based background modelling is provided to detect targets. The IGMM‐based background modelling divides video frames into meshes to avoid pixel‐wise processing. In addition, the output of the target detection is used to initialise the IGMM and to accelerate the convergence of its iterations. Secondly, low‐dimensional regional covariance matrices are introduced to describe video targets by fusing multiple features such as pixel location, colour index, rotation‐ and scale‐invariant features, uniform local binary patterns, and directional derivatives. Thirdly, an extended Camshift based on adaptive kernel bandwidth and robust H ∞ state estimation is proposed to predict the states of fast‐moving targets and to reduce the number of mean shift iterations. Finally, the effectiveness of the proposed tracking algorithm is demonstrated via experiments.
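To illustrate the multi‐feature fusion step, the sketch below builds a region covariance descriptor for an image patch. It is a minimal, hedged example using only a reduced feature set (pixel coordinates, intensity, and absolute directional derivatives); the paper's full feature vector (colour index, rotation‐ and scale‐invariant features, uniform local binary patterns) is not reproduced here, and the function name and choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def region_covariance(patch):
    """Compute a region covariance descriptor for a grayscale patch.

    Per-pixel features (x, y, intensity, |Ix|, |Iy|) are stacked and
    summarised by their covariance matrix, yielding a low-dimensional
    (5x5) descriptor regardless of patch size. This is a simplified
    stand-in for the paper's richer feature set.
    """
    patch = np.asarray(patch, dtype=float)
    h, w = patch.shape
    # Pixel coordinate grids (location features).
    ys, xs = np.mgrid[0:h, 0:w]
    # First-order directional derivatives via finite differences.
    iy, ix = np.gradient(patch)
    # Feature matrix: one row per feature, one column per pixel.
    feats = np.stack([
        xs.ravel().astype(float),
        ys.ravel().astype(float),
        patch.ravel(),
        np.abs(ix).ravel(),
        np.abs(iy).ravel(),
    ], axis=0)
    # Covariance over pixels fuses the features into one matrix.
    return np.cov(feats)
```

Because covariance matrices of equal size can be compared (e.g. with a log-Euclidean or affine-invariant metric), two patches of different dimensions still yield comparable 5x5 descriptors, which is what makes this representation convenient for matching a tracked target across frames.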
