
Actor-Critic Tracking with Precise Scale Estimation and Advantage Function
Author(s) - Chuyao Wang, Yuchen Ling
Publication year - 2021
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1827/1/012064
Subject(s) - computer science , artificial intelligence , computer vision , reinforcement learning , object tracking , scale estimation , regression
In this work, a deep reinforcement learning (DRL) method is proposed to address the problem of real-time object tracking. The framework adopted in this paper is based on the ‘Actor-Critic’ tracker (ACT); because ACT only considers scale change instead of regressing the object boundary, it cannot adapt to variations in the object's size and shape. To this end, the ACT method is improved with a more reasonable action space that contains the top-left and bottom-right corner coordinates, so that precise shape estimation is obtained by regressing the variations of width and height separately. Furthermore, to speed up training and tracking, the Advantage Function (AF) is adopted, and performance is compared among ACT, ACT with the improved action space (IAS), and ACT with both IAS and AF. The method is evaluated on the OTB100 dataset to validate its effectiveness.
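To make the two ideas in the abstract concrete, the sketch below illustrates (a) a corner-coordinate action space, where the top-left and bottom-right corners are shifted independently so width and height can change separately, and (b) the one-step (TD-error) form of the advantage function commonly used in actor-critic training. This is a minimal illustration, not the paper's implementation; all names (apply_corner_action, one_step_advantage, gamma, the toy values) are assumptions introduced here.

```python
# Illustrative sketch only (not the authors' code): corner-based box update
# and a one-step advantage estimate, as commonly used in actor-critic methods.

def apply_corner_action(box, action):
    """Shift a box given as (x1, y1, x2, y2) by per-corner offsets.

    `action` holds (dx1, dy1, dx2, dy2), so width and height can change
    independently, unlike a single scale factor.
    """
    x1, y1, x2, y2 = box
    dx1, dy1, dx2, dy2 = action
    return (x1 + dx1, y1 + dy1, x2 + dx2, y2 + dy2)


def one_step_advantage(reward, value_s, value_s_next, gamma=0.99, done=False):
    """A(s, a) ≈ r + γ·V(s') − V(s): the TD-error form of the advantage."""
    target = reward + (0.0 if done else gamma * value_s_next)
    return target - value_s


# Toy usage: widen the box by moving the corners apart, then score the
# transition with hypothetical critic values.
box = (50.0, 40.0, 150.0, 120.0)
new_box = apply_corner_action(box, (-2.0, -1.0, 3.0, 2.0))
adv = one_step_advantage(reward=0.8, value_s=0.5, value_s_next=0.6)
print(new_box, adv)
```

In an actor-critic tracker of this kind, the actor would output the corner offsets and the critic's value estimates would feed the advantage, which weights the policy-gradient update; the exact network architecture and reward design follow the paper rather than this sketch.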