
Power‐efficient real‐time solution for adaptive vision algorithms
Author(s) -
Hamed Tabkhi,
Majid Sabbagh,
Gunar Schirner
Publication year - 2015
Publication title -
IET Computers & Digital Techniques
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.219
H-Index - 46
eISSN - 1751-861X
pISSN - 1751-8601
DOI - 10.1049/iet-cdt.2014.0075
Subject(s) - computer science, algorithm, parallel computing, power (physics)
This study focuses on the embedded realisation of adaptive vision algorithms, and illustrates the challenges using mixture of Gaussians (MoG) background subtraction. MoG is a frequently used adaptive vision kernel, for example, in surveillance applications. It involves massive computation and communication demands, which render a software approach infeasible within a 1 W power budget. To address these challenges, the authors employ a systematic system‐level design approach: they first analyse the demands at high level, explore opportunities for bandwidth reduction, and derive a customised system‐level specification. Based on this system‐level exploration, the study then proposes a communication‐centric architecture template that simplifies implementing embedded adaptive vision algorithms. To achieve high efficiency, they propose separating streaming and algorithm‐intrinsic traffic. This allows the traffic handling to be customised based on the role of the data, and simplifies interconnecting multiple heterogeneous nodes. The authors demonstrate the benefits of traffic separation and the communication‐centric architecture template using MoG. They realise MoG on the Zynq‐7000 SoC, processing a 1080p 30 Hz stream in real time. The MoG processing kernel consists of 77 pipeline stages operating at 148.5 MHz. The authors' solution is more than 600× faster than a 666 MHz ARM Cortex‐A9 and consumes only 151 mW of on‐chip power while operating in real time.
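The paper does not reproduce its kernel source, so the following C sketch only illustrates what a per‐pixel MoG background‐subtraction update involves, in the simplified Stauffer–Grimson style. The constants K, ALPHA, T_BG and MATCH_K, the PixelModel layout and the mog_update helper are illustrative assumptions, not the authors' 77‐stage hardware pipeline.

/*
 * Minimal software sketch of per-pixel mixture-of-Gaussians (MoG)
 * background subtraction. All parameter values below are assumed
 * for illustration and are not taken from the paper.
 */
#include <stdint.h>
#include <stdbool.h>

#define K       3       /* Gaussians per pixel (assumed)             */
#define ALPHA   0.01f   /* learning rate (assumed)                   */
#define T_BG    0.7f    /* weight fraction treated as background     */
#define MATCH_K 2.5f    /* match threshold in standard deviations    */

typedef struct {
    float mean[K];      /* per-component mean intensity              */
    float var[K];       /* per-component variance                    */
    float weight[K];    /* per-component mixing weight               */
} PixelModel;

/* Update the model for one grayscale pixel and return true if the
 * pixel is classified as foreground. */
bool mog_update(PixelModel *m, uint8_t pixel)
{
    float x = (float)pixel;
    int matched = -1;

    /* 1. Find the first Gaussian that matches the new sample. */
    for (int k = 0; k < K; k++) {
        float d = x - m->mean[k];
        if (d * d < MATCH_K * MATCH_K * m->var[k]) {
            matched = k;
            break;
        }
    }

    /* 2. Update weights; the matched component also updates its
     *    mean and variance with a simplified gain of ALPHA.        */
    for (int k = 0; k < K; k++) {
        int hit = (k == matched);
        m->weight[k] = (1.0f - ALPHA) * m->weight[k] + ALPHA * hit;
        if (hit) {
            float d = x - m->mean[k];
            m->mean[k] += ALPHA * d;
            m->var[k]  += ALPHA * (d * d - m->var[k]);
        }
    }

    /* 3. No match: replace the lowest-weight Gaussian with the sample. */
    if (matched < 0) {
        int lo = 0;
        for (int k = 1; k < K; k++)
            if (m->weight[k] < m->weight[lo]) lo = k;
        m->mean[lo]   = x;
        m->var[lo]    = 30.0f;   /* assumed initial variance */
        m->weight[lo] = ALPHA;
        matched = lo;
    }

    /* 4. Normalise the weights so they sum to one. */
    float sum = 0.0f;
    for (int k = 0; k < K; k++) sum += m->weight[k];
    for (int k = 0; k < K; k++) m->weight[k] /= sum;

    /* 5. Background test: is the matched component inside the set of
     *    components covering T_BG of the total weight? (Sorting by
     *    weight/sigma is omitted for brevity.)                       */
    float acc = 0.0f;
    for (int k = 0; k < K; k++) {
        acc += m->weight[k];
        if (k == matched) return false;     /* background */
        if (acc > T_BG)   break;
    }
    return true;                            /* foreground */
}

The sketch also makes the paper's bandwidth argument concrete: besides the raw pixel stream, every pixel's model state (means, variances, weights) must be read and written back each frame, and at 1080p 30 Hz this algorithm‐intrinsic traffic is what the authors propose to handle separately from the streaming traffic.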