
Deep learning‐based vehicle detection with synthetic image data
Author(s) - Wang Ye, Deng Weiwen, Liu Zhenyi, Wang Jinsong
Publication year - 2019
Publication title - IET Intelligent Transport Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.579
H-Index - 45
eISSN - 1751-9578
pISSN - 1751-956X
DOI - 10.1049/iet-its.2018.5365
Subject(s) - detector, computer science, artificial intelligence, convolutional neural network, pipeline (software), synthetic data, object detection, computer vision, deep learning, transfer of learning, annotation, domain (mathematical analysis), pattern recognition (psychology), telecommunications, mathematical analysis, mathematics, programming language
Deep convolutional neural network (CNN)-based object detectors have outperformed other kinds of object detectors in recent work, but training CNNs requires large amounts of annotated data, and acquiring and annotating real images is arduous and often introduces inaccuracy. To address this problem, the authors use synthetic images as a substitute for training a vehicle detector. Annotation of synthetic images can be performed automatically and with better uniformity, and it is also easy to obtain more variation in synthetic images. The authors present a pipeline for generating synthetic images to train a vehicle detector; in this pipeline, many factors are considered to add variation and extend the domain of the training dataset, so that a detector trained on the synthetic images can be expected to perform well. The extent to which these factors influence detection performance is illustrated. However, because of the domain gap, a vehicle detector trained with synthetic images does not perform as well as one trained with real images. The authors therefore develop a transfer learning approach that improves the performance of their vehicle detector using only a few manually annotated real images.
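The transfer-learning idea in the abstract (pretrain on abundant synthetic data, then fine-tune only part of the model on a few real samples to close the domain gap) can be sketched with a deliberately tiny toy model. This is a minimal illustrative sketch, not the authors' actual CNN detector or pipeline: the linear model, the simulated "domain gap" (a constant bias shift between synthetic and real data), and the choice to fine-tune only the bias term are all assumptions made for illustration.

```python
# Toy sketch of the transfer-learning recipe from the abstract (assumed setup):
# pretrain a model on plentiful "synthetic" data, then fine-tune a small part
# of it on a few "real" samples. The linear model y = w*x + b stands in for a
# detector; the domain gap is simulated as a +1 bias shift in the real data.

def fit(w, b, data, lr=0.01, epochs=500, train_w=True):
    """Gradient descent on mean squared error for y ~ w*x + b."""
    n = len(data)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in data:
            err = (w * x + b) - y
            gw += 2 * err * x / n
            gb += 2 * err / n
        if train_w:          # full training (pretraining phase)
            w -= lr * gw
        b -= lr * gb         # fine-tuning updates only the bias "head"
    return w, b

def mse(w, b, data):
    return sum(((w * x + b) - y) ** 2 for x, y in data) / len(data)

# Abundant synthetic data: y = 2x (no domain shift).
synthetic = [(x / 10, 2 * x / 10) for x in range(100)]
# Scarce "manually annotated" real data: same slope, constant shift of +1.
real_few = [(0.5, 2.0), (1.5, 4.0), (9.0, 19.0)]
real_test = [(x, 2 * x + 1) for x in (2.0, 4.0, 6.0)]

w, b = fit(0.0, 0.0, synthetic)                          # pretrain on synthetic
err_before = mse(w, b, real_test)
w, b = fit(w, b, real_few, train_w=False, epochs=2000)   # fine-tune bias only
err_after = mse(w, b, real_test)
assert err_after < err_before  # the few real samples close the domain gap
```

The same freeze-most, tune-little pattern is what fine-tuning a pretrained detector on a handful of annotated real images amounts to, just with far more parameters.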