
VR‐based dataset for autonomous‐driving system
Author(s) -
Yao Shouwen,
Zhang Jiahao,
Wang Yu
Publication year - 2020
Publication title -
The Journal of Engineering
Language(s) - English
Resource type - Journals
ISSN - 2051-3305
DOI - 10.1049/joe.2019.1206
Subject(s) - computer science , computer vision , artificial intelligence , virtual reality , coordinate system , autonomous driving
At present, visual recognition systems are widely employed in the autonomous-driving area. The lack of fully featured benchmarks that mimic the scenarios faced by autonomous-driving systems is the core factor limiting the visual understanding of complex urban traffic scenes. However, establishing a dataset that adequately captures the complexity of real-world urban traffic consumes considerable time and effort. To overcome these difficulties, the authors use virtual reality to develop a large-scale dataset for training and testing approaches for autonomous-driving vehicles. Using the labels of objects in the virtual scenes, the coordinate transformation from a 3D object to the 2D image plane is calculated, which makes the label of the pixel block corresponding to the object in the 2D plane accessible. Their recording platform is equipped with video camera models, a LiDAR model and a positioning system. Using a pilot-in-the-loop method with driving-simulator hardware and VR devices, the authors acquire and establish a large, diverse dataset comprising stereo video sequences recorded on streets and mountain roads in several different environments. Their pioneering use of VR technology significantly reduces the cost of acquiring training data. Crucially, their effort exceeds previous attempts in terms of dataset size, scene variability and complexity.
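The 3D-to-2D label transfer the abstract describes can be sketched as a standard pinhole-camera projection: a labelled 3D point in the virtual scene is mapped into pixel coordinates, so the pixel it lands on inherits the object's label. The intrinsic matrix `K` and the camera pose `R`, `t` below are illustrative placeholder values, not the authors' actual calibration; the helper `project_to_pixel` is a hypothetical name for this sketch.

```python
import numpy as np

def project_to_pixel(point_3d, K, R, t):
    """Project a 3D world point into 2D pixel coordinates
    with a pinhole camera model. Returns None if the point
    lies behind the camera."""
    cam = R @ point_3d + t        # world frame -> camera frame
    if cam[2] <= 0:
        return None               # behind the camera: not visible
    uvw = K @ cam                 # camera frame -> image plane
    return uvw[:2] / uvw[2]       # perspective divide -> (u, v)

# Illustrative intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                     # identity pose for simplicity
t = np.zeros(3)

# A labelled object 4 m in front of the camera, 1 m to the right:
uv = project_to_pixel(np.array([1.0, 0.0, 4.0]), K, R, t)
print(uv)  # pixel location where the object's label is assigned
```

In a full pipeline, every vertex (or every sampled surface point) of the labelled 3D object would be projected this way, and the covered pixel block in the rendered frame would receive the object's semantic label automatically, which is what makes VR-generated ground truth cheap compared with manual annotation.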