Open Access
Automatic Generation and Population of a Graphics-Based Driving Simulator
Author(s) - Michael Brogan, David Kaneswaran, Seán Commins, Charles H. Markham, Catherine Deegan
Publication year - 2014
Publication title - Transportation Research Record
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.624
H-Index - 119
eISSN - 2169-4052
pISSN - 0361-1981
DOI - 10.3141/2434-12
Subject(s) - computer science , driving simulator , fidelity , graphics , ground truth , population , computer graphics , simulation , computer vision , artificial intelligence , computer graphics (images) , real time computing , telecommunications , demography , sociology
Stereo imagery and high-accuracy positional data, acquired with a simple mobile mapping system, were used to generate and populate a low-fidelity graphics model. The positional data made it possible to automatically generate a sparse model consisting of a road, a central road marking, a green area, and a skybox. This enabled several applications, such as synchronization of the model with the video and semiautomatic population of road signs into the model data. An experiment was conducted to evaluate the model and the video as viable sources for behavioral testing of drivers. Correlations between driver speed in response to the model and to the video are presented, allowing the effect of the fidelity of the driving simulator's visual cue stream to be examined. The results compare driver speed in a real vehicle with driver speeds on the video and model roads, with correlations of 84.6% (video versus ground truth), 87.3% (model versus ground truth), and 92.8% (video versus model).
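
The reported figures are pairwise correlations between driver speed profiles recorded under each condition (real vehicle, video playback, and graphics model). The sketch below shows how such a comparison could be computed; the speed values, sampling scheme, and NumPy-based Pearson calculation are illustrative assumptions and not the authors' actual data or analysis pipeline.

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length speed traces."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])

# Hypothetical speed profiles (km/h), sampled at matching positions along the route.
# Real data would come from the instrumented vehicle (ground truth) and from the
# two simulator conditions (video playback and low-fidelity graphics model).
ground_truth = [62.0, 58.5, 47.0, 44.2, 51.3, 60.1]
video_drive  = [60.5, 57.0, 49.5, 46.0, 50.2, 58.8]
model_drive  = [61.2, 57.8, 48.1, 45.0, 50.9, 59.5]

print(f"video vs. ground truth: {pearson_r(video_drive, ground_truth):.3f}")
print(f"model vs. ground truth: {pearson_r(model_drive, ground_truth):.3f}")
print(f"video vs. model:        {pearson_r(video_drive, model_drive):.3f}")
```

Aligning the traces by position along the route rather than by elapsed time keeps the comparison meaningful when drivers cover the course at different speeds; whether the study used position-based or time-based alignment is not stated in this abstract.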
