Open Access
Synthetic Traffic Sign Image Generation Applying Generative Adversarial Networks
Author(s) -
Christine Dewi,
Rung-Ching Chen,
Yanting Liu
Publication year - 2022
Publication title -
Vietnam Journal of Computer Science
Language(s) - English
Resource type - Journals
eISSN - 2196-8888
pISSN - 2196-8896
DOI - 10.1142/s2196888822500191
Subject(s) - traffic sign recognition , computer science , artificial intelligence , convolutional neural network , pattern recognition (psychology) , pixel , consistency (knowledge bases) , similarity (geometry) , generative grammar , image (mathematics) , adversarial system , sign (mathematics) , deep learning , mathematics , traffic sign , mathematical analysis
It has recently been shown that convolutional neural networks (CNNs), given suitably annotated training data, produce the best traffic sign detection (TSD) and traffic sign recognition (TSR) results. The efficiency of the whole neural-network-based system is determined by the data collection process. However, traffic signs vary considerably between nations, so the sign datasets of most countries around the globe are difficult to recognize because of their diversity. To address this problem, we generate synthetic images to enlarge our dataset. We apply deep convolutional generative adversarial networks (DCGAN) and Wasserstein generative adversarial networks (Wasserstein GAN, WGAN) to generate realistic and diverse additional training images that compensate for the data shortage in the original image distribution. This study focuses on the consistency of DCGAN and WGAN images created with varied settings. We train on real pictures in varying quantities and at varying scales. In addition, the Structural Similarity Index (SSIM) and the Mean Square Error (MSE) were used to measure image quality. In our study, we computed the SSIM values between generated pictures and their corresponding real images. When more training images are used, the generated images show a high degree of similarity to the original images. Our experimental results reveal that the highest SSIM values are achieved when 200 total images of [Formula: see text] pixels are used as input and the number of epochs is 2000.
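The abstract evaluates generated images with MSE and SSIM against their corresponding real images. As a rough illustration of those two metrics, the sketch below implements MSE and a simplified single-window SSIM in NumPy (the standard SSIM of Wang et al. averages this statistic over local sliding windows, and the paper's exact evaluation code is not given here; function names and the 8-bit dynamic range are assumptions):

```python
import numpy as np

def mse(img_a, img_b):
    """Mean Square Error between two equally sized grayscale images."""
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    return np.mean((a - b) ** 2)

def global_ssim(img_a, img_b, dynamic_range=255.0):
    """Simplified SSIM computed over the whole image as one window.
    The full SSIM averages this statistic over local windows instead."""
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    # Stabilizing constants from the SSIM definition: C1=(0.01 L)^2, C2=(0.03 L)^2.
    c1 = (0.01 * dynamic_range) ** 2
    c2 = (0.03 * dynamic_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov_ab = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov_ab + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

Identical images give MSE 0 and SSIM 1; a noisy or poorly generated image drives MSE up and SSIM below 1, which is the sense in which higher SSIM indicates generated images closer to the real ones.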
