Open Access
Robust Automatic License Plate Recognition Using Synthetic Data and Transformer-Based Deep Learning Model
Author(s) - Abdulrahman Aal Abdulsalam, Farzan Saeedi, Mohammed Ambusaidi
Publication year - 2025
Publication title - IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3621237
Subject(s) - aerospace, bioengineering, communication, networking and broadcast technologies, components, circuits, devices and systems, computing and processing, engineered materials, dielectrics and plasmas, engineering profession, fields, waves and electromagnetics, general topics for engineers, geoscience, nuclear engineering, photonics and electrooptics, power, energy and industry applications, robotics and control systems, signal processing and analysis, transportation
Car license plates in Oman feature a unique vertical arrangement of Arabic and English letters. This distinctive format, combined with often low-resolution images, presents challenging conditions for existing Automatic License Plate Recognition (ALPR) systems, which mostly assume horizontally oriented plates with monolingual characters. The mismatch frequently leads to missed image regions or inaccurate segmentation, resulting in incorrect alphanumeric sequences. Traditional deep learning models used in ALPR systems typically need extensive hand-labeled real-world datasets to work effectively under these conditions. To overcome this, our study introduces a robust method that uses 86,000 synthetically generated images. The proposed method replicates various real-world plate configurations by applying a series of degradation steps that mimic the low-resolution and noisy characteristics of real images. We evaluated the proposed method on a real-world dataset of 783 car plate images. Our approach, which leverages synthetic data samples, achieved a reduction of more than 40% in Character Error Rate (CER), reaching performance comparable to a transformer-based Optical Character Recognition (OCR) model fine-tuned with real images. This highlights the utility of synthetic data in OCR tasks, greatly reducing the need for laborious hand-labeled real-world datasets. The code and datasets used in this work are available in the GitHub repository: https://github.com/DRAGON20-3/Plate_Reading.
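The abstract's headline result is a reduction of more than 40% in Character Error Rate (CER). As a point of reference, CER is conventionally computed as the edit (Levenshtein) distance between the predicted string and the ground-truth string, divided by the ground-truth length. A minimal sketch of that metric (not code from the paper's repository; function names here are illustrative):

```python
# Illustrative sketch of Character Error Rate (CER), the metric the
# abstract reports. CER = Levenshtein distance between prediction and
# reference, divided by the reference length.
def levenshtein(a: str, b: str) -> int:
    """Edit distance counting insertions, deletions, and substitutions."""
    prev = list(range(len(b) + 1))  # distances for the empty prefix of a
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def cer(prediction: str, reference: str) -> float:
    """Character Error Rate; guards against an empty reference."""
    return levenshtein(prediction, reference) / max(len(reference), 1)

# One substituted character out of six -> CER of 1/6.
print(cer("ABC123", "ABC128"))
```

On this definition, a model that misreads one character in a six-character plate scores a CER of about 0.167, so a "more than 40% reduction in CER" means the synthetic-data model makes well under two-thirds as many character-level mistakes as the baseline.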
