Open Access
QEST: Quantized and Efficient Scene Text Detector Using Deep Learning
Author(s) -
Kanak Manjari,
Madhushi Verma,
Gaurav Singal,
Suyel Namasudra
Publication year - 2023
Publication title -
ACM Transactions on Asian and Low-Resource Language Information Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.239
H-Index - 14
eISSN - 2375-4702
pISSN - 2375-4699
DOI - 10.1145/3526217
Subject(s) - computer science, inference, quantization (signal processing), detector, artificial intelligence, point (geometry), text detection, computer vision, machine learning, pattern recognition (psychology), image (mathematics), mathematics, telecommunications, geometry
Scene text detection is one of the most challenging computer vision tasks due to environmental factors such as varying illumination and lighting conditions, tiny and curved text, and more. Most prior work on scene text detection has overlooked the goal of jointly improving model accuracy and efficiency, resulting in heavyweight models that demand substantial processing resources. In this paper, a novel lightweight model is developed to improve both the accuracy and efficiency of scene text detection. The proposed model relies on ResNet50 and MobileNetV2 as backbones, with quantization applied to make the resulting model lightweight. During quantization, the precision is reduced from float32 to float16 and int8. In terms of inference time and Floating-Point Operations Per Second (FLOPS), the proposed method outperforms state-of-the-art techniques by roughly 30-100 times. The well-known ICDAR2015 and ICDAR2019 datasets are used for training and testing to validate the performance of the proposed model. Finally, the results and discussion indicate that the proposed model is more efficient than the existing schemes.
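
The abstract does not specify the framework or conversion tooling used for quantization, so the following is only a minimal sketch of post-training quantization from float32 to float16 and int8, written with TensorFlow Lite and a stand-in MobileNetV2 network; the model name, file paths, and the random calibration data are illustrative assumptions, not the paper's actual QEST pipeline.

import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the trained detector; the paper's model uses
# ResNet50 / MobileNetV2 backbones, which are not released on this page.
model = tf.keras.applications.MobileNetV2(weights=None, input_shape=(224, 224, 3))

# float32 -> float16 post-training quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
fp16_tflite = converter.convert()

# float32 -> int8 post-training quantization.
# int8 conversion needs a representative dataset to calibrate activations;
# random tensors keep this sketch self-contained (a real run would calibrate
# on ICDAR2015/ICDAR2019 images instead).
def representative_data_gen():
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
int8_tflite = converter.convert()

# The converted byte strings can be written to .tflite files and benchmarked
# for inference time and model size against the original float32 network.
with open("detector_fp16.tflite", "wb") as f:
    f.write(fp16_tflite)
with open("detector_int8.tflite", "wb") as f:
    f.write(int8_tflite)

In this kind of setup, the float16 variant roughly halves the model size with minimal accuracy impact, while the int8 variant trades a calibration step for the largest reductions in size and inference cost, which is consistent with the efficiency comparison described in the abstract.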
