Open Access
Model for Converting PDF to Audio Format (Listen Your Book)
Author(s) - Shailendra Singh
Publication year - 2021
Publication title - International Journal for Research in Applied Science and Engineering Technology
Language(s) - English
Resource type - Journals
ISSN - 2321-9653
DOI - 10.22214/ijraset.2021.36522
Subject(s) - optical character recognition , computer science , reading (process) , character (mathematics) , speech recognition , speech synthesis , artificial intelligence , hidden markov model , image (mathematics) , natural language processing , linguistics , philosophy , geometry , mathematics
This paper introduces an efficient technique that enables users to hear the contents of text images instead of reading them. With the rapid growth of digital technology, people have many ways to capture images, and such images may contain important textual content that a user needs to edit or store digitally. The proposed model merges Optical Character Recognition (OCR) with a Text-to-Speech (TTS) synthesizer. Recognition is performed with the Tesseract OCR engine; OCR is a branch of artificial intelligence used in applications to recognize text in scanned documents or images. The extracted text can then be converted to audio, helping visually impaired people hear the content they wish to know. Text-to-Speech conversion scans and reads the letters and numbers present in an image using OCR and converts them into voice. The aim is to study and compare the multiple methods used for TTS conversion and to identify the most efficient technique that can be adopted for the conversion process. Based on this review, the Hidden Markov Model (HMM), a statistical model, is found to be the most suitable for TTS conversion.
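The two-stage pipeline described above (OCR on the image, then speech synthesis on the recognized text) can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the packages `pytesseract` (a wrapper around the Tesseract engine the abstract names), `Pillow`, and `gTTS` (one possible TTS backend, chosen here as an assumption) must be installed separately.

```python
def clean_ocr_text(raw: str) -> str:
    """Collapse OCR line breaks and stray whitespace into one readable string."""
    return " ".join(raw.split())

def image_to_speech(image_path: str, out_path: str = "speech.mp3") -> str:
    """Recognize text in an image with Tesseract, then synthesize it as audio.

    Assumes the third-party packages pytesseract (plus the Tesseract
    binary), Pillow, and gTTS are available; none are specified in the paper.
    """
    import pytesseract           # Python wrapper for the Tesseract OCR engine
    from PIL import Image        # image loading
    from gtts import gTTS        # simple TTS backend used here for illustration

    text = clean_ocr_text(pytesseract.image_to_string(Image.open(image_path)))
    gTTS(text=text).save(out_path)   # write the spoken audio to a file
    return text
```

The cleaning step matters in practice: Tesseract returns text with hard line breaks at the image's visual line boundaries, which would otherwise produce unnatural pauses in the synthesized speech.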
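The abstract's conclusion singles out the Hidden Markov Model, whose core is scoring observation sequences against hidden states, the same machinery TTS systems use to model acoustic frames. A toy version of the forward algorithm (the numbers below are illustrative, not from the paper) shows the idea:

```python
def forward(pi, A, B, obs):
    """Forward algorithm: likelihood of an observation sequence under an HMM.

    pi: initial state probabilities; A: state-transition matrix;
    B: emission matrix (rows = states, cols = symbols);
    obs: sequence of observed symbol indices.
    """
    n = len(pi)
    # alpha[s] = probability of the prefix seen so far, ending in state s
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[s] * A[s][t] for s in range(n)) * B[t][o]
            for t in range(n)
        ]
    return sum(alpha)

# Toy two-state HMM with two observable symbols
pi = [0.6, 0.4]
A  = [[0.7, 0.3], [0.4, 0.6]]
B  = [[0.5, 0.5], [0.1, 0.9]]
print(forward(pi, A, B, [0, 1]))  # likelihood of observing symbol 0 then 1
```

In an HMM-based synthesizer the hidden states correspond to sub-phonetic units and the emissions to acoustic parameters, but the recursion is the same.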
