Open Access
Effective preprocessing based neural machine translation for English to Telugu cross-language information retrieval
Author(s) -
Bala Raju,
Manoj Raju,
K. Satyanarayana
Publication year - 2021
Publication title -
IAES International Journal of Artificial Intelligence
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.341
H-Index - 7
eISSN - 2252-8938
pISSN - 2089-4872
DOI - 10.11591/ijai.v10.i2.pp306-315
Subject(s) - computer science , machine translation , artificial intelligence , natural language processing , telugu , preprocessor , perplexity , parallel corpora , language model , example based machine translation
In cross-language information retrieval (CLIR), neural machine translation (NMT) plays a vital role. CLIR retrieves information written in a language different from the user's query language, so the main concern is translating the user query from the source language to the target language. NMT is useful for this translation and has achieved good accuracy for language pairs such as English-German. In this paper, NMT is applied to translate English into Indian languages, especially Telugu. Beyond NMT itself, an effort is also made to improve accuracy through an effective preprocessing mechanism; preprocessing contributes only a small but measurable share of the improvement. Machine translation (MT) is a data-driven approach in which a parallel corpus serves as input, and NMT requires a massive amount of parallel text to perform translation. Building an English-Telugu parallel corpus is costly because the pair is resource-poor. Different mechanisms are available for preparing the parallel corpus; the major issue in preparing it is data replication, which is handled during preprocessing. The other issue in machine translation is the out-of-vocabulary (OOV) problem. Earlier, dictionaries were used to handle OOV words; to overcome this problem here, rare words are instead segmented into sequences of subwords during preprocessing. Metrics such as accuracy, perplexity, cross-entropy and BLEU score show better translation quality for NMT with effective preprocessing.
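The two preprocessing steps the abstract mentions, removing replicated sentence pairs from the parallel corpus and splitting rare (OOV) words into subword units, can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the normalization rule and the toy subword vocabulary are assumptions made for demonstration.

```python
# Illustrative sketch of the preprocessing steps described in the abstract;
# not the paper's actual implementation. The lowercase/strip normalization
# and the toy subword vocabulary are assumptions.

def deduplicate_parallel(pairs):
    """Drop replicated (source, target) sentence pairs, keeping first occurrences."""
    seen, unique = set(), []
    for src, tgt in pairs:
        key = (src.strip().lower(), tgt.strip().lower())  # assumed normalization
        if key not in seen:
            seen.add(key)
            unique.append((src, tgt))
    return unique

def segment_oov(word, subword_vocab):
    """Greedy longest-match split of a rare word into known subword units.

    Falls back to single characters, so every word receives some
    segmentation instead of being mapped to one unknown token.
    """
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in subword_vocab or j == i + 1:
                pieces.append(word[i:j])
                i = j
                break
    return pieces

corpus = [
    ("how are you", "meeru ela unnaru"),
    ("How are you ", "meeru ela unnaru"),  # replicated pair after normalization
    ("good morning", "shubhodayam"),
]
print(deduplicate_parallel(corpus))  # 2 unique pairs remain

vocab = {"un", "break", "able"}  # toy subword vocabulary (assumption)
print(segment_oov("unbreakable", vocab))  # ['un', 'break', 'able']
```

In practice the subword vocabulary would be learned from the corpus (e.g. by byte-pair encoding) rather than hand-written, but the greedy matching above captures why segmentation sidesteps the OOV problem: unseen words decompose into units the model has seen.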
