Natural language processing‐based lexical meaning analysis: An application of in‐network caching‐oriented translation system
Author(s) -
Wang Bozhou,
Jia Chunmei
Publication year - 2021
Publication title -
Internet Technology Letters
Language(s) - English
Resource type - Journals
ISSN - 2476-1508
DOI - 10.1002/itl2.290
Subject(s) - computer science , bottleneck , scalability , overhead (engineering) , artificial neural network , artificial intelligence , meaning (existential) , transmission (telecommunications) , natural language processing , machine translation , distributed computing , embedded system , database , psychology , telecommunications , psychotherapist , operating system
The massive volume of training data limits the scaling of neural network‐based Natural Language Processing (NLP). Distributed training alleviates this to some extent, but it introduces a transmission bottleneck in the communication network. To address this bottleneck in distributed parallel training, (a) an in‐network caching‐oriented training system architecture is proposed, which uses in‐network caches to cut parameter transmissions and thereby reduce communication overhead, and (b) an improved attention model based on the variational algorithm is proposed, which further reduces the model size and strengthens the lexical meaning analysis capability. Experimental results show that the proposed system effectively improves both the scalability of the neural network and the translation quality.
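The core idea of contribution (a) can be illustrated with a minimal sketch: a cache sitting between workers and a parameter server answers repeated pulls of the same parameter version locally, so each version crosses the server-side link only once. All class and method names below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of in-network caching for distributed training.
# A switch-resident cache serves repeated parameter pulls, so each
# parameter version is transmitted from the server only once.

class ParameterServer:
    def __init__(self, params):
        self.params = dict(params)
        self.version = 0
        self.transmissions = 0  # counts server -> network sends

    def pull(self):
        """Send the current parameter snapshot into the network."""
        self.transmissions += 1
        return self.version, dict(self.params)

    def push(self, grads, lr=0.1):
        """Apply aggregated gradients and bump the version."""
        for name, g in grads.items():
            self.params[name] -= lr * g
        self.version += 1


class InNetworkCache:
    """Forwards a pull to the server only when the cached copy is
    stale; otherwise answers directly from the cache."""
    def __init__(self, server):
        self.server = server
        self.cached = None  # (version, params) or None

    def pull(self, wanted_version):
        if self.cached is None or self.cached[0] < wanted_version:
            self.cached = self.server.pull()
        return self.cached


# Four workers pull the same model version through the cache:
server = ParameterServer({"w": 1.0})
cache = InNetworkCache(server)
for _ in range(4):
    version, params = cache.pull(wanted_version=server.version)

print(server.transmissions)  # -> 1 (one send instead of four)
```

Under this toy model, the server-side traffic drops from one transmission per worker to one per parameter version, which is the mechanism the abstract credits for the reduced communication overhead.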
