Open Access
Improvements for NLP Models by Considering Language Differences
Author(s) - Ruiqi Zhang
Publication year - 2021
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1769/1/012003
Subject(s) - computer science , natural language processing , artificial intelligence , focus (optics) , natural language , language model , linguistics , physics , optics , philosophy
Aspect extraction plays a critical role in opinion mining of product reviews. Most existing work on Natural Language Processing tasks focuses on English corpora and rarely discusses the effect of language differences when the algorithms are applied to other languages. Building on a previous aspect-extraction neural model and its application to Chinese text data, this paper analyzes possible causes of the resulting performance gaps and proposes plausible approaches, highlighting the importance of handling language differences in Natural Language Processing tasks. By drawing attention to the adaptability of language models, future NLP research may produce work that is more robust and comprehensive regardless of language differences.
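As a minimal illustration of one such language difference (a hypothetical sketch, not code or data from the paper): Chinese text carries no whitespace word boundaries, so a tokenizer written for English returns the whole sentence as one token on Chinese input, and a language-specific segmenter such as jieba is needed before aspect terms can be extracted at all. The example sentence below is invented for illustration.

```python
# Minimal sketch (not from the paper): why English-style whitespace
# tokenization fails on Chinese, and one common fix (jieba segmentation).
import jieba  # pip install jieba


def whitespace_tokenize(text: str) -> list[str]:
    """English-style tokenization: split on whitespace."""
    return text.split()


english = "The battery life of this phone is great"
chinese = "这款手机的电池续航很棒"  # "This phone's battery life is great"

print(whitespace_tokenize(english))
# ['The', 'battery', 'life', ...] -- one token per word

print(whitespace_tokenize(chinese))
# ['这款手机的电池续航很棒'] -- the entire sentence as a single token,
# so an aspect term like 电池续航 (battery life) can never be isolated

print(jieba.lcut(chinese))
# e.g. ['这款', '手机', '的', '电池', '续航', '很', '棒'] -- usable tokens
```

This is the kind of preprocessing mismatch that makes a model tuned on English corpora degrade on other languages unless the pipeline is adapted per language.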
