Open Access
Tackling Graphical Natural Language Processing’s Problems with Recurrent Neural Networks
Author(s) -
Ali Sami Sosa,
Saja Majeed Mohammed,
Haider Hadi Abbas,
Israa Al_Barazanchi
Publication year - 2019
Publication title -
xi'nan jiaotong daxue xuebao
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.308
H-Index - 21
ISSN - 0258-2724
DOI - 10.35741/issn.0258-2724.54.5.35
Subject(s) - recurrent neural network, computer science, artificial neural network, artificial intelligence, time delay neural network, serialization, graph, nervous system network models, encode, types of artificial neural networks, deep learning, machine learning, theoretical computer science, programming language, gene, biochemistry, chemistry
Recent years have witnessed the success of artificial intelligence–based automated systems that use deep learning, especially recurrent neural network-based models, on many natural language processing problems, including machine translation and question answering. Moreover, recurrent neural networks and their variants have been extensively studied on several graph problems and have shown preliminary success. Despite these successes, recurrent neural network-based models continue to suffer from two major drawbacks. First, they can only consume sequential data; input graphs must therefore be linearized, which loses important structural information. In particular, graph nodes that are originally close to each other can end up far apart after linearization, making it difficult for recurrent neural networks to model their relations. Second, the serialized results are usually very long, so recurrent neural networks take a long time to encode them. In this paper, we propose a novel graph neural network, named the graph recurrent network. In our methodology, we made the resulting graphs more densely connected so that more useful facts could be inferred, and the graphical natural language processing problem could be easily decoded with the graph recurrent network. As a result, performance with single-typed edges was significantly better than the Local baseline, and the combination of all edge types achieved much better accuracy than the Local baseline using a recurrent neural network.
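The abstract does not give the network's equations, but the core idea of a graph recurrent network — updating each node's state from its neighbours' states rather than from a linearized sequence, so that nearby nodes interact directly regardless of serialization order — can be sketched minimally. The function name `grn_step`, the scalar node states, the GRU-style gating, and the fixed weights below are all illustrative assumptions, not the paper's actual architecture or learned parameters.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def grn_step(h, adj, w_z=0.5, w_h=1.0):
    """One hypothetical graph recurrent step: each node's scalar state
    is updated from the sum of its neighbours' states through a
    GRU-style update gate. Weights are illustrative constants."""
    new_h = []
    for v, nbrs in enumerate(adj):
        m = sum(h[u] for u in nbrs)        # aggregate neighbour messages
        z = sigmoid(w_z * (h[v] + m))      # update gate
        cand = math.tanh(w_h * m)          # candidate state from messages
        new_h.append((1 - z) * h[v] + z * cand)
    return new_h

# 4-node path graph 0-1-2-3, with all state initially at node 0.
adj = [[1], [0, 2], [1, 3], [2]]
h = [1.0, 0.0, 0.0, 0.0]
for _ in range(3):
    h = grn_step(h, adj)
# After 3 steps, node 3's state is influenced by node 0: information
# propagates one hop per step along graph edges, with no dependence on
# how the graph would have been serialized for a sequential RNN.
```

This also illustrates the linearization problem the abstract describes: in a sequence model, nodes 0 and 3 would be separated by however many tokens the serialization places between them, whereas here their interaction distance is fixed by the graph itself.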
