Open Access
Improving the Robustness of GraphSAINT via Stability Training
Author(s) - Yuying Wang, Huixuan Chi, Qinfen Hao
Publication year - 2021
Publication title - paradigmplus
Language(s) - English
Resource type - Journals
ISSN - 2711-4627
DOI - 10.55969/paradigmplus.v2n3a1
Subject(s) - computer science , robustness , machine learning , graph , artificial intelligence , scalability , training set , normalization , data mining , theoretical computer science , database
Graph Neural Networks (GNNs) have developed rapidly in recent years owing to their strong capability to represent data in non-Euclidean spaces, such as graphs. However, as dataset sizes continue to grow, sampling is commonly introduced to make GNNs scalable, which in turn causes instability during training. For example, when the Graph SAmpling based INductive learning meThod (GraphSAINT) is applied to the link prediction task, training may fail to converge with a probability between 0.1 and 0.4. This paper proposes improved GraphSAINTs that introduce two normalization techniques and one GNN trick into the traditional GraphSAINT to resolve the training instability and obtain more robust training results. The improved GraphSAINTs successfully eliminate the instability during training and improve the robustness of the traditional model. In addition, the improved GraphSAINTs accelerate the convergence of the training procedure and generally achieve higher prediction accuracy than the traditional GraphSAINT. We validate the improved methods with experiments on the citation dataset of the Open Graph Benchmark (OGB).
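The abstract does not name the two normalization techniques or the GNN trick, so the sketch below is purely illustrative: it shows one plausible way to stabilize sampled-subgraph training in PyTorch Geometric, attaching batch normalization after a graph convolution. The model name, layer choices, and sizes are assumptions, not the authors' configuration.

# Illustrative sketch only: batch normalization after the first convolution
# stands in for the paper's unnamed stabilizers.
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv, BatchNorm

class StabilizedSAINTNet(torch.nn.Module):  # hypothetical name
    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden_dim)
        self.norm1 = BatchNorm(hidden_dim)  # assumed stabilizer, not the paper's
        self.conv2 = SAGEConv(hidden_dim, out_dim)

    def forward(self, x, edge_index):
        # Normalizing after the first conv damps the feature-scale drift that
        # sampled subgraphs (e.g. from GraphSAINTRandomWalkSampler) introduce.
        x = F.relu(self.norm1(self.conv1(x, edge_index)))
        return self.conv2(x, edge_index)

# Toy usage on a random graph (shapes only; not the OGB citation dataset).
x = torch.randn(100, 16)
edge_index = torch.randint(0, 100, (2, 400))
out = StabilizedSAINTNet(16, 32, 8)(x, edge_index)  # -> [100, 8] embeddings

In a full GraphSAINT pipeline, each training step would run on a subgraph drawn by a sampler such as GraphSAINTRandomWalkSampler, which is the source of the gradient noise the normalization is meant to damp.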
