Open Access
Directional Adversarial Training for Recommender Systems
Author(s) -
Yangjun Xu,
Liang Chen,
Fenfang Xie,
Weibo Hu,
Jieming Zhu,
Chuan Chen,
Zibin Zheng
Publication year - 2020
Language(s) - English
DOI - 10.3233/faia200138
Adversarial training has been shown to be an effective method for improving the generalization ability of deep learning models by applying adversarial perturbations in the input space during training. A recent study successfully applied adversarial training to recommender systems by perturbing the embeddings of users and items through a minimax game. However, this method ignores the collaborative signal in recommender systems and fails to capture the smoothness of the data distribution. We argue that the collaborative signal, which reveals the behavioural similarity between users and items, is critical to modeling recommender systems. In this work, we develop the Directional Adversarial Training (DAT) strategy, which explicitly injects the collaborative signal into the perturbation process: both users and items are perturbed towards their similar neighbours in the embedding space, with the perturbation magnitude properly restricted. To verify its effectiveness, we apply DAT to Generalized Matrix Factorization (GMF), one of the most representative collaborative filtering methods. Experimental results on three public datasets show that the resulting method, DAGMF, achieves a significant accuracy improvement over GMF while being less prone to overfitting.
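As a rough illustration of the directional perturbation described in the abstract, the sketch below nudges each embedding toward the centroid of its most similar neighbours, with the step size capped at a budget eps. This is not the authors' implementation: the function name directional_perturb, the cosine-similarity neighbour selection, and the L2 restriction are assumptions made for illustration only.

```python
import numpy as np

def directional_perturb(emb, k=5, eps=0.05):
    """Perturb each embedding toward its k most similar neighbours.

    emb : (n, d) array of user (or item) embeddings.
    Illustrative sketch: the direction points at the mean of the k
    nearest neighbours under cosine similarity, and its L2 norm is
    capped at eps (the restriction on perturbation magnitude).
    """
    # Cosine similarity between all pairs of embeddings.
    normed = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity

    # Indices of the k most similar neighbours per row.
    nbrs = np.argsort(-sim, axis=1)[:, :k]

    # Direction: from each embedding toward its neighbours' centroid.
    target = emb[nbrs].mean(axis=1)
    delta = target - emb

    # Restrict the perturbation magnitude to the eps-ball.
    norms = np.linalg.norm(delta, axis=1, keepdims=True) + 1e-12
    delta = delta * np.minimum(1.0, eps / norms)
    return emb + delta

# Toy usage: 8 users with 4-dimensional embeddings.
rng = np.random.default_rng(0)
users = rng.normal(size=(8, 4)).astype(np.float32)
perturbed = directional_perturb(users, k=3, eps=0.1)
print(np.linalg.norm(perturbed - users, axis=1))  # each norm <= 0.1
```

In the paper's setting such perturbed embeddings would be fed through the recommender (e.g. GMF) during training; the sketch only shows the neighbour-directed, magnitude-restricted perturbation itself.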
