
Understanding Negative Sampling in Knowledge Graph Embedding
Author(s) - Jing Qian, Gangmin Li, Katie Atkinson, Yong Yue
Publication year - 2021
Publication title - International Journal of Artificial Intelligence and Applications
Language(s) - English
Resource type - Journals
eISSN - 0976-2191
pISSN - 0975-900X
DOI - 10.5121/ijaia.2021.12105
Subject(s) - embedding, computer science, sampling (signal processing), categorization, graph, theoretical computer science, knowledge graph, vector space, data mining, machine learning, artificial intelligence, mathematics, computer vision, geometry, filter (signal processing)
Knowledge graph embedding (KGE) projects the entities and relations of a knowledge graph (KG) into a low-dimensional vector space, an area that has made steady progress in recent years. Conventional KGE methods, especially translational distance-based models, are trained by discriminating positive samples from negative ones. Since most KGs store only positive triples for space efficiency, negative sampling plays a crucial role in encoding the triples of a KG. The quality of the generated negative samples has a direct impact on the performance of the learnt knowledge representations in a myriad of downstream tasks, such as recommendation, link prediction, and node classification. We summarize current negative sampling approaches in KGE into three categories: static distribution-based, dynamic distribution-based, and custom cluster-based. Based on this categorization, we discuss the most prevalent existing approaches and their characteristics. We hope that this review can provide guidance for new thinking about negative sampling in KGE.
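To make the core mechanism concrete, the following is a minimal Python sketch of the simplest static distribution-based strategy: uniformly corrupting the head or tail entity of a positive triple, as in the original TransE training procedure. The function name and toy data are illustrative assumptions, not taken from the paper.

import random

def uniform_negative_sample(triple, entities, known_positives):
    # Replace the head or tail (chosen with equal probability) by an
    # entity drawn uniformly at random; resample if the corrupted
    # triple equals the original or is itself a known positive
    # (a "false negative").
    h, r, t = triple
    while True:
        if random.random() < 0.5:
            candidate = (random.choice(entities), r, t)  # corrupt head
        else:
            candidate = (h, r, random.choice(entities))  # corrupt tail
        if candidate != triple and candidate not in known_positives:
            return candidate

# Toy KG of (head, relation, tail) triples.
positives = {
    ("alice", "works_at", "acme"),
    ("bob", "works_at", "acme"),
    ("alice", "knows", "bob"),
}
entities = ["alice", "bob", "acme"]
for triple in sorted(positives):
    print(triple, "->", uniform_negative_sample(triple, entities, positives))

Because the sampling distribution here never changes during training, such negatives tend to become too easy to discriminate as the embeddings improve; dynamic distribution-based and custom cluster-based approaches address this by adapting where negatives are drawn from.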