Open Access
Commonsense Knowledge Base Completion with Structural and Semantic Context
Author(s) - Chaitanya Malaviya, Chandra Bhagavatula, Antoine Bosselut, Yejin Choi
Publication year - 2020
Publication title - Proceedings of the AAAI Conference on Artificial Intelligence
Language(s) - English
Resource type - Journals
eISSN - 2374-3468
pISSN - 2159-5399
DOI - 10.1609/aaai.v34i03.5684
Subject(s) - computer science , commonsense knowledge , graph , knowledge graph , knowledge base , theoretical computer science , artificial intelligence , natural language processing , boosting (machine learning) , ranking (information retrieval) , key (lock) , information retrieval , machine learning , computer security
Automatic KB completion for commonsense knowledge graphs (e.g., ATOMIC and ConceptNet) poses unique challenges compared to the much-studied conventional knowledge bases (e.g., Freebase). Commonsense knowledge graphs use free-form text to represent nodes, resulting in orders of magnitude more nodes compared to conventional KBs (∼18x more nodes in ATOMIC compared to Freebase (FB15K-237)). Importantly, this implies significantly sparser graph structures, a major challenge for existing KB completion methods that assume densely connected graphs over a relatively smaller set of nodes. In this paper, we present novel KB completion models that can address these challenges by exploiting the structural and semantic context of nodes. Specifically, we investigate two key ideas: (1) learning from local graph structure, using graph convolutional networks and automatic graph densification, and (2) transfer learning from pre-trained language models to knowledge graphs for enhanced contextual representation of knowledge. We describe our method to incorporate information from both these sources in a joint model and provide the first empirical results for KB completion on ATOMIC and evaluation with ranking metrics on ConceptNet. Our results demonstrate the effectiveness of language model representations in boosting link prediction performance and the advantages of learning from local graph structure (+1.5 points in MRR for ConceptNet) when training on subgraphs for computational efficiency. Further analysis of model predictions sheds light on the types of commonsense knowledge that language models capture well.
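As a rough illustration of the joint-model idea described in the abstract, the sketch below combines a one-layer graph convolutional encoder over the graph with precomputed language-model node embeddings, and scores (head, relation, tail) triples with a simplified DistMult-style decoder. This is a minimal PyTorch sketch under assumed details, not the authors' implementation: the paper's decoder, the exact fusion of BERT-derived node representations, and the graph-densification step are not reproduced here, and the LM features below are random stand-ins.

# Minimal sketch (assumptions noted above): fuse structural embeddings from a
# one-layer GCN with precomputed language-model node embeddings, then score
# (head, relation, tail) triples with a simplified DistMult-style decoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph-convolution layer: H' = relu(A_norm H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj_norm, h):
        # adj_norm: (N, N) normalized adjacency; h: (N, in_dim) node features.
        return F.relu(self.linear(torch.matmul(adj_norm, h)))

class JointKBCompletionModel(nn.Module):
    def __init__(self, num_nodes, num_relations, lm_dim, graph_dim):
        super().__init__()
        # Free node parameters fed to the GCN (structural view of each node).
        self.node_emb = nn.Embedding(num_nodes, graph_dim)
        self.gcn = GCNLayer(graph_dim, graph_dim)
        # Project concatenated [LM features ; GCN features] into a shared space.
        self.fuse = nn.Linear(lm_dim + graph_dim, graph_dim)
        self.rel_emb = nn.Embedding(num_relations, graph_dim)

    def encode(self, adj_norm, lm_features):
        struct = self.gcn(adj_norm, self.node_emb.weight)        # (N, graph_dim)
        return self.fuse(torch.cat([lm_features, struct], -1))   # (N, graph_dim)

    def score(self, node_repr, heads, rels, tails):
        # DistMult-style triple score: sum(h * r * t); higher = more plausible.
        h = node_repr[heads]
        r = self.rel_emb(rels)
        t = node_repr[tails]
        return (h * r * t).sum(-1)

if __name__ == "__main__":
    N, R, LM_DIM, G_DIM = 100, 10, 768, 200
    adj_norm = torch.eye(N)                  # placeholder adjacency (self-loops only)
    lm_features = torch.randn(N, LM_DIM)     # stand-in for BERT-derived node encodings
    model = JointKBCompletionModel(N, R, LM_DIM, G_DIM)
    reprs = model.encode(adj_norm, lm_features)
    scores = model.score(reprs,
                         torch.tensor([0, 1]),   # head node ids
                         torch.tensor([2, 3]),   # relation ids
                         torch.tensor([4, 5]))   # tail node ids
    print(scores.shape)                      # torch.Size([2])

In a standard link-prediction training setup, scores like these would be compared against scores for corrupted (negative) triples, e.g. with a binary cross-entropy loss, and evaluated with ranking metrics such as MRR, as reported in the paper.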
