MuLAN: Multi-level attention-enhanced matching network for few-shot knowledge graph completion

Neural Netw. 2024 Jun:174:106222. doi: 10.1016/j.neunet.2024.106222. Epub 2024 Feb 29.

Abstract

Recent years have witnessed increasing interest in few-shot knowledge graph completion due to its potential to augment the coverage of few-shot relations in knowledge graphs. Existing methods often use the one-hop neighbors of an entity to enhance its embedding and match the query instance against the support set only at the instance level. However, such methods cannot capture inter-neighbor interactions, perform local entity matching, or account for the varying significance of feature dimensions. To bridge this gap, we propose the Multi-Level Attention-enhanced matching Network (MuLAN) for few-shot knowledge graph completion. In MuLAN, a multi-head self-attention neighbor encoder is designed to capture inter-neighbor interactions and learn the entity embeddings. Then, entity-level attention and instance-level attention match the query instance and support set from the local and global perspectives, respectively, while feature-level attention calculates the weights of the feature dimensions. Furthermore, we design a consistency constraint to keep the support instance embeddings close to each other. Extensive experiments on two well-known datasets (i.e., NELL-One and Wiki-One) demonstrate significant advantages of MuLAN over 11 state-of-the-art competitors. Compared to the best-performing baseline, MuLAN achieves 14.5% higher MRR and 13.3% higher Hits@K on average.
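To make the neighbor-encoding idea concrete, the following is a minimal PyTorch sketch of a multi-head self-attention neighbor encoder that lets one-hop (relation, neighbor) pairs interact before being aggregated into an enhanced entity embedding. All module names, dimensions, and the mean-pooling readout are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SelfAttentionNeighborEncoder(nn.Module):
    """Hypothetical sketch: enhance an entity embedding with its one-hop
    (relation, neighbor) pairs via multi-head self-attention, so that
    neighbors can interact with each other before aggregation."""

    def __init__(self, emb_dim: int = 100, num_heads: int = 4):
        super().__init__()
        # Project concatenated (relation, neighbor) embeddings to emb_dim.
        self.proj = nn.Linear(2 * emb_dim, emb_dim)
        # Multi-head self-attention over the neighbor set (assumed component).
        self.attn = nn.MultiheadAttention(emb_dim, num_heads, batch_first=True)
        # Fuse the entity's own embedding with the aggregated neighbor summary.
        self.fuse = nn.Linear(2 * emb_dim, emb_dim)

    def forward(self, entity: torch.Tensor, rel_nbr: torch.Tensor) -> torch.Tensor:
        # entity:  (batch, emb_dim)                 -- the entity's own embedding
        # rel_nbr: (batch, n_neighbors, 2*emb_dim)  -- concatenated relation/neighbor embeddings
        h = self.proj(rel_nbr)                      # (batch, n_neighbors, emb_dim)
        h, _ = self.attn(h, h, h)                   # neighbors attend to each other
        nbr_summary = h.mean(dim=1)                 # mean-pooling readout (assumption)
        return torch.tanh(self.fuse(torch.cat([entity, nbr_summary], dim=-1)))

# Toy usage: 2 entities, each with 5 one-hop neighbors, 100-dim embeddings.
enc = SelfAttentionNeighborEncoder(emb_dim=100, num_heads=4)
entities = torch.randn(2, 100)
neighbors = torch.randn(2, 5, 200)
print(enc(entities, neighbors).shape)  # torch.Size([2, 100])
```

The resulting entity embeddings would then feed the entity-, instance-, and feature-level attention matching described in the abstract; those components are not sketched here.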

Keywords: Attention mechanism; Few-shot learning; Few-shot relation; Knowledge graph completion; Knowledge graphs.

MeSH terms

  • Knowledge*
  • Learning
  • Pattern Recognition, Automated*