Multi-perspective Improvement of Knowledge Graph Completion with Large Language Models
Knowledge graph completion (KGC) is a widely used method to tackle incompleteness in knowledge graphs (KGs) by predicting missing links. Description-based KGC leverages pre-trained language models to learn entity and relation representations from their names or descriptions, which shows promising results. However, the performance of description-based KGC is still limited by the quality of the text and the incompleteness of the graph structure: it lacks sufficient entity descriptions and relies solely on relation names, leading to sub-optimal results. To address this issue, we propose MPIKGC, a general framework that compensates for the deficiency of contextualized knowledge and improves KGC by querying large language models (LLMs) from multiple perspectives. Specifically, it leverages the reasoning, explanation, and summarization capabilities of LLMs to expand entity descriptions, understand relations, and extract structures, respectively. We conducted an extensive evaluation of the effectiveness and improvements of our framework based on four description-based KGC models and four datasets, for both link prediction and triplet classification tasks.
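The three perspectives above can be sketched as prompt builders for an LLM. This is a minimal illustrative sketch, not the paper's actual prompts: the prompt wording, function names, and the example entity/relation are all assumptions for exposition.

```python
# Hypothetical sketch of MPIKGC-style multi-perspective prompting.
# Prompt wording and function names are illustrative assumptions,
# not the framework's exact implementation.

def expand_description(entity: str) -> str:
    """Reasoning perspective: query the LLM to enrich a sparse
    entity description with additional background knowledge."""
    return (f"Tell me about {entity}. Provide background facts "
            f"that would help predict its missing links.")

def explain_relation(relation: str) -> str:
    """Explanation perspective: query the LLM to interpret a bare
    relation name beyond its surface form."""
    return (f"Explain the relation '{relation}' in a knowledge "
            f"graph, including paraphrases and its inverse.")

def summarize_entity(entity: str, description: str) -> str:
    """Summarization perspective: condense a long description into
    keywords that can connect structurally related entities."""
    return (f"Summarize the description of {entity} into a few "
            f"keywords: {description}")

# Example: building the three prompts for one entity and relation
# (entity, relation, and description here are made up).
prompts = [
    expand_description("Marie Curie"),
    explain_relation("award_received"),
    summarize_entity("Marie Curie", "Physicist and chemist who "
                     "conducted pioneering research on radioactivity."),
]
```

In the framework's setting, each prompt's LLM response would be appended to the textual input of a description-based KGC model; the sketch stops at prompt construction since the downstream models and exact querying pipeline vary.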