Differentiable Rule Extraction with Large Language Model for Knowledge Graph Reasoning    Cited by: 4

Authors: PAN Yudai; ZHANG Lingling[1,2]; CAI Zhongmin; ZHAO Tianzhe[1,2]; WEI Bifan; LIU Jun[1,2]

Affiliations: [1] College of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; [2] Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an 710049, China; [3] Institute of System Engineering, Xi'an Jiaotong University, Xi'an 710049, China

Source: Journal of Frontiers of Computer Science and Technology, 2023, No. 10, pp. 2403-2412 (10 pages)

Funding: National Key Research and Development Program of China (2022YFC3303600); National Natural Science Foundation of China (62137002, 62293550, 62293553, 62293554, 61937001, 62250066, 62176209, 62176207, 62106190, 62192781, 62250009); Innovative Research Group Project of the National Natural Science Foundation of China (61721002); Innovation Team Program of the Ministry of Education (IRT_17R86); National Key Laboratory Foundation of Science and Technology; Natural Science Foundation of Shaanxi Province (2023-JC-YB-593); Youth Innovation Team Project of Shaanxi Universities; Fundamental Research Funds for the Central Universities (xhj032021013-02).

Abstract: Knowledge graph (KG) reasoning predicts the missing entities or relations in incomplete triples, completes structured knowledge, and supports different downstream tasks. Unlike the widely studied black-box methods, such as reasoning methods based on representation learning, rule-extraction-based reasoning achieves an interpretable reasoning paradigm by generalizing first-order logic rules from the KG. To bridge the gap between the discrete symbolic space and the continuous embedding space, a differentiable rule extraction method based on a large pre-trained language model (DRaM) is proposed, which fuses discrete first-order logic rules with the continuous vector space. To handle the influence that the order of atoms in a rule has on the reasoning process, a large pre-trained language model is introduced to encode the reasoning process. On link prediction over three knowledge graph datasets, Family, Kinship, and UMLS, DRaM achieves good results, and it obtains the best results on the Hits@10 metric. Experimental results show that DRaM effectively addresses the problems of differentiable reasoning on KGs and can extract first-order logic rules with confidence scores from the reasoning process. DRaM not only improves reasoning performance through first-order logic rules, but also enhances the interpretability of the method.
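The abstract does not spell out DRaM's computation. As a rough illustration of what "fusing discrete first-order rules with a continuous vector space" can mean, the sketch below applies a chain rule to a toy knowledge graph as a product of relation adjacency matrices weighted by a soft confidence (the common TensorLog/Neural LP style of differentiable rule application), and evaluates link prediction with Hits@k. The toy relations, confidence value, and helper names are illustrative assumptions, not DRaM's actual implementation; in DRaM, a pre-trained language model would additionally encode the ordered reasoning chain, which is omitted here.

```python
import numpy as np

# Toy KG: 4 entities, relations as adjacency matrices M_r,
# where M_r[i, j] = 1 iff the triple (e_i, r, e_j) holds.
n = 4
M = {
    "brother_of": np.zeros((n, n)),
    "parent_of":  np.zeros((n, n)),
}
M["brother_of"][0, 1] = 1   # e0 brother_of e1
M["parent_of"][1, 2] = 1    # e1 parent_of e2
M["parent_of"][1, 3] = 1    # e1 parent_of e3

def apply_rule(body, confidence, head_vec):
    """Differentiably apply the chain rule body to a one-hot head-entity vector.

    Example rule: uncle_of(X, Z) <- brother_of(X, Y) AND parent_of(Y, Z).
    Each hop is a matrix product; the rule confidence is a soft weight
    (fixed here, but trainable by gradient descent in a learned model).
    """
    v = head_vec
    for rel in body:
        v = v @ M[rel]          # one reasoning hop per rule atom
    return confidence * v       # soft scores over candidate tail entities

# Score all tails for the query (e0, uncle_of, ?) with one illustrative rule.
query_head = np.eye(n)[0]
scores = apply_rule(["brother_of", "parent_of"], confidence=0.9,
                    head_vec=query_head)

def hits_at_k(all_scores, true_tails, k=10):
    """Fraction of test queries whose true tail is ranked in the top k."""
    hits = 0
    for s, t in zip(all_scores, true_tails):
        top_k = np.argsort(-s)[:k]
        hits += int(t in top_k)
    return hits / len(true_tails)

print(scores)                         # [0.  0.  0.9 0.9]
print(hits_at_k([scores], [2], k=2))  # 1.0: e2 is ranked within the top 2
```

Because every step is a matrix product weighted by a confidence, the whole rule application is differentiable, which is what lets rule weights be learned end to end and then read off as first-order rules with confidence scores.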

Keywords: knowledge graph reasoning; first-order logic rules; large language model (LLM); interpretable reasoning

Classification: TP391 (Automation and Computer Technology / Computer Application Technology)