Authors: Tao HE, Ming LIU, Yixin CAO, Zekun WANG, Zihao ZHENG, Bing QIN
Affiliations: [1] Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology, Harbin 150001, China; [2] Peng Cheng Laboratory, Shenzhen 518000, China; [3] SMU School of Computing and Information Systems, Singapore Management University, Singapore 178902, Singapore
Source: Frontiers of Computer Science, 2025, Issue 2, pp. 31-42 (12 pages)
Funding: Supported by the National Key R&D Program of China (2022YFF0903301); the National Natural Science Foundation of China (Grant Nos. U22B2059, 61976073, 62276083); the Shenzhen Foundational Research Funding (JCYJ20200109113441941); and the Major Key Project of PCL (PCL2021A06).
Abstract: Sparse Knowledge Graph (KG) scenarios pose a challenge for previous Knowledge Graph Completion (KGC) methods: completion performance drops rapidly as graph sparsity increases. The problem is exacerbated by the widespread presence of sparse KGs in practical applications. To alleviate this challenge, we present a novel framework, LR-GCN, that automatically captures valuable long-range dependencies among entities to supplement insufficient structural features and distills logical reasoning knowledge for sparse KGC. The proposed approach comprises two main components: a GNN-based predictor and a reasoning path distiller. The reasoning path distiller explores high-order graph structures such as reasoning paths and encodes them as rich-semantic edges, explicitly compositing long-range dependencies into the predictor. This step also densifies the KG, effectively alleviating the sparsity issue. Furthermore, the path distiller distills logical reasoning knowledge from these mined reasoning paths into the predictor. The two components are jointly optimized using a well-designed variational EM algorithm. Extensive experiments and analyses on four sparse benchmarks demonstrate the effectiveness of the proposed method.
Keywords: knowledge graph completion; graph neural networks; reinforcement learning
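The abstract describes mining reasoning paths and adding them back to the graph as rich-semantic edges to densify a sparse KG. The following is a minimal, illustrative sketch of that densification step only; the function names, the toy triples, and the path-labelling scheme are hypothetical and are not taken from the paper, and the sketch does not cover the GNN-based predictor, the logical-knowledge distillation, or the variational EM optimization.

```python
# Hypothetical sketch: enumerate multi-hop reasoning paths in a toy KG and add
# each mined path back as a single composed edge, densifying the graph so a
# downstream predictor sees long-range structure directly.

from collections import defaultdict


def build_adjacency(triples):
    """Index triples as head -> list of (relation, tail) for path traversal."""
    adj = defaultdict(list)
    for h, r, t in triples:
        adj[h].append((r, t))
    return adj


def mine_reasoning_paths(triples, max_hops=2):
    """Enumerate relation paths of length 2..max_hops between entity pairs.

    Returns a dict mapping (head, tail) -> set of relation-path tuples,
    e.g. ('alice', 'france') -> {('born_in', 'located_in')}.
    """
    adj = build_adjacency(triples)
    paths = defaultdict(set)
    for h in list(adj):
        frontier = [(h, ())]          # depth-limited search from each head
        for _ in range(max_hops):
            next_frontier = []
            for node, rel_path in frontier:
                for r, t in adj[node]:
                    new_path = rel_path + (r,)
                    if len(new_path) > 1:        # only multi-hop paths add new edges
                        paths[(h, t)].add(new_path)
                    next_frontier.append((t, new_path))
            frontier = next_frontier
    return paths


def densify_kg(triples, max_hops=2):
    """Add one composed edge per mined multi-hop path, labelled by the path."""
    new_triples = set(triples)
    for (h, t), rel_paths in mine_reasoning_paths(triples, max_hops).items():
        for rel_path in rel_paths:
            composed_rel = "/".join(rel_path)    # stand-in for a rich-semantic edge label
            new_triples.add((h, composed_rel, t))
    return sorted(new_triples)


if __name__ == "__main__":
    toy_kg = [
        ("alice", "born_in", "paris"),
        ("paris", "located_in", "france"),
        ("bob", "works_for", "acme"),
        ("acme", "based_in", "paris"),
    ]
    for triple in densify_kg(toy_kg):
        print(triple)
```

Running the sketch adds composed edges such as ("alice", "born_in/located_in", "france") alongside the original triples, illustrating how path mining can compensate for missing direct links in a sparse KG; in the actual method these composed edges would feed the GNN-based predictor rather than be printed.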