Authors: Zhisheng Huang, Xudong Jia, Tao Chen, Zhongwei Zhang
Affiliations: [1] Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen 529020, Guangdong, China; [2] College of Engineering and Computer Science, California State University, Northridge, CA 91330, USA
Source: Data Intelligence, 2025, Issue 1, pp. 124-142 (19 pages)
Funding: Supported by the Wuyi University-Hong Kong-Macao Joint Funding Scheme (No. 2022WGALH17) and the Research Platform and Project of Universities of the Education Department of Guangdong Province, China, 2023 (No. 2023ZDZX1030).
Abstract: Knowledge selection is a challenging task that often suffers from semantic drift when knowledge is retrieved based on the semantic similarity between a fact and a question. In addition, weak correlations between facts and questions, and the sheer size of the knowledge bases being searched, pose further unavoidable issues. This paper presents a scalable approach to address these issues. A sparse encoder and a dense encoder are coupled iteratively to retrieve fact candidates from a large-scale knowledge base. A pre-trained language model with two rounds of fine-tuning on the results of the sparse and dense encoders is then used to re-rank the fact candidates, and the top-k facts are selected by a specific re-ranker. The approach is applied to two textual inference datasets and one knowledge-grounded question answering dataset. Experimental results demonstrate that (1) the proposed approach improves knowledge selection by reducing semantic drift, and (2) it produces outstanding results on the benchmark datasets. The code is available at https://github.com/hhhhzs666/KSIHER.
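The abstract describes a coarse-to-fine pipeline: sparse and dense retrievers propose fact candidates, which a fine-tuned language model then re-ranks. The sketch below only illustrates that general pattern under loose assumptions: BM25 stands in for the sparse encoder, an off-the-shelf sentence-transformers bi-encoder for the dense encoder, and a generic cross-encoder for the re-ranker; the model names, toy facts, and single retrieval round are illustrative and are not the paper's actual configuration (see the KSIHER repository for that).

```python
# Hypothetical coarse-to-fine knowledge selection sketch (not the KSIHER implementation):
# sparse + dense retrieval pool candidates, a cross-encoder re-ranks them, top-k are kept.
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, CrossEncoder, util

facts = [
    "a magnet attracts objects made of iron",
    "friction produces heat",
    "the sun is the main source of energy for the water cycle",
]
question = "Why does rubbing your hands together warm them up?"

# Sparse retrieval: lexical overlap between the question and each fact.
bm25 = BM25Okapi([f.split() for f in facts])
sparse_scores = bm25.get_scores(question.split())

# Dense retrieval: cosine similarity in an embedding space.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
q_emb = encoder.encode(question, convert_to_tensor=True)
f_emb = encoder.encode(facts, convert_to_tensor=True)
dense_scores = util.cos_sim(q_emb, f_emb)[0].tolist()

# Coarse candidate pool: union of the top hits from both retrievers.
top_sparse = sorted(range(len(facts)), key=lambda i: -sparse_scores[i])[:2]
top_dense = sorted(range(len(facts)), key=lambda i: -dense_scores[i])[:2]
candidates = sorted(set(top_sparse) | set(top_dense))

# Fine stage: a cross-encoder scores (question, fact) pairs and selects top-k facts.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
rerank_scores = reranker.predict([(question, facts[i]) for i in candidates])
top_k = [facts[i] for _, i in sorted(zip(rerank_scores, candidates), reverse=True)][:2]
print(top_k)
```

In the paper's iterative, coarse-to-fine setting this retrieve-then-re-rank loop would be repeated and the re-ranker fine-tuned on the retrievers' outputs; the sketch keeps a single pass for brevity.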
Keywords: Knowledge selection; Textual inference; Semantic drift; Coarse-to-fine
CLC number: TP391 [Automation and Computer Technology - Computer Application Technology]