Unveiling factuality and injecting knowledge for LLMs via reinforcement learning and data proportion  


Authors: Wenjun KE, Ziyu SHANG, Zhizhao LUO, Peng WANG, Yikai GUO, Qi LIU, Yuxuan CHEN

Affiliations: [1] School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; [2] Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Nanjing 210096, China; [3] Beijing Institute of Technology Zhuhai, Zhuhai 519088, China; [4] Beijing Institute of Computer Technology and Application, Beijing 100048, China

Source: Science China (Information Sciences), 2024, Issue 10, pp. 385-386 (2 pages)

Funding: supported by the National Natural Science Foundation of China (Grant No. 62376057).

Abstract: Large language models (LLMs) have demonstrated remarkable effectiveness across various natural language processing (NLP) tasks, as evidenced by recent studies [1, 2]. However, these models often produce responses that conflict with reality because of the unreliable distribution of facts within their training data, a problem that is particularly critical for applications requiring high credibility and accuracy [3].

Keywords: knowledge, learning, critical

Classification: TP18 [Automation and Computer Technology — Control Theory and Control Engineering]; TP391.1 [Automation and Computer Technology — Control Science and Engineering]
