Large language models for open-source intelligence information extraction


Authors: ZHAO Qin-bo (赵勤博), WANG You-chen (王又辰), CHEN Rong (陈荣), SONG Ying-yi (宋颖毅), LUAN Zhen (栾真), TIAN Fu-lan (田夫兰)

Affiliations: [1] Institute 706, Second Academy of China Aerospace Science and Industry Corporation, Beijing 100854, China; [2] Information Technology Center, General Office of Yunnan Provincial Committee of the Communist Party of China, Kunming 650228, China

Source: Computer Engineering and Design (计算机工程与设计), 2024, Issue 12, pp. 3772-3778 (7 pages)

Abstract: To address the dependence on multiple specialized models and the tight restrictions on extractable attributes in open-source intelligence information extraction, a GLM-based large language model was adopted as the extraction tool, and extraction accuracy was improved through instruction fine-tuning and in-context learning. An SFT dataset was constructed by generalizing the original questions with an automated instruction generation method. Unified multi-task fine-tuning was carried out to learn common extraction patterns, and prompts expanded with automatic chain-of-thought were used to strengthen the model's reasoning ability. Experimental results on open-source intelligence named entity recognition, relation extraction, and event extraction show that the fine-tuned model meets the extraction requirements of different scenarios and achieves good extraction performance.
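The abstract only names the data-construction steps (automated instruction generalization, multi-task SFT mixing, auto-CoT prompt expansion); the sketch below is a minimal illustration of how such SFT records could be assembled, not the paper's implementation. The prompt templates, task tags, field names, and the JSONL format are assumptions made for this example.

```python
import json
import random

# Hypothetical paraphrase templates used to "generalize" an original question into
# several instructions per extraction task (illustrative wording, not the paper's).
TEMPLATES = {
    "NER": [
        "Extract all entities of the types [{types}] from the text.",
        "List every mention in the passage whose type is one of: {types}.",
    ],
    "RE": [
        "Extract (head, relation, tail) triples, restricted to relations: {types}.",
        "Which of the relations {types} hold between entities in the text?",
    ],
    "EE": [
        "Detect events of the types [{types}] and fill in their argument roles.",
        "Find all {types} events: give each trigger and its arguments.",
    ],
}

# An auto-CoT style suffix asking the model to reason before answering
# (assumed wording used only for this sketch).
COT_SUFFIX = "Let's think step by step, then give the final answer as JSON."


def build_sft_record(task, text, schema_types, answer, use_cot=True):
    """Turn one annotated example into a (prompt, response) SFT record."""
    instruction = random.choice(TEMPLATES[task]).format(types=", ".join(schema_types))
    prompt = f"Task: {task}\n{instruction}\nText: {text}"
    if use_cot:
        prompt += f"\n{COT_SUFFIX}"
    return {"prompt": prompt, "response": json.dumps(answer, ensure_ascii=False)}


if __name__ == "__main__":
    # Toy open-source-intelligence sentence with a gold NER annotation.
    record = build_sft_record(
        task="NER",
        text="On 12 May the frigate departed the northern port for joint exercises.",
        schema_types=["VESSEL", "LOCATION", "DATE"],
        answer={"VESSEL": ["frigate"], "LOCATION": ["northern port"], "DATE": ["12 May"]},
    )
    # Records for NER, RE, and EE are mixed into one JSONL file so that a single
    # model is fine-tuned on all extraction tasks (multi-task unified SFT).
    with open("sft_mixed.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```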

Keywords: open-source intelligence; large language model; information extraction; automated instruction generation; instruction fine-tuning; in-context learning; automatic chain-of-thought

CLC number: TP311 (Automation and Computer Technology - Computer Software and Theory)

 
