Method for judicial document summarization by combining prompt learning and Qwen large language models


Authors: LI Jiayi; HUANG Ruizhang[1,2]; CHEN Yanping; LIN Chuan[1,2]; QIN Yongbin[1,2] (Text Computing & Cognitive Intelligence Engineering Research Center of the National Education Ministry, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China; State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China)

Affiliations: [1] College of Computer Science and Technology, Guizhou University, Guiyang 550025, China; [2] State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China

Source: Journal of Tsinghua University (Science and Technology), 2024, No. 12, pp. 2007-2018 (12 pages)

Funding: National Natural Science Foundation of China (62066008); Key Project of the Guizhou Provincial Science and Technology Foundation (Qiankehe Foundation [2020]1Z055); Key Project of the Guizhou Provincial Science and Technology Foundation (Qiankehe Major Special Project [2024]003)

Abstract: Although large language models have achieved good results on text summarization tasks in fields such as news and art, they lack knowledge of the judicial domain and struggle to understand the structural features and logical relationships of judicial documents, so the judicial document summaries they generate are of poor quality. This paper proposes a judicial document summarization method that combines prompt learning with the Qwen large language model. Judicial document data are used as input for supervised fine-tuning (SFT) of the large language model to strengthen its applicability to the legal domain; at the same time, prompt templates incorporating structural information and role instructions are designed to optimize summary generation so that it more accurately reflects the structural features and logical relationships of the documents. Experimental results show that the method improves on the baseline model by 21.44%, 28.50%, and 28.97% in ROUGE-1, ROUGE-2, and ROUGE-L F1, respectively, indicating that a large language model fine-tuned on judicial document data and supplied with structural information shows excellent performance and great application potential on the judicial document summarization task.

[Objective] The increasing maturity of large language model technology has facilitated its widespread application in downstream tasks across various vertical fields. Large language models have exhibited beneficial performance in text summarization tasks in general fields, such as news and art. However, the highly specific language style in the judicial field and the unique complexity of judicial documents in terms of structure and logic make it difficult for large language models to generate judicial document summaries. This study aims to combine prompt learning with large language models to explore their performance in summarizing judicial documents. Prompt templates containing structural information and judicial documents are used as inputs for fine-tuning large language models. As a result, large language models can generate judicial document summaries that adhere to judicial language styles and the structural and logical complexities of judicial documents. [Methods] This study proposes a judicial document summary method that combines prompt learning and the Qwen large language model. Judicial document data are used as the input for fine-tuning a large language model using supervised fine-tuning technology to enhance its applicability in the judicial field. Simultaneously, prompt templates that incorporate structural information and role instructions are designed to optimize summary generation to more accurately reflect the structural characteristics and logical relationships of documents. According to the characteristics of the pretraining data format of the large language model, the fine-tuning data were constructed in the form of question-answer pairs. [Results] The experimental results show that the proposed method improves the F1 of the baseline model by 21.44%, 28.50%, and 28.97% in ROUGE-1, ROUGE-2, and ROUGE-L, respectively, and exceeds all of the comparison models. The ablation experiment demonstrated that the summary generation method using prompt learning was superior to the method without prompt learning for all indicators.
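The fine-tuning setup the abstract describes, a role instruction plus a structure-aware prompt template, with training data formatted as question-answer pairs, can be sketched as follows. This is a minimal illustration; the role text, section tags, and function names here are hypothetical and are not the paper's actual templates.

```python
# Hypothetical sketch of a prompt-learning SFT data builder in the spirit of
# the method described above: a role instruction frames the task, section tags
# expose the document's structure, and each example is a question-answer pair.

ROLE_INSTRUCTION = (
    "You are a legal expert. Summarize the following judicial document, "
    "preserving its structure: facts, reasoning, and judgment."
)

# Illustrative structural markers for the three canonical sections.
SECTION_TAGS = ["[FACTS]", "[REASONING]", "[JUDGMENT]"]


def build_prompt(sections):
    """Join labelled document sections under the role instruction."""
    body = "\n".join(f"{tag} {text}" for tag, text in zip(SECTION_TAGS, sections))
    return f"{ROLE_INSTRUCTION}\n{body}"


def build_sft_pair(sections, reference_summary):
    """Format one fine-tuning example as a question-answer pair."""
    return {"question": build_prompt(sections), "answer": reference_summary}


pair = build_sft_pair(
    ["The plaintiff claims ...", "The court finds ...", "The court rules ..."],
    "The court ruled in favour of the plaintiff because ...",
)
```

Each such pair would then be fed to a standard SFT pipeline; the structural tags give the model an explicit signal about the document's logical organization, which is what the prompt templates in the paper are designed to convey.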

Keywords: judicial document summarization; text summarization; large language models; prompt learning

Classification code: TP393.1 (Automation and Computer Technology: Computer Application Technology)
