Authors: ZHANG Xu; ZHANG Li (School of Computer Science, Beijing University of Technology, Beijing 100124, China)
Source: China Digital Medicine, 2025, No. 4, pp. 47-54 (8 pages)
Abstract:
Objective: To construct a doctor-patient dialogue summarization system based on large language models (LLMs) that automatically extracts and summarizes key medical information from doctor-patient dialogues without any annotated data.
Methods: Doctor-patient dialogue data were first pre-labeled with a large-scale LLM, and a smaller-parameter LLM was then trained on the resulting pseudo-parallel data. At the inference stage, in-context learning was introduced: a few examples combined with prompt engineering enabled the LLM to understand doctor-patient dialogues more accurately and generate the final summary.
Results: The system constructed in this study significantly outperformed existing unsupervised summarization techniques and general LLMs in retaining key medical information.
Conclusion: By pre-labeling doctor-patient dialogues with a large-parameter LLM and applying knowledge distillation, a small-parameter model can inherit the capabilities of the large-parameter model, reducing dependence on annotated training data and improving the generality and portability of the model.
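The inference-stage approach described in the abstract (few-shot in-context learning combined with prompt engineering) can be sketched as follows. The instruction wording, example dialogue, and summary text below are illustrative assumptions, not the authors' actual prompts; a real system would pass the assembled `prompt` to an LLM for completion:

```python
# Minimal sketch of assembling a few-shot prompt for doctor-patient
# dialogue summarization. All example content is hypothetical.

FEW_SHOT_EXAMPLES = [
    {
        "dialogue": "Patient: I've had a dry cough for two weeks.\n"
                    "Doctor: Any fever or chest pain?\n"
                    "Patient: No fever, but slight chest tightness.",
        "summary": "Chief complaint: dry cough for two weeks with mild "
                   "chest tightness; no fever or chest pain.",
    },
]

INSTRUCTION = (
    "You are a medical scribe. Summarize the key medical information "
    "in the following doctor-patient dialogue."
)

def build_prompt(dialogue: str) -> str:
    """Assemble instruction + few-shot examples + the target dialogue."""
    parts = [INSTRUCTION, ""]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append("Dialogue:\n" + ex["dialogue"])
        parts.append("Summary: " + ex["summary"])
        parts.append("")
    parts.append("Dialogue:\n" + dialogue)
    parts.append("Summary:")  # the model completes from this point
    return "\n".join(parts)

prompt = build_prompt("Patient: My blood pressure readings have been high.")
print(prompt.endswith("Summary:"))  # True
```

The trailing "Summary:" cue is a common convention so the model's completion is the summary itself; the same prompt template can be reused both when pre-labeling data with the large model and when querying the distilled small model.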
Classification codes: R319 [Medicine & Health / Basic Medicine]; TP399 [Automation & Computer Technology / Computer Application Technology]