Authors: Li Honglian (李红莲) [1]; Chen Haotian (陈浩天); Zhang Le (张乐); Lv Xueqiang (吕学强) [2]; Tian Chi (田驰)
Affiliations: [1] School of Information and Communication Engineering, Beijing Information Science and Technology University, Beijing 100101, China; [2] Beijing Key Laboratory of Internet Culture and Digital Dissemination Research, Beijing Information Science and Technology University, Beijing 100101, China
Source: Data Analysis and Knowledge Discovery (《数据分析与知识发现》), 2024, No. 6, pp. 30-43 (14 pages)
Funding: National Natural Science Foundation of China (No. 62171043); Key Project of the State Language Commission (No. ZDI145-10); Scientific Research Program of the Beijing Municipal Education Commission (No. KM202311232001)
Abstract: [Objective] Traditional automatic summarization cannot deeply integrate the sentiment and topic information of reviews and cannot handle the out-of-vocabulary problem. This paper proposes a review summarization model that fuses dual-channel sentiment and topic information. [Methods] TextRank dynamically extracts topic sentences from the reviews, and the PyABSA model extracts aspect-word/sentiment-word sequences from the topic sentences, which are concatenated with the topic sentences to form the final topic information. Sentiment sentences are obtained by building a sentiment word set and a topic-fused Bi-LSTM sentiment-word extraction model; the original review text is concatenated with the sentiment sentences and, together with the topic sentences, forms the dual-channel input. Attention mechanisms produce topic attention and sentiment attention, which are superimposed and deeply fused into a fused attention that replaces the single-channel attention of the pointer-generator network, and the pointer network then generates the final review summary. [Results] Compared with the Topic+PNG baseline, the proposed dual-channel pointer-generator network improves ROUGE-1, ROUGE-2, and ROUGE-L by 2.87, 6.14, and 2.64 percentage points, respectively. Ablation experiments show that fusing dual-channel information improves ROUGE-1, ROUGE-2, and ROUGE-L by 4.49, 3.66, and 4.16 percentage points over single-channel information. [Limitations] Fusion of finer-grained attribute information is not considered. [Conclusions] The proposed model effectively fuses the topic and sentiment information of reviews, improves the quality of dual-channel information fusion, outperforms the comparison models in summary generation, and produces summaries that contain richer sentiment and topic information.
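The core step described in the abstract is superimposing topic attention and sentiment attention into one fused distribution that replaces the single-channel attention of the pointer-generator network. Below is a minimal sketch, not the authors' implementation: it assumes a PyTorch setting in which both channels attend over aligned source positions, and all class and parameter names (e.g., DualChannelFusedAttention) are hypothetical; the paper's exact fusion and context computation may differ.

```python
# Minimal sketch of dual-channel attention fusion for a pointer-generator decoder step.
# Hypothetical names; assumes both channels attend over the same number of source positions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualChannelFusedAttention(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        # One additive (Bahdanau-style) scorer per channel.
        self.topic_score = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1))
        self.emotion_score = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1))

    def channel_attention(self, scorer, states, dec_state):
        # states: (batch, src_len, hidden); dec_state: (batch, hidden)
        dec = dec_state.unsqueeze(1).expand(-1, states.size(1), -1)
        scores = scorer(torch.cat([states, dec], dim=-1)).squeeze(-1)  # (batch, src_len)
        return F.softmax(scores, dim=-1)

    def forward(self, topic_states, emotion_states, dec_state):
        a_topic = self.channel_attention(self.topic_score, topic_states, dec_state)
        a_emotion = self.channel_attention(self.emotion_score, emotion_states, dec_state)
        # "Superimpose and fuse": sum the two distributions and renormalize.
        fused = F.normalize(a_topic + a_emotion, p=1, dim=-1)
        # Context vector that would feed the pointer-generator in place of
        # its single-channel attention context.
        context = torch.bmm(fused.unsqueeze(1), topic_states).squeeze(1)
        return fused, context


if __name__ == "__main__":
    batch, src_len, hidden = 2, 8, 16
    attn = DualChannelFusedAttention(hidden)
    fused, context = attn(torch.randn(batch, src_len, hidden),
                          torch.randn(batch, src_len, hidden),
                          torch.randn(batch, hidden))
    print(fused.shape, context.shape)  # torch.Size([2, 8]) torch.Size([2, 16])
```

In the full model, the fused distribution would also serve as the copy distribution of the pointer mechanism; the simple sum-and-renormalize step above stands in for whatever deeper fusion the paper actually uses.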