Authors: YUAN Yue (袁玥); LIU Yongbin (刘永彬)[1]; OUYANG Chunping (欧阳纯萍)[1]; TIAN Wenlong (田纹龙); FANG Wenlong (方文泷) (School of Computer, University of South China, Hengyang, Hunan 421001, China)
Published in: Journal of Chinese Information Processing (《中文信息学报》), 2023, No. 9, pp. 131-139.
Funding: National Natural Science Foundation of China (N061402220); Key Scientific Research Project of the Hunan Provincial Department of Education (19A49); Hunan Provincial Natural Science Foundation (2020JJ4525, 2022JJ30495).
Abstract: Most existing multi-modal fake news detection methods exploit only a one-to-one relationship between text and image, simply fusing textual and image features while ignoring the effective features of the multiple images within a post and under-modeling the semantic relationships between posts. To overcome these limitations, this paper proposes a model employing the one-to-many relationship between text and images for multi-modal fake news detection (OMMFN). A cross-modal attention network (CMA) selects the effective features of multiple images, and a multi-modal contrastive learning network (MCL) dynamically adjusts the high-level semantic feature relationships between posts, strengthening the joint representation of the fused text-image features. Experiments on the Sina Weibo dataset show that the model fully exploits the effective information in the one-to-many text-image relationship and the semantic relationships between posts, improving accuracy by 3.15% over the baseline model.
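The two components named in the abstract can be illustrated with a minimal sketch. The function and variable names below are hypothetical, not taken from the paper: cross-modal attention is shown as a text query attending over several image feature vectors (the one-to-many relationship), and the contrastive objective is shown as a standard InfoNCE-style loss over joint post representations, which is one common way to pull semantically related posts together.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def cross_modal_attention(text_feat, image_feats):
    """Sketch of one-to-many text-image attention: the text feature acts as
    the query, each image feature as a key/value; attention weights decide
    how much each image contributes to the fused image representation."""
    d = text_feat.shape[0]
    scores = image_feats @ text_feat / np.sqrt(d)   # one score per image
    weights = softmax(scores)                       # weights sum to 1
    fused = weights @ image_feats                   # weighted image summary
    return fused, weights

def info_nce(z1, z2, tau=0.1):
    """InfoNCE-style contrastive loss: row i of z1 and row i of z2 are a
    positive pair; all other rows serve as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau
    logits -= logits.max(axis=1, keepdims=True)     # stability shift
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(np.diag(p)).mean()

# toy example: one post with a text feature and three candidate images
rng = np.random.default_rng(0)
text = rng.standard_normal(8)
images = rng.standard_normal((3, 8))
fused, w = cross_modal_attention(text, images)

# contrastive loss over two toy post representations
posts = np.stack([fused, text])
loss = info_nce(posts, posts)
```

This is only a schematic of the general techniques (scaled dot-product attention and InfoNCE), under the assumption that the paper's CMA and MCL modules follow these standard forms; the actual networks are learned and more elaborate.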
CLC number: TP391 [Automation and Computer Technology: Computer Application Technology]