Authors: Yang YANG, Ran BAO, Weili GUO, De-Chuan ZHAN, Yilong YIN, Jian YANG
Affiliations: [1] School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China; [2] National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; [3] School of Software, Shandong University, Shandong 250101, China
Source: Science China (Information Sciences), 2023, Issue 12, pp. 12-28 (17 pages)
Funding: supported by National Natural Science Foundation of China (Grant Nos. 62006118, 61906092, 61773198, 91746301); Natural Science Foundation of Jiangsu Province (Grant Nos. BK20200460, BK20190441); Jiangsu Shuangchuang (Mass Innovation and Entrepreneurship) Talent Program; CAAI-Huawei MindSpore Open Fund (Grant No. CAAIXSJLJJ2021-014B).
Abstract: With the development of the Internet, users can freely publish posts on various social media platforms, which offers great convenience for keeping abreast of the world. However, posts often carry rumors, and monitoring them manually requires substantial manpower. Owing to the success of modern machine learning techniques, especially deep learning models, rumor detection can be automated by framing it as a classification problem. Early attempts focused on building classifiers that rely on image or text information alone, i.e., a single modality in posts. Subsequently, several multimodal detection approaches employed early or late fusion operators to aggregate information from multiple sources. Nevertheless, they exploit multimodal embeddings only for fusion and ignore another important detection factor, namely the inconsistency between modalities. To solve this problem, we develop a novel deep visual-linguistic fusion network (DVLFN) that accounts for cross-modal inconsistency and detects rumors by jointly considering modal aggregation and contrast information. Specifically, the DVLFN first uses visual and textual deep encoders, i.e., Faster R-CNN and bidirectional encoder representations from transformers (BERT), to extract global and regional embeddings for the image and text modalities. It then predicts a post's authenticity from two aspects: (1) intermodal inconsistency, which employs the Wasserstein distance to efficiently measure the similarity between regional embeddings of different modalities, and (2) modal aggregation, which empirically adopts early fusion to aggregate the two modal embeddings for prediction. Consequently, the DVLFN composes the final prediction from the modal fusion and the inconsistency measure. Experiments are conducted on three real-world multimedia rumor detection datasets collected from Reddit, Good News, and Weibo. The results validate the superior performance of the proposed DVLFN.
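To make the two prediction signals described in the abstract concrete, the following is a minimal, illustrative sketch (not the authors' released code): (1) an entropic-regularized (Sinkhorn) approximation of the Wasserstein distance between regional image and text embeddings, used as an inter-modal inconsistency score, and (2) a simple early-fusion classification head over the global embeddings. The layer sizes, the shared 256-d region space, and the name DVLFNSketch are assumptions for illustration only.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


def sinkhorn_wasserstein(x, y, eps=0.1, n_iters=50):
    """Entropic-OT approximation of the Wasserstein distance between two sets of
    regional embeddings (x: (m, d) image regions, y: (n, d) text tokens)."""
    x = F.normalize(x, dim=-1)
    y = F.normalize(y, dim=-1)
    cost = 1.0 - x @ y.t()                      # (m, n) cosine-distance cost matrix
    m, n = cost.shape
    log_mu = torch.full((m,), -math.log(m), device=x.device)   # uniform marginals
    log_nu = torch.full((n,), -math.log(n), device=x.device)
    f = torch.zeros(m, device=x.device)
    g = torch.zeros(n, device=x.device)
    for _ in range(n_iters):                    # log-domain Sinkhorn updates
        f = eps * (log_mu - torch.logsumexp((g[None, :] - cost) / eps, dim=1))
        g = eps * (log_nu - torch.logsumexp((f[:, None] - cost) / eps, dim=0))
    plan = torch.exp((f[:, None] + g[None, :] - cost) / eps)   # transport plan
    return (plan * cost).sum()                  # scalar transport cost


class DVLFNSketch(nn.Module):
    """Toy early-fusion head: concatenates the global image/text embeddings with the
    scalar inconsistency score and classifies rumor vs. non-rumor. The 2048/768
    dimensions (Faster R-CNN pooled features, BERT [CLS]) are assumptions."""
    def __init__(self, img_dim=2048, txt_dim=768, hidden=512):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(img_dim + txt_dim + 1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, img_global, txt_global, img_regions, txt_regions):
        inconsistency = sinkhorn_wasserstein(img_regions, txt_regions)
        fused = torch.cat([img_global, txt_global, inconsistency.unsqueeze(0)], dim=-1)
        return self.classifier(fused)


# Example with random stand-ins for encoder outputs (single post, no batch dim):
# 36 image regions and 20 text tokens projected to a shared 256-d space.
img_regions, txt_regions = torch.randn(36, 256), torch.randn(20, 256)
img_global, txt_global = torch.randn(2048), torch.randn(768)
logits = DVLFNSketch()(img_global, txt_global, img_regions, txt_regions)
```

In this sketch the inconsistency score simply enters the classifier as an extra feature; how the paper actually combines the fusion and inconsistency branches into the final prediction is not specified in the abstract.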
Keywords: multimodal learning; Wasserstein distance; rumor detection
Classification: TP391.41 [Automation and Computer Technology - Computer Application Technology]