Deep visual-linguistic fusion network considering cross-modal inconsistency for rumor detection  


Authors: Yang YANG, Ran BAO, Weili GUO, De-Chuan ZHAN, Yilong YIN, Jian YANG

Affiliations: [1] School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China; [2] National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; [3] School of Software, Shandong University, Shandong 250101, China

Source: Science China (Information Sciences) [中国科学(信息科学)(英文版)], 2023, Issue 12, pp. 12-28 (17 pages)

Funding: supported by the National Natural Science Foundation of China (Grant Nos. 62006118, 61906092, 61773198, 91746301); the Natural Science Foundation of Jiangsu Province (Grant Nos. BK20200460, BK20190441); the Jiangsu Shuangchuang (Mass Innovation and Entrepreneurship) Talent Program; and the CAAI-Huawei MindSpore Open Fund (Grant No. CAAIXSJLJJ2021-014B).

Abstract: With the development of the Internet, users can freely publish posts on various social media platforms, which offers great convenience for keeping abreast of the world. However, posts often carry rumors, and monitoring them manually requires substantial manpower. Owing to the success of modern machine learning techniques, especially deep learning models, rumor detection can be cast as an automatic classification problem. Early attempts focused on building classifiers that rely on image or text information alone, i.e., a single modality in posts. Subsequent multimodal detection approaches employ an early- or late-fusion operator to aggregate information from multiple sources. Nevertheless, they exploit only the fused multimodal embeddings and ignore another important detection factor, namely the intermodal inconsistency between modalities. To solve this problem, we develop a novel deep visual-linguistic fusion network (DVLFN) considering cross-modal inconsistency, which detects rumors by comprehensively considering modal aggregation and contrast information. Specifically, the DVLFN first utilizes visual and textual deep encoders, i.e., Faster R-CNN and bidirectional encoder representations from transformers (BERT), to extract global and regional embeddings for the image and text modalities. It then predicts a post's authenticity from two aspects: (1) intermodal inconsistency, which employs the Wasserstein distance to efficiently measure the similarity between the regional embeddings of the two modalities, and (2) modal aggregation, which empirically adopts early fusion to aggregate the two modal embeddings for prediction. Consequently, the DVLFN composes its final prediction from the modal fusion and the inconsistency measure. Experiments on three real-world multimedia rumor detection datasets, collected from Reddit, Good News, and Weibo, validate the superior performance of the proposed DVLFN.

Keywords: multimodal learning; Wasserstein distance; rumor detection

Classification code: TP391.41 [Automation and Computer Technology: Computer Application Technology]
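The abstract's intermodal-inconsistency component measures a Wasserstein distance between the image's regional embeddings (from Faster R-CNN) and the text's token embeddings (from BERT). The sketch below is not the authors' implementation; it is a minimal NumPy illustration of that idea, assuming uniform weights over regions, a cosine-based cost, and an entropic (Sinkhorn) approximation to the optimal-transport problem. All array shapes and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def sinkhorn_wasserstein(X, Y, reg=0.1, n_iters=200):
    """Approximate Wasserstein distance between two sets of region
    embeddings X (m x d) and Y (n x d) via Sinkhorn iterations.
    Uniform marginals over regions; cost = 1 - cosine similarity."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    C = 1.0 - Xn @ Yn.T                      # cost matrix (m x n)
    K = np.exp(-C / reg)                     # Gibbs kernel
    a = np.full(X.shape[0], 1.0 / X.shape[0])
    b = np.full(Y.shape[0], 1.0 / Y.shape[0])
    u = np.ones_like(a)
    for _ in range(n_iters):                 # alternating marginal scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]          # entropic transport plan
    return float(np.sum(P * C))              # approximate W-distance

# Illustrative inputs: 36 image-region and 20 text-token embeddings
rng = np.random.default_rng(0)
img_regions = rng.normal(size=(36, 768))     # e.g., Faster R-CNN regions
txt_tokens = rng.normal(size=(20, 768))      # e.g., BERT token embeddings
d = sinkhorn_wasserstein(img_regions, txt_tokens)
```

A large distance would indicate that the visual and textual regional embeddings are poorly aligned, which the paper treats as a cue for inauthenticity alongside the early-fusion prediction.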
