Authors: HUANG Yirui (黄懿蕊); LUO Junwei (罗俊玮); CHEN Jingqiang (陈景强) (School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu 210023, China; China Mobile Communications Group Chongqing Company Limited, Chongqing 401120, China; Jiangsu Key Laboratory for Big Data Security and Intelligent Processing (Nanjing University of Posts and Telecommunications), Nanjing, Jiangsu 210023, China)
Affiliations: [1] School of Computer Science, School of Software, School of Cyberspace Security, Nanjing University of Posts and Telecommunications, Nanjing 210023, China [2] China Mobile Communications Group Chongqing Company Limited, Chongqing 401120, China [3] Jiangsu Key Laboratory for Big Data Security and Intelligent Processing (Nanjing University of Posts and Telecommunications), Nanjing 210023, China
Source: Journal of Computer Applications (《计算机应用》), 2024, No. 1, pp. 32-38 (7 pages)
Fund: National Natural Science Foundation of China (61806101)
Abstract: GIFs (Graphics Interchange Format) are widely used as replies to posts on social media platforms, but most existing approaches to choosing an appropriate GIF reply to a message make little use of the tag information attached to GIFs on social media. To address this, a Multi-Modal Dialogue reply retrieval approach based on Contrastive learning and GIF Tags (CoTa-MMD) is proposed, which integrates tag information into the retrieval process. Specifically, tags serve as intermediate variables, so that text-to-GIF retrieval is converted into text-to-tag-to-GIF retrieval; modal representations are learned with a contrastive learning algorithm, and the retrieval probability is computed with the total probability formula. Compared with direct text-image retrieval, the introduced transition tags reduce the retrieval difficulty caused by the heterogeneity of the modalities. Experimental results show that, compared with the Deep Supervised Cross-Modal Retrieval (DSCMR) model, CoTa-MMD improves the recall sum of the text-image retrieval task by 0.33 and 4.21 percentage points on the PEPE-56 and Taiwan multimodal dialogue datasets, respectively.
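As a reading aid (the notation here is ours, not taken from the paper): writing t for the query text, τ for a GIF tag, and g for a candidate GIF, the total probability formula mentioned in the abstract amounts to marginalizing the retrieval score over the intermediate tag variable:

$$P(g \mid t) = \sum_{\tau} P(\tau \mid t)\, P(g \mid \tau)$$

so a GIF ranks highly when it matches tags that are themselves likely given the message, rather than having to match the message text directly across modalities.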
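A minimal sketch of how such tag-mediated scoring could be wired up, assuming contrastively trained encoders that map texts, tags, and GIFs into a shared embedding space; all function names, tensor shapes, and the temperature value are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of the tag-mediated retrieval described in the abstract:
# P(gif | text) = sum over tags of P(tag | text) * P(gif | tag).
# Softmax over cosine similarities stands in for the learned conditionals.

def retrieval_scores(text_emb, tag_embs, gif_embs, temperature=0.07):
    """text_emb: (d,); tag_embs: (n_tags, d); gif_embs: (n_gifs, d).
    Embeddings are assumed to come from contrastively trained encoders."""
    text_emb = F.normalize(text_emb, dim=-1)
    tag_embs = F.normalize(tag_embs, dim=-1)
    gif_embs = F.normalize(gif_embs, dim=-1)

    # P(tag | text): distribution over tags given the query text.
    p_tag_given_text = F.softmax(text_emb @ tag_embs.T / temperature, dim=-1)  # (n_tags,)

    # P(gif | tag): for each tag, a distribution over candidate GIFs.
    p_gif_given_tag = F.softmax(tag_embs @ gif_embs.T / temperature, dim=-1)   # (n_tags, n_gifs)

    # Marginalize out the tag variable (total probability formula).
    return p_tag_given_text @ p_gif_given_tag                                  # (n_gifs,)

# Usage: rank candidate GIFs for one dialogue message (random stand-in embeddings).
text = torch.randn(256)
tags, gifs = torch.randn(50, 256), torch.randn(1000, 256)
best_gif = retrieval_scores(text, tags, gifs).argmax()
```

The design point the sketch illustrates is that the text never has to be compared to GIF content directly: both conditionals involve the tag modality, which is textual on one side, so the cross-modal gap is crossed only once.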
Keywords: cross-modal retrieval; multimodal dialogue; GIF; contrastive learning; representation learning
CLC Number: TP391.3 [Automation and Computer Technology / Computer Application Technology]