Authors: 吕国俊 (Lü Guojun), 曹建军 (Cao Jianjun), 郑奇斌 (Zheng Qibin), 常宸 (Chang Chen), 翁年凤 (Weng Nianfeng) (Institute of Command and Control Engineering, Army Engineering University, Nanjing 210007, China; The Sixty-third Research Institute, National University of Defense Technology, Nanjing 210007, China)
Affiliations: [1] Institute of Command and Control Engineering, Army Engineering University, Nanjing 210007, China; [2] The Sixty-third Research Institute, National University of Defense Technology, Nanjing 210007, China
Source: Journal of Nanjing University (Natural Science), 2020, No. 2, pp. 197-205 (9 pages)
Funding: National Natural Science Foundation of China (61371196); China Postdoctoral Science Foundation (20090461425, 201003797); National Science and Technology Major Project (2015ZX01040201-003).
Abstract: Cross-modal entity resolution aims to find different objective descriptions of the same entity in data of different modalities. Common approaches map data from different modalities into a shared space in which similarity can be measured. Most of these methods establish the semantic link between the original and mapped data through category information, while neglecting the effective use of cross-modal paired-sample information. Moreover, in real data sources, annotating large amounts of data is time-consuming and labor-intensive, making it difficult to obtain enough labeled data for supervised learning. To address this, a Structure Maintenance based Adversarial Network (SMAN) is proposed for cross-modal entity resolution. Under the adversarial network framework, a K-nearest-neighbor structure loss is constructed between modalities, so that preserving the structure of cross-modal paired samples before and after the mapping yields more consistent representations; a co-attention mechanism is further introduced to align paired-sample information across modalities. Experimental results on different datasets show that SMAN outperforms other unsupervised methods as well as some typical supervised methods.
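The abstract describes the K-nearest-neighbor structure loss only at a high level. The sketch below is one plausible PyTorch rendering of such a structure-preservation term, not the authors' implementation: the function names (knn_adjacency, knn_structure_loss), the margin-based push term, the choice of k, and the tensor shapes are all illustrative assumptions. The idea shown is that samples that are K-nearest neighbors in each modality's original feature space should remain close to their cross-modal partners after the mapping, while non-neighbors are pushed apart.

```python
# Hypothetical sketch of a K-NN structure-preservation loss for cross-modal
# representation learning, in the spirit of the abstract above. Names and
# shapes are illustrative assumptions, not the paper's actual code.
import torch
import torch.nn.functional as F


def knn_adjacency(x: torch.Tensor, k: int) -> torch.Tensor:
    """Boolean (n, n) adjacency marking each row's k nearest neighbours
    under Euclidean distance, self excluded."""
    dist = torch.cdist(x, x)                      # pairwise distances
    dist.fill_diagonal_(float("inf"))             # exclude self-neighbours
    idx = dist.topk(k, largest=False).indices     # (n, k) neighbour indices
    adj = torch.zeros_like(dist, dtype=torch.bool)
    adj.scatter_(1, idx, True)
    return adj


def knn_structure_loss(img_feat, txt_feat, img_emb, txt_emb,
                       k: int = 5, margin: float = 1.0) -> torch.Tensor:
    """Structure-preservation term for aligned pairs (row i of every tensor
    describes the same entity): if samples i and j are k-NN in a modality's
    original feature space, pull that modality's mapped representation of i
    towards the other modality's mapped representation of j; otherwise keep
    them at least `margin` apart. Both directions are summed."""
    loss = img_emb.new_zeros(())
    for feat, emb_a, emb_b in ((img_feat, img_emb, txt_emb),
                               (txt_feat, txt_emb, img_emb)):
        neigh = knn_adjacency(feat, k)            # structure before mapping
        d = torch.cdist(emb_a, emb_b)             # distances after mapping
        pull = d[neigh].pow(2).mean()             # neighbours stay close
        push = F.relu(margin - d[~neigh]).pow(2).mean()
        loss = loss + pull + push
    return loss


if __name__ == "__main__":
    # Toy check with random features and randomly initialised embeddings.
    img_feat = torch.randn(32, 2048)                    # e.g. CNN image features
    txt_feat = torch.randn(32, 300)                     # e.g. text features
    img_emb = torch.randn(32, 128, requires_grad=True)  # mapped image representations
    txt_emb = torch.randn(32, 128, requires_grad=True)  # mapped text representations
    knn_structure_loss(img_feat, txt_feat, img_emb, txt_emb, k=5).backward()
```

In a full model this term would be added to the adversarial and co-attention objectives and back-propagated through the modality-specific mapping networks that produce img_emb and txt_emb.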
Keywords: data quality; cross-modal entity resolution; unsupervised learning; adversarial learning; K-nearest neighbors; co-attention
Classification code: TP311 [Automation and Computer Technology / Computer Software and Theory]