Audio-visual adversarial contrastive learning-based multi-modal self-supervised feature fusion

Authors: Sheng Zhentao; Chen Yanxiang [1,2]; Qi Guojun [3]

Affiliations: [1] School of Computer Science and Information Engineering, Hefei University of Technology, Hefei 230601, China; [2] Intelligent Interconnection System Anhui Provincial Laboratory (Hefei University of Technology), Hefei 230601, China; [3] Laboratory for Machine Perception and Learning, University of Central Florida, Orlando 32816, USA

Source: Journal of Image and Graphics, 2023, No. 1, pp. 317-332 (16 pages)

Funding: National Natural Science Foundation of China (61972127)

Abstract: Objective: Vision and audition in the same video are two symbiotic modalities: they complement each other and occur simultaneously, which naturally yields a self-supervised learning signal. Human perception likewise combines vision and audition to understand dynamic events, so features extracted from audio-visual clips carry richer information than features from either modality alone. In recent years, contrastive learning, a self-supervised representation learning paradigm, has advanced the visual domain dramatically by contrasting pairs of samples, and applying it to the audio-visual multi-modal domain has attracted great interest from researchers. A central issue is how to construct the audio-visual negative sample space from which contrastive learning draws its negative samples; this paper focuses on building an efficient audio-visual negative sample space to improve the audio-visual feature fusion capability of contrastive learning.

Method: We propose audio-visual adversarial contrastive learning for multi-modal self-supervised feature fusion. 1) Visual and auditory adversarial negative sample sets are introduced to construct the audio-visual negative sample space. Each set is initialized from a standard normal distribution and holds 65536 adversarial negative samples, which keeps the negative sample space sufficiently large. 2) Adversarial contrastive learning is performed both between and within modalities. In the cross-modal path, the paired visual and auditory features extracted from the same video clip form the positive pair while the auditory adversarial negative samples form the negative space; during training, the visual feature moves toward its auditory positive sample and away from the auditory adversarial negatives, and the auditory adversarial negatives are in turn updated to move closer to the visual features. The adversarial negatives in both sets thus continuously track hard-to-discriminate audio-visual samples, which effectively promotes audio-visual self-supervised feature fusion. On this basis, the audio-visual adversarial contrastive learning framework is further simplified.

Result: The model is pre-trained on a subset of the Kinetics-400 dataset to obtain audio-visual features, which are then used for action recognition and audio classification with good results. On the action recognition datasets UCF-101 and HMDB-51 (human motion database), the video-level top-1 accuracy is 0.35% and 0.83% higher, respectively, than that of the Cross-AVID (cross-audio visual instance discrimination) model; on the environmental sound dataset ESC-50, the audio-level top-1 accuracy is 2.88% higher than that of Cross-AVID.

Conclusion: Audio-visual adversarial contrastive learning introduces visual and auditory adversarial negative sample sets. The method fuses visual and auditory features well, producing audio-visual features that carry information from both modalities and improving the accuracy of action recognition and audio classification tasks.
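The sketch below illustrates, in PyTorch-style Python, the cross-modal adversarial contrastive step described above: adversarial negative sets initialized from a standard normal distribution, an InfoNCE contrast against them, and a gradient-ascent update that moves the negatives toward the features so they keep tracking hard-to-discriminate samples. Only the set size of 65536 and the standard-normal initialization come from the abstract; the embedding dimension, temperature, step size, and all function names are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

NUM_NEGATIVES = 65536  # size of each adversarial negative set (stated in the abstract)
FEAT_DIM = 128         # assumed embedding dimension
TAU = 0.07             # assumed InfoNCE temperature
ADV_STEP = 3.0         # assumed step size for the adversarial negative update

def init_negatives():
    # Adversarial negatives are initialized from a standard normal
    # distribution, then L2-normalized onto the unit sphere.
    return F.normalize(torch.randn(NUM_NEGATIVES, FEAT_DIM), dim=1)

def info_nce(anchor, positive, negatives):
    # Pull each anchor toward its paired positive from the other modality,
    # push it away from every adversarial negative (index 0 is the positive).
    pos = (anchor * positive).sum(dim=1, keepdim=True) / TAU  # (B, 1)
    neg = anchor @ negatives.t() / TAU                        # (B, K)
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)

def adversarial_update(anchor, negatives):
    # Gradient *ascent* on the negatives' similarity to the anchors: the
    # negatives chase the features and stay hard to discriminate.
    negatives = negatives.clone().requires_grad_(True)
    sim = anchor.detach() @ negatives.t() / TAU
    torch.logsumexp(sim, dim=1).mean().backward()
    with torch.no_grad():
        updated = negatives + ADV_STEP * negatives.grad
    return F.normalize(updated, dim=1).detach()

def cross_modal_step(v, a, v_negs, a_negs):
    # v, a: L2-normalized visual/auditory features of the same clips, shape (B, D).
    # Each modality contrasts against the *other* modality's negative set.
    loss = info_nce(v, a, a_negs) + info_nce(a, v, v_negs)
    a_negs = adversarial_update(v, a_negs)
    v_negs = adversarial_update(a, v_negs)
    return loss, v_negs, a_negs

In a full training loop, the returned loss would be backpropagated into the (assumed) video and audio encoders while the returned negative sets replace the old ones, mirroring the alternation the abstract describes between contrastive learning and the adversarial update of the negative samples.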

Keywords: self-supervised feature fusion; adversarial contrastive learning; audio-visual multi-modality; visual and auditory adversarial negative samples; pre-training

Classification code: TP37 (Automation and Computer Technology: Computer System Architecture)

 
