Authors: YU Li (于莉), WANG Si-Tuo (王思拓), CHEN Ya-Dang (陈亚当), GAO Pan (高攀), SUN Yu-Bao (孙玉宝)[1]
Affiliations: [1] School of Computer Science & School of Cyber Science and Engineering, Nanjing University of Information Science & Technology, Nanjing 210044, China; [2] College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
Source: Computer Systems & Applications (《计算机系统应用》), 2025, No. 3, pp. 51-61 (11 pages)
Funding: National Natural Science Foundation of China (62002172, 62276139, U2001211)
Abstract: Faced with insufficient labeled data in the field of video quality assessment, researchers have begun to turn to self-supervised learning methods, aiming to learn video quality assessment models with the help of large amounts of unlabeled data. However, existing self-supervised learning methods primarily focus on video distortion types and content information, while ignoring the dynamic information and spatiotemporal features of videos changing over time, which leads to unsatisfactory evaluation performance in complex dynamic scenes. To address these issues, a new self-supervised learning method is proposed. By taking playback speed prediction as an auxiliary pretraining task, the model can better capture the dynamic changes and spatiotemporal features of videos; combined with distortion type prediction and contrastive learning, the model's sensitivity to video quality differences is enhanced. At the same time, to capture the spatiotemporal features of videos more comprehensively, a multi-scale spatiotemporal feature extraction module is further designed to strengthen the model's spatiotemporal modeling capability. Experimental results demonstrate that the proposed method significantly outperforms existing self-supervised learning-based approaches on the LIVE, CSIQ, and LIVE-VQC datasets. On the LIVE-VQC dataset, the proposed method achieves an average improvement of 7.90% and a maximum improvement of 17.70% in the PLCC metric. It also shows considerable competitiveness on the KoNViD-1k dataset. These results indicate that the proposed self-supervised learning framework effectively enhances the dynamic feature capture ability of the video quality assessment model and exhibits unique advantages in processing complex dynamic videos.
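Illustrative note (not the authors' code): the playback-speed-prediction pretext task described in the abstract can be sketched in PyTorch by subsampling a clip at a randomly chosen frame stride and training a backbone to classify that stride, so the network is forced to model temporal dynamics without any quality labels. The speed set, network modules, and hyperparameters below are assumptions for the sketch, not details taken from the paper.

# Minimal sketch of a playback-speed-prediction pretext task (assumed details).
import torch
import torch.nn as nn

SPEEDS = [1, 2, 4]  # assumed candidate playback speeds (frame-sampling strides)

def resample_clip(clip: torch.Tensor, speed: int, out_frames: int) -> torch.Tensor:
    """Subsample frames with the given stride to simulate faster playback.
    clip: (C, T, H, W) with T >= out_frames * speed."""
    idx = torch.arange(out_frames) * speed
    return clip[:, idx]

class SpeedPredictionModel(nn.Module):
    """A toy 3D-CNN backbone followed by a speed-classification head."""
    def __init__(self, num_speeds: int = len(SPEEDS)):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global spatiotemporal pooling
        )
        self.head = nn.Linear(32, num_speeds)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.backbone(x).flatten(1)
        return self.head(feat)

def pretext_step(model, clip, criterion, optimizer, out_frames=8):
    """One self-supervised step: pick a random speed, resample the clip,
    and train the model to recover the speed label."""
    label = torch.randint(len(SPEEDS), (1,))
    x = resample_clip(clip, SPEEDS[label.item()], out_frames).unsqueeze(0)
    logits = model(x)
    loss = criterion(logits, label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = SpeedPredictionModel()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    dummy_clip = torch.randn(3, 32, 64, 64)  # random stand-in video, (C, T, H, W)
    print(pretext_step(model, dummy_clip, criterion, optimizer))

Because the speed label is generated from the video itself, no quality annotations are required at this stage; a backbone pretrained this way can then be fine-tuned for quality regression, which is the general pattern such pretext tasks follow.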
Keywords: video quality assessment; self-supervised learning; multi-task learning; playback speed prediction; multi-scale
Classification: TP3 [Automation and Computer Technology — Computer Science and Technology]