Affiliation: [1] School of Computer and Artificial Intelligence, Jiangxi University of Finance and Economics, Nanchang 330032, China
Source: Journal of Image and Graphics (中国图象图形学报), 2024, No. 12, pp. 3699-3711 (13 pages)
Funding: National Key Research and Development Program of China (2023YFE0210700); Natural Science Foundation of Jiangxi Province (20232BAB202001, 20243BCE51139).
Abstract: Objective With the rapid development of the virtual reality (VR) industry, the omnidirectional image has become an important medium of visual representation in VR, and it may degrade during acquisition, transmission, processing, and storage. Omnidirectional image quality assessment (OIQA) aims to quantitatively describe this degradation and plays a crucial role in algorithm improvement and system optimization. An omnidirectional image has inherent characteristics of its own: geometric deformation concentrated in the polar regions, semantic information concentrated near the equator, and a perceptual quality that is strongly affected by the user's viewing behavior. Early OIQA methods simply combined these geometric characteristics with 2D-IQA methods and ignored viewing behavior, so their performance is only moderate. Recent deep learning-based OIQA methods instead simulate viewing behavior: they take a predicted viewport sequence as the model input, compute the distortion of each viewport, and fuse the viewport distortions into a global quality score. However, predicting the viewport sequence is difficult, viewport extraction requires a series of pixel-wise computations that impose a heavy computational load, and the real-time performance and robustness of the prediction model are hard to guarantee. To address these problems, a viewport-independent, deformation-unaware no-reference (NR) OIQA model (NR-OIQA) is proposed. To cope with the regular geometric deformation introduced by equirectangular projection (ERP), a new convolution, the equirectangular modulated deformable convolution (EquiMdconv), is designed to handle irregular semantics and regular deformation simultaneously, and the NR-OIQA model is built upon it.

Method The model consists of three parts: a prior-guided patch sampling (PPS) module, a deformation-unaware feature extraction (DUFE) module, and an intra-inter patch attention aggregation (A-EPAA) module. The PPS module samples patches of equal resolution from the high-resolution omnidirectional image according to a prior probability distribution; the DUFE module progressively extracts quality-related features from the input patches through the equirectangular modulated deformable convolution; and the A-EPAA module adjusts the contribution of the features within each patch and of each patch to the overall quality prediction, improving the accuracy of the model.

Results The proposed model is compared with other IQA and OIQA models on three public datasets. Compared with the best-performing Assessor360, it uses 93.7% fewer parameters and 95.4% less computation; compared with MC360IQA, which has a similar model size, it improves the Spearman rank-order correlation coefficient on the CVIQ, OIQA, and JUFE datasets by 1.9%, 1.7%, and 4.3%, respectively.

Conclusion The proposed NR-OIQA model fully accounts for the inherent characteristics of omnidirectional images.
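The core operation named in the abstract, the equirectangular modulated deformable convolution (EquiMdconv), is not specified in detail in this record. The sketch below is only an illustration of the general idea: it assumes a DCNv2-style modulated deformable convolution (via torchvision.ops.deform_conv2d) whose sampling grid is additionally widened horizontally as a function of latitude, mirroring the horizontal stretching that ERP introduces toward the poles. The class name EquiMdConv, the 1/cos(latitude) widening prior, and the clamping constant are illustrative assumptions, not the authors' exact formulation.

import math
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d


class EquiMdConv(nn.Module):
    """Illustrative modulated deformable conv for ERP feature maps.

    Offsets and modulation masks are predicted by a zero-initialized conv
    (as in DCNv2); on top of the learned offsets, a latitude-dependent prior
    widens the kernel horizontally toward the poles, where ERP stretches
    image content. This is a sketch, not the paper's EquiMdconv.
    """

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.k = k
        self.weight = nn.Parameter(torch.empty(out_ch, in_ch, k, k))
        nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))
        self.bias = nn.Parameter(torch.zeros(out_ch))
        # Predicts 2*k*k offset channels ([dy, dx] per kernel tap) plus k*k mask channels.
        self.offset_mask = nn.Conv2d(in_ch, 3 * k * k, k, stride=1, padding=k // 2)
        nn.init.zeros_(self.offset_mask.weight)
        nn.init.zeros_(self.offset_mask.bias)

    def forward(self, x):
        h = x.size(2)
        oc = 2 * self.k * self.k
        out = self.offset_mask(x)
        offset = out[:, :oc]                      # (N, 2*k*k, H, W), interleaved [dy0, dx0, dy1, dx1, ...]
        mask = torch.sigmoid(out[:, oc:])         # (N, k*k, H, W), modulation in (0, 1)

        # Latitude of each row of the ERP feature map, in (-pi/2, pi/2).
        lat = (0.5 - (torch.arange(h, device=x.device, dtype=x.dtype) + 0.5) / h) * math.pi
        # Horizontal widening factor 1/cos(lat) - 1, clamped so it stays finite at the poles.
        widen = (1.0 / torch.cos(lat).clamp(min=0.2) - 1.0).view(1, 1, h, 1)

        # Per-tap prior: shift each tap's x offset by (tap_column - center) * widen.
        cols = torch.arange(self.k, device=x.device, dtype=x.dtype) - self.k // 2
        prior = torch.zeros(self.k * self.k, 2, device=x.device, dtype=x.dtype)
        prior[:, 1] = cols.repeat(self.k)         # taps are row-major; column 0 is dy, column 1 is dx
        prior = prior.flatten().view(1, -1, 1, 1)  # -> channel order [dy0, dx0, dy1, dx1, ...]

        offset = offset + prior * widen
        return deform_conv2d(x, offset, self.weight, self.bias,
                             stride=1, padding=self.k // 2, mask=mask)


# Minimal usage on a 256x512 ERP patch with 32 feature channels.
if __name__ == "__main__":
    conv = EquiMdConv(32, 64)
    y = conv(torch.randn(1, 32, 256, 512))
    print(y.shape)  # torch.Size([1, 64, 256, 512])

Zero-initializing the offset and mask predictor makes the layer start out as a plain convolution plus the latitude prior, which is a common way to keep deformable convolutions stable early in training.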
Keywords: image quality assessment (IQA); omnidirectional image; deformable convolution; attention mechanism; no reference; viewport
Classification: TP301.6 [Automation and Computer Technology - Computer System Architecture]