Decoder-side enhanced image compression network under a distributed strategy


Authors: ZHANG Jing[1]; WU Huixue; ZHANG Shaobo; LI Yunsong[1] (State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an 710071, China)

Affiliation: [1] State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an 710071, Shaanxi, China

Source: Journal of Xidian University, 2025, No. 1, pp. 1-13 (13 pages)

Funding: National Natural Science Foundation of China (62371362).

Abstract: With the rapid development of multimedia, large-scale image data place enormous pressure on network bandwidth and storage. Deep-learning-based image compression methods still suffer from compression artifacts in the reconstructed image and slow training. To address these problems, we propose a decoder-side enhanced image compression network under a distributed strategy, which reduces compression artifacts in the reconstructed image and speeds up training. On the one hand, the original information aggregation subnetwork cannot effectively exploit the output of the hyperprior decoder, which inevitably introduces compression artifacts into the reconstructed image and degrades its visual quality. We therefore use a decoder-side enhancement module to predict the high-frequency components of the reconstructed image and reduce these artifacts. To further strengthen the nonlinear capability of the network, a feature enhancement module is then introduced, further improving reconstruction quality. On the other hand, distributed training is adopted to overcome the slow training of a traditional single-node network, effectively shortening the training time. However, gradient synchronization during distributed training incurs substantial communication overhead, so we add a gradient sparsification algorithm to the distributed training: each node probabilistically forwards only the important gradients to the master node for updating, which further accelerates training. Experimental results show that distributed training accelerates training while preserving the quality of the reconstructed image.
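The gradient-sparsification step described in the abstract (each node probabilistically forwards only the important gradients and keeps the rest locally) can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the function name `sparsify`, the magnitude-proportional sampling, and the local residual accumulation are our assumptions about how such a scheme is typically realized.

```python
import numpy as np

def sparsify(grad, residual, ratio=0.01, rng=None):
    """Select roughly `ratio` of the gradient entries, with selection
    probability proportional to magnitude; unsent entries accumulate
    locally in `residual` and are retried at the next step."""
    rng = rng or np.random.default_rng(0)
    acc = grad + residual                 # fold in previously unsent gradient
    p = np.abs(acc).ravel()
    p = p / p.sum()                       # probability ~ |gradient entry|
    k = max(1, int(ratio * acc.size))
    idx = rng.choice(acc.size, size=k, replace=False, p=p)
    mask = np.zeros(acc.size, dtype=bool)
    mask[idx] = True
    sent = np.where(mask.reshape(acc.shape), acc, 0.0)  # sparse tensor to master
    new_residual = acc - sent             # carried over to the next iteration
    return sent, new_residual
```

In a data-parallel setting, the master node would average the sparse tensors received from all workers and apply the update, so each synchronization round transmits only about `ratio` of the full gradient.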

Keywords: distributed training; decoder-side enhancement; deep learning; image compression

CLC number: TP391.4 (Automation and Computer Technology: Computer Application Technology)

 
