Lightweight image compression neural network based on parameter quantization (Cited by: 5)


Authors: SUN Hao-ran; WANG Wei; CHEN Hai-bao (School of Microelectronics, Shanghai Jiaotong University, Shanghai 200240, China; Beijing Institute of Astronautical Systems Engineering, Beijing 100076, China)

Affiliations: [1] School of Microelectronics, Shanghai Jiaotong University, Shanghai 200240, China; [2] Beijing Institute of Astronautical Systems Engineering, Beijing 100076, China

Source: Information Technology, 2020, No. 10, pp. 87-91 (5 pages)

Abstract: With the development of deep learning, the number of parameters in neural network models keeps growing, consuming large amounts of storage and computing resources. In auto-encoder neural networks for image compression, the encoder and decoder networks typically occupy most of the storage space. This paper therefore proposes a lightweight image compression neural network based on parameter quantization, using quantization-aware training to quantize the model parameters from 32-bit floating point to 8-bit integer. Experimental results show that, compared with the original model, the proposed lightweight network saves 73% of the storage space. At image compression bit-rates below 0.16 bpp, the multi-scale structural similarity (MS-SSIM) of the reconstructed images drops by only 1.68% and remains higher than that of the classic compression standards JPEG and JPEG2000.
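To make the quantization step in the abstract concrete, the sketch below shows a generic symmetric 8-bit uniform quantizer for float32 weights, together with the quantize-then-dequantize ("fake quantization") pass that quantization-aware training typically inserts into the forward computation. It is a minimal illustration of the general technique under these assumptions, not the paper's exact scheme; all names and shapes are hypothetical.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric, per-tensor uniform quantization of float32 weights to int8.

    Generic illustration of 32-bit -> 8-bit parameter quantization;
    the paper's exact quantization-aware training scheme may differ.
    """
    max_abs = float(np.max(np.abs(w)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0   # map [-max_abs, max_abs] onto [-127, 127]
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

def fake_quant(w: np.ndarray) -> np.ndarray:
    """Quantize-then-dequantize, as inserted into the forward pass during
    quantization-aware training so the network learns to tolerate the
    rounding error (gradients are usually passed straight through)."""
    q, scale = quantize_int8(w)
    return dequantize(q, scale)

# Hypothetical convolution kernel from an auto-encoder layer.
w = np.random.randn(64, 64, 3, 3).astype(np.float32)
q, s = quantize_int8(w)
print("storage ratio int8/float32:", q.nbytes / w.nbytes)       # 0.25
print("max rounding error:", np.max(np.abs(w - fake_quant(w))))
```

Storing int8 weights plus a per-tensor scale needs roughly a quarter of the float32 storage, which is consistent with the roughly 73% model-size reduction reported in the abstract once small bookkeeping overheads are included.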

Keywords: parameter quantization; model compression; image compression; neural network

Classification code: TP183 [Automation and Computer Technology / Control Theory and Control Engineering]

 
