Authors: SUN Hao-ran; WANG Wei; CHEN Hai-bao
Affiliations: [1] School of Microelectronics, Shanghai Jiaotong University, Shanghai 200240, China; [2] Beijing Institute of Astronautical Systems Engineering, Beijing 100076, China
Source: Information Technology, 2020, No. 10, pp. 87-91 (5 pages)
Abstract: With the development of deep learning, the number of neural network model parameters has grown rapidly, consuming large amounts of storage and computing resources. In auto-encoder neural networks for image compression, the encoder and decoder networks often occupy most of this storage. This paper therefore proposes a lightweight image compression neural network based on parameter quantization: quantization-aware training is used to quantize the model parameters from 32-bit floating point to 8-bit integer. Experimental results show that, compared with the original model, the proposed lightweight network saves 73% of storage space. At compression bit-rates below 0.16 bpp, the multi-scale structural similarity (MS-SSIM) of the reconstructed images drops by only 1.68%, and still exceeds the classic compression standards JPEG and JPEG2000.
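The paper provides no code, but the core storage saving it reports comes from representing each weight as an 8-bit integer plus a shared scale instead of a 32-bit float. A minimal sketch of this uniform per-tensor quantization (function names and the symmetric scale scheme are illustrative assumptions, not the authors' exact method, which additionally simulates quantization during training):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization of a float32 tensor to int8."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map int8 codes back to approximate float32 weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(256, 256)).astype(np.float32)
q, s = quantize_int8(w)

# int8 storage is 1/4 of float32; the paper's 73% saving is close to
# this 75% bound once scales and any unquantized layers are included.
print(w.nbytes, q.nbytes)  # 262144 65536
max_err = np.abs(dequantize(q, s) - w).max()  # bounded by 0.5 * scale
```

Quantization-aware training, as named in the abstract, would insert this round-trip (quantize then dequantize) into the forward pass during training so the network learns to tolerate the rounding error, rather than quantizing only after training.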
Classification code: TP183 (Automation and Computer Technology: Control Theory and Control Engineering)