Affiliations: [1] Information and Communication Branch, State Grid Ningbo Power Supply Company, Ningbo, Zhejiang [2] School of Software Engineering, East China Normal University, Shanghai
Source: 《软件工程与应用》 (Software Engineering and Applications), 2021, No. 4, pp. 595-605 (11 pages)
Abstract: Running a high-capacity neural network consumes large amounts of storage and computing resources, which is unacceptable on resource-constrained mobile and embedded devices. To address this problem, this paper proposes GreedyPruner, an efficient greedy-strategy-based algorithm that prunes a network model automatically. The algorithm first pretrains a SuperNet that can predict the performance of any given network structure. Second, it introduces a precision queue and a compression pool to preserve the structures with the best performance and the highest pruning rates, respectively, and proposes a greedy training strategy that trains the SuperNet a second time, greedily shifting the training space from the whole solution space to the precision queue and the compression pool. Finally, the optimal network structure in the precision queue and compression pool is selected and its weights are fine-tuned to obtain the pruned model. Experimental results show that GreedyPruner can substantially compress a model's parameter count and computation cost while leaving network performance almost unchanged; the compressed model is better suited for deployment on mobile and embedded devices.
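The search procedure described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `predict_accuracy` is a hypothetical stand-in for the pretrained SuperNet's performance predictor, candidates are modeled as per-layer pruning ratios, and the final selection objective is an assumed accuracy/compression trade-off.

```python
import random

random.seed(0)

NUM_LAYERS = 8   # candidate structure = one pruning ratio per layer
QUEUE_SIZE = 5   # capacity of the precision queue and compression pool

def predict_accuracy(structure):
    # Hypothetical stand-in for the pretrained SuperNet: predicted
    # accuracy degrades as the average pruning ratio grows.
    return 0.95 - 0.3 * (sum(structure) / len(structure)) ** 2

def pruning_rate(structure):
    # Mean per-layer pruning ratio of the candidate structure.
    return sum(structure) / len(structure)

def random_structure():
    # Sample a candidate from the whole solution space.
    return [round(random.uniform(0.0, 0.9), 2) for _ in range(NUM_LAYERS)]

def mutate(structure):
    # Greedy local move: perturb one layer's pruning ratio slightly.
    child = structure[:]
    i = random.randrange(len(child))
    child[i] = min(0.9, max(0.0, child[i] + random.uniform(-0.1, 0.1)))
    return child

# Phase 1: sample broadly to seed both containers.
candidates = [random_structure() for _ in range(50)]
precision_queue = sorted(candidates, key=predict_accuracy, reverse=True)[:QUEUE_SIZE]
compression_pool = sorted(candidates, key=pruning_rate, reverse=True)[:QUEUE_SIZE]

# Phase 2: the greedy second stage samples only around stored structures,
# shifting the search from the whole space to the queue and the pool.
for _ in range(200):
    parent = random.choice(precision_queue + compression_pool)
    child = mutate(parent)
    precision_queue = sorted(precision_queue + [child],
                             key=predict_accuracy, reverse=True)[:QUEUE_SIZE]
    compression_pool = sorted(compression_pool + [child],
                              key=pruning_rate, reverse=True)[:QUEUE_SIZE]

# Phase 3: pick one stored structure for weight fine-tuning; the weighting
# between accuracy and compression here is an assumed trade-off.
best = max(precision_queue + compression_pool,
           key=lambda s: predict_accuracy(s) + 0.1 * pruning_rate(s))
print(round(predict_accuracy(best), 3), round(pruning_rate(best), 3))
```

The key point the sketch captures is the two-stage training space: the first stage explores the full solution space, while the second stage mutates only members of the precision queue and compression pool, concentrating effort on structures that are already accurate or already highly compressed.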
CLC number: TP3 [Automation and Computer Technology - Computer Science and Technology]