Authors: LIU Qi; CHEN Ying (Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi, Jiangsu 214122, China)
Affiliation: [1] Key Laboratory of Advanced Process Control for Light Industry, Ministry of Education, Jiangnan University, Wuxi, Jiangsu 214122, China
Source: Acta Electronica Sinica, 2023, No. 8, pp. 2202-2212 (11 pages)
Funding: National Natural Science Foundation of China (No. 62173160)
Abstract: Popular model-compression pruning algorithms typically remove entire convolution kernels. However, some network structures impose hard feature-map dimension-matching requirements: the number of kernels in the last convolutional layer on the main path of a ResNet residual block, and in the last convolutional layer of every branch before the concatenation operation in an Inception network, cannot be changed, which directly limits the pruning space. This paper proposes a multi-granularity neural-network pruning method under a regularization mechanism. To address the dimension-matching constraint, a coarse-to-fine multi-granularity pruning strategy is designed that increases the sparsity of the network while keeping the number of kernels unchanged in convolutional layers at dimension-matching positions. In addition, an adaptive L1 regularization sparsification scheme is proposed that lets the network account for changes in its structure while updating its parameters. The sparsified kernels not only have fewer parameters and lower computational cost than the original kernels but also possess better structural properties, giving the network stronger representational ability. For example, on CIFAR-10 with VGG-16, accuracy improves by 0.19% over the baseline while computation is compressed by 76.73%; with ResNet-56, accuracy drops by only 0.14% at 82.54% compression. On ImageNet with ResNet-50, accuracy drops by only 0.48% at 56.95% compression. The proposed method outperforms existing state-of-the-art pruning methods.
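The adaptive L1 sparsification sketched in the abstract can be illustrated with a minimal example. This is not the paper's exact formulation (the abstract does not give the update rule); it assumes a soft-thresholding step (the proximal operator of the L1 norm) with a per-kernel threshold scaled by each kernel's mean magnitude, and `base_lam` is a hypothetical hyperparameter chosen for illustration. Note that the number of output kernels is preserved, which is what layers at dimension-matching positions require.

```python
import numpy as np

def soft_threshold(w, lam):
    # Proximal operator of the L1 norm: shrinks each weight toward
    # zero by lam and sets weights with |w| <= lam exactly to zero.
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def adaptive_l1_sparsify(kernels, base_lam=0.3):
    """Sparsify convolution kernels with a per-kernel threshold.

    kernels: array of shape (out_channels, in_channels, k, k).
    The threshold adapts to each kernel's mean magnitude (an assumed
    form of "adaptive" L1), so already-small kernels are shrunk more
    aggressively in absolute terms less than large ones. out_channels
    is never changed, keeping the layer's output dimension intact.
    """
    out = np.empty_like(kernels)
    for i, k in enumerate(kernels):
        lam = base_lam * np.abs(k).mean()  # adaptive, per-kernel threshold
        out[i] = soft_threshold(k, lam)
    return out

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3, 3, 3))      # 4 kernels of shape 3x3x3
w_sparse = adaptive_l1_sparsify(w)
# The kernel count (and thus the feature-map dimension) is preserved,
# while individual weights inside each kernel become sparse.
print("shape:", w_sparse.shape, "sparsity:", (w_sparse == 0).mean())
```

Fine-grained sparsity of this kind reduces parameters and computation inside each kernel without deleting kernels, which is why it can be applied even at dimension-matching positions where whole-filter pruning is forbidden.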
Keywords: convolutional neural network; regularization; pruning; dimension matching; adaptive L1 regularization
Classification: TP391.41 [Automation and Computer Technology - Computer Application Technology]