Authors: 高旭章 凌书扬 陈壮志 宣琦[1] GAO Xuzhang; LING Shuyang; CHEN Zhuangzhi; XUAN Qi (School of Information Engineering, Zhejiang University of Technology, Hangzhou 310000, China)
Source: Journal of Chinese Computer Systems (《小型微型计算机系统》), 2024, No. 6, pp. 1347-1355 (9 pages)
Funding: National Natural Science Foundation of China (61973273); Zhejiang Provincial Natural Science Foundation of China (LR19F030001)
Abstract: To improve the real-time performance of intelligent signal modulation recognition, this paper studies lightweight deep learning models for modulation recognition. Channel pruning is an effective way to reduce model complexity, but existing methods are constrained by the depth of the original model: although pruning reduces the amount of computation, the actual speedup is modest. To address this, the paper proposes a lightweight neural-network method for signal modulation recognition that takes the convolutional layer as the minimum pruning unit. A proxy classifier is generated for each convolutional layer, the importance of each layer is evaluated by its proxy classifier's accuracy, the less important modules are removed to realize pruning, and the pruned model is fine-tuned to restore accuracy. In experiments on several signal classification models over a public dataset, the method accelerates inference more effectively than channel pruning: model parameters and floating-point operations drop by more than 80%, and inference time drops by more than 70%. To counter the performance loss at high compression ratios, the paper further proposes replacing the fine-tuning step with knowledge distillation, transferring "knowledge" directly from the original model to the pruned compact model; compared with fine-tuning, this improves accuracy by up to 2%.
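The two ideas in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes proxy-classifier accuracy can stand in directly as a layer-importance score (the paper defines its own criterion), and it uses the standard Hinton-style distillation loss with illustrative temperature and weighting hyperparameters.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    z = [v / T for v in logits]
    m = max(z)  # shift for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def layers_to_prune(proxy_acc, keep_ratio=0.5):
    """Rank conv layers by their proxy classifier's accuracy (used here
    as a stand-in importance score; an assumption, not the paper's exact
    criterion) and return the least important ones to remove."""
    ranked = sorted(proxy_acc, key=proxy_acc.get)  # lowest accuracy first
    n_drop = int(len(ranked) * (1 - keep_ratio))
    return ranked[:n_drop]

def distillation_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.7):
    """Standard distillation loss replacing fine-tuning: an alpha-weighted
    sum of a softened KL term (teacher -> student, scaled by T^2) and the
    ordinary cross-entropy with the true label. T and alpha are
    illustrative values, not taken from the paper."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = sum(pt * (math.log(pt) - math.log(ps))
               for pt, ps in zip(p_t, p_s)) * T * T
    hard = -math.log(softmax(student_logits)[label])
    return alpha * soft + (1 - alpha) * hard
```

When the pruned student already matches the teacher's logits, the soft term vanishes and only the hard cross-entropy remains, which is why distillation degrades gracefully toward ordinary training as the student converges.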
Keywords: modulation recognition; deep learning; neural network; model lightweighting; edge devices
Classification: TP391 [Automation and Computer Technology — Computer Application Technology]