Authors: SHEN Yunfei, SHEN Fei[2,3], LI Fang, ZHANG Jun[2,3]
Affiliations: [1] Institute of Physical Science and Information Technology, Anhui University, Hefei, Anhui 230031, China; [2] High Magnetic Field Laboratory, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, Anhui 230031, China; [3] High Magnetic Field Laboratory of Anhui Province, Hefei, Anhui 230031, China
Source: Journal of Computer Applications, 2023, No. 9, pp. 2836-2844 (9 pages)
Funding: Key Research and Development Program of Anhui Province (202004h07020031); Key Research and Development Project of the Hefei Science Center, Chinese Academy of Sciences (2019HSC-KPRD003).
Abstract: With the rapid development of Artificial Intelligence (AI) technology, Deep Neural Network (DNN) models have been widely deployed on mobile and edge devices. However, edge devices have low computing power and small memory capacity, and accelerating a model requires in-depth knowledge of the edge hardware, which makes model deployment difficult and limits the popularization and application of these models. Therefore, a DNN acceleration and deployment method based on the Tensor Virtual Machine (TVM) was proposed to accelerate Convolutional Neural Network (CNN) models on a Field-Programmable Gate Array (FPGA), and its feasibility was verified in a distracted-driving classification scenario. Specifically, computational graph optimization was used to reduce the memory access and computational overhead of the model, model quantization was used to reduce the model size, and computational graph packing was used to offload the convolution computation to the FPGA to speed up model inference. Compared with a MicroProcessor Unit (MPU) alone, the proposed method reduces the inference time of ResNet50 and ResNet18 on MPU+FPGA by 88.63% and 77.53% respectively; on the AUC (American University in Cairo) dataset, the top-1 inference accuracies of the two models on MPU+FPGA drop by only 0.26 and 0.16 percentage points respectively compared with the MPU. These results show that the proposed method lowers the difficulty of deploying different models on FPGA. (An illustrative TVM sketch follows this record.)
Keywords: Tensor Virtual Machine (TVM); Deep Neural Network (DNN); Field-Programmable Gate Array (FPGA); edge device; model deployment; model acceleration
CLC Number: TP183 [Automation and Computer Technology / Control Theory and Control Engineering]
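The following is a minimal sketch of the TVM flow the abstract describes (model quantization followed by graph optimization and compilation), not the authors' code. It assumes a ResNet18 with random weights as a stand-in for the trained distracted-driving classifier, uses an arbitrary quantization global_scale, and targets a plain CPU ("llvm") for simplicity rather than the paper's MPU+FPGA (VTA) backend.

```python
# Illustrative sketch only; NOT the authors' implementation.
import numpy as np
import tvm
from tvm import relay
from tvm.relay import testing
from tvm.contrib import graph_executor

# 1. Build a ResNet18 Relay module with random weights (stand-in for
#    a trained distracted-driving classifier).
mod, params = testing.resnet.get_workload(
    num_layers=18, batch_size=1, image_shape=(3, 224, 224))

# 2. Model quantization: convert float32 weights/activations to int8
#    to shrink the model, as in the abstract (global_scale is an
#    arbitrary illustrative value).
with relay.quantize.qconfig(global_scale=8.0):
    mod = relay.quantize.quantize(mod, params=params)

# 3. Computational graph optimization and code generation; opt_level=3
#    enables passes such as operator fusion and constant folding,
#    which reduce memory access and compute overhead.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# 4. Run one inference to check the compiled module.
dev = tvm.cpu()
m = graph_executor.GraphModule(lib["default"](dev))
m.set_input("data", np.random.rand(1, 3, 224, 224).astype("float32"))
m.run()
print(m.get_output(0).numpy().shape)  # (1, 1000)
```

The paper's actual flow additionally applies computational graph packing (TVM's VTA stack provides vta.top.graph_pack for this) to lay tensors out for the FPGA accelerator before offloading the convolutions; that step is omitted here because this sketch compiles for a CPU target.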