Authors: GAO Chen; ZHANG Fan [1] (National Digital Switching System Engineering and Technological Research Center, Zhengzhou 450002, China)
Affiliation: [1] National Digital Switching System Engineering and Technological Research Center
Source: Chinese Journal of Network and Information Security, 2019, No. 4, pp. 1-13 (13 pages)
Funding: National Natural Science Foundation of China (No. 61572520); NSFC Innovative Research Groups Project (No. 61521003)
Abstract: Recurrent neural networks (RNN) have been widely used in machine learning in recent years, especially for sequence learning tasks, where they outperform other neural networks such as CNN. However, RNN and its variants, such as LSTM and GRU, are fully connected networks with high computational and storage complexity, which makes their inference slow and hard to deploy in products. On the one hand, traditional computing platforms such as the CPU are not suited to the large-scale matrix operations of RNN; on the other hand, the shared and global memory of the GPU hardware acceleration platform gives GPU-based RNN accelerators relatively high power consumption. Owing to its parallel computing capability and low power consumption, the FPGA has increasingly been adopted as the hardware platform for RNN accelerators in recent years. This paper surveys recent FPGA-based RNN accelerators, summarizes the data optimization algorithms and hardware architecture design techniques they employ, and proposes directions for future research.
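As a minimal sketch of the workload the abstract describes (all dimensions here are hypothetical, not taken from the paper), a single LSTM step is dominated by one large matrix-vector product over the four gate weight matrices, which is exactly the fully connected computation that CPU platforms handle poorly and FPGA accelerators target:

```python
import numpy as np

def lstm_step(x, h, c, W, b, H):
    """One LSTM cell step. W stacks the four gate weight matrices,
    each of shape (H, I + H), so the fused product below touches
    4*H*(I+H) weights per time step."""
    z = W @ np.concatenate([x, h]) + b      # the dominant matrix-vector product
    i = 1 / (1 + np.exp(-z[0:H]))           # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))         # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))       # output gate
    g = np.tanh(z[3*H:4*H])                 # candidate cell state
    c_new = f * c + i * g                   # element-wise state update
    h_new = o * np.tanh(c_new)
    return h_new, c_new

I, H = 128, 256                             # hypothetical input/hidden sizes
rng = np.random.default_rng(0)
W = rng.standard_normal((4 * H, I + H)) * 0.01
b = np.zeros(4 * H)
x, h, c = rng.standard_normal(I), np.zeros(H), np.zeros(H)
h, c = lstm_step(x, h, c, W, b, H)
print(h.shape)  # (256,)
```

Because each step depends on the previous hidden state `h`, the steps cannot be batched across time, which is why per-step weight reuse and on-chip storage (the data optimization and architecture techniques the paper surveys) matter so much on FPGA.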
Classification code: TP391.1 [Automation and Computer Technology - Computer Application Technology]