FPGA-based acceleration for binary neural networks in edge computing (Cited by: 1)


Authors: Jin-Yu Zhan, An-Tai Yu, Wei Jiang, Yong-Jia Yang, Xiao-Na Xie, Zheng-Wei Chang, Jun-Huan Yang

Affiliations: [1] School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China; [2] School of Automation, Chengdu University of Information Technology, Chengdu 610225, China; [3] State Grid Sichuan Electric Power Research Institute, Chengdu 610095, China; [4] Department of Information Sciences and Technology, George Mason University, Fairfax, VA 22030, USA

Source: Journal of Electronic Science and Technology, 2023, No. 2, pp. 65-77 (13 pages)

Funding: Supported by the Natural Science Foundation of Sichuan Province of China under Grant No. 2022NSFSC0500, and the National Natural Science Foundation of China under Grant No. 62072076.

Abstract: As a core component of intelligent edge computing, deep neural networks (DNNs) will play an increasingly important role in addressing intelligence-related issues in industrial domains such as smart factories and autonomous driving. Because they demand large amounts of storage and computing resources, DNNs are ill-suited to resource-constrained edge devices, especially mobile terminals with scarce energy supply. Binarization of DNNs has become a promising technique for achieving high performance with low resource consumption in edge computing. Field-programmable gate array (FPGA)-based acceleration can further improve computation efficiency to several times that of the central processing unit (CPU) and graphics processing unit (GPU). This paper gives a brief overview of binary neural networks (BNNs) and the corresponding hardware accelerator designs for edge computing environments, and analyzes several significant studies in detail. The performance of representative methods is evaluated through experimental results, and the latest binarization technologies and hardware acceleration methods are tracked. We first give the background of designing BNNs and present the typical types of BNNs. The FPGA implementation technologies of BNNs are then reviewed. A detailed comparison with experimental evaluation of typical BNNs and their FPGA implementations is further conducted. Finally, some interesting directions are illustrated as future work.
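The core trick behind BNN acceleration that the abstract alludes to is that a dot product of {-1, +1} vectors reduces to an XNOR followed by a popcount, which maps cheaply onto FPGA lookup tables. The sketch below illustrates this identity in NumPy; the function names are illustrative, not taken from the paper or any specific accelerator design.

```python
import numpy as np

def binarize(x):
    """Binarize real-valued weights/activations to {-1, +1} via the sign function."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def xnor_popcount_dot(a_bits, b_bits):
    """Dot product of two {-1, +1} vectors using the XNOR-popcount identity:
    dot(a, b) = 2 * popcount(xnor(a', b')) - n,
    where a', b' are the {0, 1} bit encodings and n is the vector length."""
    n = a_bits.size
    a01 = (a_bits > 0).astype(np.uint8)   # encode -1 -> 0, +1 -> 1
    b01 = (b_bits > 0).astype(np.uint8)
    agree = ~(a01 ^ b01) & 1              # XNOR: 1 where the two bits agree
    return 2 * int(agree.sum()) - n

a = binarize(np.array([0.5, -1.2, 0.3, -0.7]))
b = binarize(np.array([1.1, -0.4, -0.9, 0.2]))
print(xnor_popcount_dot(a, b) == int(np.dot(a, b)))  # True
```

On hardware, the {0, 1} encodings are packed into wide words, so one XNOR gate array plus a popcount tree replaces an entire row of multiply-accumulate units, which is the source of the efficiency gains surveyed in the paper.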

Keywords: Accelerator; Binarization; Field-programmable gate array (FPGA); Neural networks; Quantization

Classification: TP183 [Automation and Computer Technology: Control Theory and Control Engineering]
