Convolutional Operator Scheduling for a Variable-Weight In-Memory Computing Accelerator

Author: SHI Jinxiang (School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240)

Source: Modern Computer (《现代计算机》), 2021, No. 10, pp. 24-28 (5 pages)

Abstract: In-memory computing accelerators have attracted wide attention for their high energy efficiency and low power consumption, but an efficient automated tool for mapping neural network models onto in-memory computing platforms is still lacking. To improve this situation, this paper presents an automated code-generation software stack for a memristor-based in-memory computing platform, which maps trained neural network models onto the accelerator. To make the mapping efficient, the paper uses Halide to schedule the loop space of the convolution operators and designs optimization strategies such as promoting contiguous memory accesses and reducing the number of memristor crossbar weight updates. Compared with the default mapping scheme, the proposed approach reduces the number of weight updates in typical convolutional layers by 85% to 96%.
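To make the scheduling idea concrete, the sketch below shows how a convolution's loop nest can be reordered in Halide so that one crossbar-sized block of weights stays resident while the whole feature map is swept. This is an illustrative reconstruction, not the paper's code: the layer sizes, the 3x3 window, the 64 input channels, the block size of 16 output channels, and the particular schedule are all assumptions.

```cpp
// Minimal Halide sketch (C++): reorder a convolution's loop nest so one
// crossbar-sized block of weights stays resident while the whole feature
// map is swept. Sizes and the schedule are illustrative assumptions.
#include "Halide.h"
using namespace Halide;

int main() {
    // Input feature map (x, y, cin) and weights (kx, ky, cin, cout).
    ImageParam input(Float(32), 3, "input");
    ImageParam weights(Float(32), 4, "weights");

    Var x("x"), y("y"), co("co");
    RDom r(0, 3, 0, 3, 0, 64, "r");  // assumed 3x3 window over 64 input channels

    // Direct convolution as a reduction over the window and input channels.
    Func conv("conv");
    conv(x, y, co) = 0.f;
    conv(x, y, co) += input(x + r.x, y + r.y, r.z) * weights(r.x, r.y, r.z, co);

    // Split the output channels into blocks of 16 (an assumed crossbar column
    // count) and hoist the block loop outermost: the weights for one block are
    // then programmed once and reused for every output pixel, instead of being
    // rewritten as the inner loops advance.
    Var co_o("co_o"), co_i("co_i");
    conv.update()
        .split(co, co_o, co_i, 16)
        .reorder(r.x, r.y, r.z, co_i, x, y, co_o);  // listed innermost first

    conv.print_loop_nest();  // inspect the resulting loop order
    return 0;
}
```

Under this order the crossbar would be reprogrammed once per output-channel block rather than repeatedly inside the spatial loops, which is the kind of weight-update saving the abstract quantifies at 85% to 96%.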

Keywords: in-memory computing; neural network; automatic mapping; scheduling

Classification codes: TN60 [Electronics and Telecommunications: Circuits and Systems]; TP183 [Automation and Computer Technology: Control Theory and Control Engineering]

 
