Self-compensation tensor multiplication unit for adaptive approximate computing in low-power CNN processing  (Cited by: 3)


Authors: Bo LIU, Zilong ZHANG, Hao CAI, Reyuan ZHANG, Zhen WANG, Jun YANG

Affiliations: [1] National ASIC System Engineering Center, Southeast University, Nanjing 210096, China; [2] Nanjing Prochip Electronic Technology Co., Ltd., Nanjing 210001, China

Published in: Science China (Information Sciences), 2022, Issue 4, pp. 283-284 (2 pages)

Funding: Supported by the National Science and Technology Major Project (Grant No. 2018ZX01031101-005) and the National Natural Science Foundation of China (Grant No. 61904028).

Abstract: Dear editor, Approximate multiplication is an emerging circuit design technique for AI-based IoT devices using deep neural networks, as it can reduce energy consumption with acceptable accuracy loss. Digital signal processing with low power consumption is very important for battery-powered devices, such as real-time speech recognition [1] and cuffless blood pressure monitoring [2]. We propose a hybrid self-compensation approximate tensor multiplication unit for low-power convolutional neural network (CNN) processing. The proposed architecture offers three advantages: (a) a positive-negative hybrid compensation encoding scheme is used in partial-product generation, and the partial products are reorganized; (b) a self-compensation addition tree structure with a staged configuration method is proposed to accumulate the reorganized partial products; (c) the unit is further optimized by using a lower supply voltage in the imprecise parts of the addition tree.
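A minimal behavioral sketch of the general truncation-plus-compensation idea behind approximate multiplication is given below in Python; the function name, bit widths, and static compensation term are illustrative assumptions and do not reproduce the authors' positive-negative hybrid encoding scheme or their self-compensation addition tree.

# Behavioral sketch of an approximate unsigned multiplier: the low-order
# partial-product columns are discarded and a fixed term compensates for
# the expected value of the dropped bits. Illustrative only; not the
# authors' positive-negative hybrid compensation circuit.
def approx_multiply(a: int, b: int, width: int = 8, trunc: int = 4) -> int:
    assert 0 <= a < (1 << width) and 0 <= b < (1 << width)
    acc = 0
    for i in range(width):                    # shift-and-add partial products
        if (b >> i) & 1:
            pp = a << i
            acc += pp & ~((1 << trunc) - 1)   # drop the low `trunc` columns
    acc += 1 << (trunc - 1)                   # static error compensation
    return acc

if __name__ == "__main__":
    exact, approx = 97 * 45, approx_multiply(97, 45)
    print(exact, approx, abs(exact - approx) / exact)  # small relative error

In this sketch the accuracy loss is bounded by the weight of the truncated columns, which mirrors (at a very coarse level) why compensation terms and per-region supply-voltage scaling can trade precision for energy in the low-order part of a multiplier's addition tree.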

Keywords: APPROXIMATE, power, EDITOR

Classification: TP183 [Automation and Computer Technology - Control Theory and Control Engineering]

 
