A leap forward in compute-in-memory system for neural network inference  

Authors: Liang Chu, Wenjun Li

Affiliation: [1] School of Electronics and Information, Hangzhou Dianzi University, Hangzhou 310018, China

Source: Journal of Semiconductors (半导体学报, English edition), 2025, No. 4, pp. 5-7 (3 pages)

Abstract: Developing efficient neural network (NN) computing systems is crucial in the era of artificial intelligence (AI). Traditional von Neumann architectures suffer from both the "memory wall" and the "power wall", which limit data transfer between memory and processing units [1, 2]. Compute-in-memory (CIM) technologies, particularly analogue CIM based on memristor crossbars, are promising for NN computation because of their high energy efficiency, computational parallelism, and integration density [3]. In practical applications, analogue CIM excels at tasks such as speech recognition and image classification, revealing its unique advantages. For instance, it efficiently processes large volumes of audio data in speech recognition, achieving high accuracy with minimal power consumption; in image classification, its high parallelism significantly accelerates feature extraction and reduces processing time. As AI applications develop rapidly, the demands on computational accuracy and task complexity continue to rise. However, analogue CIM systems are limited in handling complex regression tasks that require precise floating-point (FP) calculations; they are primarily suited to classification tasks with low data precision and a limited dynamic range [4].
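The precision limitation described above can be illustrated with a minimal numerical sketch (the level count, matrix sizes, and function names here are illustrative assumptions, not from the article): a memristor crossbar computes a matrix-vector product in the analogue domain, but each cell can only hold a weight at one of a few discrete conductance levels, so the analogue result deviates from the exact floating-point product.

```python
import numpy as np

def quantize_to_conductance(w, levels=16):
    """Map weights onto a small set of evenly spaced conductance
    levels, mimicking the limited precision of analogue memristor cells."""
    w_min, w_max = w.min(), w.max()
    step = (w_max - w_min) / (levels - 1)
    return np.round((w - w_min) / step) * step + w_min

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # weight matrix (programmed as conductances)
x = rng.normal(size=8)        # input vector (applied as voltages)

W_q = quantize_to_conductance(W, levels=16)
y_fp = W @ x                  # exact floating-point matrix-vector product
y_cim = W_q @ x               # analogue crossbar result with quantized weights

# The gap between y_fp and y_cim is the quantization error that makes
# precise FP regression hard on analogue CIM hardware.
print(np.max(np.abs(y_fp - y_cim)))
```

The error stays small for coarse classification-style decisions, but it accumulates across layers, which is one reason analogue CIM struggles with tasks that demand full FP precision.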

Keywords: neural network; von Neumann architectures; compute-in-memory; inference; memristor; artificial intelligence (AI); memristor crossbars; analogue CIM

Classification: TP183 [Automation and Computer Technology: Control Theory and Control Engineering]

 
