Authors: NIU De-jiao [1]; ZHOU Shi-jie; CAI Tao [1]; YANG Le; LI Lei [1]
Affiliation: [1] School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, Jiangsu, China
Source: Journal of Chinese Computer Systems (《小型微型计算机系统》), 2022, No. 9, pp. 1886-1893 (8 pages)
Funding: Supported by the National Natural Science Foundation of China (6180608); the National Key Research and Development Program of the Ministry of Science and Technology (2018YFB0804204); and the National Key Research and Development Program (2019YFB1600500).
Abstract: Hierarchical Temporal Memory (HTM) is a neuromorphic machine learning algorithm that emulates the structure and function of the neocortex of the human brain. In the Spatial Pooler (SP), the first phase of the HTM algorithm, a significant amount of time is spent searching the entire model space for active mini-columns, which leads to high time complexity, is not amenable to acceleration by existing methods, and becomes a bottleneck for HTM training. To address this, a concurrent HTM spatial pooler algorithm for multi-core systems, Multicore Concurrent Hierarchical Temporal Memory (MCHTM), is proposed: the parallel computing capability of a multi-core processor is exploited and SP training is distributed across multiple cores, which speeds up the column search and reduces the training time overhead. The proposed SP training method consists of a partition-based column activation strategy and a concurrent proximal-dendrite adjustment (synaptic learning) algorithm. Partitioning the HTM region greatly reduces the time needed to retrieve the most representative columns and forms the basis for parallel HTM training on a multi-core system. A prototype of the concurrent SP was implemented on Phoenix, a multi-core MapReduce platform, and evaluated on the NYC-Taxi, NAB, and MNIST datasets. The experimental results show that, compared with HTM, MCHTM reduces the SP training time overhead by 97.29%, 97.25%, and 96.29% on the NYC-Taxi, NAB, and MNIST datasets, respectively, and improves prediction accuracy by 3.28%, 1.83%, and 0.91%. Under the same training time budget, MCHTM outperforms Long Short-Term Memory (LSTM): the root mean square error is reduced by 0.1266 on NYC-Taxi and by 0.089 on NAB, and the classification accuracy is increased by 0.42% on MNIST.
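Illustrative sketch (not from the paper): the abstract describes two components, a partition-based column activation strategy and concurrent proximal-dendrite adjustment, run in parallel on multiple cores. The Python sketch below shows the general idea only, under assumed sizes, thresholds, and function names (train_partition, train_step, PERM_INC, etc. are all hypothetical); the authors' actual prototype is built on the Phoenix MapReduce platform and is not reproduced here.

    # Hypothetical sketch: partition-based, multi-process HTM spatial pooler training step.
    # All parameters and names are illustrative assumptions, not taken from the paper.
    import numpy as np
    from multiprocessing import Pool

    N_COLUMNS = 2048          # total mini-columns in the HTM region (assumed)
    N_PARTITIONS = 8          # one partition per worker / CPU core (assumed)
    INPUT_SIZE = 1024
    PERM_THRESHOLD = 0.5      # permanence above which a synapse counts as connected
    PERM_INC, PERM_DEC = 0.05, 0.03
    ACTIVE_PER_PARTITION = 5  # columns activated inside each partition

    rng = np.random.default_rng(0)
    # permanences[c, i]: proximal synapse permanence of column c to input bit i
    permanences = rng.uniform(0.3, 0.7, size=(N_COLUMNS, INPUT_SIZE))

    def train_partition(args):
        """Activate columns and adjust proximal synapses within one partition."""
        perm_block, sdr = args
        connected = (perm_block >= PERM_THRESHOLD).astype(np.float64)
        overlaps = connected @ sdr                              # overlap score per column
        active = np.argsort(overlaps)[-ACTIVE_PER_PARTITION:]   # partition-local inhibition
        # Hebbian-style adjustment for the active columns only:
        # strengthen synapses on active input bits, weaken the rest.
        perm_block[active] += np.where(sdr > 0, PERM_INC, -PERM_DEC)
        np.clip(perm_block, 0.0, 1.0, out=perm_block)
        return perm_block, active

    def train_step(sdr):
        """One SP training step: each partition is processed on a separate core."""
        global permanences
        blocks = np.array_split(permanences, N_PARTITIONS, axis=0)
        with Pool(N_PARTITIONS) as pool:
            results = pool.map(train_partition, [(b, sdr) for b in blocks])
        permanences = np.vstack([b for b, _ in results])
        # Map partition-local active indices back to global column indices.
        offsets = np.cumsum([0] + [b.shape[0] for b in blocks[:-1]])
        return np.concatenate([a + off for (_, a), off in zip(results, offsets)])

    if __name__ == "__main__":
        sdr = (rng.random(INPUT_SIZE) < 0.05).astype(np.float64)  # sparse binary input
        active_columns = train_step(sdr)
        print("active columns:", np.sort(active_columns))

Because each partition searches only its own block of columns, no worker scans the whole model space, which mirrors the abstract's claim that partitioning reduces the time needed to find the most representative columns while enabling parallel training.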
Keywords: Hierarchical Temporal Memory; Spatial Pooler; concurrency; multi-core; Phoenix
CLC Number: TP183 [Automation and Computer Technology / Control Theory and Control Engineering]