Authors: Zhang Qing [1,2]; Liu Cheng; Liu Bo [3]; Huang Haitong; Wang Ying [1,2]; Li Huawei; Li Xiaowei [1,2]
Affiliations: [1] State Key Lab of Processors (Institute of Computing Technology, Chinese Academy of Sciences), Beijing 100190; [2] University of Chinese Academy of Sciences, Beijing 100049; [3] Beijing Institute of Control Engineering, Beijing 100094
Source: Journal of Computer Research and Development, 2024, No. 6, pp. 1370-1387 (18 pages)
Funding: National Key R&D Program of China (2022YFB4500405); National Natural Science Foundation of China (62174162); Open Fund of the Space Trusted Computing and Electronic Information Technology Laboratory (OBCandETL-2022-07)
Abstract: Fault-tolerant deep learning accelerators are the basis for highly reliable deep learning processing and are critical for deploying deep learning in safety-critical applications such as avionics and robotics. Since deep learning is both computing-intensive and memory-intensive, traditional fault-tolerant approaches based on redundant computing incur substantial overhead in power consumption and chip area when applied directly to deep learning accelerators. To this end, we characterize the vulnerability differences of deep learning models along two dimensions, across neurons and across the bits of each neuron, and leverage these differences to selectively protect the sensitive parts of the processing components at the architecture layer and the circuit layer respectively, lowering the fault-tolerance cost. At the same time, exploiting the inherent fault tolerance of deep learning, we observe the correlation between model quantization and the bit-protection overhead of the underlying processing elements, and reduce that overhead by adding a quantization constraint without compromising model accuracy. Finally, we employ Bayesian optimization to co-optimize the correlated cross-layer design parameters at the algorithm, architecture, and circuit layers, minimizing hardware resource consumption while satisfying user constraints on the reliability, accuracy, and performance of the deep learning processing.
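To make the abstract's bit-level vulnerability factor concrete, below is a minimal fault-injection sketch in Python. It is illustrative only, not the paper's tooling: the int8 weight layout, the flip_bit fault model, and the sign-agreement accuracy proxy are all assumptions introduced here; a real study would plug actual model inference into the evaluate callback.

import numpy as np

def flip_bit(weights_int8: np.ndarray, bit: int) -> np.ndarray:
    """Corrupt one bit position in every int8 weight (toy transient-fault model)."""
    return (weights_int8.view(np.uint8) ^ np.uint8(1 << bit)).view(np.int8)

def bit_vulnerability(weights_int8: np.ndarray, evaluate, n_bits: int = 8) -> dict:
    """Per-bit vulnerability factor: the accuracy drop when one bit position is
    corrupted across all weights. `evaluate` maps a weight tensor to accuracy
    and is user-supplied (here a toy proxy, not a real model)."""
    baseline = evaluate(weights_int8)
    return {bit: baseline - evaluate(flip_bit(weights_int8, bit))
            for bit in range(n_bits)}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.integers(-128, 128, size=4096, dtype=np.int8)
    clean_sign = np.sign(w.astype(np.int32))
    # Toy accuracy proxy: fraction of weights whose sign survives the fault.
    accuracy = lambda ws: float(np.mean(np.sign(ws.astype(np.int32)) == clean_sign))
    for bit, vf in sorted(bit_vulnerability(w, accuracy).items()):
        print(f"bit {bit}: vulnerability factor {vf:.3f}")

Under this sketch, the sign bit (bit 7) yields a much larger vulnerability factor than the low-order bits. This is the kind of asymmetry the paper exploits: only the sensitive bits receive circuit-level protection, and the constrained quantization further shrinks the set of bits that need protecting.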
Keywords: cross-layer optimization; fault-tolerant deep learning accelerator; vulnerability factor; heterogeneous architecture; selective redundancy
CLC number: TP391 (Automation and Computer Technology / Computer Application Technology)