Authors: CAO Bo-jun; QIAN Ru-yi; XU Yuan-chao (College of Information Engineering, Capital Normal University, Beijing 100048, China)
Source: Computer Engineering & Science (《计算机工程与科学》), 2024, No. 9, pp. 1539-1546 (8 pages)
Funding: Beijing Natural Science Foundation (4212017).
Abstract: The limited device memory capacity restricts the further development of deep neural network models, and memory reuse is one of the few methods that saves memory without introducing additional overhead. Intermediate tensors in the computational graph occupy most of the memory space and are the primary optimization targets of memory reuse algorithms. Existing typical memory reuse algorithms, including the large-tensor-first algorithm and the short-lifetime-first algorithm, start from a single characteristic and only consider whether the lifetimes of tensors overlap, ignoring the relative positional relationship between the lifetimes of adjacent tensors; the more complex the computational graph, the less thoroughly memory reuse is exploited. To address this problem, a new memory reuse algorithm, UMR, is proposed. By analyzing in depth the relative positional relationship between the lifetimes of adjacent tensors in the graph and reusing memory promptly, UMR obtains more memory reuse opportunities. The algorithm is evaluated on real inference models from MLPerf, and the results show that the memory reuse rate of UMR is no lower than that of existing mainstream algorithms and can reach the theoretical optimum of memory reuse for those models. Evaluations on relatively complex computational graphs show that, compared with the large-tensor-first and short-lifetime-first algorithms, UMR saves up to 21.6% and 18.7% of memory usage, with average savings of 6.5% and 13.2%, respectively.
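The abstract describes lifetime-overlap-based reuse of intermediate-tensor memory. As a rough illustration only (this is not the paper's UMR algorithm, but a generic large-tensor-first greedy baseline of the kind the paper compares against), the sketch below assigns tensors to shared buffers whenever their lifetimes do not overlap; all tensor names, sizes, and lifetimes are hypothetical.

```python
# Minimal sketch of lifetime-based memory reuse for intermediate tensors.
# Assumption: each tensor's lifetime is a [start, end] step interval, and a
# buffer may be shared by tensors whose lifetimes do not overlap.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tensor:
    name: str
    size: int    # bytes
    start: int   # first step at which the tensor is live
    end: int     # last step at which the tensor is live

@dataclass
class Buffer:
    size: int
    occupants: List[Tensor] = field(default_factory=list)

    def fits(self, t: Tensor) -> bool:
        # Reuse is legal only if the new tensor overlaps no current occupant.
        return t.size <= self.size and all(
            t.start > o.end or t.end < o.start for o in self.occupants
        )

def greedy_reuse(tensors: List[Tensor]) -> List[Buffer]:
    """Place each tensor into the first reusable buffer; allocate a new one otherwise.
    Tensors are visited largest-first (a simple 'large tensor priority' heuristic)."""
    buffers: List[Buffer] = []
    for t in sorted(tensors, key=lambda x: -x.size):
        target = next((b for b in buffers if b.fits(t)), None)
        if target is None:
            target = Buffer(size=t.size)
            buffers.append(target)
        target.occupants.append(t)
    return buffers

if __name__ == "__main__":
    # Hypothetical intermediate tensors of a small inference graph.
    graph = [
        Tensor("conv1_out", 4_000_000, start=0, end=2),
        Tensor("conv2_out", 4_000_000, start=2, end=4),  # overlaps conv1_out at step 2
        Tensor("relu_out",  2_000_000, start=4, end=5),  # can reuse conv1_out's buffer
        Tensor("fc_out",      400_000, start=5, end=6),
    ]
    pools = greedy_reuse(graph)
    with_reuse = sum(b.size for b in pools)
    without_reuse = sum(t.size for t in graph)
    print(f"buffers: {len(pools)}, bytes with reuse: {with_reuse}, without: {without_reuse}")
```

In this toy example two 4 MB buffers cover all four tensors (8 MB instead of 10.4 MB). UMR, per the abstract, goes further by also exploiting the relative positions of adjacent tensors' lifetimes rather than only their overlap.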
CLC Number: TP393.02 [Automation and Computer Technology - Computer Application Technology]