Authors: ZHONG Zhenyu (钟震宇); LIN Yongliang (林勇良); WANG Haotian (王昊天); LI Dongwen (李东闻); SUN Yufei (孙羽菲); ZHANG Yuzhi (张玉志) (College of Software, Nankai University, Tianjin 300350, China)
Affiliation: [1] College of Software, Nankai University, Tianjin 300350, China
Source: Computer Science (《计算机科学》), 2024, No. 12, pp. 129-136 (8 pages)
Funding: National Key R&D Program of China (2021YFB0300104).
Abstract: Training large-scale neural networks usually exceeds the memory and computing capacity of a single computing node, requiring distributed training across multiple nodes. Existing distributed deep learning frameworks are mainly designed for specific hardware environments and cannot effectively adapt to various general-purpose computing devices. To support the efficient training of large-scale deep neural networks, this paper implements a general-purpose automatic pipeline-parallel distributed training framework. By combining a pipeline-based model parallelism strategy with an algorithm that automatically partitions the neural network model, the framework automatically parallelizes and trains large-scale neural network models and training data on general-purpose computer clusters, including China's new generation of supercomputers, significantly reducing the memory and computing pressure on a single computing node. The framework requires no manual tuning and can automatically and efficiently deploy deep neural networks in multi-node distributed environments. It is suitable not only for supercomputers and other high-performance computing clusters, but can also be deployed in other general-purpose distributed computing environments, providing support for the automated distributed training of large-scale neural networks.
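The abstract describes two ingredients: pipeline-based model parallelism and automatic model partitioning. The following PyTorch sketch is a rough, single-process illustration only, not the paper's implementation: the `split_model` helper is a hypothetical partitioner that splits by layer count (the paper's automatic algorithm would balance by estimated memory/compute cost), and the loop shows micro-batch pipelining; in a real multi-node deployment each stage would run on its own node and activations would be exchanged via MPI point-to-point communication.

```python
# Illustrative sketch of pipeline parallelism with micro-batches.
# NOT the paper's implementation: all stages run in one process here,
# so the pipelining is simulated; on a cluster, each stage would live
# on a separate node and activations would move between ranks via MPI.
import torch
import torch.nn as nn

def split_model(model: nn.Sequential, num_stages: int):
    """Hypothetical partitioner: split a sequential model into stages
    with roughly equal layer counts. The paper's automatic splitting
    algorithm would instead balance estimated memory/compute cost."""
    layers = list(model.children())
    per_stage = (len(layers) + num_stages - 1) // num_stages
    return [nn.Sequential(*layers[i:i + per_stage])
            for i in range(0, len(layers), per_stage)]

# A toy model: 8 layers split across 4 pipeline stages.
model = nn.Sequential(*(nn.Linear(64, 64) for _ in range(8)))
stages = split_model(model, num_stages=4)

# Micro-batch pipelining: the mini-batch is split into micro-batches
# that flow through the stages in sequence, so stage k can start on
# micro-batch i+1 while stage k+1 still processes micro-batch i,
# keeping all nodes busy instead of idle.
batch = torch.randn(32, 64)
outputs = []
for mb in batch.chunk(4):
    act = mb
    for stage in stages:  # on a cluster: MPI send/recv between ranks
        act = stage(act)
    outputs.append(act)
result = torch.cat(outputs)  # same result as one full forward pass
```

Because each node holds only its own stage's parameters and activations, the per-node memory footprint shrinks roughly with the number of stages, which is the pressure-reduction effect the abstract describes.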
Keywords: pipeline parallelism; deep neural network; supercomputer; MPI; parallel computing
CLC number: TP183 (Automation and Computer Technology: Control Theory and Control Engineering)