分布式训练系统及其优化算法综述 (Cited by: 8)

A Survey of Distributed Training System and Its Optimization Algorithms


Authors: 王恩东 [3], 闫瑞栋, 郭振华, 赵雅倩 (WANG En-Dong, YAN Rui-Dong, GUO Zhen-Hua, ZHAO Ya-Qian; Shandong Massive Information Technology Research Institute, Jinan 250101; Inspur (Beijing) Electronic Information Industry Co., Ltd., Beijing 100875; Inspur Electronic Information Industry Co., Ltd., Jinan 250101)

Affiliations: [1] Shandong Massive Information Technology Research Institute, Jinan 250101; [2] Inspur (Beijing) Electronic Information Industry Co., Ltd., Beijing 100875; [3] Inspur Electronic Information Industry Co., Ltd., Jinan 250101

Source: Chinese Journal of Computers (《计算机学报》), 2024, No. 1, pp. 1-28 (28 pages)

Funding: Supported by the Natural Science Foundation of Shandong Province (ZR2021QF073).

Abstract: Artificial intelligence employs a variety of optimization techniques to learn key features or knowledge from massive training samples in order to improve the quality of solutions, which places higher demands on training methods. However, traditional single-machine training cannot meet the resulting storage and computing requirements, especially as dataset and model sizes have continued to grow in recent years. Distributed training systems, in which multiple computing nodes cooperate, have therefore become a hot research topic for computation- and storage-intensive applications such as deep learning. This survey first introduces the main challenges of single-machine training (e.g., dataset/model size, computing performance, storage capacity, system stability, and privacy protection). Second, it identifies three key problems that a distributed training system must address: partitioning, communication, and aggregation. Based on these problems, it summarizes a general framework for distributed training systems consisting of four core components (partition, communication, optimization, and aggregation), examines the core technologies within each component, and reviews representative research progress. The survey then focuses on the parallel stochastic gradient descent (SGD) algorithm and its variants, categorizing them into centralized and decentralized architecture branches; within each branch, synchronous and asynchronous optimization algorithms are revisited. It also introduces three representative applications of distributed systems: training in heterogeneous environments, federated learning, and large-model training. Finally, two future research directions are proposed: designing efficient distributed second-order optimization algorithms, and developing theoretical analysis methods for federated learning.
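To make the surveyed taxonomy concrete: in the centralized, synchronous branch of parallel SGD, each of K workers holds a data shard and computes a stochastic gradient g_k(w_t) on the current model, and a coordinator (e.g., a parameter server) averages the gradients before updating, w_{t+1} = w_t - (lr/K) * sum_k g_k(w_t). The following is a minimal single-process sketch of that scheme on a toy least-squares problem; all names (num_workers, local_gradient, ...) and the plain-averaging rule are illustrative assumptions, not code or APIs from the paper.

```python
# Minimal sketch of synchronous, centralized parallel SGD on a toy
# least-squares problem. Illustrative only: real systems run workers on
# separate devices and overlap communication with computation.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data, partitioned across workers (data parallelism).
num_workers, n_per_worker, dim = 4, 256, 10
w_true = rng.normal(size=dim)
shards = []
for _ in range(num_workers):
    X = rng.normal(size=(n_per_worker, dim))
    y = X @ w_true + 0.01 * rng.normal(size=n_per_worker)
    shards.append((X, y))

def local_gradient(w, shard, batch_size=32):
    """Stochastic gradient of 0.5*||Xw - y||^2 / n on one worker's shard."""
    X, y = shard
    idx = rng.choice(len(y), size=batch_size, replace=False)
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / batch_size

w = np.zeros(dim)   # global model held by the coordinator ("parameter server")
lr = 0.1
for step in range(200):
    # Synchronous round: every worker computes a gradient on the same model...
    grads = [local_gradient(w, shard) for shard in shards]
    # ...and the coordinator aggregates (here: plain averaging) before updating.
    w -= lr * np.mean(grads, axis=0)

print("distance to w_true:", np.linalg.norm(w - w_true))
```

By contrast, an asynchronous variant applies each worker's gradient as soon as it arrives (possibly computed on a stale copy of w), removing the per-round barrier at the cost of staleness, while a decentralized variant replaces the central average with neighbor-to-neighbor averaging over a communication graph; these are the other branches the survey covers.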

Keywords: distributed training system; (de)centralized architecture; centralized-architecture algorithms; (a)synchronous algorithms; parallel stochastic gradient descent; convergence rate

CLC Number: TP301 [Automation and Computer Technology - Computer System Architecture]

 
