Authors: DOU Yonggan; YUAN Xiaotong[1,2]
Affiliations: [1] School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, Jiangsu, China; [2] Jiangsu Key Laboratory of Big Data Analysis Technology, Nanjing 210044, Jiangsu, China
Source: CAAI Transactions on Intelligent Systems (《智能系统学报》), 2022, Issue 3, pp. 488-495 (8 pages)
Funding: National Natural Science Foundation of China (61876090, 61936005); Science and Technology Innovation 2030 "New Generation Artificial Intelligence" Major Project (2018AAA0100400).
Abstract: Federated learning is a distributed machine learning paradigm in which a central server trains an optimal global model by collaborating with a large number of remote devices. Federated learning currently faces two key challenges: system heterogeneity and data (statistical) heterogeneity. This paper addresses the slow convergence, or even failure to converge, of the global model caused by heterogeneity, and proposes a federated learning algorithm based on implicit stochastic gradient descent optimization. Unlike the traditional federated update scheme, the proposed method uses the locally uploaded model parameters to approximate the average global gradient, avoiding the computation of first-order derivatives, and updates the global model parameters by gradient descent, so that the global model reaches faster and more stable convergence in fewer communication rounds. Experiments simulating different levels of heterogeneity show that the proposed algorithm converges faster and more stably than both FedAvg and FedProx. For the same convergence target, the method reduces the number of communication rounds by nearly 50% compared with FedProx on highly heterogeneous synthetic datasets, significantly improving the stability and robustness of federated learning.
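The server-side update described in the abstract can be made concrete with a short sketch. The Python fragment below is a minimal illustration only, not the authors' implementation: the function name server_update, the two learning-rate parameters, and the sample-size weighting are assumptions. It shows how the displacement between the global model and the averaged uploaded client models can serve as an implicit estimate of the average global gradient, with no first-order derivative computed at the server.

import numpy as np

def server_update(w_global, client_weights, client_sizes, lr_local, lr_server):
    """One communication round of a hypothetical implicit-SGD server update.
    All parameter names and the exact weighting are illustrative assumptions;
    the paper's precise formulation may differ."""
    total = sum(client_sizes)
    # Weighted average of the uploaded client models (FedAvg-style weighting).
    w_avg = sum((n / total) * w for w, n in zip(client_weights, client_sizes))
    # Implicit gradient approximation: after local training with step size
    # lr_local, (w_global - w_avg) / lr_local estimates the average gradient
    # of the clients' losses without evaluating any derivative at the server.
    g_approx = (w_global - w_avg) / lr_local
    # Gradient-descent step on the global model parameters.
    return w_global - lr_server * g_approx

# Illustrative usage with two simulated clients:
w_global = np.zeros(3)
clients = [np.array([0.1, -0.2, 0.3]), np.array([0.2, 0.0, 0.1])]
sizes = [100, 50]
w_next = server_update(w_global, clients, sizes, lr_local=0.1, lr_server=0.05)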
Keywords: federated learning; distributed machine learning; central server; global model; implicit stochastic gradient descent; data heterogeneity; system heterogeneity; optimization algorithm; fast convergence
Classification: TP8 [Automation and Computer Technology: Detection Technology and Automation Devices]