Client grouping and time-sharing scheduling for asynchronous federated learning in heterogeneous edge computing environments

Authors: MA Qianpiao; JIA Qingmin; LIU Jianchun; XU Hongli [2,3]; XIE Renchao; HUANG Tao

Affiliations: [1] Future Network Research Center, Purple Mountain Laboratories, Nanjing 211111, Jiangsu, China; [2] School of Computer Science and Technology, University of Science and Technology of China, Hefei 230026, Anhui, China; [3] Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou 215123, Jiangsu, China; [4] State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China

Source: Journal on Communications, 2023, No. 11, pp. 79-93 (15 pages)

Funding: National Natural Science Foundation of China (No. U1709217, No. 61936015, No. 92267301).

Abstract: To overcome three key challenges of federated learning in heterogeneous edge computing environments, namely edge heterogeneity, non-IID data, and communication resource constraints, a grouping-based asynchronous federated learning (FedGA) mechanism was proposed. Edge nodes were divided into multiple groups; each group performed global updates asynchronously against the global model, while the edge nodes within a group communicated with the parameter server in a time-sharing manner. Theoretical analysis established a quantitative relationship between the convergence bound of FedGA and the data distribution among the groups. A time-sharing scheduling strategy, the magic mirror method (MMM), was proposed to optimize the completion time of a single round of model updating within a group. Based on the theoretical analysis of FedGA and on MMM, an effective grouping algorithm was designed to minimize the overall training completion time. Experimental results demonstrate that FedGA and MMM reduce model training time by 30.1% to 87.4% compared with existing state-of-the-art methods.
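The abstract describes the mechanism only at a high level. The toy simulation below is a minimal sketch of the grouped asynchronous aggregation idea, not the authors' FedGA/MMM implementation: all names (local_sgd, group_round, async_merge, the staleness-weighted mixing factor alpha) are illustrative assumptions chosen for readability.

```python
# Toy, single-process sketch of grouped asynchronous federated aggregation:
# nodes in a group report one after another (time-sharing), group results are
# merged into the global model asynchronously with a staleness-aware weight.
import numpy as np

rng = np.random.default_rng(0)
DIM = 10                       # toy model: a single weight vector
global_model = np.zeros(DIM)
global_version = 0

def local_sgd(model, data, lr=0.1, steps=5):
    """Toy local update: gradient steps on 0.5*||w - data||^2."""
    w = model.copy()
    for _ in range(steps):
        w -= lr * (w - data)
    return w

def group_round(group_data, snapshot):
    """All nodes of a group start from the same snapshot; their local
    models are averaged into one group model."""
    locals_ = [local_sgd(snapshot, d) for d in group_data]
    return np.mean(locals_, axis=0)

def async_merge(global_model, group_model, staleness, alpha=0.6):
    """Staleness-aware asynchronous merge (a common heuristic): the older
    the snapshot a group trained on, the less it moves the global model."""
    weight = alpha / (1.0 + staleness)
    return (1.0 - weight) * global_model + weight * group_model

# Two groups with different data centers, mimicking non-IID distributions.
groups = [
    {"data": [rng.normal(loc=+1.0, size=DIM) for _ in range(3)],
     "snapshot": global_model.copy(), "version": 0},
    {"data": [rng.normal(loc=-1.0, size=DIM) for _ in range(3)],
     "snapshot": global_model.copy(), "version": 0},
]

# Groups finish at different times; the server merges whichever arrives next.
for arriving in [0, 1, 0, 0, 1]:
    g = groups[arriving]
    group_model = group_round(g["data"], g["snapshot"])   # trained on old snapshot
    staleness = global_version - g["version"]
    global_model = async_merge(global_model, group_model, staleness)
    global_version += 1
    g["snapshot"], g["version"] = global_model.copy(), global_version
    print(f"merge from group {arriving}, staleness={staleness}, "
          f"||w|| = {np.linalg.norm(global_model):.3f}")
```

The staleness-weighted merge shown here is only one plausible way to combine asynchronously arriving group models; the paper's actual update rule and the MMM intra-group schedule are defined in the full text.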

Keywords: edge computing; federated learning; non-IID data; heterogeneity; convergence analysis

Classification code: TP301 (Automation and Computer Technology: Computer System Architecture)

 
