Research on Hierarchical Federated Learning Incentive Mechanism Based on Master-Slave Game (基于主从博弈的分层联邦学习激励机制研究)

Cited by: 3


Authors: JIA Yunjian [1], HUANG Yu [1], LIANG Liang [1], WAN Yangliang, ZHOU Jihua

Affiliations: [1] School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China; [2] 95696 Troops, Chongqing 400030, China; [3] Chongqing Key Laboratory of Complex Environment Communication, Chongqing Jinmei Communication Co., Ltd., Chongqing 400030, China

Source: Journal of Electronics & Information Technology (《电子与信息学报》), 2023, No. 4, pp. 1366-1373 (8 pages)

Funding: National Natural Science Foundation of China (62071075, 61971077); Natural Science Foundation of Chongqing (cstc2020jcyj-msxmX0704).

Abstract: To optimize the training delay of the global model in hierarchical Federated Learning (FL), and to address the selfishness of terminal devices in practical scenarios, an incentive mechanism based on game theory is proposed. Under a limited incentive budget, the equilibrium solution between terminal devices and edge servers and the minimum edge-model training delay are obtained. Considering varying numbers of terminal devices, a variable-incentive training acceleration algorithm based on the Stackelberg game is designed to minimize the delay of one round of global model training. Simulation results show that the proposed algorithm effectively mitigates the impact of terminal-device selfishness and improves the training speed of the hierarchical federated learning global model.
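To illustrate the kind of leader-follower interaction the abstract describes, the sketch below models a simplified Stackelberg game for FL incentives. It is not the paper's formulation: the linear pricing scheme, quadratic energy-cost model, function names, and parameter values are all assumptions made for this example. The edge server (leader) announces a unit price for CPU frequency under a fixed incentive budget; each terminal device (follower) best-responds with the frequency that maximizes its reward minus energy cost; the largest budget-feasible price then minimizes the per-round edge training delay.

```python
import math

# Hypothetical Stackelberg (leader-follower) sketch for an FL incentive mechanism.
# Assumed model (not the paper's exact formulation):
#   - Leader (edge server) announces a unit price p per unit of CPU frequency.
#   - Follower (device i) picks frequency f_i to maximize
#         u_i(f_i) = p * f_i - kappa_i * d_i * f_i**2
#     (reward minus quadratic energy cost), giving best response
#         f_i* = p / (2 * kappa_i * d_i).
#   - The leader chooses the largest p whose total payout fits the budget B,
#     which minimizes the per-round edge training delay max_i d_i / f_i*.

def best_response(p, kappa, d):
    """Follower's optimal CPU frequency under price p."""
    return p / (2.0 * kappa * d)

def leader_price(budget, kappas, ds):
    """Largest feasible price: p^2 * sum_i 1/(2 kappa_i d_i) <= budget."""
    s = sum(1.0 / (2.0 * k * d) for k, d in zip(kappas, ds))
    return math.sqrt(budget / s)

def round_delay(p, kappas, ds):
    """Edge-model round delay = slowest device's local computation time."""
    return max(d / best_response(p, k, d) for k, d in zip(kappas, ds))

if __name__ == "__main__":
    kappas = [1e-27, 2e-27, 1.5e-27]   # energy coefficients (hypothetical values)
    ds = [5e8, 8e8, 6e8]               # CPU cycles per local update (hypothetical values)
    budget = 10.0                      # total incentive budget B
    p = leader_price(budget, kappas, ds)
    print(f"price = {p:.3e}, round delay = {round_delay(p, kappas, ds):.3f} s")
```

Under these assumptions the equilibrium has a closed form, so the "variable incentive" effect is visible directly: a larger budget allows a higher price, which raises every device's best-response frequency and shrinks the straggler-limited round delay.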

Keywords: hierarchical federated learning; game theory; incentive mechanism

CLC Number: TN92 [Electronics and Telecommunications - Communication and Information Systems]
