A Federated Learning Incentive Mechanism for Dynamic Client Participation: Unbiased Deep Learning Models


Authors: Jianfeng Lu, Tao Huang, Yuanai Xie, Shuqin Cao, Bing Li

Affiliations: [1] School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430065, China; [2] College of Computer Science, South-Central Minzu University, Wuhan 430074, China; [3] School of Computer Science and Technology, Zhejiang Normal University, Jinhua 321004, China

Source: Computers, Materials & Continua, 2025, Issue 4, pp. 619-634 (16 pages)

Funding: Supported by the National Natural Science Foundation of China (Nos. 62072411, 62372343, 62402352, 62403500); the Key Research and Development Program of Hubei Province (No. 2023BEB024); and the Open Fund of the Key Laboratory of Social Computing and Cognitive Intelligence (Dalian University of Technology), Ministry of Education (No. SCCI2024TB02).

Abstract: The proliferation of deep learning (DL) has amplified the demand for processing large, complex datasets for tasks such as modeling, classification, and identification. However, traditional DL methods compromise client privacy by collecting sensitive data, underscoring the need for privacy-preserving solutions such as Federated Learning (FL). FL addresses escalating privacy concerns by enabling collaborative model training without sharing raw data. Because FL clients autonomously manage their training data, encouraging client participation is pivotal to successful model training. To overcome challenges such as unreliable communication and budget constraints, we present ENTIRE, a contract-based dynamic participation incentive mechanism for FL. ENTIRE ensures unbiased model training by tailoring participation levels and payments to diverse client preferences. Our approach proceeds in several steps. First, we analyze how random client participation affects FL convergence in non-convex settings, establishing the relationship between client participation levels and model performance. Next, we reformulate model performance optimization as an optimal contract design problem that guides the distribution of rewards among clients with differing participation costs. By balancing the budget against model effectiveness, we derive optimal contracts under different budget constraints, prompting clients to reveal their participation preferences and select suitable contracts for contributing to model training. Finally, we evaluate ENTIRE comprehensively on three real datasets. The results show a significant 12.9% improvement in model performance and confirm that the mechanism satisfies the anticipated economic properties.
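
To make the mechanism described in the abstract more concrete, the following is a minimal, self-contained Python sketch of the general idea of contract-based probabilistic client participation with inverse-probability weighting for unbiased aggregation. It is illustrative only: the three-option contract menu, the linear cost model, and the scalar stand-in for each client's local update are hypothetical assumptions, not ENTIRE's actual contract design, convergence analysis, or experiments.

```python
import random

# Hypothetical screening menu of (participation_probability, payment) pairs:
# higher participation pays more, but the marginal payment per unit of
# participation decreases, so lower-cost clients self-select higher levels.
CONTRACTS = [(0.3, 3.0), (0.6, 4.8), (0.9, 6.0)]


class Client:
    def __init__(self, cost_per_round, local_update):
        self.cost = cost_per_round        # private participation cost (the client's "type")
        self.local_update = local_update  # stand-in for a local gradient / model delta

    def choose_contract(self):
        """Pick the contract maximizing expected utility = payment - cost * probability.

        Returns None (opt out) if every contract yields negative utility.
        """
        best = max(CONTRACTS, key=lambda c: c[1] - self.cost * c[0])
        return best if best[1] - self.cost * best[0] >= 0 else None


def federated_round(clients):
    """One toy aggregation round with contract-based random participation.

    Each participating client's update is scaled by 1 / p, where p is the
    participation probability of its chosen contract, so the expected
    aggregate equals the full-participation average (an unbiased estimate).
    """
    total = 0.0
    for client in clients:
        contract = client.choose_contract()
        if contract is None:
            continue  # client declines every contract this round
        p, _payment = contract
        if random.random() < p:  # client shows up with probability p
            total += client.local_update / p
    return total / len(clients)


if __name__ == "__main__":
    random.seed(0)
    clients = [Client(cost_per_round=random.uniform(1.0, 10.0),
                      local_update=random.gauss(0.0, 1.0))
               for _ in range(50)]
    full_avg = sum(c.local_update for c in clients) / len(clients)
    rounds = [federated_round(clients) for _ in range(2000)]
    print(f"full-participation average  : {full_avg:.4f}")
    print(f"mean of 2000 sampled rounds : {sum(rounds) / len(rounds):.4f}")
```

Scaling each received update by the reciprocal of the client's participation probability is what keeps the expected aggregate equal to the full-participation average, which is one simple way to read the "unbiased" goal in the title; ENTIRE's precise weighting, contract derivation, and convergence guarantees are given in the paper itself.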

Keywords: federated learning; deep learning; non-IID data; dynamic client participation; non-convex optimization; contract

Classification: TP18 [Automation and Computer Technology / Control Theory and Control Engineering]

 
