A Secure Gradient Aggregation Scheme Based on Local Differential Privacy in Asynchronous Horizontal Federated Learning

Author(s): WEI Lifei [1,2]; ZHANG Wuji; ZHANG Lei; HU Xuehui [3]; WANG Xuan

Affiliation(s): [1] College of Information Technology, Shanghai Ocean University, Shanghai 201306, China; [2] College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China; [3] Shanghai Tongtai Information Technology Co., Ltd., Shanghai 200235, China; [4] Engineering University of PAP, Xi'an, Shaanxi 710086, China

Source: Journal of Electronics & Information Technology, 2024, Issue 7, pp. 3010-3018 (9 pages)

Fund(s): National Natural Science Foundation of China (61972241, 62172436); Natural Science Foundation of Shanghai (22ZR1427100); Natural Science Foundation of Shaanxi Province (2023-JC-YB-584); Shanghai Soft Science Research Project (23692106700)

Abstract: Federated learning is an emerging distributed machine learning framework that effectively addresses the data-silo and privacy-leakage problems of traditional machine learning by training a joint model without users' private data ever leaving their local domain. However, federated learning suffers from training-lagged clients dragging down the global training speed. Asynchronous federated learning allows a user to upload its locally updated model to the server and join the aggregation task as soon as the local update finishes, without waiting for the other users. However, asynchronous federated learning still cannot recognize erroneous models uploaded by malicious users and may leak users' privacy. To address these issues, a privacy-preserving Secure gradient Aggregation scheme for asynchronous Federated Learning (SAFL) is designed. Each user perturbs its locally trained model with a local differential privacy mechanism before uploading it to the server, and the server removes malicious users through a poisoning-detection algorithm to achieve Secure Aggregation (SA). Theoretical analysis and experiments show that, in the asynchronous federated learning setting, the proposed scheme effectively identifies malicious users, protects the privacy of users' local models, reduces the risk of privacy leakage, and achieves a considerable improvement in model accuracy over other schemes.
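Since only the abstract is available on this page, the sketch below is a rough, generic illustration of the workflow the abstract outlines: a client clips its local update and perturbs it with Laplace noise (a common local differential privacy mechanism), the server screens incoming updates with a simple poisoning check, and accepted updates are applied asynchronously with a staleness-dependent weight. The noise mechanism, the cosine-similarity check, the staleness weighting, and all function names (ldp_perturb, looks_poisoned, async_aggregate) are assumptions for illustration only and are not taken from the SAFL paper.

```python
import numpy as np

def ldp_perturb(update, clip_norm=1.0, epsilon=1.0):
    """Clip a local model update and add Laplace noise.
    Generic LDP-style perturbation; the exact mechanism in SAFL may differ."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = np.random.laplace(loc=0.0, scale=clip_norm / epsilon, size=update.shape)
    return clipped + noise

def looks_poisoned(update, reference, cos_threshold=0.0):
    """Toy poisoning check: flag updates pointing away from a reference
    direction (e.g., the server's current estimate of the global update)."""
    denom = np.linalg.norm(update) * np.linalg.norm(reference) + 1e-12
    return float(update @ reference) / denom < cos_threshold

def async_aggregate(global_model, client_update, staleness, base_lr=1.0):
    """Asynchronous-style aggregation: apply each accepted update as soon as
    it arrives, down-weighting stale updates."""
    lr = base_lr / (1.0 + staleness)
    return global_model + lr * client_update

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    global_model = np.zeros(10)
    reference = np.ones(10)              # stand-in for a trusted reference direction
    update = rng.normal(size=10) + 1.0   # stand-in for a client's local update
    noisy = ldp_perturb(update, clip_norm=1.0, epsilon=1.0)
    if not looks_poisoned(noisy, reference):
        global_model = async_aggregate(global_model, noisy, staleness=2)
```

In this sketch the privacy budget epsilon trades accuracy for privacy (smaller epsilon means heavier noise), and the staleness weight keeps late-arriving updates from dominating the global model; the actual parameter choices and detection rule of SAFL are described in the paper itself.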

Keywords: Secure aggregation; Local differential privacy; Privacy preservation; Malicious poisoning attacks; Asynchronous federated learning

CLC Number: TN919 [Electronics and Telecommunications: Communication and Information Systems]; TP309 [Electronics and Telecommunications: Information and Communication Engineering]

 
