Membership inference attack and defense method in federated learning based on GAN

Cited by: 4

Authors: ZHANG Jiale; ZHU Chengcheng; SUN Xiaobing [1,2]; CHEN Bing (School of Information Engineering, Yangzhou University, Yangzhou 225127, China; Jiangsu Engineering Research Center for Knowledge Management and Intelligent Service, Yangzhou 225127, China; College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China)

Affiliations: [1] School of Information Engineering, Yangzhou University, Yangzhou 225127, Jiangsu, China; [2] Jiangsu Engineering Research Center for Knowledge Management and Intelligent Service, Yangzhou 225127, Jiangsu, China; [3] College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, Jiangsu, China

Source: Journal on Communications (《通信学报》), 2023, Issue 5, pp. 193-205 (13 pages)

Funding: National Natural Science Foundation of China (No. 62206238); Natural Science Foundation of Jiangsu Province (No. BK20220562); Natural Science Research Project of Jiangsu Higher Education Institutions (No. 22KJB520010); Yangzhou Science and Technology Plan, City-University Cooperation Special Fund (No. YZ2021157, No. YZ2021158).

Abstract: Federated learning systems are highly vulnerable to membership inference attacks launched by malicious participants during the prediction stage, and existing defense methods struggle to balance privacy protection against model loss. To address these problems, membership inference attacks and their defenses were explored in the context of federated learning. First, two membership inference attack methods based on generative adversarial networks (GAN) were proposed: a class-level attack, which aims to leak the training-data privacy of all participants, and a user-level attack, which can target one specific participant. In addition, a membership inference defense method for federated learning based on adversarial examples (DefMIA) was proposed: by designing an adversarial-example noise addition method for the global model parameters, it can effectively defend against membership inference attacks while preserving federated learning accuracy. Experimental results show that the class-level and user-level membership inference attacks achieve over 90% attack accuracy in federated learning, while with DefMIA in place their attack accuracy drops markedly, approaching random guessing (50%).
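The full DefMIA design is specified in the paper itself; the abstract only states the general idea of adding adversarial-example-style noise to the global model parameters before they are shared, so that membership signals are obscured while accuracy is largely preserved. As a loose, hypothetical sketch of that idea (not the authors' actual method), the snippet below trains a toy logistic-regression "global model" and perturbs its parameters with a small FGSM-style step (`eps * sign(gradient)`); all names and the noise scale `eps` are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "global model": logistic regression on synthetic, linearly separable data.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    # Clip to avoid overflow in exp for large |z|.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def accuracy(w):
    return float(((sigmoid(X @ w) > 0.5) == y).mean())

# Train briefly with plain gradient descent.
w = np.zeros(5)
for _ in range(300):
    grad = X.T @ (sigmoid(X @ w) - y) / len(y)
    w -= 0.5 * grad

# FGSM-style parameter perturbation: before sharing, step in the sign of the
# loss gradient with respect to the parameters. The scale eps trades privacy
# protection against model loss.
eps = 0.05
grad = X.T @ (sigmoid(X @ w) - y) / len(y)
w_shared = w + eps * np.sign(grad)

print(f"clean accuracy:     {accuracy(w):.2f}")
print(f"perturbed accuracy: {accuracy(w_shared):.2f}")
```

The point of the sketch is only the trade-off surface the abstract describes: the perturbed parameters differ from the clean ones, yet task accuracy changes little for small `eps`.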

Keywords: federated learning; membership inference attack; generative adversarial network; adversarial example; privacy leakage

CLC Classification: TP391 [Automation and Computer Technology — Computer Application Technology]

 
