Authors: KANG Haiyan; JI Yuanrui (School of Information Management, Beijing Information Science and Technology University, Beijing 100192, China)
Source: Journal on Communications, 2022, No. 10, pp. 94-105.
Funding: National Social Science Fund of China (No.21BTQ079); National Natural Science Foundation of China (No.61370139); Humanities and Social Sciences Fund of the Ministry of Education (No.20YJAZH046); Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing.
Abstract: As a collaborative machine learning framework, federated learning allows participants to train a model jointly by sharing model updates rather than raw data, preserving privacy while still making full use of user data. Nevertheless, an attacker may still infer private user information by eavesdropping on the shared models uploaded by participants. To address this inference-attack problem in federated learning training, a federated learning approach based on local differential privacy (LDP-FL) was proposed. First, a local differential privacy mechanism was designed and applied to the transmission of model parameters, protecting the federated training process from inference attacks. Second, a performance loss constraint mechanism tailored to federated learning was proposed, which reduces the performance loss of the locally differentially private federated model by optimizing the constraint range of the loss function. Finally, the effectiveness of the proposed approach was verified through comparative experiments on the MNIST and Fashion-MNIST datasets.
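The abstract describes perturbing model parameters with local differential privacy before they leave the client. The paper's exact mechanism is not reproduced here; the sketch below illustrates the general idea only, assuming a simple clip-then-Laplace scheme (the `ldp_perturb` function, the clipping bound `clip`, and the per-parameter privacy budget `epsilon` are all illustrative choices, not the authors' method):

```python
import math
import random

def ldp_perturb(params, epsilon, clip=1.0):
    """Perturb a client's parameter vector before upload.

    Each parameter is clipped to [-clip, clip], which bounds the
    sensitivity of a single report to 2*clip, and Laplace noise with
    scale 2*clip/epsilon is added so each reported value satisfies
    epsilon-local differential privacy.
    """
    scale = 2.0 * clip / epsilon
    noisy = []
    for w in params:
        w = max(-clip, min(clip, w))  # bound the value's range
        # Inverse-CDF sampling of Laplace(0, scale) from a uniform draw
        u = random.random() - 0.5
        noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        noisy.append(w + noise)
    return noisy
```

With a very large `epsilon` the noise vanishes and only clipping remains; with a small `epsilon` the server sees heavily randomized values, which is exactly the accuracy/privacy trade-off the paper's loss-constraint mechanism is said to mitigate.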
Classification: TP309.2 [Automation and Computer Technology — Computer System Architecture]