Authors: Xu Chenyang; Ge Lina [1,2,4]; Wang Zhe; Zhou Yongquan [1,4]; Qin Xia; Tian Lei [2,3]
Affiliations: [1] School of Artificial Intelligence, Guangxi Minzu University, Nanning 530006, China; [2] Key Laboratory of Network Communication Engineering, Guangxi Minzu University, Nanning 530006, China; [3] School of Electronic Information, Guangxi Minzu University, Nanning 530006, China; [4] Guangxi Key Laboratory of Hybrid Computation & IC Design Analysis, Nanning 530006, China
Source: Application Research of Computers, 2023, No. 8, pp. 2473-2480 (8 pages)
Funding: National Natural Science Foundation of China (61862007); Natural Science Foundation of Guangxi (2020GXNSFBA297103).
Abstract: Federated learning solves the data-silo problem in machine learning. However, the parties' datasets may differ substantially in sample space and feature space, which degrades the prediction accuracy of the federated model. To address this problem, this paper proposes a federated learning method based on knowledge transfer with differential privacy protection. The method uses boundary-expanding locality-sensitive hashing to compute the similarity between the parties' instances and trains on instances weighted by this similarity, realizing instance-based federated transfer learning. In this process, the instances themselves need not be disclosed to other parties, which prevents direct privacy leakage. To further reduce indirect privacy leakage during knowledge transfer, a differential privacy mechanism is introduced to perturb the gradient data transmitted between parties, protecting privacy throughout the transfer process. Theoretical analysis shows that the knowledge transfer process satisfies ε-differential privacy. The method is implemented on the XGBoost gradient boosting tree model, and experimental results show that, compared with methods without knowledge transfer, the proposed method reduces the test error of the federated model by more than 6% on average.
Keywords: federated learning; transfer learning; locality-sensitive hashing; differential privacy; gradient boosting tree
Classification: TP309 [Automation and Computer Technology - Computer System Architecture]
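The abstract describes two mechanisms: weighting each local instance by its cross-party similarity estimated with locality-sensitive hashing, and perturbing the gradients exchanged between parties so the transfer satisfies ε-differential privacy. The following is a minimal Python sketch of those two steps under simplifying assumptions; it uses plain random-projection (sign) LSH as a stand-in for the paper's boundary-expanding variant, takes the gradient sensitivity to equal the clipping bound, and all function names and parameters are illustrative rather than the authors' implementation.

```python
import numpy as np

def lsh_signatures(X, n_hashes=32, seed=0):
    """Random-projection (sign) LSH signatures for each row of X."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((X.shape[1], n_hashes))
    return (X @ planes > 0).astype(np.int8)           # shape: (n_samples, n_hashes)

def similarity_weights(local_X, remote_signatures, n_hashes=32, seed=0):
    """Weight each local instance by its best hash-agreement rate with the
    other party's instances; only hash signatures, never raw data, are shared."""
    local_sig = lsh_signatures(local_X, n_hashes, seed)
    # fraction of matching hash bits against every remote signature
    match = (local_sig[:, None, :] == remote_signatures[None, :, :]).mean(axis=2)
    return match.max(axis=1)                          # one weight per local instance

def perturb_gradients(grads, epsilon, clip=1.0, seed=None):
    """Clip per-instance gradients and add Laplace noise before release,
    assuming (illustratively) that the sensitivity equals the clipping bound."""
    clipped = np.clip(grads, -clip, clip)
    noise = np.random.default_rng(seed).laplace(scale=clip / epsilon, size=clipped.shape)
    return clipped + noise

# Example: similarity weights scale the (noisy) first-order gradients that a
# party would share in an XGBoost-style boosting round.
if __name__ == "__main__":
    X_local = np.random.randn(100, 8)
    remote_sigs = lsh_signatures(np.random.randn(80, 8))
    w = similarity_weights(X_local, remote_sigs)
    raw_grads = np.random.randn(100)                  # stand-in for logloss gradients
    shared = perturb_gradients(w * raw_grads, epsilon=1.0)
```

In a full gradient-boosting integration, the weighted and noise-perturbed first- and second-order gradients would feed the tree split-finding step on the receiving side; the exact sensitivity analysis and privacy accounting are given in the paper and are not reproduced here.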