Authors: 周景贤 (Zhou Jingxian), 韩威 (Han Wei), 张德栋 (Zhang Dedong), 李志平 (Li Zhiping)
Affiliations: [1] School of Computer Science and Technology, Civil Aviation University of China, Tianjin 300300; [2] Institute of Electronic Computing Technology, China Academy of Railway Sciences Group Co., Ltd., Beijing 100081
Source: Journal of Information Security Research (《信息安全研究》), 2025, No. 3, pp. 205-213 (9 pages)
Funding: National Natural Science Foundation of China (U2333201); Civil Aviation Safety Capacity Building Project (PESA2022093, PESA2023101); Fundamental Research Funds for the Central Universities (3122022058); China University Industry-University-Research Innovation Fund (2023IT277)
Abstract: Because participants in federated learning have a high degree of autonomy and their identities are difficult to verify, the training process is vulnerable to label-flipping attacks, in which the model learns incorrect patterns from mislabeled data and its overall performance degrades. To resist label-flipping attacks effectively, a dilution-protection federated learning method based on multi-stage model training is proposed. The method randomly partitions the training dataset and uses a dilution-protection federated learning algorithm to distribute only part of the data to each participating client, limiting the amount of data any single client holds so that a malicious participant with a large dataset cannot exert an outsized influence on the model. After each training stage, the gradients from all training rounds in that stage are reduced in dimensionality and clustered to identify potentially malicious participants, whose training is then restricted in the next stage. The global model parameters are saved at the end of each stage, so that every stage builds on the model obtained in the previous one. Experimental results show that the method reduces the impact of the attack without sacrificing model accuracy, and improves model convergence speed by 25.2% to 32.3% on average.
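The abstract outlines a dimensionality-reduction-plus-clustering step for spotting potentially malicious clients from their stage gradients, but does not specify the exact algorithms used. Below is a minimal sketch of that idea, assuming PCA for the reduction and a two-cluster KMeans with a "minority cluster is suspicious" rule; the function name, parameters, and synthetic example data are illustrative assumptions, not the authors' implementation.

```python
# Sketch of gradient clustering for flagging suspicious clients (assumed
# algorithms: PCA + KMeans; not the paper's exact dilution-protection method).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def flag_suspicious_clients(client_gradients, n_components=2, random_state=0):
    """client_gradients: list of 1-D numpy arrays, one flattened gradient per client.
    Returns indices of clients assigned to the smaller (suspicious) cluster."""
    X = np.stack(client_gradients)                       # (num_clients, num_params)
    X_low = PCA(n_components=n_components,
                random_state=random_state).fit_transform(X)
    labels = KMeans(n_clusters=2, n_init=10,
                    random_state=random_state).fit_predict(X_low)
    # Assumption: honest clients form the majority cluster; the minority is suspect.
    suspicious_label = int(np.argmin(np.bincount(labels)))
    return [i for i, lab in enumerate(labels) if lab == suspicious_label]

# Example: 10 clients with 1000-parameter gradients; clients 8 and 9 are outliers.
rng = np.random.default_rng(0)
grads = [rng.normal(0, 1, 1000) for _ in range(8)] + \
        [rng.normal(5, 1, 1000) for _ in range(2)]
print(flag_suspicious_clients(grads))   # expected to print [8, 9]
```

In the multi-stage scheme described above, the flagged indices would then be used to restrict those clients' participation in the next training stage, while the current global model parameters are saved as the starting point for that stage.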
Keywords: federated learning; data security; malicious behavior; label-flipping attack; defense
Classification: TP309.2 (Automation and Computer Technology: Computer System Architecture)