Source: 《Computers, Materials & Continua》, 2025, Issue 4, pp. 239-258 (20 pages)
Funding: Supported in part by the National Social Science Foundation of China under Grant 20BTQ058, and in part by the Natural Science Foundation of Hunan Province under Grant 2023JJ50033.
Abstract: Federated learning (FL) over large-scale neural networks has gained wide recognition for its effectiveness in distributed training. Nonetheless, the open system architecture inherent to federated learning raises concerns about its vulnerability to attacks. Poisoning attacks have become a major menace to federated learning because they are stealthy and highly destructive: by altering the local model during routine training, attackers can easily contaminate the global model. Traditional detection and aggregation solutions mitigate certain threats, but they are still insufficient to completely eliminate the attackers' influence. Federated unlearning, which removes unreliable models while maintaining the accuracy of the global model, has therefore emerged as a solution. Unfortunately, some existing federated unlearning approaches are difficult to apply to large neural network models because of their high computational cost. Hence, we propose SlideFU, an efficient anti-poisoning federated unlearning framework. The core idea of SlideFU is to organize the training process with a sliding window, so that all operations are confined within the window. We design a malicious-client detection scheme based on principal component analysis (PCA), which computes trust factors between compressed models at low cost to eliminate unreliable models. Once the global model is confirmed to be under attack, the system activates the federated unlearning process and calibrates the gradients according to the update direction of the calibration gradients. Experiments on two public datasets demonstrate that our scheme can recover a robust model with extremely high efficiency.
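To illustrate the kind of PCA-based detection the abstract describes, the sketch below compresses flattened client updates with PCA and uses average pairwise cosine similarity as a trust factor, flagging clients that fall far below the population mean. This is a minimal illustration under assumed design choices; the function names (compress_updates, trust_factors, detect_malicious), the cosine-similarity trust score, and the z-score threshold are illustrative assumptions, not the exact SlideFU algorithm.

```python
# Hedged sketch of PCA-based malicious-update detection (assumed design, not the paper's exact method).
import numpy as np
from sklearn.decomposition import PCA

def compress_updates(updates, n_components=8):
    """Flatten each client's model update and project it into a low-dimensional PCA space."""
    flat = np.stack([u.ravel() for u in updates])            # shape: (num_clients, num_params)
    k = min(n_components, *flat.shape)
    return PCA(n_components=k).fit_transform(flat)           # shape: (num_clients, k)

def trust_factors(compressed):
    """Trust factor = average cosine similarity of a compressed update to all other updates."""
    unit = compressed / (np.linalg.norm(compressed, axis=1, keepdims=True) + 1e-12)
    sim = unit @ unit.T                                      # pairwise cosine similarities
    np.fill_diagonal(sim, 0.0)
    return sim.sum(axis=1) / (len(compressed) - 1)

def detect_malicious(updates, z_thresh=2.0):
    """Flag clients whose trust factor falls well below the mean (illustrative threshold rule)."""
    t = trust_factors(compress_updates(updates))
    flagged = t < t.mean() - z_thresh * t.std()
    return np.where(flagged)[0], t

# Usage: nine benign updates sharing a common direction plus one sign-flipped (poisoned) update.
rng = np.random.default_rng(0)
base = rng.normal(0, 0.01, size=10_000)
benign = [base + rng.normal(0, 0.002, size=10_000) for _ in range(9)]
poisoned = [-base + rng.normal(0, 0.002, size=10_000)]
suspects, scores = detect_malicious(benign + poisoned)
print("suspected malicious clients:", suspects)             # expected to flag the last client
```

Compressing the updates before comparing them keeps the pairwise similarity computation cheap even for large models, which matches the abstract's emphasis on low-cost detection.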
Keywords: federated learning; malicious client detection; model recovery; machine unlearning
Classification: TP309 [Automation and Computer Technology - Computer System Architecture]