Defending Federated Learning System from Poisoning Attacks via Efficient Unlearning  


Authors: Long Cai, Ke Gu, Jiaqi Lei

Affiliation: [1] School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410114, China

Source: Computers, Materials & Continua, 2025, Issue 4, pp. 239-258 (20 pages)

Funding: Supported in part by the National Social Science Foundation of China under Grant 20BTQ058, and in part by the Natural Science Foundation of Hunan Province under Grant 2023JJ50033.

Abstract: Large-scale neural network-based federated learning (FL) has gained recognition for its effectiveness in distributed training. Nonetheless, the open system architecture inherent to federated learning raises concerns about its vulnerability to attacks. Poisoning attacks are a major threat to federated learning because of their stealthiness and destructive power: by altering the local model during routine training, an attacker can easily contaminate the global model. Traditional detection and aggregation solutions mitigate certain threats, but they are insufficient to completely eliminate the attackers' influence. Federated unlearning, which removes unreliable models while maintaining the accuracy of the global model, has therefore emerged as a solution. Unfortunately, some existing federated unlearning approaches are difficult to apply to large neural network models because of their high computational cost. Hence, we propose SlideFU, an efficient anti-poisoning federated unlearning framework. The primary idea of SlideFU is to use a sliding window to structure the training process, confining all operations within the window. We design a malicious-client detection scheme based on principal component analysis (PCA), which computes trust factors between compressed models at low cost in order to eliminate unreliable models. Once the global model is confirmed to be under attack, the system activates the federated unlearning process and calibrates the stored gradients according to the update direction of the calibration gradients. Experiments on two public datasets demonstrate that our scheme can recover a robust model with very high efficiency.
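To make the two mechanisms in the abstract concrete, here is a minimal sketch of a PCA-based detection step. The abstract does not provide code, so the function names, the trust threshold, and the use of mean pairwise cosine similarity as the "trust factor" are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def flatten_update(tensors):
    """Concatenate one client's model update into a single flat vector."""
    return np.concatenate([np.ravel(t) for t in tensors])

def detect_unreliable_clients(updates, n_components=2, trust_threshold=0.5):
    """Compress client updates with PCA, compute pairwise cosine
    similarities in the low-dimensional space as trust factors, and
    flag clients whose mean trust falls below the threshold.
    (Hypothetical parameters; the paper's formula may differ.)"""
    X = np.stack([flatten_update(u) for u in updates])   # (n_clients, d)
    Xc = X - X.mean(axis=0)                              # center before PCA
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)    # PCA via SVD
    Z = Xc @ Vt[:n_components].T                         # compressed models
    Zn = Z / (np.linalg.norm(Z, axis=1, keepdims=True) + 1e-12)
    S = Zn @ Zn.T                                        # cosine similarities
    n = len(updates)
    trust = (S.sum(axis=1) - 1.0) / (n - 1)              # mean trust per client
    flagged = [i for i, t in enumerate(trust) if t < trust_threshold]
    return trust, flagged
```

And a sketch of the recovery step, loosely in the spirit of the sliding-window unlearning the abstract describes: only the updates inside the window are replayed, and each stored gradient is calibrated against a freshly computed gradient before being reapplied. The calibration rule (keep the stored magnitude, take the fresh direction) and all names here are assumptions.

```python
from collections import deque
import numpy as np

def calibrate(stored_grad, calib_grad):
    """Assumed calibration rule: keep the stored gradient's magnitude
    but align its direction with the fresh calibration gradient."""
    direction = calib_grad / (np.linalg.norm(calib_grad) + 1e-12)
    return np.linalg.norm(stored_grad) * direction

def unlearn_with_sliding_window(global_model, history, window_size, recompute_grad):
    """Replay only the (gradient, lr) pairs inside the sliding window,
    calibrating each stored benign gradient before reapplying it.
    `global_model` is a flat parameter vector; `recompute_grad` is a
    callback returning a fresh gradient on the current model."""
    window = deque(history, maxlen=window_size)  # operations confined to the window
    model = global_model.copy()
    for stored_grad, lr in window:
        calib_grad = recompute_grad(model)
        model -= lr * calibrate(stored_grad, calib_grad)
    return model
```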

Keywords: federated learning; malicious client detection; model recovery; machine unlearning

Classification: TP309 [Automation and Computer Technology - Computer System Architecture]

 
