FedDAA: a robust federated learning framework to protect privacy and defend against adversarial attack (Cited by: 1)


Authors: Shiwei LU, Ruihu LI, Wenbin LIU

Affiliations: [1] Fundamentals Department, Air Force Engineering University, Xi'an 710051, China; [2] Institute of Advanced Computational Science and Technology, Guangzhou University, Guangzhou 510006, China

Published in: Frontiers of Computer Science, 2024, Issue 2, pp. 107–122 (16 pages)

Funding: supported by the National Natural Science Foundation of China (Grant Nos. 62072128, 11901579, 11801564) and the Natural Science Foundation of Shaanxi (2022JQ-046, 2021JQ-335, 2021JM-216).

Abstract: Federated learning (FL) has emerged to break data silos and protect clients' privacy in the field of artificial intelligence. However, the deep leakage from gradient (DLG) attack can fully reconstruct clients' data from the submitted gradients, which threatens the fundamental privacy of FL. Although cryptography and differential privacy prevent privacy leakage from gradients, they incur costs in communication overhead or model performance. Moreover, these schemes change the original distribution of local gradients, which makes it difficult to defend against adversarial attacks. In this paper, we propose a novel federated learning framework with model decomposition, aggregation and assembling (FedDAA), along with a training algorithm, in which each local gradient is decomposed into multiple blocks and sent to different proxy servers for aggregation. To strengthen FedDAA's privacy protection, an indicator based on image structural similarity is designed to measure privacy leakage under the DLG attack, and an optimization method is given to protect privacy with the fewest proxy servers. In addition, we give defense schemes against adversarial attacks in FedDAA and design an algorithm to verify the correctness of aggregated results. Experimental results demonstrate that FedDAA reduces the structural similarity between the reconstructed image and the original image to 0.014 while maintaining a model convergence accuracy of 0.952, thus achieving the best privacy protection performance and model training effect. More importantly, the defense schemes against adversarial attacks are compatible with privacy protection in FedDAA, and their defense effects are no weaker than those in traditional FL. Moreover, the verification algorithm for aggregation results adds only negligible overhead to FedDAA.
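The core idea in the abstract, decomposing each client's gradient into blocks that are aggregated by separate proxy servers and then reassembled, can be sketched as below. This is a minimal illustration, not the paper's exact protocol: the function names, the contiguous block layout, and the FedAvg-style averaging are our own assumptions.

```python
from typing import List

NUM_PROXIES = 3  # assumed number of proxy servers

def split_gradient(grad: List[float], n_blocks: int) -> List[List[float]]:
    """Decompose a flattened local gradient into n_blocks contiguous blocks."""
    size = (len(grad) + n_blocks - 1) // n_blocks
    return [grad[i * size:(i + 1) * size] for i in range(n_blocks)]

def proxy_aggregate(blocks: List[List[float]]) -> List[float]:
    """One proxy averages the single block it receives from every client."""
    n_clients = len(blocks)
    return [sum(vals) / n_clients for vals in zip(*blocks)]

def assemble(aggregated_blocks: List[List[float]]) -> List[float]:
    """Reassemble the per-proxy results into the global gradient."""
    return [v for block in aggregated_blocks for v in block]

# Two clients, flattened gradients of length 6, three proxies:
client_grads = [[1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
                [3.0, 4.0, 5.0, 6.0, 7.0, 8.0]]
per_client_blocks = [split_gradient(g, NUM_PROXIES) for g in client_grads]
# Proxy k sees only block k of every client, never a full gradient,
# so a DLG-style attacker at one proxy cannot reconstruct the input.
aggregated = [proxy_aggregate([blocks[k] for blocks in per_client_blocks])
              for k in range(NUM_PROXIES)]
global_grad = assemble(aggregated)
print(global_grad)  # element-wise mean of the two client gradients
```

The privacy benefit in this sketch comes purely from partitioning: each proxy holds too small a gradient fragment for reconstruction, while the assembled result equals ordinary federated averaging.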

Keywords: federated learning; privacy protection; adversarial attacks; aggregation rule; correctness verification

Classification: TP181 (Automation and Computer Technology — Control Theory and Control Engineering); TP309 (Automation and Computer Technology — Control Science and Engineering)

 
