VAEFL: Integrating variational autoencoders for privacy preservation and performance retention in federated learning  

Authors: Zhixin Li, Yicun Liu, Jiale Li, Guangnan Ye, Hongfeng Chai, Zhihui Lu, Jie Wu

Affiliations: [1] School of Computer Science, Fudan University, Shanghai 200438, China; [2] Institute of FinTech, Fudan University, Shanghai 200438, China

Source: Security and Safety, 2024, Issue 4, pp. 44-60 (17 pages)

Funding: Supported by the Yangtze River Delta Science and Technology Innovation Community Joint Research Project (2022CSJGG0800) and the Shanghai Science and Technology Project (22510761000)

Abstract: Federated Learning (FL) heralds a paradigm shift in the training of artificial intelligence (AI) models by fostering collaborative model training while safeguarding client data privacy. In sectors where data sensitivity and AI model security are of paramount importance, such as fintech and biomedicine, maintaining model utility without compromising privacy is crucial as AI technologies see growing application. Therefore, the adoption of FL is attracting significant attention. However, traditional FL methods are susceptible to Deep Leakage from Gradients (DLG) attacks, and typical defensive strategies in current research, such as secure multi-party computation and differential privacy, often incur excessive computational costs or significant losses in model accuracy. To address DLG attacks in FL, this study introduces VAEFL, an innovative FL framework that incorporates Variational Autoencoders (VAEs) to enhance privacy protection without undermining the predictive power of the models. VAEFL strategically partitions the model into a private encoder and a public decoder. The private encoder, which remains local, encodes sensitive data into a privacy-preserving latent space, while the public decoder and classifier, trained collaboratively across clients, learn to derive accurate predictions from the encoded data. This partition ensures that sensitive data attributes are not disclosed, circumventing gradient leakage attacks while still allowing the global model to benefit from the diverse knowledge of client datasets. Comprehensive experiments demonstrate that VAEFL not only surpasses standard FL benchmarks in privacy preservation but also maintains competitive performance on predictive tasks. VAEFL thus establishes a new equilibrium between data privacy and model utility, offering a secure and efficient FL approach for sensitive applications of FL in the financial domain.
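The split between a client-local encoder and a collaboratively trained decoder/classifier described in the abstract can be illustrated with a short sketch. The PyTorch code below is an illustrative assumption, not the authors' released implementation: the names PrivateEncoder, PublicHead, local_update, and fedavg are hypothetical, and the training objective (reconstruction + KL divergence + cross-entropy) is a standard VAE-plus-classifier formulation assumed here. The point it demonstrates is that only the public head's parameters are returned for server-side aggregation, so raw data and the private encoder's gradients never leave the client.

```python
# Minimal sketch of the VAEFL-style split (illustrative assumption, not the paper's code):
# each client keeps a private VAE encoder locally and shares only the public
# decoder/classifier parameters with the server for FedAvg aggregation.
import copy
import torch
import torch.nn as nn


class PrivateEncoder(nn.Module):
    """Client-local VAE encoder: maps raw inputs to a latent Gaussian (never shared)."""

    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)

    def forward(self, x: torch.Tensor):
        h = self.backbone(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar


class PublicHead(nn.Module):
    """Shared decoder + classifier trained collaboratively on latent codes."""

    def __init__(self, latent_dim: int, in_dim: int, num_classes: int):
        super().__init__()
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, z: torch.Tensor):
        return self.decoder(z), self.classifier(z)


def local_update(encoder, head, batch, lr=1e-3):
    """One client step: optimize VAE loss + classification loss on local data."""
    x, y = batch
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=lr)
    z, mu, logvar = encoder(x)
    x_hat, logits = head(z)
    recon = nn.functional.mse_loss(x_hat, x)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    clf = nn.functional.cross_entropy(logits, y)
    loss = recon + kld + clf
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Only the public head's weights leave the client; the encoder stays local.
    return copy.deepcopy(head.state_dict())


def fedavg(state_dicts):
    """Server step: plain FedAvg over the shared (public) parameters only."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg
```

As described in the abstract, the intent of this split is that an observer of the aggregated public parameters (or their gradients) cannot reconstruct raw client inputs with a DLG-style attack, because the mapping from data to latent codes lives entirely in the private, never-shared encoder.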

Keywords: Federated learning; variational autoencoders; deep leakage from gradients; AI model security; privacy preservation

Classification: TP181 [Automation and Computer Technology: Control Theory and Control Engineering]; TP309 [Automation and Computer Technology: Control Science and Engineering]

 
