Research on adversarial training-sample representations under federated learning (Cited by: 8)

Towards training time attacks for federated machine learning systems

Authors: Ji FENG; Qi-Zhi CAI; Yuan JIANG (National Key Lab for Novel Software Technology, Nanjing University, Nanjing 210023, China; Sinovation Ventures AI Institute, Beijing 100080, China)

Affiliations: [1] National Key Lab for Novel Software Technology, Nanjing University, Nanjing 210023, China; [2] Sinovation Ventures AI Institute, Beijing 100080, China

Source: Scientia Sinica Informationis, 2021, Issue 6, pp. 900-911 (12 pages)

Abstract: Federated machine learning systems have gained increasing attention and popularity in both academia and industry because they can train a shared model among multiple parties without explicitly sharing the training data. Such systems are believed to have good potential for protecting data privacy compared with traditional machine learning frameworks. On the other hand, a training-time attack is a procedure that purposefully modifies training data in the hope of manipulating the behavior of the corresponding trained system at test time. DeepConfuse, for instance, is a recent method for generating adversarial training data with high efficiency, exposing the vulnerability of the traditional supervised learning paradigm to such attacks. In this work, we extend the DeepConfuse framework so that it can be applied to federated machine learning. This is the first training-time attack against a federated learning system. The empirical results show that, measured by δ-accuracy loss, the federated learning system is even more vulnerable to the DeepConfuse attack than the traditional machine learning framework.

Keywords: federated learning, learnware, representation learning

Classification: TP181 [Automation and Computer Technology: Control Theory and Control Engineering]

 
