Authors: Bo Wang [1]; Xiaorui Dai; Wei Wang [2]; Fei Yu; Fei Wei; Mengnan Zhao
Affiliations: [1] School of Information and Communication Engineering, Dalian University of Technology, Dalian 116024, China; [2] Intelligent Perception and Computing Research Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; [3] Department of Electrical Engineering, Arizona State University, Tempe, AZ 85281, USA
Source: 《中国科学:信息科学》 (SCIENTIA SINICA Informationis), 2023, No. 3, pp. 470-484 (15 pages)
Funding: National Natural Science Foundation of China (Grant Nos. U1936117, 62106037, 62076052); Dalian Science and Technology Innovation Fund, Applied Basic Research Program (Grant No. 2021JJ12GX018); Open Project of the National Laboratory of Pattern Recognition (Grant No. 202100032); Fundamental Research Funds for the Central Universities (Grant No. DUT21GF303)
Abstract: Federated learning was developed to solve the data privacy and data silo problems in traditional machine learning. Existing federated learning methods let multiple participants, who do not share their private data, jointly train a better global model. However, research shows that federated learning still suffers from many security problems. Typically, it can be attacked by malicious participants during training, causing the global model to fail and the participants' private data to leak. This paper studies the effectiveness of adversarial-example poisoning attacks launched against a federated learning system during the training stage, in order to uncover potential security problems in federated learning. Although adversarial examples are usually used to attack machine learning models at test time, in this paper the malicious participants use adversarial examples to train their local models, aiming to make the local models learn chaotic sample classification features and thereby generate malicious local model parameters. To let the malicious participants dominate the federated learning training process, we further adopt a "learning rate amplification" strategy. Experiments show that, compared with the Fed-Deepconfuse attack method, our attack achieves better attack performance on both the CIFAR10 and MNIST datasets.
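To make the two ideas in the abstract concrete, below is a minimal PyTorch sketch of a malicious client, written from the abstract alone rather than from the paper's released code. It assumes FGSM as the adversarial-example generator, FedAvg-style averaging at the server, and a hypothetical amplification factor AMPLIFY; all function names here are illustrative, not the authors' API.

```python
# Minimal sketch of the attack described in the abstract. Assumptions
# (not from the paper): FGSM crafts the adversarial examples, the server
# uses FedAvg averaging, and AMPLIFY is the "learning rate amplification"
# factor applied to the malicious client's parameter delta.
import copy

import torch
import torch.nn.functional as F

AMPLIFY = 10.0  # assumed amplification factor, chosen for illustration


def make_adversarial(model, x, y, eps=0.03):
    """Craft FGSM adversarial examples from a clean batch (x, y)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()


def malicious_local_update(global_model, loader, lr=0.01, epochs=1):
    """Train a local copy on adversarial examples so it learns chaotic
    classification features, then amplify its update before upload."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x_adv = make_adversarial(model, x, y)
            opt.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            opt.step()
    # "Learning rate amplification": scale the parameter delta so this
    # client's malicious update dominates the server-side averaging.
    g = dict(global_model.named_parameters())
    return {
        name: g[name].data + AMPLIFY * (p.data - g[name].data)
        for name, p in model.named_parameters()
    }
```

In this reading, amplifying the delta (p - g) before upload is equivalent to the client having trained with an inflated learning rate, which is one plausible way a single malicious participant could outweigh the honest clients in a plain averaging step.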