Authors: QIAN Yaguan[1]; ZHANG Ximin; WANG Bin[2]; GU Zhaoquan; LI Wei[1]; YUN Bensheng[1] (School of Science/School of Big-data Science, Zhejiang University of Science and Technology, Hangzhou 310023, China; Network and Information Security Laboratory of Hangzhou Hikvision Digital Technology Co., Ltd., Hangzhou 310052, China; Cyberspace Institute of Advanced Technology (CIAT), Guangzhou University, Guangzhou 510006, China)
Affiliations: [1] School of Science/School of Big-data Science, Zhejiang University of Science and Technology, Hangzhou 310023; [2] Network and Information Security Laboratory of Hangzhou Hikvision, Hangzhou 310052; [3] Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006
Source: Journal of Electronics & Information Technology, 2021, No. 11, pp. 3367-3373 (7 pages)
Funding: National Key R&D Program of China (2018YFB2100400); National Natural Science Foundation of China (61902082)
Abstract: Deep Neural Networks (DNNs) achieve high accuracy in image recognition but are significantly vulnerable to adversarial examples. Adversarial training is one of the effective empirical defenses against such attacks. Generating more powerful adversarial examples better solves the inner maximization problem of adversarial training, which is the key to improving its effectiveness. To address the inner maximization problem, this paper proposes adversarial training based on second-order adversarial examples: a quadratic polynomial approximation of the loss in a small neighborhood of the input is used to generate stronger adversarial examples, and a theoretical analysis shows that second-order adversarial examples are stronger than first-order ones. Experiments on the MNIST and CIFAR10 datasets show that second-order adversarial examples achieve a higher attack success rate and better imperceptibility. Compared with PGD adversarial training, the defense based on second-order adversarial examples is robust to the current typical adversarial attacks.
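The first-order vs. second-order contrast in the abstract can be illustrated with a toy sketch (this is not the paper's algorithm; the gradient, Hessian, budget, and step size below are all made-up illustrative values): on a local quadratic model of the loss, a perturbation that exploits curvature can only match or exceed the loss increase of a purely first-order (FGSM-style) step with the same norm budget.

```python
import numpy as np

# Toy quadratic local model of the loss around an input x:
#   L(x + d) ~= g.d + 0.5 * d^T H d   (g: gradient, H: Hessian)
# g and H are synthetic illustrative values, not taken from the paper.
rng = np.random.default_rng(0)
dim = 5
g = rng.normal(size=dim)
A = rng.normal(size=(dim, dim))
H = A @ A.T                          # symmetric PSD Hessian surrogate

eps = 0.5                            # L2 perturbation budget

def model_loss(d):
    """Quadratic model of the loss increase caused by perturbation d."""
    return g @ d + 0.5 * d @ H @ d

def project(d):
    """Project d back onto the L2 ball of radius eps."""
    n = np.linalg.norm(d)
    return d if n <= eps else d * (eps / n)

# First-order step (L2 FGSM-style): follow the gradient, ignore curvature.
d1 = eps * g / np.linalg.norm(g)
loss_first = model_loss(d1)

# Second-order step: maximize the quadratic model over the eps-ball with
# projected gradient ascent, warm-started at the first-order step, keeping
# the best iterate seen (so it can never do worse than d1).
d_cur = d1.copy()
best_d, best_loss = d1.copy(), loss_first
step = 0.1
for _ in range(50):
    d_cur = project(d_cur + step * (g + H @ d_cur))  # model gradient
    v = model_loss(d_cur)
    if v > best_loss:
        best_d, best_loss = d_cur.copy(), v

print(f"first-order loss increase:  {loss_first:.4f}")
print(f"second-order loss increase: {best_loss:.4f}")
```

Because the search is warm-started at the first-order perturbation and only retains improvements, the second-order loss increase is at least as large by construction; the gap grows with the curvature of the local model, which matches the paper's claim that curvature-aware examples are stronger.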
Classification codes: TN915.08 [Electronics and Telecommunications: Communication and Information Systems]; TP309.2 [Electronics and Telecommunications: Information and Communication Engineering]