Adversarial Examples Defense Method Based on Neighbor2Neighbor Denoising


Authors: WANG Feiyu; ZHANG Fan [1]; GUO Wei [1]

Affiliation: [1] Information Engineering University, Zhengzhou 450001, Henan, China

Source: Journal of Information Engineering University, 2024, No. 4, pp. 466-471 (6 pages)

Abstract: In image classification tasks, adversarial examples can cause deep learning models to output wrong results with high confidence. At present, the main defense against adversarial examples, improving the classification model itself, is either costly or struggles to defend against new attack algorithms. To solve these problems, a new defense method based on image denoising is proposed. Gaussian noise is first added to the input example to destroy the adversarial perturbation carefully crafted by the attacker; the Neighbor2Neighbor denoising network is then used to reduce the noise in that example. Experimental results on the ImageNet dataset show that the proposed method can effectively defend against classical attacks such as the Basic Iterative Method (BIM), the C&W (Carlini and Wagner) attack, and DeepFool, and that its defense performance is better than those of ComDefend and JPEG compression.
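The two-step pipeline the abstract describes (inject Gaussian noise to disrupt the crafted perturbation, then denoise before classification) can be sketched as below. This is a minimal illustration, not the paper's implementation: the `defend` function, the `sigma` value, and the `denoiser` callable (a stand-in for a trained Neighbor2Neighbor model) are all assumptions for demonstration.

```python
import numpy as np

def defend(image, denoiser, sigma=0.05, rng=None):
    """Preprocessing defense sketch: Gaussian noise injection + denoising.

    image    : float ndarray in [0, 1], e.g. shape (H, W, 3)
    denoiser : hypothetical callable (ndarray -> ndarray of same shape),
               standing in for a trained Neighbor2Neighbor network
    sigma    : standard deviation of the injected Gaussian noise
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    # Step 1: overwhelm the attacker's carefully structured adversarial
    # perturbation with random Gaussian noise.
    noisy = np.clip(image + rng.normal(0.0, sigma, image.shape), 0.0, 1.0)
    # Step 2: remove the (now largely random) noise before the image is
    # passed to the classifier.
    return denoiser(noisy)

# Usage with an identity "denoiser" placeholder:
img = np.full((8, 8, 3), 0.5)
out = defend(img, denoiser=lambda x: x, sigma=0.05)
```

In practice the denoiser would be a Neighbor2Neighbor model trained without clean targets, and the defended image `out` would be fed to the unmodified classifier, so no retraining of the classification model is required.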

Keywords: deep learning; adversarial examples; adversarial example defense; image denoising

Classification: TP391 [Automation and Computer Technology / Computer Application Technology]
