AMS-FGSM: A gradient parameter update method for adversarial sample generation


Authors: Zhu Yun [1]; Wu Yinan; Guo Jia; Wang Jianyu [1] (School of Automation, Nanjing University of Science and Technology, Nanjing 210094, China)

Affiliation: [1] School of Automation, Nanjing University of Science and Technology, Nanjing 210094, Jiangsu, China

Source: Journal of Nanjing University of Science and Technology, 2024, No. 5, pp. 635-641 (7 pages)

Funding: Open Project of the Hubei Engineering Research Center for Intelligent Detection and Recognition of Complex Parts (IDICP-KF-2024-23); Open Fund of the Huzhou Key Laboratory of Urban Multidimensional Perception and Intelligent Computing (UMPIC202401); 2023 Open Fund of the Xiamen Key Laboratory of Smart Fishery (XMKLIF-OP-202301); 2024 Open Fund of the Hubei Key Laboratory of Intelligent Robotics (HBIR202304)

Abstract: Deep neural networks have achieved outstanding performance on a variety of pattern recognition tasks, but related research shows that they are highly vulnerable to attack by adversarial samples. Moreover, adversarial samples that are difficult for the human eye to detect are transferable: samples generated to attack one specific model can also cause misclassifications in other, different deep models. Addressing this transferability, this paper proposes the AMSGrad fast gradient sign method (AMS-FGSM), based on the Adam optimization algorithm, as a replacement for the original iterative fast gradient sign method (I-FGSM). Unlike I-FGSM, AMS-FGSM combines the advantages of momentum with those of the AMSGrad algorithm. Experiments on the MNIST handwritten digit dataset show that the adversarial sample generation method combined with AMS-FGSM generates adversarial samples more quickly and with a higher attack success rate: the average success rate against the trained models reaches 98.1%, and the attack success rate remains stable as the number of perturbation iterations increases.
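The abstract does not give the paper's update equations, but the method it names (I-FGSM with the raw gradient replaced by AMSGrad-style moment estimates) can be sketched as follows. This is a minimal illustration under that assumption, not the authors' implementation; `grad_fn`, the step schedule, and all hyperparameter defaults here are hypothetical.

```python
import numpy as np

def ams_fgsm(x, grad_fn, eps=0.3, steps=10, beta1=0.9, beta2=0.999, delta=1e-8):
    """Sketch of an AMS-FGSM-style attack: iterative FGSM whose step
    direction comes from AMSGrad moment estimates rather than the raw
    gradient. grad_fn(x_adv) must return the loss gradient w.r.t. the input."""
    x_adv = x.copy()
    m = np.zeros_like(x)           # first-moment (momentum) estimate
    v = np.zeros_like(x)           # second-moment estimate
    v_hat = np.zeros_like(x)       # AMSGrad: running max of v
    alpha = eps / steps            # per-step budget, as in I-FGSM
    for _ in range(steps):
        g = grad_fn(x_adv)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        v_hat = np.maximum(v_hat, v)              # non-decreasing denominator
        update = m / (np.sqrt(v_hat) + delta)
        x_adv = x_adv + alpha * np.sign(update)   # FGSM-style signed step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in the L-inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep pixels valid
    return x_adv
```

Because the second-moment denominator never decreases, the step direction is less erratic than Adam's across iterations, which is the property AMSGrad is usually chosen for; the signed step and the L-infinity clipping are what make this an FGSM-family attack.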

Keywords: adversarial samples; gradient update; black-box attack; deep neural networks; artificial intelligence

CLC number: TP29 [Automation and Computer Technology: Detection Technology and Automatic Devices]
