Design of Adversarial Samples Based on Different Algorithms

Author: XU Han (College of Information and Electronic Engineering, Liming Vocational University, Quanzhou 362000, China)

Affiliation: [1] College of Information and Electronic Engineering, Liming Vocational University, Quanzhou 362000, Fujian, China

Source: Journal of LiMing Vocational University, 2024, No. 2, pp. 93-102 (10 pages)

Abstract: Recent research has shown that adversarial attacks can alter the underlying features that neural networks rely on, misleading the networks and reducing the accuracy of deep learning models. To improve the defense capability of neural network models against such attacks, adversarial samples were designed with three algorithms, DeepFool, BIM, and I-FGSM, and used in model training. Experiments showed that the DeepFool-based adversarial samples reduced model accuracy from 91% to 88%, the BIM-based samples from 80% to 3%, and the I-FGSM-based samples from 94% to 40.78% and 58.58%. The results indicate that adversarial samples designed with all three algorithms achieve effective attacks.

Keywords: adversarial samples; DeepFool algorithm; BIM algorithm; I-FGSM algorithm

Classification: TP391 [Automation and Computer Technology - Computer Application Technology]
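
Note on the iterative attacks: in the literature, BIM and I-FGSM usually name the same iterative method (iterative FGSM, Kurakin et al.), which this paper evaluates as two separate experiments. The paper's models, datasets, and hyperparameters are not reproduced in this record, so the following is only a minimal PyTorch sketch of the shared update rule; the function name bim_attack and the values of eps, alpha, and steps are illustrative assumptions, and inputs are assumed to be images scaled to [0, 1].

```python
import torch
import torch.nn as nn

def bim_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """BIM / iterative-FGSM sketch: repeat small signed-gradient steps,
    clipping the accumulated perturbation to an L-infinity ball of radius eps.
    All hyperparameter values here are illustrative, not the paper's."""
    x_adv = x.clone().detach()
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()           # ascend the loss
            x_adv = torch.clamp(x_adv, x - eps, x + eps)  # project into eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                 # keep valid pixel range
    return x_adv.detach()
```

The defining feature, compared with one-step FGSM, is the repeated small step followed by projection (the clamp) back into the eps-ball, which keeps the perturbation visually small while compounding its effect on the loss.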
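
DeepFool, the third attack in the abstract, instead searches for the smallest perturbation that crosses the nearest decision boundary by linearizing the classifier at each step. Again a hedged sketch rather than the paper's implementation: num_classes, max_iter, and overshoot are assumed hyperparameters (0.02 is the overshoot suggested in the original DeepFool paper).

```python
import torch

def deepfool(model, x, num_classes=10, max_iter=50, overshoot=0.02):
    """DeepFool sketch for a single image x of shape [1, C, H, W]:
    linearize the classifier at the current point, step just past the
    nearest decision boundary, and repeat until the label flips."""
    x_adv = x.clone().detach()
    orig_label = model(x_adv).argmax(dim=1).item()
    r_total = torch.zeros_like(x)
    for _ in range(max_iter):
        x_adv.requires_grad_(True)
        logits = model(x_adv)[0]
        if logits.argmax().item() != orig_label:
            break                                  # label flipped: attack done
        grad_orig = torch.autograd.grad(logits[orig_label], x_adv,
                                        retain_graph=True)[0]
        best_dist, best_w = float("inf"), None
        for k in range(num_classes):
            if k == orig_label:
                continue
            grad_k = torch.autograd.grad(logits[k], x_adv,
                                         retain_graph=True)[0]
            w_k = grad_k - grad_orig               # normal of boundary k
            f_k = (logits[k] - logits[orig_label]).item()
            dist = abs(f_k) / (w_k.norm().item() + 1e-8)
            if dist < best_dist:                   # keep the closest boundary
                best_dist, best_w = dist, w_k
        r_i = best_dist * best_w / (best_w.norm() + 1e-8)
        r_total = r_total + r_i                    # accumulate minimal steps
        x_adv = (x + (1 + overshoot) * r_total).detach()
    return x_adv
```

Because each step is the minimal move across the nearest linearized boundary, DeepFool typically produces much smaller perturbations than BIM/I-FGSM, which is consistent with the abstract's milder accuracy drop for DeepFool (91% to 88%) versus the iterative gradient-sign attacks.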

 
