Author: XU Han (College of Information and Electronic Engineering, Liming Vocational University, Quanzhou 362000, China)
Source: Journal of LiMing Vocational University, 2024, No. 2, pp. 93-102 (10 pages)
Abstract: Recent research has shown that some adversarial attacks alter the underlying characteristics of neural network inputs, misleading the networks and reducing the accuracy of deep learning models. To improve the defense capability of neural network models against adversarial attacks, different adversarial examples were designed based on the DeepFool, BIM, and I-FGSM algorithms, and the models were trained on them. Experimental tests found that the DeepFool-based adversarial examples decreased accuracy from 91% to 88%, the BIM-based examples from 80% to 3%, and the I-FGSM-based examples from 94% to 40.78% and 58.58%. The results show that adversarial examples designed with all three algorithms can mount effective attacks.
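The iterative attacks named in the abstract share one idea: repeatedly nudge the input in the direction of the sign of the loss gradient, keeping the total perturbation inside a small ε-ball. As a minimal illustration (not the paper's actual setup, which attacks trained neural networks), the sketch below runs I-FGSM against a toy logistic-regression classifier whose input gradient can be written in closed form; the weights, step size α, budget ε, and step count are all assumed values chosen for the example.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w, b):
    """Probability that x belongs to class 1 under the toy linear model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def ifgsm_attack(x, y, w, b, eps=0.3, alpha=0.05, steps=10):
    """I-FGSM sketch: iterate sign-of-gradient steps of size alpha,
    projecting back into the L-inf eps-ball around the clean input x."""
    x_adv = list(x)
    for _ in range(steps):
        p = predict(x_adv, w, b)
        # Gradient of cross-entropy loss w.r.t. the input is (p - y) * w
        grad = [(p - y) * wi for wi in w]
        # Ascend the loss: move each coordinate by alpha * sign(gradient)
        x_adv = [xi + alpha * (1 if g > 0 else -1 if g < 0 else 0)
                 for xi, g in zip(x_adv, grad)]
        # Clip the accumulated perturbation to the eps budget
        x_adv = [min(max(xa, xo - eps), xo + eps)
                 for xa, xo in zip(x_adv, x)]
    return x_adv

# Toy classifier (assumed weights) and a clean point of true class 1
w, b = [2.0, -1.0], 0.0
x, y = [0.5, 0.2], 1
p_clean = predict(x, w, b)          # correctly classified (> 0.5)
x_adv = ifgsm_attack(x, y, w, b)
p_adv = predict(x_adv, w, b)        # pushed below the decision boundary
print(f"clean: {p_clean:.3f}  adversarial: {p_adv:.3f}")
```

BIM is essentially the same iteration (the names are often used interchangeably), while DeepFool instead searches for the smallest perturbation that crosses the nearest decision boundary, so its examples tend to be less perceptible.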
Keywords: adversarial examples; DeepFool algorithm; BIM algorithm; I-FGSM algorithm
Classification code: TP391 (Automation and Computer Technology: Computer Application Technology)