Authors: Lei Xu, Junhai Zhai
Affiliation: [1] College of Mathematics and Information Science, Hebei University, Baoding 071002, China
Source: Tsinghua Science and Technology (清华大学学报(自然科学版)(英文版)), 2024, No. 2, pp. 430-446 (17 pages)
Fund: Supported by the Key R&D Program of Science and Technology Foundation of Hebei Province (No. 19210310D) and the Natural Science Foundation of Hebei Province (No. F2021201020).
Abstract: Deep neural networks (DNNs) have strong representation learning ability, but they are vulnerable to adversarial examples. Many methods have been proposed to address this vulnerability. The general idea of existing methods is to reduce the chance of DNN models being fooled by training on designed adversarial examples, which are generated by adding perturbations to the original images. In this paper, we propose a novel adversarial example generation method, called DCVAE-adv. Unlike existing methods, DCVAE-adv constructs adversarial examples by mixing both explicit and implicit perturbations without using the original images. Furthermore, the proposed method can be applied to both white box and black box attacks. In addition, in the inference stage the adversarial examples can be generated without loading the original images into memory, which greatly reduces memory overhead. We compared DCVAE-adv with three state-of-the-art adversarial attack algorithms: FGSM, AdvGAN, and AdvGAN++. The experimental results demonstrate that DCVAE-adv is superior to these methods in terms of attack success rate and transferability for targeted attacks. Our code is available at https://github.com/xzforeverlove/DCVAE-adv.
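For context, the conventional "perturb the original image" approach that the abstract contrasts with DCVAE-adv can be illustrated by FGSM, one of the baselines compared. The following is a minimal sketch only; the model, epsilon value, and data are placeholders and are not taken from the paper.

```python
# Minimal FGSM sketch: craft an adversarial example with one signed-gradient
# step on the input image. This illustrates the baseline idea of adding a
# perturbation to the original image, not the DCVAE-adv method itself.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    """Return adversarial images built from the originals (untargeted attack)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, then clip to the valid pixel range.
    adv_images = images + epsilon * images.grad.sign()
    return torch.clamp(adv_images, 0.0, 1.0).detach()
```

Note that this baseline requires the original images at inference time, which is exactly the memory overhead the abstract says DCVAE-adv avoids.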
Keywords: deep neural network; adversarial examples; white box attack; black box attack; robustness