Authors: 杨武 YANG Wu [1]; 刘依然 LIU Yiran; 冯欣 FENG Xin [1]; 明镝 MING Di (College of Computer Science and Engineering, Chongqing University of Technology, Chongqing 400054, China)
Affiliation: [1] College of Computer Science and Engineering, Chongqing University of Technology, Chongqing 400054, China
Source: Journal of Chongqing University of Technology (Natural Science), 2023, No. 10, pp. 220-228 (9 pages)
Funding: Chongqing Key Project of Technological Innovation and Application Development (cstc2021jscx-dxwtBX0018); Chongqing Natural Science Foundation General Program (CSTB2022NSCQ-MSX0493); Scientific Research Startup Foundation of Chongqing University of Technology (2022ZDZ026).
Abstract: Adversarial examples generated by adversarial attacks can seriously influence the predictions of convolutional neural networks in image classification tasks. Because adversarial examples are hard to detect and are transferable (the same adversarial example can mislead models with different architectures), crafting adversarial perturbations and generating adversarial examples are of great importance for detecting model defects. Data-free universal adversarial attacks proposed in recent years craft adversarial perturbations, without any data, merely by maximizing the activation values of all convolutional layers, which is closer to real-world application scenarios; however, they ignore the differences among the features extracted by different convolutional layers, so the resulting adversarial examples transfer poorly. This paper proposes a data-free universal attack method with Weighted Maximization Activation (WAM), which assigns a weight to each convolutional layer and increases the weight of activation values from shallow convolutional layers that extract generalizable features, thereby exploiting the differing influence of each layer's activations on the universal perturbation to improve the transferability of adversarial examples. Experiments on the ImageNet validation set show that the weighted maximization activation attack outperforms other data-free universal methods. An ablation study further verifies that the universal adversarial perturbation can learn generalizable features from shallow convolutional layers and achieves better transferability.
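The following is a minimal sketch, not the authors' released code, of the idea described in the abstract: a data-free universal perturbation optimized by maximizing a weighted sum of convolutional-layer activations, with larger weights on shallow layers. The weighting scheme, the use of an activation norm as the objective, and all hyper-parameters below are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

def craft_weighted_uap(model, eps=10/255, steps=1000, lr=0.01, device="cpu"):
    """Sketch of a data-free universal perturbation via weighted activation maximization."""
    model = model.to(device).eval()

    # Record the output of every convolutional layer with forward hooks.
    activations = []
    def save_activation(_module, _inputs, output):
        activations.append(output)
    handles = [m.register_forward_hook(save_activation)
               for m in model.modules() if isinstance(m, nn.Conv2d)]

    # The universal perturbation is optimized directly; no training images
    # are used (data-free setting).
    delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        activations.clear()
        model(delta)                      # forward the perturbation itself
        n = len(activations)
        # Assumed weights: shallow layers receive larger weights so the
        # perturbation picks up more generalizable, transferable features.
        weights = [(n - i) / n for i in range(n)]
        # Maximize the weighted activation magnitudes (minimize the negative).
        loss = -sum(w * act.norm() for w, act in zip(weights, activations))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)       # keep the perturbation imperceptible

    for h in handles:
        h.remove()
    return delta.detach()

# Example usage against VGG-16 (untrained here; a pretrained target model
# would be used in practice). The perturbation is added to any input image.
uap = craft_weighted_uap(models.vgg16())
```

At test time the single perturbation `uap` would be added to arbitrary clean images and clipped to the valid pixel range, which is what makes the attack "universal" rather than per-image.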