Authors: Yunyi ZHOU, Haichang GAO, Jianping HE, Shudong ZHANG, Zihui WU
Affiliation: [1] School of Computer Science and Technology, Xidian University, Xi'an 710071, China
Source: Chinese Journal of Electronics, 2024, Issue 4, pp. 979-988 (10 pages)
Funding: National Natural Science Foundation of China (Grant No. 61972306); Song Shan Laboratory (Grant No. YYJC012022005); Zhejiang Laboratory (Grant No. 2021KD0AB03).
Abstract: Adversarial examples (AEs) are an additive combination of clean examples and artificially crafted malicious perturbations. Attackers often use random noise and multiple random restarts to initialize the perturbation starting point, thereby increasing the diversity of AEs. Given the non-convex nature of the loss function, however, relying on randomness to raise the attack success rate can incur considerable computational overhead. To overcome this challenge, we introduce a one-hot mean square error loss to guide the initialization. This loss is combined with projected gradient descent (PGD), the strongest first-order attack, and a dynamic attack-step-size adjustment strategy to form a complete attack process. Experiments demonstrate that our method outperforms baseline attacks both under constrained attack budgets and in regular experimental settings, establishing it as a reliable measure for assessing the robustness of deep learning models. We also explore a broader application of this initialization strategy: enhancing the defense of few-shot classification models. We hope these results provide valuable insights for the community in designing attack and defense mechanisms.
Keywords: Adversarial examples; White-box attacks; Image classification
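The attack pipeline summarized in the abstract (loss-guided initialization followed by PGD with a decaying step size under an L-infinity budget) can be sketched as below. This is a minimal NumPy illustration on a toy linear classifier: the model, the decay schedule, and the use of the one-hot MSE loss throughout the loop are all assumptions for illustration, since the abstract does not specify the full method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "model": logits = W @ x (stand-in for a deep classifier).
W = rng.normal(size=(3, 8))

def onehot_mse_grad(x, y):
    """Gradient w.r.t. x of L(x) = ||W @ x - e_y||^2,
    the one-hot MSE loss between logits and the one-hot label."""
    e_y = np.eye(W.shape[0])[y]
    return 2.0 * W.T @ (W @ x - e_y)

def pgd_attack(x0, y, eps=0.3, steps=10):
    """Sign-gradient ascent on the one-hot MSE loss, projected onto
    the L_inf ball of radius eps, with a decaying step size
    (a hypothetical stand-in for the paper's dynamic schedule)."""
    x = x0.copy()
    step = eps / 4
    for _ in range(steps):
        g = onehot_mse_grad(x, y)
        x = x + step * np.sign(g)            # ascend the loss
        x = np.clip(x, x0 - eps, x0 + eps)   # project to the budget
        step = max(step * 0.9, eps / 20)     # shrink the step over time
    return x

x0 = rng.normal(size=8)
y = 0
x_adv = pgd_attack(x0, y)
```

Because the toy loss is a convex quadratic in x, each sign-gradient ascent step cannot decrease it, so the sketch reliably moves the input away from the labeled class within the budget; a real attack would swap in a deep model's gradients via automatic differentiation.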