Authors: HUANG Li-Feng [1,2]; ZHUANG Wen-Zi; LIAO Yong-Xian; LIU Ning (School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou 510006, China; Guangdong Key Laboratory of Information Security Technology, Guangzhou 510006, China)
Affiliations: [1] School of Computer Science and Engineering (School of Software), Sun Yat-Sen University, Guangzhou 510006, Guangdong, China; [2] Guangdong Key Laboratory of Information Security Technology, Guangzhou 510006, Guangdong, China
Source: Journal of Software, 2021, Issue 11, pp. 3512-3529 (18 pages)
Funding: National Natural Science Foundation of China (61772567); Fundamental Research Funds for the Central Universities (19lgjc11).
Abstract: Deep neural networks (DNNs) have achieved state-of-the-art results on many computer vision tasks and are widely deployed as basic backbones across different domains. Nevertheless, recent research has shown that DNNs are vulnerable to adversarial examples, which threatens the security of DNN-based systems. Compared with white-box attacks, black-box adversarial attacks are closer to realistic scenarios because of constraints such as model agnosticism and limited query budgets. However, existing black-box methods not only require a large number of model queries but also produce perturbations that are perceptible to the human visual system. To address these issues, this study proposes a black-box adversarial attack method based on evolution strategies. The method exploits the inherent distribution of updated gradient directions, which helps it sample effective solutions with higher probability and adaptively learn better search paths, improving attack efficiency. After a successful attack, the method introduces an attention mechanism: it uses class activation mapping to group the perturbation vector and then compresses the noise group by group while ensuring that the generated image still fools the target model, reducing the redundant perturbations accumulated during the black-box attack and making the optimized adversarial example less perceptible. Extensive experiments on seven DNNs with different structures, compared against four state-of-the-art black-box adversarial attack methods (AutoZOOM, QL-attack, FD-attack, and D-based attack), demonstrate the effectiveness and robustness of the proposed method.
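The query-only search loop described in the abstract can be illustrated with a minimal sketch. This is a generic (1+1)-style evolution strategy that adapts the mean of its sampling distribution toward accepted steps; it is not the paper's actual algorithm, and all names (`black_box_attack`, `sigma`, `lr`, `max_queries`) are illustrative assumptions.

```python
import numpy as np

def black_box_attack(model, x, true_label, sigma=0.1, lr=0.5, max_queries=1000):
    """Sketch of an evolution-strategy black-box attack.

    `model` returns class probabilities; only its outputs are used
    (no gradients), which is what makes the attack black-box.
    """
    rng = np.random.default_rng(0)
    mu = np.zeros_like(x)            # learned mean of the sampling distribution
    x_adv = x.copy()
    best_conf = model(x_adv)[true_label]
    for _ in range(max_queries):
        noise = rng.normal(mu, sigma, size=x.shape)  # sample around adapted mean
        candidate = np.clip(x_adv + noise, 0.0, 1.0)
        probs = model(candidate)
        if np.argmax(probs) != true_label:
            return candidate                         # misclassified: attack done
        if probs[true_label] < best_conf:
            # Lower true-class confidence: accept the step and move the
            # sampling mean toward the direction that worked.
            x_adv, best_conf = candidate, probs[true_label]
            mu = (1 - lr) * mu + lr * noise
    return x_adv
```

The key design choice mirrored here is that acceptance is decided purely from queried confidences, and the search distribution is adapted online rather than fixed, which is the rough sense in which the paper "learns better search paths".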
Keywords: adversarial examples; black-box attack; evolution strategy; attention mechanism; compression optimization
Classification: TP18 [Automation and Computer Technology - Control Theory and Control Engineering]
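The abstract's group-wise perturbation compression (group pixels, then shrink each group's noise while the image still fools the model) can similarly be sketched. This is a hypothetical greedy variant: the `groups` argument stands in for the pixel masks that the paper would derive from class activation maps, and `shrink`/`rounds` are illustrative parameters.

```python
import numpy as np

def compress_perturbation(model, x, delta, groups, shrink=0.9, rounds=20):
    """Greedy group-wise compression sketch.

    Repeatedly scale down the perturbation within each group, keeping a
    scaled-down trial only if the adversarial label is preserved.
    """
    adv_label = np.argmax(model(np.clip(x + delta, 0.0, 1.0)))
    for _ in range(rounds):
        for g in groups:                 # g: boolean mask over pixels
            trial = delta.copy()
            trial[g] *= shrink           # shrink only this group's noise
            if np.argmax(model(np.clip(x + trial, 0.0, 1.0))) == adv_label:
                delta = trial            # still adversarial: keep compression
    return delta
```

Because a trial is only accepted when the adversarial label survives, the returned perturbation is never larger than the input one and the image remains adversarial, at the cost of extra model queries per group.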