Authors: XU Keyizhi, LU Yajuan, WANG Zhongyuan, LIANG Chao
Affiliations: [1] School of Computer Science, Wuhan University, Wuhan 430072, Hubei, China; [2] National Engineering Research Center for Multimedia Software (NERCMS), Wuhan University, Wuhan 430072, Hubei, China; [3] Key Laboratory of Multimedia and Network Communication Engineering, Hubei Province, Wuhan University, Wuhan 430072, Hubei, China; [4] School of Cyber Science and Engineering, Wuhan University, Wuhan 430072, Hubei, China
Source: Wuhan University Journal of Natural Sciences, 2025, No. 1, pp. 1-20 (20 pages)
Funding: Supported by the National Natural Science Foundation of China (U1903214, 62372339, 62371350, 61876135); the Ministry of Education Industry-University Cooperative Education Project (202102246004, 220800006041043, 202002142012); and the Fundamental Research Funds for the Central Universities (2042023kf1033).
Abstract: Recent years have witnessed the ever-increasing performance of Deep Neural Networks (DNNs) in computer vision tasks. However, researchers have identified a potential vulnerability: carefully crafted adversarial examples can easily mislead DNNs into incorrect behavior via the injection of imperceptible modifications to the input data. In this survey, we focus on (1) adversarial attack algorithms that generate adversarial examples, (2) adversarial defense techniques that secure DNNs against adversarial examples, and (3) important problems in the realm of adversarial examples beyond attack and defense, including theoretical explanations, trade-off issues, and benign attacks. Additionally, we draw a brief comparison with recently published surveys on adversarial examples, and identify future directions for adversarial-example research, such as the generalization of methods and the understanding of transferability, that might offer solutions to the open problems in this field.
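To make the abstract's core idea concrete, below is a minimal sketch of the gradient-sign attack pattern the survey's attack section concerns (in the style of FGSM, one of the classic attack algorithms): the input is nudged by a small step in the direction that increases the model's loss. This is an illustrative toy on a logistic model, not any specific method from the paper; all names and parameters here are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Perturb x by eps in the sign of the loss gradient w.r.t. the input.

    Model: p = sigmoid(w.x + b); loss = binary cross-entropy against y_true.
    For this loss, the input gradient is (p - y_true) * w.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y_true) * w          # dL/dx for the logistic loss
    return x + eps * np.sign(grad_x)   # small sign-aligned step

# Toy example: a clean input correctly classified as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x_clean = np.array([1.0, 0.5])
p_clean = sigmoid(np.dot(w, x_clean) + b)   # confidence for class 1

# Attack: push the prediction away from the true label y_true = 1.
x_adv = fgsm_perturb(x_clean, w, b, y_true=1.0, eps=1.0)
p_adv = sigmoid(np.dot(w, x_adv) + b)
print(p_clean > 0.5, p_adv < 0.5)
```

On a real image classifier the same one-step perturbation is applied pixel-wise with a much smaller `eps`, which is what makes the modification imperceptible while still flipping the prediction.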
Keywords: computer vision; adversarial examples; adversarial attack; adversarial defense
Classification: TP183 [Automation and Computer Technology — Control Theory and Control Engineering]