Authors: YAN Jiale; XU Yang[1]; ZHANG Sicong[1]; LI Kezi (Key Laboratory of Information and Computing Science of Guizhou Province, Guizhou Normal University, Guiyang 550001, China)
Affiliation: [1] Key Laboratory of Information and Computing Science of Guizhou Province, Guizhou Normal University, Guiyang 550001, China
Source: Computer Engineering and Applications, 2022, Issue 23, pp. 24-41 (18 pages)
Funding: National Natural Science Foundation of China (U1831131); Special Fund for Central Government Guiding Local Science and Technology Development (黔科中引地[2018]4008); Guizhou Provincial Science and Technology Program (黔科合支撑[2020]2Y013号); Guizhou Provincial Graduate Research Fund (黔教合YJSKYJJ[2021]102).
Abstract: Deep learning models have surpassed human capability in image classification, but research has found that they are highly vulnerable to adversarial examples, which poses a great challenge to their application in security-sensitive systems. This survey sorts out and summarizes research on adversarial examples in image classification in order to establish a basic knowledge system for further study of the field. First, the formal definition of adversarial examples and related terminology are introduced. Then, attack and defense methods are reviewed, with particular attention to the emerging defense of certified robustness, and possible reasons for the existence of adversarial examples are discussed. To highlight the feasibility of adversarial attacks in the real world, related work on physical attacks is also reviewed. Finally, based on the surveyed literature, overall trends, open challenges, and directions for future research are analyzed.
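As an illustration of the kind of attack the survey covers, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the classic gradient-based attacks, applied to a toy logistic classifier. The model, weights, and inputs here are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Return x' = x + eps * sign(dL/dx) for binary cross-entropy loss
    on a logistic model p = sigmoid(w.x + b)."""
    p = sigmoid(w @ x + b)      # model's predicted probability of class 1
    grad_x = (p - y) * w        # gradient of the logistic loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Toy model and a clean input with true label y = 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.1)

# The perturbation moves each coordinate by eps in the direction that
# increases the loss, so the model's confidence on the true class drops.
print(sigmoid(w @ x + b))       # confidence on the clean input
print(sigmoid(w @ x_adv + b))   # lower confidence on the perturbed input
```

With a larger eps (or an image-scale model), the same one-step perturbation can flip the predicted label entirely, which is the failure mode the defenses surveyed above, including certified robustness, aim to prevent.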