Authors: DAI Yang (戴扬); FENG Yanghe (冯旸赫); HUANG Jincai (黄金才) (College of Systems Engineering, National University of Defense Technology, Changsha 410073, China)
Source: Chinese Journal of Engineering (《工程科学学报》), 2024, No. 9, pp. 1630-1637 (8 pages)
Funding: National Natural Science Foundation of China (Grant No. 62276272).
Abstract: Deep neural network-based video classification models are now widely deployed, yet recent research shows that deep neural networks are highly vulnerable to adversarial examples. Such examples carry noise that is imperceptible to humans, and their existence poses a serious threat to the security of deep neural networks. Although adversarial examples for images have been studied extensively, adversarial attacks on video remain more complex: motion information, temporal coherence, and frame-to-frame correlation introduce additional challenges that call for purpose-built solutions. The most common attack, the fast gradient sign method (FGSM), suffers from a low attack success rate and insufficient stealth, so its perturbations are easily detected. To address these two problems, this paper, inspired by the nonlinear conjugate gradient descent method (FR-CG), proposes a nonlinear conjugate gradient attack against video models. By relaxing the constraints so that the search step size satisfies the strong Wolfe conditions, the method guarantees that the search direction at each iteration is consistent with the direction in which the loss value of the objective function increases, which lets the attack retain both a high success rate and concealment across iterations. Experiments on UCF-101 show that with the perturbation upper bound set to 3/255, the proposed attack achieves a 91% attack success rate. The method also outperforms FGSM under every perturbation bound, offers stronger stealth, and strikes a good balance between attack success rate and running time.
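The contrast the abstract draws, between FGSM's single sign-of-gradient step and an iterative nonlinear conjugate gradient ascent, can be sketched on a toy loss. This is an illustrative sketch only, not the paper's implementation: the quadratic loss stands in for a video classifier's loss, the fixed step size stands in for the strong-Wolfe line search the paper uses, and all function names here are assumptions.

```python
import numpy as np

# Toy quadratic "loss" standing in for the classification loss of a
# video model; in the paper the gradient would come from backpropagation.
A = np.diag([1.0, 3.0])

def loss(x):
    return 0.5 * x @ A @ x

def grad(x):
    return A @ x

EPS = 3 / 255  # perturbation upper bound used in the paper's experiments

def fgsm(x, eps=EPS):
    # One-step FGSM: move along the sign of the gradient to raise the loss.
    return x + eps * np.sign(grad(x))

def fr_cg_attack(x, eps=EPS, steps=10, lr=EPS / 4):
    # Fletcher-Reeves nonlinear conjugate gradient ascent on the loss,
    # projected back into the L_inf ball of radius eps around x.
    # The fixed step lr is a simplification of the strong-Wolfe search.
    x0 = x.copy()
    g = grad(x)
    d = g.copy()                 # initial ascent direction = gradient
    adv = x.copy()
    for _ in range(steps):
        adv = adv + lr * np.sign(d)
        adv = np.clip(adv, x0 - eps, x0 + eps)   # project into eps-ball
        g_new = grad(adv)
        beta = (g_new @ g_new) / (g @ g + 1e-12)  # Fletcher-Reeves beta
        d = g_new + beta * d     # conjugate direction update
        g = g_new
    return adv

x = np.array([0.3, -0.2])
x_fgsm = fgsm(x)
x_cg = fr_cg_attack(x)
```

On this toy problem both attacks raise the loss while staying inside the eps-ball; the conjugate direction update is what, on a real video model, lets successive iterations keep pointing in a loss-increasing direction rather than oscillating as pure gradient steps can.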
Keywords: adversarial examples; deep learning security; video attack; white-box attack; conjugate gradient algorithm
Classification Code: TG142.71 (General Industrial Technology - Materials Science and Engineering)