Affiliations: [1] Intelligent Software Research Center, Institute of Software, Chinese Academy of Sciences, Beijing 100190; [2] University of Chinese Academy of Sciences, Beijing 100049; [3] State Key Laboratory of Computer Science (Institute of Software, Chinese Academy of Sciences), Beijing 100190
Source: Journal of Software (软件学报), 2024, No. 8, pp. 3647-3667 (21 pages)
Funding: Strategic Priority Research Program of the Chinese Academy of Sciences (XDA0320400); National Natural Science Foundation of China (62202457); supported by the 源图 major infrastructure.
Abstract: Deep learning-based code vulnerability detection models have gradually become an important means of detecting software vulnerabilities, owing to their high detection efficiency and accuracy, and they play an important role in the code auditing services of the code hosting platform GitHub. However, deep neural networks have been shown to be susceptible to adversarial attacks, which puts deep learning-based vulnerability detection models at risk of being attacked and suffering reduced detection accuracy. Constructing adversarial attacks against vulnerability detection models therefore not only uncovers the security flaws of such models but also helps evaluate their robustness, so that model performance can then be improved through appropriate methods. However, existing adversarial attack methods for vulnerability detection models rely on general-purpose code transformation tools and do not provide targeted code perturbation operations or decision algorithms; as a result, they struggle to generate effective adversarial examples, and the validity of the generated examples depends on manual inspection. To address these problems, this paper proposes a reinforcement learning-based adversarial attack method for vulnerability detection models. The method first designs a set of semantics-constrained, vulnerability-preserving code perturbation operations as the perturbation set; second, taking vulnerable code samples as input, it uses a reinforcement learning model to select a concrete sequence of perturbation operations; finally, it locates potential perturbation positions according to the node types of each sample's syntax tree and performs the corresponding code transformations, thereby generating adversarial examples. Two experimental datasets with a total of 14,278 code samples are constructed from SARD and NVD, and four vulnerability detection models with different characteristics are trained on them as attack targets. For each target model, a reinforcement learning network is trained to carry out the adversarial attack. The results show that the attack reduces the models' recall by 74.34% and achieves an attack success rate of 96.71%, an average improvement of 68.76% over baseline methods. The experiments demonstrate that current vulnerability detection models are at risk of being attacked and that further research is needed to improve their robustness.
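The attack loop summarized in the abstract (a set of vulnerability-preserving perturbation operations, a learned policy choosing among them, and repeated transformation until the detector's verdict flips) can be sketched as follows. Everything here is an illustrative stand-in, not the paper's implementation: the toy scoring detector replaces the trained target model, a greedy one-step lookahead replaces the reinforcement learning policy, and the two perturbation operations are hypothetical examples.

```python
# Minimal sketch of the adversarial-attack loop, assuming a detector that
# returns a "vulnerability confidence" score. Detector, scores, and
# perturbation operations are all illustrative stand-ins.

def rename_identifier(code: str) -> str:
    """Semantics-preserving rename (here: a fixed 'buf' -> 'arr' rename)."""
    return code.replace("buf", "arr")

def insert_dead_code(code: str) -> str:
    """Vulnerability-preserving insertion of an unused declaration."""
    return code + "\nint unused_var = 0;"

PERTURBATIONS = [rename_identifier, insert_dead_code]

def toy_detector(code: str) -> float:
    """Stand-in for a trained model: confidence that the code is vulnerable.
    (A real detector would be a neural network over the code's tokens/AST.)"""
    score = 0.0
    if "buf" in code:
        score += 0.6
    if "unused_var" not in code:
        score += 0.3
    return score

def attack(code: str, score_fn, threshold: float = 0.5, max_steps: int = 5):
    """Greedy stand-in for the RL policy: at each step apply the operation
    that most lowers the detector's confidence, until the verdict flips."""
    for _ in range(max_steps):
        if score_fn(code) < threshold:
            return code, True  # adversarial example: detector no longer flags it
        code = min((op(code) for op in PERTURBATIONS), key=score_fn)
    return code, score_fn(code) < threshold

adv, success = attack("char buf[10]; gets(buf);", toy_detector)
```

In the paper's actual method, the greedy lookahead is replaced by a reinforcement learning network trained per target model, perturbation positions are chosen from the syntax tree's node types rather than by string matching, and each operation is constrained so the vulnerability itself is preserved.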
Classification: TP311 [Automation and Computer Technology - Computer Software and Theory]