Camouflaged Adversarial Example Generation Method for the Form of Motion Blur in Traffic Scenes

Authors: ZHANG Zhaoxin; HUANG Shize; ZHANG Bingjie; SHEN Tuo

Affiliations: [1] Shanghai Key Laboratory of Rail Infrastructure Durability and System Safety, Shanghai 201804, China; [2] Key Laboratory of Road and Traffic Engineering of the Ministry of Education, Tongji University, Shanghai 201804, China; [3] School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China

Source: Computer Engineering, 2025, No. 3, pp. 45-53 (9 pages)

Funding: National Key Research and Development Program of China (2022YFB4300501); Natural Science Foundation of Chongqing (CSTB2022NSCQ-MSX1454)

Abstract: In autonomous driving perception systems, the Convolutional Neural Network (CNN) plays a pivotal role as a fundamental technology in vehicle perception and decision making. However, adversarial attacks pose a substantial threat to the safety and robustness of autonomous driving systems. Contemporary approaches to adversarial example generation often inject adversarial perturbations directly into images, degrading visual fidelity and providing inadequate concealment, which renders the examples readily discernible to human observers. To address this challenge, this study leverages prior knowledge of the motion blur induced by vehicular motion in traffic scenes and proposes a camouflaged adversarial example generation method. Adversarial examples featuring motion blur characteristics are synthesized by simulating the blurring effects inherent to vehicular and pedestrian motion. To preserve the motion blur within images while effectively executing adversarial attacks, an object-invisible adversarial loss function is formulated. Experimental results on the ICDAR public dataset show a detection box count of 0 and an image blur index of 69.28, obtained through the Brenner gradient function, validating that the proposed method can generate motion blur camouflaged adversarial examples.
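Two ingredients mentioned in the abstract, simulating a motion-blur effect and scoring blur with the Brenner gradient function, can be sketched with plain NumPy. This is a minimal illustration under assumed conventions (a uniform horizontal 1-D blur kernel and a vertical two-pixel-step Brenner measure), not the paper's actual implementation:

```python
import numpy as np

def motion_blur_h(img, length=9):
    """Simulate horizontal linear motion blur by convolving each row
    with a uniform 1-D averaging kernel of the given length.
    (Assumed kernel; the paper's blur model may differ.)"""
    kernel = np.ones(length) / length
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, img
    )

def brenner(img):
    """Brenner gradient blur/sharpness measure: the sum of squared
    differences between pixels two steps apart. Blurrier images
    score lower."""
    diff = img[2:, :] - img[:-2, :]
    return float(np.sum(diff ** 2))

# Blurring a random grayscale image lowers its Brenner score.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = motion_blur_h(sharp, length=9)
print(brenner(blurred) < brenner(sharp))
```

In the paper's evaluation, a Brenner-style index like this (reported as 69.28, presumably normalized) is used to confirm that the generated adversarial examples retain genuine motion-blur characteristics.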

Keywords: autonomous driving perception; adversarial examples; motion blur; object detection; convolutional neural network

Classification: TP183 [Automation and Computer Technology - Control Theory and Control Engineering]

 
