Security Evaluation Method for Risk of Adversarial Attack on Face Detection

Authors: JING Hui-yun; ZHOU Chuan [2,3]; HE Xin

Affiliations: [1] China Academy of Information and Communications Technology, Beijing 100083, China; [2] Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100097, China; [3] School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100049, China; [4] National Computer Network Emergency Response Technical Team/Coordination Center of China, Beijing 102209, China

Source: Computer Science, 2021, No. 7, pp. 17-24 (8 pages)

Funding: National 242 Information Security Program (2018Q39)

Abstract: Face detection is a classic problem in the field of computer vision. Empowered by artificial intelligence and big data, it has taken on new vitality, showing important application value and broad prospects in face payment, identity authentication, beauty cameras, intelligent security, and other fields. However, as the deployment of face detection accelerates, its security risks and hidden dangers have become increasingly prominent. This paper therefore analyzes and summarizes the security risks that current face detection models face at each stage of their life cycle. Among these, adversarial attacks have received extensive attention because they pose a serious threat to the availability and reliability of face detection and may disable the basic functionality of the face detection module. Current adversarial attacks on face detection focus mainly on white-box attacks. However, a white-box adversarial attack requires full knowledge of the internal structure and all parameters of the target face detection model, and, to protect trade secrets and corporate interests, the structure and parameters of commercially deployed face detection models in the physical world are usually inaccessible. This makes it almost impossible to break commercial face detection models in the real world with white-box methods. To address this problem, this paper proposes a black-box physical-domain adversarial attack method for face detection. Using the idea of ensemble learning, the method extracts the common attention heat map of many face detection models and then attacks this common heat map. Experimental results show that the method successfully evades black-box face detection models deployed on mobile terminals, including the face detection modules of built-in camera software, face payment software, and beauty camera software. This demonstrates that the proposed method can help evaluate the security of face detection models.
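The ensemble step described in the abstract (aggregating many surrogate models' attention maps into one shared map, then confining the perturbation to that map) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the surrogate "detectors" here are hypothetical stand-in functions producing synthetic heat maps, and names such as `common_attention` and `attack` are illustrative.

```python
# Sketch of an ensemble attention-map attack. Real use would replace
# fake_attention with gradient-based saliency from actual face detectors.
import numpy as np

rng = np.random.default_rng(0)
H, W = 32, 32

def fake_attention(model_seed, image):
    # Stand-in for one surrogate model's attention heat map: a Gaussian
    # bump near the (hypothetical) face region.
    g = np.random.default_rng(model_seed)
    center = g.integers(12, 20, size=2)
    yy, xx = np.mgrid[0:H, 0:W]
    return np.exp(-((yy - center[0])**2 + (xx - center[1])**2) / 50.0)

def common_attention(image, seeds):
    # Ensemble step: aggregate per-model heat maps into a shared ("public")
    # map; the element-wise mean keeps regions all models attend to.
    maps = np.stack([fake_attention(s, image) for s in seeds])
    m = maps.mean(axis=0)
    return m / m.max()

def attack(image, seeds, eps=0.3):
    # Concentrate the perturbation where shared attention is highest, the
    # intuition being that such regions transfer to unseen black-box models.
    att = common_attention(image, seeds)
    mask = att > 0.5                      # perturb only high-attention pixels
    noise = eps * np.sign(rng.standard_normal(image.shape))
    adv = np.clip(image + mask * noise, 0.0, 1.0)
    return adv, mask

image = rng.uniform(0.2, 0.8, size=(H, W))
adv, mask = attack(image, seeds=[1, 2, 3])
# Pixels outside the shared high-attention region are left untouched.
assert np.allclose(adv[~mask], image[~mask])
```

In a real evaluation, the element-wise mean could be replaced by a minimum (keeping only regions every surrogate attends to), and the sign noise by an iterative gradient attack on the ensemble.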

Keywords: AI security; adversarial attack; face detection

Classification: TP183 [Automation and Computer Technology / Control Theory and Control Engineering]

 
