Towards the transferable audio adversarial attack via ensemble methods  

Authors: Feng Guo, Zheng Sun, Yuxuan Chen, Lei Ju

Affiliations: [1] School of Cyber Science and Technology, Shandong University, Qingdao, China; [2] Quancheng Laboratory (QCL), Jinan, China

Source: Cybersecurity, 2025, No. 1, pp. 86–102 (17 pages)

Funding: Supported in part by the NSFC (No. 62202275) and Shandong-SF (No. ZR2022QF012) projects.

Abstract: In recent years, deep learning (DL) models have achieved significant progress in many domains, such as autonomous driving, facial recognition, and speech recognition. However, the vulnerability of deep learning models to adversarial attacks has raised serious concerns in the community because of their insufficient robustness and generalization. Transferable attacks have also become a prominent method for black-box attacks. In this work, we explore the potential factors that impact the transferability of adversarial examples (AEs) in DL-based speech recognition. We also discuss the vulnerability of different DL systems and the irregular nature of decision boundaries. Our results show a remarkable difference in the transferability of AEs between speech and images: data relevance is low for images but high for speech recognition. Motivated by dropout-based ensemble approaches, we propose random gradient ensembles and dynamic gradient-weighted ensembles, and we evaluate the impact of ensembles on the transferability of AEs. The results show that the AEs created by both approaches successfully transfer to black-box APIs.

Keywords: Adversarial attacks; Dynamic gradient weighting; Transferability; Ensemble methods

Classification: TN9 [Electronics and Telecommunications — Information and Communication Engineering]
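The abstract does not give implementation details, but the two proposed ideas can be illustrated in miniature: a random gradient ensemble averages input gradients over a randomly chosen subset of surrogate models, while a dynamic gradient-weighted ensemble combines all surrogate gradients with weights that adapt to each model's current loss. The sketch below is a hypothetical toy version with linear surrogate "models" and an FGSM-style sign step; the function names, the loss-proportional weighting scheme, and the toy models are assumptions, not the paper's actual method.

```python
import numpy as np

# Toy sketch of the two ensemble ideas from the abstract, using linear
# surrogates in place of real speech-recognition networks (assumption).

rng = np.random.default_rng(0)

def model_loss_and_grad(w, x, target):
    """Toy surrogate: loss = (w.x - target)^2; returns loss and grad w.r.t. x."""
    err = w @ x - target
    return err ** 2, 2.0 * err * w

def random_gradient_ensemble(models, x, target, subset_size=2):
    """Average input gradients over a random subset of surrogate models."""
    chosen = rng.choice(len(models), size=subset_size, replace=False)
    grads = [model_loss_and_grad(models[i], x, target)[1] for i in chosen]
    return np.mean(grads, axis=0)

def dynamic_weighted_ensemble(models, x, target):
    """Weight each surrogate's gradient by its current loss share
    (higher-loss models contribute more; hypothetical weighting rule)."""
    losses, grads = zip(*(model_loss_and_grad(w, x, target) for w in models))
    weights = np.array(losses) / (np.sum(losses) + 1e-12)
    return np.tensordot(weights, np.array(grads), axes=1)

# Craft an FGSM-style perturbation from the dynamically weighted gradient.
models = [rng.normal(size=4) for _ in range(3)]
x = rng.normal(size=4)
eps = 0.1
g_rand = random_gradient_ensemble(models, x, target=0.0)
g = dynamic_weighted_ensemble(models, x, target=0.0)
x_adv = x + eps * np.sign(g)
```

In a real attack the surrogates would be trained speech models, the gradient would come from backpropagation through the recognition loss, and the perturbation would be constrained to remain imperceptible in the audio domain.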

 
