Method for improving the robustness of deep neural networks based on self-supervised reconstruction


Authors: LI Jiawen; FANG Kun; HUANG Xiaolin; YANG Jie (The Lab of Pattern Analysis and Machine Learning, Shanghai Jiao Tong University, Shanghai 200240, China)

Affiliation: [1] Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, China

Source: Journal of Xiamen University (Natural Science), 2022, No. 6, pp. 1010-1020 (11 pages)

Abstract: Deep neural networks are widely applied in a variety of artificial intelligence tasks. However, studies have shown that adversarial examples can drive deep neural networks to completely wrong predictions, severely degrading model accuracy, so improving network robustness against adversarial examples is an urgent research problem. This paper proposes a method for improving network robustness based on self-supervised reconstruction. Inspired by image denoising, a denoising model is designed and trained jointly with the target network, with self-supervised reconstruction signals added to assist the training. Before an adversarial example enters the network, the denoising model removes the adversarial noise, preventing it from interfering with the network. Experiments on public datasets show that the model maintains high classification accuracy under a variety of attack methods, indicating strong robustness against adversarial examples.
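The abstract does not include code, and the paper's exact architecture is not given here. As a rough illustration only (not the authors' implementation), the combined objective it describes, a classification loss computed on the denoised input plus a self-supervised reconstruction term, can be sketched with a hypothetical linear stand-in for the denoising model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 8-dimensional "images", 3 classes (illustrative only).
d, k = 8, 3
W = rng.normal(scale=0.1, size=(d, d))   # weights of a linear stand-in denoiser
V = rng.normal(scale=0.1, size=(d, k))   # weights of a linear stand-in classifier

def denoise(x_adv):
    """Stand-in for the denoising model: maps an adversarial input
    toward an estimate of the clean input."""
    return x_adv @ W

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def joint_loss(x_clean, x_adv, y, lam=1.0):
    """Joint objective: cross-entropy on the denoised input, plus a
    self-supervised reconstruction (MSE) term weighted by lam."""
    x_hat = denoise(x_adv)
    p = softmax(x_hat @ V)
    ce = -np.log(p[np.arange(len(y)), y] + 1e-12).mean()  # classification loss
    rec = ((x_hat - x_clean) ** 2).mean()                 # reconstruction loss
    return ce + lam * rec

# Clean inputs, sign-noise as a stand-in for adversarial perturbations.
x_clean = rng.normal(size=(4, d))
x_adv = x_clean + 0.1 * np.sign(rng.normal(size=(4, d)))
y = rng.integers(0, k, size=4)
loss = joint_loss(x_clean, x_adv, y)
```

In the actual method, both the denoiser and the classifier would be deep networks optimized jointly on this kind of combined loss; at test time the denoiser is applied before the input reaches the classifier.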

Keywords: deep neural network; robustness; self-supervision; adversarial attack; image denoising

CLC number: G304 [Culture Science]

 
