工业加氢裂化过程深度学习模型的对抗样本攻击研究  

Research on Adversarial Sample Attack of Deep Learning Model in Industrial Hydrocracking Process

Author: WANG Chen (CNOOC Huizhou Petrochemical Company Limited, Huizhou 516086, China)

Affiliation: [1] CNOOC Huizhou Petrochemical Company Limited, Huizhou 516086, Guangdong, China

Source: Acta Petrolei Sinica (Petroleum Processing Section), 2025, No. 2, pp. 352-361 (10 pages)

Funding: Supported by a research project of CNOOC Huizhou Petrochemical Company Limited (E-2421E002).

Abstract: Neural network-based deep learning techniques have been widely applied to soft sensing in the refining and petrochemical industry, and their security is drawing increasing attention. Taking a deep learning model used for abnormality monitoring in the hydrocracking process as the research object, this study proposes a gradient-based TRI-FGSM white-box adversarial sample attack algorithm and, for the first time, systematically investigates the factors that influence the effectiveness of adversarial attacks on hydrocracking deep learning models. The results show that the attack effect converges to 0.98 as the attack iterations increase, indicating that hydrocracking deep learning models are easily misled by adversarial samples and that such security problems are widespread. The choice of norm and perturbation threshold in the attack algorithm can improve attack efficiency but not attack quality, whereas the input variable dimension and the predicted label of the hydrocracking regression task significantly affect the degree of perturbation of the generated adversarial samples. In the heavy naphtha yield prediction case, the adversarial perturbation at attack convergence is below 0.02, the smallest of the cases studied, while the two heat exchanger fouling prediction cases show larger perturbations of 0.04 and 0.12, respectively. The higher the variable dimension, and the smaller the deviation between the actual label and the model's theoretical prediction in the early stage of abnormality monitoring, the smaller the perturbation of the generated adversarial samples, the better their concealment, the harder they are to detect, and the higher the attack quality. By exposing the potential vulnerability of hydrocracking deep learning models in early-stage abnormality monitoring of refining and petrochemical processes, this work emphasizes the importance of security and reliability and provides useful insights for building highly robust deep learning models for the refining and petrochemical industry.
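The abstract describes a gradient-based, iterative FGSM-style white-box attack on a regression soft sensor, controlled by a norm choice, a perturbation threshold, and the number of attack iterations. The sketch below illustrates that general class of attack in PyTorch as an aid to reading; it is not the paper's TRI-FGSM algorithm, whose exact update rule is not given in this record, and the function name, model interface, and hyperparameter defaults (eps, alpha, steps) are illustrative assumptions.

```python
# Minimal sketch of an iterative, gradient-based FGSM-style white-box attack on a
# regression soft sensor (PyTorch). NOT the paper's TRI-FGSM; names and defaults
# are assumptions for illustration only.
import torch
import torch.nn as nn

def iterative_fgsm_attack(model, x, y_true, eps=0.05, alpha=0.01, steps=50, norm="linf"):
    """Perturb input x (shape: batch x n_features) so the model's prediction is
    pushed away from the true label y_true, keeping the perturbation inside an
    eps-ball measured in the chosen norm ("linf" or "l2")."""
    x_orig = x.detach()
    x_adv = x_orig.clone()
    loss_fn = nn.MSELoss()

    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y_true)           # prediction error vs. true label
        grad = torch.autograd.grad(loss, x_adv)[0]     # white-box gradient w.r.t. the input

        if norm == "linf":
            x_adv = x_adv + alpha * grad.sign()                      # signed-gradient ascent step
            delta = torch.clamp(x_adv - x_orig, -eps, eps)           # project back into the L-inf ball
        else:  # "l2"
            g = grad / (grad.flatten(1).norm(dim=1).view(-1, 1) + 1e-12)
            x_adv = x_adv + alpha * g
            delta = x_adv - x_orig
            scale = eps / (delta.flatten(1).norm(dim=1).view(-1, 1) + 1e-12)
            delta = delta * torch.clamp(scale, max=1.0)              # project back into the L2 ball
        x_adv = (x_orig + delta).detach()

    # "Disturbance degree" here: maximum absolute change applied to the (scaled) input variables
    disturbance = (x_adv - x_orig).abs().max().item()
    return x_adv, disturbance
```

In this sketch, eps plays the role of the perturbation threshold and the norm argument the role of the norm choice discussed in the abstract; a hypothetical call would pass a trained soft-sensor model together with a batch of standardized process measurements and their true labels.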

Keywords: hydrocracking; neural network; deep learning; adversarial sample attack; security; abnormality monitoring; vulnerability

CLC Number: TE624 [Petroleum and Natural Gas Engineering: Oil and Gas Processing Engineering]
