HEN: a novel hybrid explainable neural network based framework for robust network intrusion detection


Authors: Wei WEI, Sijin CHEN, Cen CHEN, Heshi WANG, Jing LIU, Zhongyao CHENG, Xiaofeng ZOU

Affiliations: [1] School of Computer Science and Engineering, Xi'an University of Technology, Shaanxi Key Laboratory for Network Computing and Security Technology, Xi'an 710048, China; [2] School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430065, China; [3] School of Future Technology, South China University of Technology, Guangzhou 510641, China; [4] Shenzhen Research Institute of Hunan University, Shenzhen 518052, China; [5] School of Computer Science, Hunan University of Technology and Business, Changsha 410205, China; [6] Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore

Source: Science China (Information Sciences), 2024, No. 7, pp. 68-86 (19 pages)

Funding: supported in part by the Fundamental Research Funds for the Central Universities (Grant No. x2wj D2230230); Natural Science Foundation of Guangdong Province of China, CCF-Phytium Fund; Cultivation of Shenzhen Excellent Technological and Innovative Talents (Ph.D. Basic Research Started) (Grant No. RCBS20200714114943014); Basic Research of Shenzhen Science and Technology Plan (Grant No. JCYJ20210324123802006)

Abstract: With the rapid development of network technology and the automation process for 5G, cyberattacks have become increasingly complex and threatening. In response to these threats, researchers have developed various network intrusion detection systems (NIDS) to monitor network traffic. However, the incessant emergence of new attack techniques and the lack of system interpretability pose challenges to improving the detection performance of NIDS. To address these issues, this paper proposes a hybrid explainable neural network-based framework that improves both the interpretability of our model and the performance in detecting new attacks through the innovative application of the explainable artificial intelligence (XAI) method. We effectively introduce the Shapley additive explanations (SHAP) method to explain a light gradient boosting machine (LightGBM) model. Additionally, we propose an autoencoder long short-term memory (AE-LSTM) network to reconstruct the SHAP values previously generated. Furthermore, we define a threshold based on reconstruction errors observed during the training phase. Any network flow that surpasses the specified threshold is classified as an attack flow. This approach enhances the framework's ability to accurately identify attacks. We achieve an accuracy of 92.65%, a recall of 95.26%, a precision of 92.57%, and an F1-score of 93.90% on the NSL-KDD dataset. Experimental results demonstrate that our approach generates detection performance on par with state-of-the-art methods.
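The detection rule in the abstract (reconstruct each flow's SHAP vector with an AE-LSTM, then flag flows whose reconstruction error exceeds a threshold fixed during training) can be sketched in miniature. The sketch below is not the paper's implementation: it substitutes synthetic vectors for real SHAP values and a mean-reconstruction stand-in for the trained AE-LSTM, keeping only the threshold-on-training-error decision rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for SHAP-value vectors (hypothetical 10-dimensional features):
# benign training flows cluster tightly; attack flows deviate from them.
benign_train = rng.normal(0.0, 0.1, size=(500, 10))
benign_test = rng.normal(0.0, 0.1, size=(100, 10))
attack_test = rng.normal(1.0, 0.3, size=(100, 10))

# A trained AE-LSTM would reconstruct each vector; here we mimic a model
# that has learned the benign distribution by reconstructing every input
# as the benign training mean.
benign_mean = benign_train.mean(axis=0)

def reconstruction_error(flows):
    # Mean squared error between each flow and its (mock) reconstruction.
    return np.mean((flows - benign_mean) ** 2, axis=1)

# Threshold fixed from training-phase reconstruction errors; the 99th
# percentile is one plausible choice, not necessarily the paper's.
threshold = np.percentile(reconstruction_error(benign_train), 99)

def is_attack(flows):
    # Any flow whose error surpasses the threshold is classified as attack.
    return reconstruction_error(flows) > threshold

benign_fp_rate = is_attack(benign_test).mean()   # should stay low
attack_detect_rate = is_attack(attack_test).mean()  # should be high
```

Because attack vectors were drawn far from the benign cluster, their reconstruction errors land well above the training-derived threshold, while only a small tail of benign test flows exceeds it.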

Keywords: explainable artificial intelligence; light gradient boosting machine; machine learning; network intrusion detection; Shapley additive explanation; hybrid explainable neural network (HEN)

Classification: TP183 (Automation and Computer Technology: Control Theory and Control Engineering); TP393.08 (Automation and Computer Technology: Control Science and Engineering)

 
