Network traffic classification method based on improved ViT


Authors: LI Dao-quan, GAO Jie, NIE Ruo-lin, HU Yi-fan (School of Information and Control Engineering, Qingdao University of Technology, Qingdao 266520, China)

Affiliation: [1] School of Information and Control Engineering, Qingdao University of Technology, Qingdao 266520, Shandong, China

Source: Computer Engineering and Design (《计算机工程与设计》), 2025, Issue 2, pp. 431-437, 7 pages

Fund: Natural Science Foundation of Shandong Province (ZR2023MF052).

摘  要:目前网络流量分类方法中存在模型结构复杂、特征提取不足等问题,提出一种基于稀疏注意力的改进ViT(SA-ViT)网络流量分类模型。去除数据集中无关字段并转化为灰度图,划分为块序列输入编码器提取特征;引入Longformer稀疏注意力对Self-attention进行优化,使其具有更高的局部与全局特征表达能力;通过对比图像相似度实现流量分类。通过网络公开数据集进行检测,其结果表明,所提算法在分类准确率、精确率以及F1分数等方面有较大提升,验证了该模型的科学性与可行性。At present,there are problems with complex model structures and insufficient feature extraction in network traffic classification methods.A sparse attention based improved ViT(SA ViT)network traffic classification model was proposed.Irrelevant fields were removed from the dataset and they were converted into grayscale images,which were divided into block sequences,and used as the encoder input to extract features.Longformer sparse attention was introduced to optimize Self-attention,enabling it to have higher local and global feature expression capabilities.Traffic classification was achieved by comparing image similarity.Through testing on publicly available online datasets,it is verified that the proposed algorithm has significantly improved classification accuracy,precision,and F1 score,demonstrating the scientific nature and feasibility of this model.

Keywords: traffic classification; Vision Transformer (ViT); sparse attention; Longformer; encoder-decoder; sample imbalance; grayscale image

CLC number: TP393.0 (Automation and Computer Technology / Computer Application Technology)

 
