Authors: 李道全 (LI Dao-quan), 高洁 (GAO Jie), 聂若琳 (NIE Ruo-lin), 胡一帆 (HU Yi-fan) (School of Information and Control Engineering, Qingdao University of Technology, Qingdao 266520, China)
Affiliation: [1] School of Information and Control Engineering, Qingdao University of Technology, Qingdao 266520, Shandong, China
Source: Computer Engineering and Design, 2025, No. 2, pp. 431-437 (7 pages)
Fund: Shandong Provincial Natural Science Foundation (ZR2023MF052)
Abstract: Existing network traffic classification methods suffer from complex model structures and insufficient feature extraction. To address these problems, an improved ViT model based on sparse attention (SA-ViT) is proposed for network traffic classification. Irrelevant fields are removed from the dataset and the remaining bytes are converted into grayscale images, which are divided into patch sequences and fed into the encoder for feature extraction. Longformer sparse attention is introduced to optimize the self-attention mechanism, strengthening both local and global feature representation. Traffic classification is then performed by comparing image similarity. Experiments on public network datasets show that the proposed algorithm achieves clear improvements in classification accuracy, precision, and F1 score, verifying the soundness and feasibility of the model.
Keywords: traffic classification; Vision Transformer (ViT); sparse attention; Longformer; encoder-decoder; sample imbalance; grayscale image
CLC number: TP393.0 [Automation and Computer Technology - Computer Application Technology]
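
The abstract above describes a byte-to-image pipeline (drop irrelevant fields, map the remaining bytes to a grayscale image, split it into a patch sequence) followed by a ViT encoder whose self-attention is restricted by Longformer-style sparse attention. The sketch below illustrates only that general idea and is not the authors' implementation; the image size (28x28), patch size (4x4), window width, and all function names are assumptions introduced here.

```python
# Minimal sketch (not the paper's released code) of the preprocessing and the
# Longformer-style sliding-window sparse attention suggested by the abstract.
# Assumptions: 784 bytes per flow reshaped to 28x28, 4x4 patches, window = 3.
import numpy as np
import torch
import torch.nn.functional as F

IMG_SIZE = 28          # assumed image side length (784 bytes per flow)
PATCH = 4              # assumed patch size -> (28/4)^2 = 49 patches
WINDOW = 3             # assumed one-sided sliding-attention window

def bytes_to_grayscale(payload: bytes) -> np.ndarray:
    """Pad/truncate raw flow bytes to IMG_SIZE*IMG_SIZE and reshape to a grayscale image."""
    buf = np.frombuffer(payload, dtype=np.uint8)[: IMG_SIZE * IMG_SIZE]
    buf = np.pad(buf, (0, IMG_SIZE * IMG_SIZE - len(buf)))
    return buf.reshape(IMG_SIZE, IMG_SIZE).astype(np.float32) / 255.0

def to_patches(img: np.ndarray) -> torch.Tensor:
    """Split the image into non-overlapping PATCH x PATCH blocks and flatten each block."""
    t = torch.from_numpy(img).unfold(0, PATCH, PATCH).unfold(1, PATCH, PATCH)
    return t.reshape(-1, PATCH * PATCH)                     # (num_patches, PATCH*PATCH)

def sliding_window_mask(n: int, window: int) -> torch.Tensor:
    """Longformer-style sparse mask: each token attends only to +/- window neighbours."""
    idx = torch.arange(n)
    return (idx[None, :] - idx[:, None]).abs() <= window    # (n, n) boolean

def sparse_attention(x: torch.Tensor, window: int = WINDOW) -> torch.Tensor:
    """Single-head self-attention restricted by the sliding-window mask."""
    q, k, v = x, x, x                                       # identity projections for brevity
    scores = q @ k.T / (x.shape[-1] ** 0.5)
    scores = scores.masked_fill(~sliding_window_mask(len(x), window), float("-inf"))
    return F.softmax(scores, dim=-1) @ v

if __name__ == "__main__":
    fake_flow = bytes(range(256)) * 4                       # placeholder payload, not real traffic
    patches = to_patches(bytes_to_grayscale(fake_flow))
    out = sparse_attention(patches)
    print(patches.shape, out.shape)                         # torch.Size([49, 16]) x2
```

In the paper, the sparse attention sits inside a full ViT encoder with learned projections and positional embeddings, and classification is performed by comparing image similarity; the sketch only shows the byte-to-patch preprocessing and the masking pattern.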