Time Series Classification Method Based on IFS-LCT-ViT


Authors: Yang Sidong, Wang Ke[1], Liu Bing[1], Su Bing[2] (School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221000, China; Hebei Television Broadcasts and Information Network Group Qinhuangdao Co., Ltd., Qinhuangdao 066000, China)

Affiliations: [1] School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221000, Jiangsu, China; [2] Hebei Television Broadcasts and Information Network Group Qinhuangdao Co., Ltd., Qinhuangdao 066000, Hebei, China

Source: Journal of Nanjing Normal University (Natural Science Edition), 2025, No. 2, pp. 91-101 (11 pages)

Funding: National Natural Science Foundation of China (62276266).

Abstract: Most current work on time series classification analyzes the data from a one-dimensional perspective. Viewed in two dimensions, a time series carries a higher order of magnitude of data and offers more room for exploration, yet related studies are few and are essentially combinations of the Gramian Angular Field (GAF) and convolutional neural network models. This paper studies time series classification from the image perspective in depth and addresses the problems of current methods. First, to remove the computational redundancy of the GAF algorithm, the Imbalance Factor Subtraction (IFS) method is proposed: it replaces GAF's trigonometric operations with basic arithmetic, reducing the cost of image generation without losing classification accuracy. Second, to counter the local preference of convolutional models, the image recognition task is handed to the Vision Transformer (ViT): the image derived from the time series is split into patches, and attention weights are assigned to all patches by global parallel computation to obtain the image's overall features. Finally, a lightweight convolutional token (LCT) adapted to ViT is proposed; it extracts local features of the original sequence via one-dimensional convolution to compensate for the information loss caused by ViT's simple hard segmentation of the image. Combining all of the above yields the IFS-LCT-ViT model. To verify its effectiveness, experiments were carried out on 11 datasets from the UCR website. The results show that, compared with GRU-FCN, TST, GAF-CNN, XCM, OSCNN and MultiRocket, the model achieves the highest accuracy on six datasets (85.9%, 80.2%, 68.2%, 63.0%, 85.3% and 84.0%), demonstrating its effectiveness for time series classification tasks.
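For context, the GAF encoding that IFS is proposed to streamline is the standard Gramian Angular Summation Field: rescale the series into [-1, 1], map each value to a polar angle, and fill an N×N image with pairwise cosines. Below is a minimal sketch of that baseline transform (the paper's own IFS subtraction formula is not reproduced here; the abstract states only that it replaces this trigonometric step with basic arithmetic):

```python
import numpy as np

def gasf(x):
    """Gramian Angular Summation Field image of a 1-D series."""
    x = np.asarray(x, dtype=float)
    # Min-max rescale into [-1, 1] (standard GAF preprocessing).
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    # Polar encoding: phi_i = arccos(x_i).
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    # G[i, j] = cos(phi_i + phi_j) -- the trigonometric step IFS avoids.
    return np.cos(phi[:, None] + phi[None, :])

img = gasf([0.0, 0.5, 1.0, 0.5, 0.0])
print(img.shape)  # (5, 5)
```

Note that a length-N series becomes an N×N image, which is the "higher order of magnitude of data" the abstract refers to, and why reducing per-pixel cost matters for image generation.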

Keywords: time series classification; image perspective; imbalance factor; Vision Transformer; lightweight convolutional token

CLC number: TP391 [Automation and Computer Technology: Computer Application Technology]

 
