Authors: WU Liang; ZHANG Fangfang; CHENG Chao[1]; SONG Shinan (College of Computer Science and Engineering, Changchun University of Technology, Changchun 130012, China)
Affiliation: [1] College of Computer Science and Engineering, Changchun University of Technology, Changchun 130012, China
Source: Journal of Jilin University (Science Edition), 2024, No. 5, pp. 1179-1187 (9 pages)
Funding: Jilin Provincial Development and Reform Commission project (Grant No. 2022C047-7); Changchun Science and Technology Development Plan project (Grant No. 21GD05).
Abstract: Aiming at the non-selective expansion and training deficiencies of the DoubleMix algorithm during data augmentation, we proposed a supervised contrastive learning text classification model based on double-layer data augmentation, which effectively improved the accuracy of text classification when training data was scarce. Firstly, keyword-based data augmentation was applied to the original data at the input layer, selectively enhancing the data without considering sentence structure. Secondly, we interpolated the original and augmented data in the BERT hidden layers, and then sent them to the TextCNN for further feature extraction. Finally, the model was trained by using the Wasserstein distance and a double contrastive loss to enhance text classification accuracy. Comparative experimental results on the SST-2, CR, TREC, and PC datasets show that the classification accuracy of the proposed method reaches 93.41%, 93.55%, 97.61%, and 95.27%, respectively, which is superior to classical algorithms.
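The hidden-layer interpolation step described in the abstract can be sketched as a mixup-style convex combination of the original and augmented representations. This is a minimal illustration, not the paper's exact DoubleMix variant: the function name `mixup_hidden`, the array shapes, and the Beta-distributed mixing ratio are assumptions chosen for clarity.

```python
import numpy as np

def mixup_hidden(h_orig, h_aug, alpha=0.2, rng=None):
    """Mixup-style interpolation of two hidden-state tensors.

    h_orig, h_aug: arrays of shape (seq_len, hidden_dim), e.g. one
    BERT hidden layer for the original sentence and for its
    keyword-augmented counterpart (shapes are illustrative).
    Returns lam * h_orig + (1 - lam) * h_aug and the ratio lam.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # mixing ratio drawn from Beta(alpha, alpha)
    return lam * h_orig + (1.0 - lam) * h_aug, lam

# Toy example: two "hidden states" of 4 tokens x 8 dimensions.
h1 = np.ones((4, 8))
h2 = np.zeros((4, 8))
mixed, lam = mixup_hidden(h1, h2, rng=np.random.default_rng(0))
# Since h1 is all ones and h2 all zeros, every entry of mixed equals lam.
```

In the full model, the mixed representation would then be passed through the remaining BERT layers and a TextCNN before the contrastive losses are computed.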
CLC number: TP39 [Automation and Computer Technology - Computer Application Technology]