Authors: MA Wen-Qi; HE Yue (School of Management, University of Science and Technology of China, Hefei 230026, China; Business School, Sichuan University, Chengdu 610065, China)
Affiliations: [1] School of Management, University of Science and Technology of China, Hefei 230026 [2] Business School, Sichuan University, Chengdu 610065
Source: Computer Systems & Applications, 2021, No. 11, pp. 54-62 (9 pages)
Funding: National Natural Science Foundation of China (71571174).
Abstract: The task of document classification in natural language processing requires the model to extract high-level features from low-level word vectors. Generally, the feature extraction of deep neural networks uses all the words in the document, which is not well suited to long documents. In addition, training deep neural networks requires massive labeled data, and such networks often fail to achieve satisfactory results under weak supervision. To meet these challenges, this research proposes a method for weakly-supervised long document classification. On the one hand, a small amount of seed information is used to generate pseudo-documents that augment the training data, addressing the difficulty of improving accuracy when labeled data is scarce. On the other hand, recurrent local attention learning extracts summary features from only a few document fragments, which is sufficient to support subsequent category prediction and improves the model's speed and accuracy. Experiments show that the proposed pseudo-document generation model can indeed enhance the training data, and the improvement in prediction accuracy is particularly significant under weak supervision. At the same time, the long document classification model based on the local attention mechanism performs significantly better than benchmark models in prediction accuracy and processing speed, giving it practical application value.
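The pseudo-document generation idea in the abstract can be illustrated with a minimal sketch. The paper's actual generation model is not specified here; the class names, seed keyword lists, and mixing ratio below are all hypothetical, and the sketch simply samples words from a mixture of class seed words and a generic background vocabulary:

```python
import random

# Hypothetical seed keywords per class and background vocabulary
# (illustrative only; not taken from the paper).
SEEDS = {
    "sports": ["game", "team", "score", "league", "coach"],
    "tech": ["software", "network", "algorithm", "data", "model"],
}
BACKGROUND = ["the", "a", "report", "today", "new", "said", "people"]

def generate_pseudo_document(label, length=50, seed_ratio=0.3, rng=random):
    """Synthesize a pseudo-document for `label` by mixing class seed
    words with generic background words at the given ratio."""
    words = []
    for _ in range(length):
        if rng.random() < seed_ratio:
            words.append(rng.choice(SEEDS[label]))
        else:
            words.append(rng.choice(BACKGROUND))
    return " ".join(words)

# Usage: one synthetic training example labeled "sports".
doc = generate_pseudo_document("sports")
```

Pseudo-documents built this way carry a weak but consistent class signal, which is what lets them augment scarce labeled data during training.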
Keywords: document classification; deep learning; weakly-supervised learning; pseudo-document; local attention mechanism
Classification: TP391.1 [Automation and Computer Technology / Computer Application Technology]
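The local-attention idea in the abstract, classifying from only a few document fragments rather than all words, can be sketched as follows. This is a simplified single-step version under assumed inputs (fragment embeddings and a query vector); the paper's recurrent local attention mechanism is more involved:

```python
import numpy as np

def local_attention_summary(fragment_vecs, query, k=3):
    """Attend over only the k fragments most similar to `query` and
    return their attention-weighted summary vector.

    fragment_vecs: (n_fragments, dim) array of fragment embeddings.
    query: (dim,) query vector.
    """
    scores = fragment_vecs @ query            # similarity of each fragment
    top = np.argsort(scores)[-k:]             # indices of the k best fragments
    local = scores[top]
    weights = np.exp(local - local.max())     # numerically stable softmax
    weights /= weights.sum()                  # over the k fragments only
    return weights @ fragment_vecs[top]

# Usage: summarize 20 random 8-dimensional fragments for a random query.
rng = np.random.default_rng(0)
fragments = rng.normal(size=(20, 8))
summary = local_attention_summary(fragments, rng.normal(size=8))
```

Because attention is computed over only k fragments instead of the whole document, the cost of the summary step is independent of document length, which is the source of the speed gain the abstract claims.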