Authors: ZHOU Yu[1]; CAI Du (Jiangsu Provincial Public Security Department, Nanjing 210024, China)
Affiliation: [1] Jiangsu Provincial Public Security Department, Nanjing 210024, Jiangsu, China
Source: Modern Information Technology, 2024, No. 23, pp. 165-169, 174 (6 pages)
Abstract: Achieving large-scale production of criminal intelligence from the dark web is a crucial preliminary task for combating dark-web crime. Current research struggles with the scarcity of dark web data and focuses primarily on Western-language dark web sources. To enable targeted analysis of Chinese dark web texts, this paper proposes a BERT-BiLSTM multi-task learning model for crime classification and named entity recognition. The model shares a BERT-BiLSTM layer between the text classification and named entity recognition tasks, and uses a fully connected layer and a Conditional Random Field (CRF) layer as their respective output layers, strengthening knowledge sharing between the tasks. Experimental results on a self-constructed Chinese dark web dataset show that the multi-task model achieves performance gains over the baseline models on both tasks.
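The shared-encoder architecture described in the abstract can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: all dimensions and names are assumptions, a plain `nn.Embedding` stands in for the BERT encoder, and a per-token linear layer produces tag emissions in place of a full CRF decoder (a real CRF would require an extra library such as `torchcrf`).

```python
import torch
import torch.nn as nn

class MultiTaskBertBiLstm(nn.Module):
    """Sketch of a shared BERT-BiLSTM encoder with two task heads:
    a fully connected layer for text classification and a per-token
    layer emitting scores for entity tags (CRF decoding omitted)."""
    def __init__(self, vocab_size=100, emb_dim=32, hidden=64,
                 n_classes=5, n_entity_tags=9):
        super().__init__()
        self.encoder = nn.Embedding(vocab_size, emb_dim)  # stand-in for BERT
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.cls_head = nn.Linear(2 * hidden, n_classes)      # classification head
        self.ner_head = nn.Linear(2 * hidden, n_entity_tags)  # NER emissions

    def forward(self, token_ids):
        h, _ = self.bilstm(self.encoder(token_ids))   # (B, T, 2H) shared features
        cls_logits = self.cls_head(h.mean(dim=1))     # pooled -> document label
        ner_logits = self.ner_head(h)                 # per-token tag scores
        return cls_logits, ner_logits

model = MultiTaskBertBiLstm()
cls_logits, ner_logits = model(torch.randint(0, 100, (2, 16)))
print(cls_logits.shape, ner_logits.shape)
```

During multi-task training, the classification loss (cross-entropy on `cls_logits`) and the sequence-labeling loss (negative CRF log-likelihood on `ner_logits`) would be summed, so gradients from both tasks update the shared encoder.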