Authors: Jipeng QIANG, Feng ZHANG, Yun LI, Yunhao YUAN, Yi ZHU, Xindong WU
Affiliations: [1] Department of Computer Science, Yangzhou University, Yangzhou 225127, China; [2] Key Laboratory of Knowledge Engineering with Big Data (Hefei University of Technology), Ministry of Education, Hefei 230009, China; [3] Mininglamp Academy of Sciences, Mininglamp, Beijing 100089, China
Source: Frontiers of Computer Science (中国计算机科学前沿, English edition), 2023, No. 1, pp. 81-90 (10 pages)
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62076217 and 61906060) and the Program for Changjiang Scholars and Innovative Research Team in University (PCSIRT) of the Ministry of Education, China (IRT17R32).
Abstract: Unsupervised text simplification has attracted much attention due to the scarcity of high-quality parallel text simplification corpora. Recently, an unsupervised statistical text simplification method based on a phrase-based machine translation system (UnsupPBMT) achieved good performance; it initializes the phrase tables using similar words obtained by word embedding modeling. However, since word embedding modeling only captures relevance between words, the phrase tables in UnsupPBMT contain many dissimilar words. In this paper, we propose an unsupervised statistical text simplification method that uses the pre-trained language model BERT for initialization. Specifically, we use BERT as a general linguistic knowledge base to predict similar words. Experimental results show that our method outperforms state-of-the-art unsupervised text simplification methods on three benchmarks, and even outperforms some supervised baselines.
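The core step the abstract describes, using BERT as a general linguistic knowledge base to predict similar words, can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: the bert-base-uncased checkpoint, the HuggingFace transformers API, and the similar_words helper are illustrative assumptions, and UnsupPBMT's phrase-table construction is omitted.

import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def similar_words(sentence, target, k=10):
    # Replace the target word with [MASK] and let BERT's masked-LM head
    # rank in-context substitutes; the top-k predictions serve as
    # similar-word candidates (a sketch, not the paper's exact pipeline).
    masked = sentence.replace(target, tokenizer.mask_token, 1)
    inputs = tokenizer(masked, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0][0]
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)
    top_ids = torch.topk(logits[0, mask_pos], k).indices
    return tokenizer.convert_ids_to_tokens(top_ids.tolist())

# Candidate replacements for a complex word, in context:
print(similar_words("The committee will commence the meeting at noon.", "commence"))

In the paper's setting, such in-context candidates would replace the word-embedding neighbors used by UnsupPBMT when initializing the phrase tables.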
Keywords: text simplification; pre-trained language modeling; BERT; word embeddings
Classification code: TP391.1 [Automation and Computer Technology - Computer Application Technology]