Affiliations: [1] Natural Language Processing Laboratory, School of Computer Science and Technology, Soochow University, Suzhou 215006, China; [2] Department of Computer Science and Engineering, Suzhou University of Science and Technology, Suzhou 215009, China
Source: Acta Scientiarum Naturalium Universitatis Pekinensis (Journal of Peking University, Natural Science Edition), 2014, No. 1, pp. 100-110 (11 pages)
Funding: Supported by the National Natural Science Foundation of China (61273320, 61003153, 61272257) and the National 863 Program (2012AA011102)
Abstract: Coreference resolution is a core task in natural language processing. This paper proposes a coreference resolution method based on semantic features that uses the deep learning mechanism of the DBN (deep belief nets) model. The DBN consists of several layers of unsupervised RBM (restricted Boltzmann machine) networks topped by one supervised BP (back-propagation) network. The RBM layers preserve as much information as possible when feature vectors are mapped from one layer to the next, and the final BP layer classifies the feature vectors produced by the last RBM layer, yielding a classifier that detects and classifies the coreference relationship between an anaphor and its antecedent. Experiments on the ACE 2004 English NWIRE corpus and the ACE 2005 Chinese NWIRE corpus show that increasing the number of RBM training layers improves system performance, and that introducing an abstraction hierarchy over the feature set also has a positive effect.
Classification Code: TP391.1 [Automation and Computer Technology - Computer Application Technology]
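The abstract describes the standard DBN recipe: greedy unsupervised pre-training of stacked RBM layers, followed by a supervised back-propagation stage that classifies the deepest features. Below is a minimal, self-contained sketch of that recipe in Python with NumPy. It is not the authors' implementation: the CD-1 training rule, the logistic output layer standing in for the BP network, the layer sizes, and the synthetic mention-pair data are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    # One unsupervised restricted Boltzmann machine layer, trained
    # with single-step contrastive divergence (CD-1).
    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_update(self, v0, lr=0.05):
        # Positive phase: hidden activations driven by the data.
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one Gibbs step back through the visible layer.
        pv1 = self.visible_probs(h0)
        ph1 = self.hidden_probs(pv1)
        # Move the weights toward the data distribution.
        n = len(v0)
        self.W += lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.b_v += lr * (v0 - pv1).mean(axis=0)
        self.b_h += lr * (ph0 - ph1).mean(axis=0)

def train_dbn(X, y, hidden_sizes, rbm_epochs=30, lr=0.1, bp_steps=300):
    # Greedy layer-wise RBM pre-training: each RBM learns to re-encode
    # the features produced by the layer below it.
    rbms, feats = [], X
    for n_hidden in hidden_sizes:
        rbm = RBM(feats.shape[1], n_hidden)
        for _ in range(rbm_epochs):
            rbm.cd1_update(feats)
        rbms.append(rbm)
        feats = rbm.hidden_probs(feats)
    # Supervised stage (the "BP layer" of the abstract): a logistic
    # classifier trained by gradient descent on the deepest features.
    w, b = np.zeros(feats.shape[1]), 0.0
    for _ in range(bp_steps):
        p = sigmoid(feats @ w + b)
        w -= lr * feats.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return rbms, (w, b)

def predict(rbms, top, X):
    for rbm in rbms:
        X = rbm.hidden_probs(X)
    w, b = top
    return sigmoid(X @ w + b) > 0.5

# Synthetic stand-in for mention-pair feature vectors: label 1 means
# the anaphor-antecedent pair is coreferent, 0 means it is not.
X = rng.random((400, 8))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)
rbms, top = train_dbn(X, y, hidden_sizes=[16, 8])
print("training accuracy:", (predict(rbms, top, X) == y).mean())

In the paper's setting, each input row would be the semantic feature vector of an anaphor-antecedent mention pair drawn from the ACE corpora, with the binary label marking whether the pair is coreferent; deepening the stack corresponds to adding RBM training layers, which the abstract reports improves performance.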