Affiliation: [1] School of Information Science and Technology, Xiamen University, Xiamen 361005, Fujian, China
Source: Journal of Xiamen University (Natural Science), 2017, No. 4, pp. 576-583 (8 pages)
Funds: National Natural Science Foundation of China (60803078); Natural Science Foundation of Fujian Province (2010J01351); Scientific Research Foundation for Returned Overseas Chinese Scholars, Ministry of Education
Abstract: The crucial challenge of implicit discourse relation recognition is how to represent the semantic information of the two discourse arguments. In linguistics, the semantic value of a sentence is largely determined by its information focus (the predicate part of the syntactic parse tree), so attending to the information focus can improve discourse relation recognition. Intuitively, phrase branches should not all be treated equally when composing representations up the parse tree. To emphasize the information focus, we introduce the tree-structured long short-term memory (Tree-LSTM) network, whose per-child forget gates selectively incorporate information from each child node, thereby highlighting the informative predicative branches that indicate the "focus" of a sentence. A neural tensor network (NTN) is then used to model the semantic correlation between the two argument vectors across multiple dimensions. Experiments on the PDTB 2.0 (Penn Discourse Treebank) corpus show that the hybrid tree-structured neural network improves the F-score by about 3.0% over a traditional RNN model on most relation types.
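The two components named in the abstract can be sketched together: a Child-Sum Tree-LSTM cell, whose separate forget gate per child is what lets the model down-weight uninformative branches and preserve the predicate "focus" branch, followed by an NTN layer scoring the interaction between the two argument vectors. This is a minimal illustrative sketch, not the paper's implementation; the dimensions, parameter names, and random initialization are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # hidden size (illustrative assumption)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical parameters for one Child-Sum Tree-LSTM cell
# (gates: i = input, o = output, f = forget, u = candidate update)
W = {g: rng.normal(scale=0.1, size=(D, D)) for g in "iofu"}
U = {g: rng.normal(scale=0.1, size=(D, D)) for g in "iofu"}
b = {g: np.zeros(D) for g in "iofu"}

def tree_lstm_node(x, children):
    """Compose a parent (h, c) from a word vector x and child (h, c) pairs.

    A separate forget gate f_k is computed for each child, which is the
    mechanism that allows different subtrees to contribute unequally.
    """
    h_sum = sum(h for h, _ in children) if children else np.zeros(D)
    i = sigmoid(W["i"] @ x + U["i"] @ h_sum + b["i"])
    o = sigmoid(W["o"] @ x + U["o"] @ h_sum + b["o"])
    u = np.tanh(W["u"] @ x + U["u"] @ h_sum + b["u"])
    c = i * u
    for h_k, c_k in children:
        f_k = sigmoid(W["f"] @ x + U["f"] @ h_k + b["f"])  # per-child forget gate
        c = c + f_k * c_k
    return o * np.tanh(c), c

def ntn_score(a1, a2, T, V, bias):
    """Neural tensor network: bilinear interaction of a1 and a2 over K slices."""
    bilinear = np.array([a1 @ T[k] @ a2 for k in range(T.shape[0])])
    return np.tanh(bilinear + V @ np.concatenate([a1, a2]) + bias)

# Toy usage: compose each argument's root vector, then score the pair
# with an NTN of K = 3 tensor slices.
leaf = lambda x: tree_lstm_node(x, [])
h1, _ = tree_lstm_node(rng.normal(size=D), [leaf(rng.normal(size=D))])
h2, _ = tree_lstm_node(rng.normal(size=D), [leaf(rng.normal(size=D))])
K = 3
T = rng.normal(scale=0.1, size=(K, D, D))
V = rng.normal(scale=0.1, size=(K, 2 * D))
score = ntn_score(h1, h2, T, V, np.zeros(K))
print(score.shape)  # (3,)
```

In practice the NTN output would feed a softmax over the PDTB relation classes; the per-child forget gates are the point of difference from a sequential LSTM, which has a single forget gate and no notion of sibling branches.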
Keywords: implicit discourse relation recognition; information focus; tree-structured long short-term memory network; neural tensor network
CLC number: TP391 [Automation and Computer Technology / Computer Application Technology]