Affiliation: [1] Research and Development Center for International Chinese Education Technology, Beijing Language and Culture University, Beijing 100083, China; [2] School of Information Science, Beijing Language and Culture University, Beijing 100083, China
Source: Journal of Chinese Information Processing (《中文信息学报》), 2014, No. 5, pp. 51-59 (9 pages)
Funding: National Natural Science Foundation of China (61300081, 61170162); National Key Technology R&D Program of China (2012BAH16F00); Fundamental Research Funds for the Central Universities, Beijing Language and Culture University (14YJ030005)
Abstract: This paper proposes a word-embedding-based semantic representation for the multiple senses of ambiguous words, yielding a knowledge-base-driven, unsupervised method for acronym term disambiguation. The method clusters in two stages. In the first stage, significantly similar documents are clustered to obtain high-confidence clusters; each cluster corresponds to one interpretation of an acronym and thus serves as a semantic tag, and the tagged document collection is used as training data. Several word embedding models are trained on this data, and the semantic relation between two words is computed as the average cosine similarity of their vectors across the models. In the second clustering stage, feature word expansion and linearly weighted semantic similarity are introduced to sharpen sense discrimination: the feature word set of a document to be disambiguated is expanded with semantically similar words, recovering implicit semantics missing from the clustered documents, and each feature word's weight is linearly scaled by its semantic similarity to the candidate interpretation. Experiments on 25 ambiguous acronym terms show that feature word expansion raises the system F-score by about 4%, and semantic linear weighting adds roughly another 2%, for a final F-score of 89.40%.
Keywords: acronym term; term disambiguation; word embedding; semantic similarity
Classification code: TP391 (Automation and Computer Technology: Computer Application Technology)
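The abstract outlines two computations that are easy to sketch: scoring a word pair as the average cosine similarity of its vectors across several independently trained embedding models, and expanding and reweighting a document's feature words by that score. The Python below is a minimal illustration under stated assumptions only; the record does not give the paper's exact formulas, so `expand_and_weight`, `k`, `alpha`, and the `1.0 + alpha * sim` weighting are hypothetical stand-ins.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def semantic_similarity(w1, w2, models):
    # Score a word pair as the MEAN cosine similarity across several
    # independently trained embedding models (as the abstract describes);
    # averaging damps the randomness of any single training run.
    sims = [cosine(m[w1], m[w2]) for m in models if w1 in m and w2 in m]
    return sum(sims) / len(sims) if sims else 0.0

def expand_and_weight(doc_words, interp_words, models, k=5, alpha=0.5):
    # Hypothetical sketch of the two second-stage ideas:
    # (1) feature expansion: add the k interpretation words most similar
    #     to the document's own feature words;
    # (2) linear weighting: scale each feature's base weight by its
    #     similarity to the candidate interpretation.
    candidates = {w: max(semantic_similarity(w, d, models) for d in doc_words)
                  for w in interp_words if w not in doc_words}
    expanded = sorted(candidates, key=candidates.get, reverse=True)[:k]
    weights = {}
    for w in list(doc_words) + expanded:
        sim = max(semantic_similarity(w, i, models) for i in interp_words)
        weights[w] = 1.0 + alpha * sim  # assumed form of the linear boost
    return weights

# Toy demo: three random "models" (word -> vector dicts) stand in for
# separately trained word embedding runs over the tagged clusters.
rng = np.random.default_rng(0)
vocab = ["bank", "finance", "loan", "river", "protein"]
models = [{w: rng.normal(size=50) for w in vocab} for _ in range(3)]
print(semantic_similarity("bank", "loan", models))
print(expand_and_weight({"bank", "loan"}, {"finance", "river"}, models))
```

In this sketch the expanded features come only from the interpretation's own vocabulary; the paper mines them from the clustered documents, but the record does not specify that selection procedure.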