Authors: YIN Chen (尹陈); WU Min (吴敏) (School of Software Engineering, University of Science and Technology of China, Hefei 230051, China)
Source: Computer Systems & Applications (《计算机系统应用》), 2018, No. 10, pp. 33-38 (6 pages)
Abstract: The N-gram model is one of the most commonly used language models in natural language processing and is widely applied in tasks such as speech recognition, handwriting recognition, spelling correction, machine translation, and search engines. However, the N-gram model often suffers from the zero-probability problem during training and application, which prevents it from producing a good language model; smoothing methods such as Laplace smoothing, Katz back-off, and Kneser-Ney smoothing were developed to address this. After introducing the basic principles of these smoothing methods, we use perplexity as a metric to compare the language models trained with each of them. (A minimal illustrative sketch of add-one smoothing and perplexity follows this record.)
Keywords: N-gram model; Laplace smoothing; Katz back-off; Kneser-Ney smoothing; perplexity
CLC number: TP391.1 [Automation and Computer Technology / Computer Application Technology]
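
The abstract above compares smoothing methods by the perplexity of the resulting language models. The following is a minimal, hypothetical Python sketch, not code from the paper, of the simplest of those methods: add-one (Laplace) smoothing of a bigram model, evaluated by perplexity on held-out text. The function names and the toy sentences are illustrative assumptions.

# Hypothetical sketch: add-one (Laplace) smoothing for a bigram model,
# evaluated with perplexity on held-out sentences.
import math
from collections import Counter

def train_bigram_counts(sentences):
    """Count unigrams and bigrams, padding each sentence with <s> and </s>."""
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        tokens = ["<s>"] + sent + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def laplace_prob(w_prev, w, unigrams, bigrams, vocab_size):
    """P(w | w_prev) = (c(w_prev, w) + 1) / (c(w_prev) + V).
    Unseen bigrams get a small nonzero probability instead of zero."""
    return (bigrams[(w_prev, w)] + 1) / (unigrams[w_prev] + vocab_size)

def perplexity(sentences, unigrams, bigrams, vocab_size):
    """Perplexity = exp(-(1/N) * sum of log P(w_i | w_{i-1})) over test tokens."""
    log_prob, n_tokens = 0.0, 0
    for sent in sentences:
        tokens = ["<s>"] + sent + ["</s>"]
        for w_prev, w in zip(tokens, tokens[1:]):
            log_prob += math.log(laplace_prob(w_prev, w, unigrams, bigrams, vocab_size))
            n_tokens += 1
    return math.exp(-log_prob / n_tokens)

# Toy usage: train on two sentences, score a third containing an unseen bigram.
train = [["the", "cat", "sat"], ["the", "dog", "sat"]]
test = [["the", "cat", "ran"]]           # the bigram (cat, ran) never occurs in training
unigrams, bigrams = train_bigram_counts(train)
vocab_size = len(unigrams)               # crude V including <s>/</s>; a real model fixes the vocabulary
print(perplexity(test, unigrams, bigrams, vocab_size))

Add-one smoothing is the crudest of the three methods named in the abstract; Katz back-off and Kneser-Ney smoothing would replace laplace_prob with estimates that redistribute probability mass using lower-order statistics, while the perplexity routine stays the same.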