The Application of the Perceptron in Language Model Training (Cited by: 2)

Perceptron for Language Modeling


Authors: Yu Hao [1], Bu Fenglin [1], Gao Jianfeng [2]

Affiliations: [1] Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200030, China; [2] Microsoft Research Asia, Beijing 100080, China

Source: Journal of Computer Research and Development, 2006, No. 2, pp. 260-267 (8 pages)

Funding: Zhejiang Provincial Major Science and Technology Research Project (2003C11009); Shanghai Science and Technology Commission Development Fund Project (025111051)

Abstract: The perceptron is a type of neural network that can acquire pattern-recognition ability through supervised learning. This paper applies the perceptron to language model (LM) training, introducing two perceptron training rules as an alternative to traditional training methods such as maximum likelihood estimation (MLE). Several variants of the perceptron learning algorithm are implemented, and the impact of different training parameters on performance is discussed. Because there is a strict restriction on language model size, the feature set is determined before training by a feature selection algorithm based on the empirical risk minimization (ERM) principle. The trained language models are evaluated on Japanese kana-kanji conversion, which converts phonetic strings into the appropriate word strings. An empirical study compares the two training rules and the algorithm variants; the results show that perceptron-trained models substantially outperform traditional N-gram models, reducing the relative error rate by 15%-20%.
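The abstract does not spell out the paper's two LM-specific training rules or its feature set. As a minimal sketch of the classic supervised perceptron update that such methods build on (the toy feature vectors, labels, bias handling, epoch count, and learning rate `eta` below are illustrative assumptions, not details from the paper):

```python
# Sketch of the classic perceptron training rule (Rosenblatt):
# on each misclassified example, move the weight vector toward
# (or away from) that example's feature vector.

def train_perceptron(samples, labels, epochs=20, eta=1.0):
    """Learn weights w so that sign(w . x) matches each label (+1/-1)."""
    w = [0.0] * len(samples[0])
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            score = sum(wi * xi for wi, xi in zip(w, x))
            if y * score <= 0:  # misclassified (or on the boundary): update
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

# Toy usage: AND-like data, with a constant 1.0 feature acting as bias.
X = [(1.0, 1.0, 1.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0), (0.0, 0.0, 1.0)]
y = [1, -1, -1, -1]
w = train_perceptron(X, y)  # separates the positive point from the rest
```

For linearly separable data like this, the update rule is guaranteed to converge; for language modeling, the paper applies such rules to high-dimensional feature weights rather than a toy binary task.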

Keywords: perceptron; language model; empirical risk minimization

Classification: TP18 (Automation and Computer Technology: Control Theory and Control Engineering); TP391.2 (Automation and Computer Technology: Control Science and Engineering)

 
