Authors: 黄丽霞 [1], 王亚楠 [1], 张雪英 [1], 王洪翠 [2]
Affiliations: [1] College of Information Engineering, Taiyuan University of Technology, Taiyuan 030024, China; [2] School of Computer Science and Technology, Tianjin University, Tianjin 300072, China
Source: Computer Engineering and Applications, 2017, No. 13, pp. 49-54 (6 pages)
Funding: National Natural Science Foundation of China (No. 61371193, No. 61303109); Shanxi Province Selective Funding Program for Returned Overseas Scholars (晋人社厅函[2013]68号); Natural Science Foundation of Shanxi Province (No. 2014021022-6)
Abstract: To address the problem that the basis-function centers and radii of a traditional Radial Basis Function (RBF) neural network are initialized randomly in speech recognition tasks, this paper starts from the layered mechanism by which the human brain processes speech and proposes an unsupervised pre-training scheme that uses a large amount of unlabeled data to initialize the network parameters in place of random initialization. A deep autoencoder (DAE) network is adopted as the acoustic model for speech recognition, and the noise robustness of speaker-independent, small-vocabulary isolated-word recognition is analyzed under Mel Frequency Cepstrum Coefficient (MFCC) features and Gammatone-auditory-filter-based Gammatone Frequency Cepstrum Coefficient (GFCC) features. Experimental results show that, with MFCC features, the deep autoencoder network is more noise-robust than the RBF neural network, and that, compared with the classical MFCC features, GFCC features achieve a relative improvement of 1.87% in average recognition rate under the deep autoencoder network.
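The core idea summarized in the abstract is to replace random initialization with greedy layer-wise unsupervised pre-training of autoencoders on unlabeled feature frames (e.g. MFCC or GFCC vectors), whose learned encoder weights then initialize the recognition network before supervised fine-tuning. The following is only a minimal NumPy sketch of that general technique, not the authors' implementation; the layer sizes, learning rate, epoch counts, and the synthetic input data are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_autoencoder(X, hidden_dim, epochs=50, lr=0.1):
    """Train one tied-weight autoencoder on X; return the encoder weights (W, b)."""
    n, d = X.shape
    W = rng.normal(0.0, 0.01, size=(d, hidden_dim))   # encoder weights (decoder uses W.T)
    b = np.zeros(hidden_dim)                           # encoder bias
    c = np.zeros(d)                                    # decoder bias (discarded after pre-training)
    for _ in range(epochs):
        H = sigmoid(X @ W + b)                         # encode
        R = sigmoid(H @ W.T + c)                       # decode (tied weights)
        d_dec = (R - X) * R * (1 - R)                  # gradient at decoder output
        d_enc = (d_dec @ W) * H * (1 - H)              # backpropagate into encoder
        W -= lr * (X.T @ d_enc + d_dec.T @ H) / n
        b -= lr * d_enc.mean(axis=0)
        c -= lr * d_dec.mean(axis=0)
    return W, b

def pretrain_stack(X, layer_dims):
    """Greedy layer-wise pre-training: each layer is trained on the codes of the previous one."""
    params, inp = [], X
    for h in layer_dims:
        W, b = pretrain_autoencoder(inp, h)
        params.append((W, b))
        inp = sigmoid(inp @ W + b)                     # hidden codes become the next layer's input
    return params

# Hypothetical usage: 39-dimensional feature frames from unlabeled speech.
X_unlabeled = rng.normal(size=(1000, 39))
init_params = pretrain_stack(X_unlabeled, layer_dims=[128, 64])
# init_params would then initialize the acoustic-model network, which is
# subsequently fine-tuned with supervision on labeled isolated-word data.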
Keywords: speech recognition; robustness; deep autoencoder network; GFCC features; MFCC features
Classification: TN391.42 [Electronics and Telecommunications - Physical Electronics]