Authors: HU Zhangfang (胡章芳); JIAN Fang (蹇芳); TANG Shanshan (唐珊珊); MING Ziping (明子平); JIANG Bowen (姜博文) (School of Optoelectronic Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China)
Source: Computer Engineering and Applications (《计算机工程与应用》), 2022, No. 9, pp. 187-194 (8 pages)
Funding: National Natural Science Foundation of China (61801061); Chongqing Science and Technology Commission Project (cstc2017zdcy-zdzxX0011).
Abstract: An automatic speech recognition system consists of two parts: an acoustic model and a language model. The traditional N-gram language model, however, ignores the semantic similarity between words and has an excessively large number of parameters, which limits further reduction of the character error rate in speech recognition. To address these problems, this paper proposes a novel speech recognition system that uses Chinese syllables (pinyin) as intermediate characters: a deep feedforward sequential memory network (DFSMN) serves as the acoustic model and performs the speech-to-syllable task, the pinyin-to-Chinese-character conversion is then treated as a translation task, and a Transformer is introduced as the language model. A simple method is also proposed to reduce the computational complexity of the Transformer: when computing attention weights, a Hadamard matrix is introduced for filtering and parameters below a threshold are discarded, which speeds up model decoding. Experiments on the Aishell-1 and Thchs30 datasets show that, compared with DFSMN combined with a 3-gram model, the speech recognition system based on DFSMN and the improved Transformer achieves a relative reduction of 3.2% in character error rate on the best model, reaching a character error rate of 11.8%; compared with a BLSTM-based speech recognition system, its character error rate is reduced by a relative 7.1%.
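The abstract only sketches the attention-filtering idea, so the following is a minimal, hypothetical NumPy sketch of one possible reading: after the usual scaled dot-product attention weights are computed, an element-wise (Hadamard-product) binary mask drops weights below a threshold and the surviving weights are renormalized. The function name hadamard_filtered_attention, the threshold value, and the renormalization step are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def hadamard_filtered_attention(Q, K, V, threshold=0.05):
    """Scaled dot-product attention with threshold-based weight filtering.

    Hypothetical sketch: the paper only states that a Hadamard-matrix
    filtering step drops attention weights below a threshold; the exact
    formulation here (binary mask applied by element-wise product, then
    renormalization) is an assumption made for illustration.
    """
    d_k = Q.shape[-1]
    # Standard scaled dot-product attention scores and softmax weights.
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Element-wise (Hadamard-product) mask: discard weights below the threshold.
    mask = (weights >= threshold).astype(weights.dtype)
    filtered = weights * mask
    # Renormalize the surviving weights so each row still sums to 1.
    filtered /= np.maximum(filtered.sum(axis=-1, keepdims=True), 1e-9)
    return filtered @ V

# Toy usage: 4 query positions, 6 key/value positions, model dimension 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(6, 8))
V = rng.normal(size=(6, 8))
print(hadamard_filtered_attention(Q, K, V).shape)  # (4, 8)
```

If most weights fall below the threshold, the filtered weight matrix becomes sparse, which is presumably what enables the faster decoding reported in the paper; the exact speed-up mechanism is not detailed in the abstract.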
Keywords: speech recognition; deep feedforward sequential memory network (DFSMN); Transformer; Chinese syllables; Hadamard matrix
Classification: TP391 [Automation and Computer Technology / Computer Application Technology]; TP18 [Automation and Computer Technology / Computer Science and Technology]