Authors: 白李娟[1], 赵小蕾[1], 毛启容[1], 吴宝凤[2]
Affiliations: [1] School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, Jiangsu, China; [2] China United Network Communications Co., Ltd., Jiangxi Branch, Nanchang 330096, China
Source: Journal of Chinese Computer Systems (《小型微型计算机系统》), 2013, No. 6, pp. 1451-1456 (6 pages)
Funding: Supported by the National Natural Science Foundation of China (61003183), the Natural Science Foundation of Jiangsu Province (BK2011521), and a Jiangsu University senior talent project (10JDG065).
Abstract: Because adjacent sentences in speech are emotionally correlated, this paper proposes, from an acoustic point of view, four types of contextual speech emotional features totalling 268 dimensions, namely contextual dynamic emotional features, contextual differential emotional features, contextual edge dynamic emotional features, and contextual edge differential emotional features, together with their extraction method. In this method, acoustic features are extracted from a combined sentence formed by joining the current emotional sentence with several preceding sentences; a context feature model is built from these features and used to assist the model built from traditional features, improving the recognition rate. Finally, the method is applied to speech emotion recognition. Experimental results show that, after the new contextual speech emotional features are added, the average recognition rate over six typical emotions is 82.78%, about 8.89% higher than that of the original feature model.
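The extraction step described in the abstract can be illustrated with a short sketch. This is a minimal illustration, not the authors' implementation: the helper names, the context length k, the sample rate, and the use of pooled MFCC statistics in place of the paper's 268-dimensional contextual feature set are all assumptions made here.

import numpy as np
import librosa

def combined_sentence(utterances, idx, k=2):
    # Join the current utterance with up to k preceding utterances in time,
    # forming the "combined emotional sentence" the abstract describes.
    start = max(0, idx - k)
    return np.concatenate(utterances[start:idx + 1])

def contextual_features(utterances, idx, sr=16000, k=2, n_mfcc=13):
    # Extract frame-level MFCCs from the combined sentence and pool them into
    # a fixed-length vector (per-coefficient mean and standard deviation).
    y = combined_sentence(utterances, idx, k)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Usage: features for the third utterance of a dialogue, with two sentences of context.
# utterances = [librosa.load(p, sr=16000)[0] for p in wav_paths]
# x_ctx = contextual_features(utterances, idx=2, k=2)

A vector such as x_ctx would then train the context feature model, whose decisions assist those of a model trained on conventional per-sentence features.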
Keywords: acoustic contextual speech emotional features; combined emotional speech sentences; fuzzy density; decision fusion; speech emotion recognition
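The keywords name fuzzy density and decision fusion. One common way to realise fuzzy-density-based fusion is the Sugeno fuzzy integral over the class confidences of several classifiers; the sketch below shows that generic technique for two classifiers (say, a traditional-feature model and a context-feature model). It illustrates the general method only, not the paper's exact fusion rule, and the density values g1 and g2 are made up.

import numpy as np

def fuse_two(p1, p2, g1=0.6, g2=0.5):
    # Fuse per-class confidences p1, p2 (1-D arrays of equal length) from two
    # classifiers whose fuzzy densities are g1 and g2, via the Sugeno integral.
    # lambda of the Sugeno lambda-measure solves 1 + lam = (1 + lam*g1)(1 + lam*g2),
    # i.e. lam = (1 - g1 - g2) / (g1 * g2) for two sources (lam = 0 if g1 + g2 = 1).
    lam = 0.0 if np.isclose(g1 + g2, 1.0) else (1.0 - g1 - g2) / (g1 * g2)
    h = np.stack([p1, p2])               # shape (2, n_classes)
    g = np.array([g1, g2])
    fused = np.empty(h.shape[1])
    for c in range(h.shape[1]):
        order = np.argsort(-h[:, c])     # sort the two sources by confidence, descending
        hs, gs = h[order, c], g[order]
        G = gs[0]                        # measure of the top-confidence source alone
        score = min(hs[0], G)
        G = G + gs[1] + lam * gs[1] * G  # measure of both sources (equals 1 by construction)
        score = max(score, min(hs[1], G))
        fused[c] = score
    return fused

# Usage: posteriors from the traditional-feature and context-feature models.
p_trad = np.array([0.55, 0.25, 0.20])
p_ctx = np.array([0.35, 0.45, 0.20])
fused = fuse_two(p_trad, p_ctx)
print(fused, "-> predicted class", int(np.argmax(fused)))

The densities control how much each model is trusted; in practice they would typically be estimated from each model's validation accuracy.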
Classification code: TP391 [Automation and Computer Technology / Computer Application Technology]