Authors: 胡德生 (HU Desheng), 张雪英 (ZHANG Xueying), 张静 (ZHANG Jing), 李宝芸 (LI Baoyun)
Affiliation: [1] College of Information and Computer, Taiyuan University of Technology, Taiyuan 030024, China
Source: Journal of Taiyuan University of Technology, 2021, No. 5, pp. 769-774 (6 pages)
Funding: National Natural Science Foundation of China (61371193); Shanxi Scholarship Council of China research project (HGKY2019025); Shanxi Postgraduate Education Innovation Program (2020BY130).
Abstract: Speech emotion recognition is an important research direction in human-computer interaction, and effective feature extraction and fusion are key factors in improving recognition accuracy. This paper proposes a speech emotion recognition algorithm that uses main-auxiliary networks for deep feature fusion. First, segment-level features are fed into a BLSTM-Attention network serving as the main network, where the attention mechanism focuses on the emotional information in the speech signal. Then, Mel spectrograms are fed into a CNN-GAP network serving as the auxiliary network, where global average pooling (GAP) reduces the overfitting introduced by fully connected layers. Finally, the deep features extracted by the two networks are fused in a main-auxiliary fashion, addressing the unsatisfactory recognition results that arise from directly fusing features of different types. Experiments comparing four models on the IEMOCAP dataset show that weighted accuracy (WA) and unweighted accuracy (UA) both improve to varying degrees with main-auxiliary deep feature fusion.
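The pipeline the abstract describes (attention pooling over BLSTM frame features, global average pooling over CNN feature maps, then fusion of the two deep feature vectors) can be sketched in NumPy. This is a minimal illustration only: the tensor shapes, the random scoring vector, and the concatenation-based fusion are assumptions for the sketch, not the paper's actual trained model or its exact fusion rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(frames, w):
    # frames: (T, D) frame-level features, standing in for BLSTM output;
    # w: (D,) scoring vector (learnable in a real model, random here).
    scores = softmax(frames @ w)   # (T,) attention weights over time
    return scores @ frames         # (D,) attention-weighted summary

def global_average_pool(feature_maps):
    # feature_maps: (C, H, W) CNN output; GAP averages each channel
    # to one value, replacing parameter-heavy fully connected layers.
    return feature_maps.mean(axis=(1, 2))  # (C,)

# Toy inputs standing in for BLSTM outputs and CNN feature maps.
blstm_out = rng.standard_normal((50, 128))   # 50 frames, 128-dim
cnn_maps  = rng.standard_normal((64, 8, 8))  # 64 channels of 8x8 maps

main_vec = attention_pool(blstm_out, rng.standard_normal(128))
aux_vec  = global_average_pool(cnn_maps)

# Hypothetical main-auxiliary fusion: concatenate the two deep feature
# vectors, main branch first (the abstract does not specify the rule).
fused = np.concatenate([main_vec, aux_vec])
print(fused.shape)  # (192,)
```

In a real system the fused vector would feed a classifier over the emotion categories; the sketch stops at the fusion step because that is the part the abstract emphasizes.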