Authors: Wang Tao [1]; Quan Haiyan [1]
Affiliation: [1] Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan 650500, China
Source: Journal of Signal Processing (《信号处理》), 2020, No. 6, pp. 1013-1019 (7 pages)
Funding: National Natural Science Foundation of China (Grant No. 41364002)
Abstract: Most speech separation methods based on deep neural networks are trained in the frequency domain, and during training they usually focus only on the features of the target speech, without considering the features of the interfering speech. To address this, a speech separation method based on joint training of a generative adversarial network is proposed. The method takes the time-domain waveform as the network input, retaining the phase information carried by signal delays. At the same time, through the adversarial mechanism, the generative model and the discriminative model are trained on the features of the target speech and the interfering speech respectively, which improves the effectiveness of speech separation. In the experiments, comparative tests are performed on the Aishell dataset. The results show that the proposed method achieves good separation performance under three SNR conditions and recovers the high-frequency band information of the target speech better.
Classification: TN912.3 [Electronics and Telecommunication: Communication and Information Systems]
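As a rough illustration of the training scheme the abstract describes, the sketch below sets up a time-domain GAN in PyTorch: a 1-D convolutional generator maps the mixture waveform directly to an estimate of the target speech, and a discriminator scores waveforms, so the two models are trained adversarially on raw samples with no STFT. The paper's actual network architectures, losses, and hyperparameters are not given on this page, so every layer, loss weight, and optimizer setting here is a hypothetical stand-in, and the discriminator uses the common clean-versus-separated criterion rather than the paper's specific handling of interference-speech features.

    # Minimal time-domain GAN sketch for speech separation.
    # All sizes and weights below are assumptions, not the paper's values.
    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        """Maps a mixture waveform to an estimate of the target speech."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(1, 32, kernel_size=15, padding=7),
                nn.PReLU(),
                nn.Conv1d(32, 32, kernel_size=15, padding=7),
                nn.PReLU(),
                nn.Conv1d(32, 1, kernel_size=15, padding=7),
                nn.Tanh(),  # waveform samples in [-1, 1]
            )

        def forward(self, mixture):  # (batch, 1, samples)
            return self.net(mixture)

    class Discriminator(nn.Module):
        """Scores a waveform: clean target speech vs. separated output."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(1, 32, kernel_size=31, stride=4, padding=15),
                nn.LeakyReLU(0.2),
                nn.Conv1d(32, 64, kernel_size=31, stride=4, padding=15),
                nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool1d(1),
                nn.Flatten(),
                nn.Linear(64, 1),
            )

        def forward(self, wav):
            return self.net(wav)  # (batch, 1) logit

    # One adversarial training step on a toy batch of random "waveforms".
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
    bce = nn.BCEWithLogitsLoss()

    mixture = torch.randn(4, 1, 16000)  # stand-in for mixed speech
    target = torch.randn(4, 1, 16000)   # stand-in for clean target speech

    # Discriminator step: clean target -> real, separated output -> fake.
    fake = G(mixture).detach()
    loss_d = bce(D(target), torch.ones(4, 1)) + bce(D(fake), torch.zeros(4, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: fool D, plus an L1 reconstruction term on the waveform.
    est = G(mixture)
    loss_g = bce(D(est), torch.ones(4, 1)) \
        + 100.0 * nn.functional.l1_loss(est, target)  # weight is an assumption
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

Operating on raw waveforms, as in this sketch, is what lets such a method keep the delay-induced phase information that frequency-magnitude methods discard; the adversarial term then pushes the generator's output toward the distribution of clean target speech rather than toward a smoothed average.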