Author affiliation: [1] Department of Electronic Science and Technology, University of Science and Technology of China, Hefei 230026
Source: Journal of Electronics & Information Technology, 2007, No. 2, pp. 469-472 (4 pages)
Fund: Supported by the National Natural Science Foundation of China (60272039)
Abstract: For text-independent speaker identification with Gaussian mixture models (GMM), a basic choice is the form of the covariance matrix. In general, a diagonal covariance matrix is chosen, because a full covariance matrix requires too many parameters and too much computation; this, however, implies the strong assumption that the elements of the feature vector are uncorrelated. In most applications this assumption does not hold. To make the feature vectors better suited to modeling with diagonal covariances, they are usually de-correlated in feature space or in model space. This paper presents an improved model-space PCA method: principal component analysis is applied directly to the covariance matrices of the individual Gaussian components, so that the parameter distribution better fits a mixture of diagonal-covariance Gaussians, and the number of parameters and the computational cost are reduced by sharing (tying) the PCA transformation matrix across Gaussians. Speaker identification experiments on the Microsoft (MSRA) Mandarin speech corpus show that the method achieves a relative identification-error reduction of more than 35% over the best conventional diagonal-covariance GMM system.
CLC number: TP391.42 [Automation & Computer Technology — Computer Application Technology]
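The core idea in the abstract — decorrelating in model space by eigendecomposing the Gaussian components' covariances with a shared (tied) PCA rotation — can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: it ties a single transform by averaging the component covariances, whereas the paper's tying scheme and estimation details are not specified here. All names (`tied_pca_transform`, `decorrelate`) are illustrative.

```python
import numpy as np

def tied_pca_transform(covariances, weights):
    """Compute one PCA rotation shared by all Gaussian components.

    Hypothetical tying scheme: eigendecompose the weighted average of
    the per-component covariance matrices, so a single orthogonal
    matrix approximately decorrelates every component at once.
    """
    avg_cov = sum(w * c for w, c in zip(weights, covariances))
    # Eigenvectors of a symmetric matrix form an orthogonal basis.
    eigvals, eigvecs = np.linalg.eigh(avg_cov)
    return eigvecs.T  # rows are the principal axes

def decorrelate(features, transform):
    """Rotate feature vectors into the PCA basis; in that basis a
    diagonal-covariance GMM is a better fit."""
    return features @ transform.T

# Toy example: two correlated 2-D Gaussian components with equal weight.
covs = [np.array([[2.0, 1.2], [1.2, 1.0]]),
        np.array([[1.5, 0.9], [0.9, 1.1]])]
T = tied_pca_transform(covs, weights=[0.5, 0.5])

# In the rotated space the average covariance becomes diagonal,
# which is what makes diagonal-covariance modeling appropriate.
avg = 0.5 * covs[0] + 0.5 * covs[1]
rotated = T @ avg @ T.T
```

Tying one transform across all components is what keeps the parameter count and per-frame computation low; per-component transforms would decorrelate each Gaussian exactly but reintroduce roughly the cost of full covariances.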