Affiliation: [1] School of Computer Science, Sichuan University of Science & Engineering, Zigong 643000, Sichuan, China
Source: Journal of University of Electronic Science and Technology of China, 2013, No. 4, pp. 592-596 (5 pages)
Funding: Key Project of the Sichuan Provincial Education Department (11ZA124); Open Fund of the Artificial Intelligence Key Laboratory of Sichuan Province (2011RYJ02)
Abstract: Aiming at the high computational cost of large data sets in kernel space, the non-linear support vector machine (NSVM) method is proposed to reduce the training data of a classifier. First, a subset for training the classifier is extracted from the full training data by NSVM, kernel principal component analysis (KPCA), and greedy kernel principal component analysis (GKPCA), respectively. Then, the classifier is trained on each subset, and the classification results are evaluated by the error rates on the training and test data. Across two data sets and two classifiers, the classifier trained on the subset extracted by KPCA performs worse than those trained on the NSVM and GKPCA subsets, while the classifier trained on the GKPCA subset generalizes worse than the one trained on the NSVM subset. Simulation results indicate that the classifier trained on the subset extracted by NSVM not only preserves the generalization ability of the classifier but also reduces the computational complexity of the classification algorithm.
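The abstract's pipeline (extract a reduced training subset, retrain the classifier on it, compare training/test error rates) can be illustrated with a minimal sketch. This is not the authors' code: it uses scikit-learn's RBF-kernel SVC, treats the support vectors of a full-data SVM as the extracted subset (one plausible reading of the NSVM-based reduction), and uses a synthetic dataset and default-like parameters chosen purely for illustration.

```python
# Minimal sketch (assumptions, not the paper's implementation):
# reduce training data to the support vectors of a kernel SVM,
# retrain on that subset, and compare error rates on training/test data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic data stands in for the paper's two benchmark data sets.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Full-data kernel (RBF) SVM; its support vectors form the reduced subset.
full = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
idx = full.support_                      # indices of the extracted subset
X_sub, y_sub = X_tr[idx], y_tr[idx]

# Retrain the classifier on the subset only.
reduced = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_sub, y_sub)

# Evaluate with the error rates on training and test data, as in the abstract.
for name, clf in [("full data", full), ("NSVM subset", reduced)]:
    err_tr = 1 - clf.score(X_tr, y_tr)   # training error rate
    err_te = 1 - clf.score(X_te, y_te)   # test error rate
    print(f"{name}: train err={err_tr:.3f}, test err={err_te:.3f}")
```

A KPCA- or GKPCA-based subset selection would replace the support-vector step above with a selection of points that best span the kernel feature space; the evaluation step stays the same.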
Keywords: classifier; greedy kernel principal component analysis; kernel principal component analysis; non-linear support vector machine; support vector; training data
CLC number: TP391 [Automation and Computer Technology - Computer Application Technology]