Affiliations: [1] College of Information Engineering, Taiyuan University of Technology, Taiyuan 030024 [2] College of Mathematics, Taiyuan University of Technology, Taiyuan 030024
Source: Journal of Taiyuan University of Technology, 2014, No. 5, pp. 609-613 (5 pages)
Funding: National Natural Science Foundation of China (Grant No. 61072087)
Abstract: One drawback of the least squares support vector machine (LSSVM) is the loss of solution sparseness: every training sample becomes a support vector, which raises model complexity and slows both model training and recognition. To address this problem, a new support-vector pre-extraction algorithm is proposed, motivated both by data mining and by the geometric distribution meaning of support vectors. On the one hand, K-means clustering is performed separately on each class of the original data set, and all cluster centers are taken as a representative set of the original data; on the other hand, the boundary samples of the original data set are extracted with the K-nearest-neighbor method. Finally, the union of all sample points extracted by the two methods serves as the set of pre-selected support vectors for LSSVM training and prediction. Experiments on UCI data sets show that the method combines the advantages of the K-means and K-nearest-neighbor pre-extraction algorithms: it effectively pre-selects support vectors while maintaining a high recognition rate, its sparsification effect is more stable, and its sparseness is superior to that of the classical iterative pruning algorithm.
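The two-branch pre-extraction step described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function name, the fixed-iteration Lloyd's loop, and the boundary criterion (a point is a boundary sample if any of its k nearest neighbors carries a different class label) are assumptions made for the sketch.

```python
import numpy as np

def preselect_support_vectors(X, y, n_clusters=3, k=5, seed=0):
    """Sketch of the paper's two-branch pre-extraction:
    (1) per-class K-means centers as a representative set, and
    (2) KNN boundary samples (points whose k nearest neighbors
        include a different class). The union of both is returned."""
    rng = np.random.default_rng(seed)
    selected = []

    # Branch 1: K-means on each class separately; keep all cluster centers.
    for label in np.unique(y):
        pts = X[y == label]
        centers = pts[rng.choice(len(pts), n_clusters, replace=False)]
        for _ in range(20):  # fixed number of Lloyd iterations (assumption)
            d = np.linalg.norm(pts[:, None] - centers[None], axis=2)
            assign = d.argmin(axis=1)
            centers = np.array([pts[assign == c].mean(axis=0)
                                if np.any(assign == c) else centers[c]
                                for c in range(n_clusters)])
        selected.append(centers)

    # Branch 2: KNN boundary extraction; a sample is a boundary point
    # if any of its k nearest neighbors has a different label.
    D = np.linalg.norm(X[:, None] - X[None], axis=2)
    np.fill_diagonal(D, np.inf)       # exclude each point from its own neighbors
    nn = np.argsort(D, axis=1)[:, :k]
    boundary = X[(y[nn] != y[:, None]).any(axis=1)]
    selected.append(boundary)

    # Union of centers and boundary samples (duplicates removed).
    return np.unique(np.vstack(selected), axis=0)
```

The returned set would then be fed to an LSSVM trainer in place of the full data set, which is how the method recovers sparseness: the model is built only on the representative and boundary points rather than on all samples.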
Keywords: least squares support vector machine; K-means clustering; K-nearest neighbor; pre-extraction algorithm; sparsification
Classification: TP181 [Automation and Computer Technology — Control Theory and Control Engineering]