Affiliation: [1] School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
Source: Journal of Shanghai Jiaotong University, 2016, No. 7, pp. 1054-1059 (6 pages)
Funding: National Basic Research Program of China (973 Program) (2013CB329603); National Natural Science Foundation of China (61472248, 61171173)
Abstract: The traditional support vector machine (SVM) classification algorithm has no incremental learning ability. To reduce the time spent retraining after new samples arrive and to classify massive data accurately, an incremental learning algorithm for SVM based on a combined reserved set is proposed. The algorithm builds a reserved set using a zoom-and-shift (scaling and translation) selection method and adopts the idea of combined retention: samples from the original training set and from the incremental sample set that satisfy the KKT (Karush-Kuhn-Tucker) conditions are partially retained and assigned weights. A subset of the reserved samples, chosen according to these weights, is then merged with the original support vector set and with the incremental samples that violate the KKT conditions to form the training set for the next round of learning. In this way the knowledge contained in the original samples is preserved while the new samples are learned. Experimental results show that the algorithm accelerates classification while improving classification accuracy.
Classification code: TP181 [Automation and Computer Technology - Control Theory and Control Engineering]
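The abstract outlines a four-step procedure: train on the original data, split old and new samples by the current model's KKT conditions, retain and weight the KKT-satisfying samples as a reserved set, and retrain on the old support vectors plus the KKT-violating new samples plus a weighted selection from the reserved set. The sketch below illustrates that flow only under stated assumptions; it is not the paper's implementation. It uses scikit-learn's SVC for binary classification, and it replaces the paper's zoom-and-shift selection with a simple margin-distance weighting as a stand-in. The function names (`kkt_margins`, `incremental_svm`) and the parameter `reserve_frac` are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

def kkt_margins(model, X, y):
    """Functional margins y * f(x) under the current model.
    Values below 1 indicate samples that violate the soft-margin
    KKT conditions (they would be support vectors or errors)."""
    y_pm = np.where(y == model.classes_[1], 1.0, -1.0)  # map labels to +/-1
    return y_pm * model.decision_function(X)

def incremental_svm(X_old, y_old, X_new, y_new, reserve_frac=0.2, C=1.0):
    # Step 1: model trained on the original data.
    model = SVC(kernel="rbf", C=C).fit(X_old, y_old)

    # Step 2: split old and new samples by the KKT conditions of `model`.
    viol_new = kkt_margins(model, X_new, y_new) < 1.0
    sat_old = kkt_margins(model, X_old, y_old) >= 1.0

    # KKT-satisfying samples from both sets form the candidate reserved set.
    X_res = np.vstack([X_old[sat_old], X_new[~viol_new]])
    y_res = np.concatenate([y_old[sat_old], y_new[~viol_new]])

    # Step 3: weight reserved samples by closeness to the decision boundary
    # (assumed stand-in for the paper's zoom-and-shift selection)
    # and keep only a fraction of them.
    w = 1.0 / (1.0 + np.abs(model.decision_function(X_res)))
    k = max(1, int(reserve_frac * len(X_res)))
    keep = np.argsort(-w)[:k]

    # Step 4: new training set = old support vectors
    #         + KKT-violating new samples + selected reserved samples.
    X_train = np.vstack([model.support_vectors_,
                         X_new[viol_new],
                         X_res[keep]])
    y_train = np.concatenate([y_old[model.support_],
                              y_new[viol_new],
                              y_res[keep]])

    return SVC(kernel="rbf", C=C).fit(X_train, y_train)
```

The design choice mirrors the abstract's reasoning: the old support vectors and the KKT-violating new samples carry most of the information about where the boundary should move, while the weighted reserved samples guard against discarding points that could become support vectors once the boundary shifts.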