Affiliations: [1] Institute of Information Science, Northern Jiaotong University, Beijing 100044 [2] Beijing Samsung Telecommunication Technology Research Institute, Beijing 100081
Source: Chinese Journal of Computers, 2003, No. 8, pp. 1015-1020 (6 pages)
Funding: Supported by the National Natural Science Foundation of China Key Project (69789301) and the National "973" Key Basic Research and Development Program (G19980305011)
Abstract: A support vector machine (SVM) constructs an optimal separating hyperplane from a small set of samples near the class boundary. This makes it sensitive to those particular samples and, depending on the kernel parameters, tends to yield machines that are either too complex, with poor generalization ability, or too imprecise, with high training error. During training, SVM focuses on the samples near the boundary; samples intermixed in the other class usually do little to improve the classifier's performance, yet they greatly increase the computational burden and their presence may cause overlearning, weakening generalization. To improve the generalization ability of SVM, this paper proposes an improved variant, NN-SVM: it first prunes the training set, keeping or discarding each sample according to whether its nearest neighbor carries the same class label, and then trains an SVM on the pruned set to obtain the classifier. Experimental results show that NN-SVM outperforms SVM in both classification speed and accuracy.
CLC Number: TP181 [Automation and Computer Technology - Control Theory and Control Engineering]
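The pruning rule described in the abstract is simple enough to sketch. Below is a minimal Python illustration, assuming scikit-learn is available; the helper names (prune_by_nearest_neighbor, train_nn_svm) and the use of SVC and NearestNeighbors are choices of this sketch, not the authors' implementation.

```python
# A minimal sketch of the NN-SVM idea from the abstract: prune samples whose
# nearest neighbor has a different class label, then train a standard SVM.
# Names and library choices are illustrative assumptions, not the paper's code.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import NearestNeighbors


def prune_by_nearest_neighbor(X, y):
    """Keep a sample only if its nearest neighbor (excluding itself)
    has the same class label; discard it otherwise."""
    # Ask for 2 neighbors: the closest one is the point itself (assuming no
    # duplicate points), so the second one is the true nearest neighbor.
    nn = NearestNeighbors(n_neighbors=2).fit(X)
    _, idx = nn.kneighbors(X)
    neighbor_labels = y[idx[:, 1]]      # label of each sample's nearest neighbor
    keep = neighbor_labels == y         # True where the labels agree
    return X[keep], y[keep]


def train_nn_svm(X, y, **svm_params):
    """Prune the training set, then fit a plain SVM on what remains."""
    X, y = np.asarray(X), np.asarray(y)
    X_pruned, y_pruned = prune_by_nearest_neighbor(X, y)
    clf = SVC(**svm_params)             # kernel and C chosen as for an ordinary SVM
    clf.fit(X_pruned, y_pruned)
    return clf
```

Usage would mirror an ordinary SVM, e.g. clf = train_nn_svm(X_train, y_train, kernel='rbf', C=1.0); the only difference from a plain SVM is the nearest-neighbor filtering step applied to the training set beforehand.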