Affiliations: [1] Jiangsu Province Network Monitoring Center, Nanjing University of Information Science & Technology, Nanjing 210044, China; [2] School of Computer and Software, Nanjing University of Information Science & Technology, Nanjing 210044, China
Source: Journal of Nanjing University of Information Science & Technology (Natural Science Edition), 2012, No. 4, pp. 362-365 (4 pages)
Funding: National Natural Science Foundation of China (60702076); Priority Academic Program Development of Jiangsu Higher Education Institutions
Abstract: Though kernel methods have been widely used for pattern recognition, they suffer from the problem that feature extraction efficiency is inversely proportional to the size of the training sample set. To solve this, we propose an improvement to Kernel Principal Component Analysis (KPCA) based on numerical approximation. The method rests on the assumption that the discriminant vector in the feature space can be approximately expressed as a linear combination of a set of constructed virtual sample vectors. We determine these virtual sample vectors one by one with a very simple and computationally efficient iterative algorithm, requiring only that their initial values be set to random values, so the algorithm is simple and efficient to implement. When the virtual sample vectors are dissimilar to each other, they can replace the whole training sample set in expressing the discriminant vector in the feature space, which yields an efficient feature extraction procedure. Experiments on two benchmark datasets show that the method achieves efficient feature extraction as well as good and stable classification accuracy, outperforming comparable algorithms.
CLC number: TP391 [Automation and Computer Technology / Computer Application Technology]
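The abstract above describes replacing the full training set in KPCA with a small set of virtual sample vectors so that feature extraction no longer scales with the number of training samples. The sketch below is illustrative only, not the authors' algorithm: it implements standard KPCA with NumPy and uses a randomly chosen subset as a stand-in for the virtual vectors, which the paper instead determines one by one through numerical approximation. The function names (rbf_kernel, kpca_fit, extract), the kernel choice, and all parameters are assumptions made for the example.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    # Gaussian (RBF) kernel matrix between the rows of A and B.
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def kpca_fit(X, n_components, gamma=0.1):
    # Standard KPCA: eigendecompose the centred kernel matrix of X.
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    H = np.eye(n) - np.ones((n, n)) / n             # centring matrix
    Kc = H @ K @ H
    vals, vecs = np.linalg.eigh(Kc)                 # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return alphas                                   # one column per component

def extract(x_new, samples, alphas, gamma=0.1):
    # Projecting a new sample needs one kernel evaluation per stored sample,
    # so the cost grows with len(samples): the bottleneck the paper targets.
    # (Centring of the new kernel row is omitted to keep the sketch short.)
    k = rbf_kernel(x_new[None, :], samples, gamma)  # shape (1, len(samples))
    return k @ alphas

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                      # toy training set, n = 500

# Full-set KPCA: extracting features for one sample touches all 500 training samples.
alphas_full = kpca_fit(X, n_components=5)
print(extract(X[0], X, alphas_full).shape)          # (1, 5)

# Stand-in for the paper's virtual sample vectors: here simply a random subset
# of size m = 20, so extraction needs only 20 kernel evaluations per sample.
Z = X[rng.choice(len(X), size=20, replace=False)]
alphas_virtual = kpca_fit(Z, n_components=5)
print(extract(X[0], Z, alphas_virtual).shape)       # (1, 5)
```

The second half of the sketch shows the intended payoff: once a small set Z can express the discriminant directions, both fitting and per-sample extraction depend on m = len(Z) rather than on the full training-set size n.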