Affiliation: [1] Research Center of Computer Language Information Engineering, Chinese Academy of Sciences, Beijing 100097, China
Source: Pattern Recognition and Artificial Intelligence, 2009, No. 6, pp. 936-940 (5 pages)
Funding: Supported by the National Natural Science Foundation of China (No. 60672149) and the National 863 Program (No. 2006AA010109)
Abstract: The KNN algorithm is stable and accurate, but its time complexity is proportional to the number of training samples, which makes classification slow and hard to apply in large-scale information processing. This paper proposes an improved KNN text categorization method. The basic idea is to merge groups of similar documents in the training set into center documents by text clustering, and to build the classification model from these center documents instead of the original samples. This reduces the number of documents against which similarity must be computed and thereby speeds up classification. Experiments show that, measured by classification accuracy, recall, and F-score, the proposed method performs on a par with the classical KNN algorithm and close to SVM, while its classification speed is substantially higher, with the time cost reduced by roughly an order of magnitude.
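The cluster-center idea sketched in the abstract can be illustrated with a short Python example. This is a minimal sketch under assumptions not stated in the paper: TF-IDF features, k-means as the text-clustering step, cosine distance for the KNN search, and illustrative names such as build_center_knn and centers_per_class. It is not the authors' implementation.

# Minimal sketch: replace each class's documents with a few cluster centers,
# then run KNN over the (much smaller) set of centers.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier


def build_center_knn(train_docs, train_labels, centers_per_class=10, k=5):
    """Cluster each class's documents and fit KNN on the cluster centers."""
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(train_docs).toarray()
    labels = np.asarray(train_labels)

    center_vecs, center_labels = [], []
    for cls in np.unique(labels):
        X_cls = X[labels == cls]
        # Cluster this class's documents and keep only the centroids
        # (k-means is an assumed stand-in for the paper's clustering step).
        n_clusters = min(centers_per_class, len(X_cls))
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_cls)
        center_vecs.append(km.cluster_centers_)
        center_labels.extend([cls] * n_clusters)

    # KNN now searches only the centroids, so each classification needs far
    # fewer similarity computations than KNN over the full training set.
    knn = KNeighborsClassifier(n_neighbors=k, metric="cosine")
    knn.fit(np.vstack(center_vecs), center_labels)
    return vectorizer, knn


def classify(vectorizer, knn, new_docs):
    """Classify new documents against the cluster centers."""
    return knn.predict(vectorizer.transform(new_docs).toarray())

Because the number of centers per class is fixed, the per-query cost no longer grows with the size of the original training set, which is the source of the speedup the abstract reports.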
Keywords: k-Nearest Neighbor (KNN); text categorization; text clustering; cluster center; natural language processing
CLC Number: TP391.1 [Automation and Computer Technology / Computer Application Technology]