Author affiliation: [1] School of Computer Science and Technology, Anhui University of Technology, Ma'anshan, Anhui 243032, China
Source: Computer Engineering (《计算机工程》), 2016, No. 8, pp. 194-198, 205 (6 pages)
Funding: National Science and Technology Support Program project "Development of an Information Integration Platform for Energy-Saving and Emission-Reduction Monitoring and Control Technology" (2012BAK30B04-02)
Abstract: The traditional CHI-square feature selection method does not take into account the number of categories in which a word appears on imbalanced data sets, the frequency of the word, or its intra-class and inter-class distribution, so it fails to select effective feature words for the different categories. To solve this problem, a probability-based CHI-square feature selection method is proposed. Word probability and document probability are used to measure how frequently words and documents occur, and from them a category frequency factor, an inter-class concentration factor for words, an intra-class equilibrium factor for words, and an inter-class concentration factor for documents are computed. The CHI-square value is corrected with these factors, and a difference-degree factor of the same word across different categories is applied so that the improved CHI-square statistic selects more effective feature words. Text classification experiments show that, compared with the unimproved CHI-square feature selection method, the proposed method raises the macro-F1 score to a certain extent and achieves better classification performance on imbalanced data sets.
Keywords: text classification; CHI-square statistic; feature selection; imbalanced data set; probability method
Classification code: TP301.6 [Automation and Computer Technology: Computer System Architecture]
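
For context, the following is a minimal sketch of the standard (unmodified) CHI-square statistic for feature selection that the paper's method builds on; the probability-based correction factors described in the abstract are not reproduced here. All names (chi_square, presence, labels, top_k_terms) are illustrative and do not come from the paper.

import numpy as np

def chi_square(presence, labels, num_classes):
    # presence: binary (num_docs, num_terms) matrix, 1 if a term occurs in a document.
    # labels:   (num_docs,) array of class indices in [0, num_classes).
    # Returns a (num_classes, num_terms) array of chi2(t, c) scores.
    n_docs, n_terms = presence.shape
    scores = np.zeros((num_classes, n_terms))
    for c in range(num_classes):
        in_class = labels == c
        A = presence[in_class].sum(axis=0).astype(float)   # docs in class c containing t
        B = presence[~in_class].sum(axis=0).astype(float)  # docs outside c containing t
        C = in_class.sum() - A                              # docs in class c without t
        D = (~in_class).sum() - B                           # docs outside c without t
        num = n_docs * (A * D - B * C) ** 2
        den = (A + B) * (C + D) * (A + C) * (B + D)
        scores[c] = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return scores

def top_k_terms(scores, k):
    # Indices of the k highest-scoring terms for each class.
    return np.argsort(-scores, axis=1)[:, :k]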