Authors: 姚岳松 (YAO Yue-song); 张贤勇 (ZHANG Xian-yong); 陈帅 (CHEN Shuai)[1,2]; 邓切 (DENG Qie) (School of Mathematical Sciences, Sichuan Normal University, Chengdu 610066, China; Institute of Intelligent Information and Quantum Information, Sichuan Normal University, Chengdu 610066, China)
Affiliations: [1] School of Mathematical Sciences, Sichuan Normal University, Chengdu 610066, China; [2] Institute of Intelligent Information and Quantum Information, Sichuan Normal University, Chengdu 610066, China
Source: Computer Engineering and Design (《计算机工程与设计》), 2021, No. 1, pp. 142-149 (8 pages)
Funding: National Natural Science Foundation of China (61673285, 11671284); Sichuan Science and Technology Program (21YYJC1328, 2019YJ0529); Sichuan Youth Science and Technology Fund (2017JQ0046).
Abstract: Decision-tree algorithms based on rough sets are prone to ineffective feature selection because of granulation conflicts and noise. An attribute purity degree is proposed and combined with the attribute dependency degree to construct a decision-tree induction algorithm. The attribute purity degree is built by a statistical integration strategy: it characterizes how well the decision classification is recognized with respect to the condition classification, and it is applied to the corresponding feature selection. The homogeneity and heterogeneity between the attribute purity degree and the attribute dependency degree are then analyzed, and a node-selection method that applies the dependency degree first and the purity degree second improves the classical rough-set decision-tree algorithm. Both the analysis of a decision-table example and comparative data experiments demonstrate the effectiveness and improvement of the proposed algorithm.
Keywords: rough set; decision tree; attribute dependency degree; attribute purity degree; feature selection; machine learning
Classification: TP18 [Automation and Computer Technology — Control Theory and Control Engineering]