Source: Computer Engineering, 2008, No. 3, pp. 9-11 (3 pages)
Funding: Key Project of the National Natural Science Foundation of China (69835001); Key Science and Technology Project of the Ministry of Education ([2000]175); Beijing Natural Science Foundation (4022008); Guangxi Education Department Foundation (200626)
Abstract: In decision tree generation algorithms based on the rough set model, the pursuit of classification accuracy often causes the algorithm to partition the examples too finely, so the negative influence of a few special examples on the tree cannot be avoided. The resulting decision tree is too large to be easily understood, and its ability to classify and predict future data is weakened. To address these problems, this paper presents a new decision tree generation algorithm based on the rough set model that introduces a hold-down factor. For a node about to be expanded, one termination condition is added to the usual ones: if the hold-down factor of the samples exceeds a given threshold, the node is not expanded further. This effectively avoids overly fine partitioning and prevents the generation of an excessively large decision tree, making the result easier for users to understand.
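The stopping rule described in the abstract can be sketched as an ID3-style tree builder with one extra termination test. This is a minimal illustration, not the paper's implementation: the paper's rough-set-based definition of the hold-down factor is not given in the abstract, so `hold_down_factor` is stubbed here as the node's majority-class proportion, which matches the stated intent of leaving a node unexpanded when only a few special examples disagree with the majority.

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a list of class labels.
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def hold_down_factor(labels):
    # Hypothetical stand-in for the paper's hold-down factor: the
    # majority-class proportion at the node. A node dominated by one class
    # (only a few "special" examples disagreeing) then stays unexpanded.
    return Counter(labels).most_common(1)[0][1] / len(labels)

def build_tree(rows, labels, attrs, threshold=0.85):
    majority = Counter(labels).most_common(1)[0][0]
    # Conventional termination conditions: pure node or no attributes left.
    if len(set(labels)) == 1 or not attrs:
        return majority
    # Additional condition from the paper: if the hold-down factor
    # exceeds the given threshold, do not expand this node.
    if hold_down_factor(labels) > threshold:
        return majority
    # Otherwise split on the attribute with the highest information gain (ID3).
    def gain(a):
        rem = 0.0
        for v in set(r[a] for r in rows):
            sub = [l for r, l in zip(rows, labels) if r[a] == v]
            rem += len(sub) / len(labels) * entropy(sub)
        return entropy(labels) - rem
    best = max(attrs, key=gain)
    subtree = {}
    for v in set(r[best] for r in rows):
        sub_rows = [r for r in rows if r[best] == v]
        sub_labels = [l for r, l in zip(rows, labels) if r[best] == v]
        subtree[v] = build_tree(sub_rows, sub_labels,
                                [a for a in attrs if a != best], threshold)
    return (best, subtree)
```

With a low threshold a branch holding nine "yes" and one outlying "no" becomes a single "yes" leaf; raising the threshold above the node's factor lets the tree keep splitting to isolate the outlier, reproducing the over-fine partitioning the paper aims to suppress.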
Keywords: decision tree; ID3 algorithm; rough set; hold-down factor; upper approximation set; lower approximation set
CLC number: TP18 [Automation and Computer Technology / Control Theory and Control Engineering]