We can also plot this dataset, marking each class with a different color. The plot shows that we can pick a value along the X1 axis (the horizontal axis in the figure) that separates the dataset.
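A rough sketch of that plot, assuming matplotlib is available (the colors and output file name here are illustrative choices, not from the original):

```python
# Sketch: scatter-plot the toy dataset used in this example,
# coloring each point by its class label (the last column of each row).
import matplotlib
matplotlib.use('Agg')  # render off-screen; no display needed
import matplotlib.pyplot as plt

dataset = [[2.771244718, 1.784783929, 0],
           [1.728571309, 1.169761413, 0],
           [3.678319846, 2.81281357, 0],
           [3.961043357, 2.61995032, 0],
           [2.999208922, 2.209014212, 0],
           [7.497545867, 3.162953546, 1],
           [9.00220326, 3.339047188, 1],
           [7.444542326, 0.476683375, 1],
           [10.12493903, 3.234550982, 1],
           [6.642287351, 3.319983761, 1]]

for label, color in [(0, 'blue'), (1, 'red')]:
    xs = [row[0] for row in dataset if row[-1] == label]
    ys = [row[1] for row in dataset if row[-1] == label]
    plt.scatter(xs, ys, c=color, label='class %d' % label)

plt.xlabel('X1')
plt.ylabel('X2')
plt.legend()
plt.savefig('dataset.png')
```

In the resulting figure, the two classes form separate clusters along X1, which is why a single threshold on X1 can split them cleanly.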
The complete code for this example is listed below:

```python
# Split a dataset based on an attribute and an attribute value
def test_split(index, value, dataset):
    left, right = list(), list()
    for row in dataset:
        if row[index] < value:
            left.append(row)
        else:
            right.append(row)
    return left, right

# Calculate the Gini index for a split dataset
def gini_index(groups, class_values):
    gini = 0.0
    for class_value in class_values:
        for group in groups:
            size = len(group)
            if size == 0:
                continue
            proportion = [row[-1] for row in group].count(class_value) / float(size)
            gini += (proportion * (1.0 - proportion))
    return gini

# Select the best split point for a dataset
def get_split(dataset):
    class_values = list(set(row[-1] for row in dataset))
    b_index, b_value, b_score, b_groups = 999, 999, 999, None
    for index in range(len(dataset[0])-1):
        for row in dataset:
            groups = test_split(index, row[index], dataset)
            gini = gini_index(groups, class_values)
            print('X%d < %.3f Gini=%.3f' % ((index+1), row[index], gini))
            if gini < b_score:
                b_index, b_value, b_score, b_groups = index, row[index], gini, groups
    return {'index':b_index, 'value':b_value, 'groups':b_groups}

dataset = [[2.771244718, 1.784783929, 0],
           [1.728571309, 1.169761413, 0],
           [3.678319846, 2.81281357, 0],
           [3.961043357, 2.61995032, 0],
           [2.999208922, 2.209014212, 0],
           [7.497545867, 3.162953546, 1],
           [9.00220326, 3.339047188, 1],
           [7.444542326, 0.476683375, 1],
           [10.12493903, 3.234550982, 1],
           [6.642287351, 3.319983761, 1]]
split = get_split(dataset)
print('Split: [X%d < %.3f]' % ((split['index']+1), split['value']))
```

The modified get_split() function prints each candidate split point along with its Gini index. Running the code prints the Gini index of every candidate split and then the best split point that was selected. In this example, X1 < 6.642 is chosen as the final split, a perfect split with a Gini index of 0.

```
X1 < 2.771 Gini=0.494
X1 < 1.729 Gini=0.500
X1 < 3.678 Gini=0.408
X1 < 3.961 Gini=0.278
X1 < 2.999 Gini=0.469
X1 < 7.498 Gini=0.408
X1 < 9.002 Gini=0.469
X1 < 7.445 Gini=0.278
X1 < 10.125 Gini=0.494
X1 < 6.642 Gini=0.000
X2 < 1.785 Gini=1.000
X2 < 1.170 Gini=0.494
X2 < 2.813 Gini=0.640
X2 < 2.620 Gini=0.819
X2 < 2.209 Gini=0.934
X2 < 3.163 Gini=0.278
X2 < 3.339 Gini=0.494
X2 < 0.477 Gini=0.500
X2 < 3.235 Gini=0.408
X2 < 3.320 Gini=0.469
Split: [X1 < 6.642]
```

Now that we can find the best split point in a dataset, let's look at how we can use it to build a decision tree.

2.3 Building the Tree
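Before growing the tree, it is worth sanity-checking the split selected in the previous section. The following sketch recomputes the (unweighted) Gini index used in this example for the split X1 < 6.642 by hand, confirming that both resulting groups are pure; the threshold value is the X1 coordinate of the last row in the dataset.

```python
# Recompute the unweighted Gini index for the split X1 < 6.642,
# using the same per-group proportion formula as gini_index() above.
dataset = [[2.771244718, 1.784783929, 0],
           [1.728571309, 1.169761413, 0],
           [3.678319846, 2.81281357, 0],
           [3.961043357, 2.61995032, 0],
           [2.999208922, 2.209014212, 0],
           [7.497545867, 3.162953546, 1],
           [9.00220326, 3.339047188, 1],
           [7.444542326, 0.476683375, 1],
           [10.12493903, 3.234550982, 1],
           [6.642287351, 3.319983761, 1]]

value = 6.642287351
left = [row for row in dataset if row[0] < value]
right = [row for row in dataset if row[0] >= value]

gini = 0.0
for class_value in (0, 1):
    for group in (left, right):
        if not group:
            continue
        # proportion of this class within the group
        p = [row[-1] for row in group].count(class_value) / float(len(group))
        gini += p * (1.0 - p)

print('Gini for X1 < 6.642: %.3f' % gini)  # prints 0.000: both groups are pure
```

Since the left group contains only class 0 and the right group only class 1, every proportion is 0 or 1 and each term p * (1 - p) vanishes, giving a Gini index of exactly 0.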