[PaddlePaddle Machine Learning] Decision Tree Algorithms

I. Introduction

1. Overview

The decision tree algorithm is a method for approximating discrete-valued functions. It is a classic classification method: the data is first preprocessed, an inductive algorithm generates readable rules and a decision tree, and the tree is then used to analyze new data. In essence, a decision tree classifies data through a sequence of rules.

Decision tree methods date back to the 1960s. In the late 1970s, J. Ross Quinlan proposed the ID3 algorithm, which aimed to reduce the depth of the tree but neglected the number of leaves. The C4.5 algorithm improved on ID3, with substantial advances in handling missing predictor values, pruning, and rule derivation, making it suitable for both classification and regression problems.

A decision tree algorithm builds a tree to uncover the classification rules hidden in the data; building a tree that is both accurate and compact is the core problem. Construction proceeds in two steps. First, tree generation: growing the tree from the training sample set, which is usually a historical, suitably aggregated dataset prepared for analysis. Second, pruning: checking, correcting, and trimming the tree produced in the first stage, mainly by validating its preliminary rules against a new sample set (the test set) and cutting off the branches that hurt predictive accuracy.

2. Decision Tree Learning

Goal: build a decision tree model from the given training set that classifies instances correctly. Decision tree learning is essentially the induction of a set of classification rules from the training data. There may be several trees that classify the training data correctly, or none at all. We want a tree that conflicts little with the training data yet generalizes well; equivalently, the chosen conditional probability model should both fit the training data well and predict unseen data well.

Loss function: usually a regularized maximum-likelihood function.

Strategy: minimize the loss function as the objective.

Because selecting the optimal tree among all possible decision trees is NP-complete, practical decision tree learning relies on heuristics that solve this optimization problem approximately, yielding a sub-optimal tree.

A decision tree learning algorithm typically selects the best feature recursively and splits the training data on that feature so that each subset is classified as well as possible. It comprises feature selection, tree generation, and tree pruning.

Pruning:

Purpose: simplify the tree so that it generalizes better.

Procedure: remove leaves that split the data too finely, retracting them to their parent node (or an even higher ancestor), which then becomes a new leaf.

Tree generation corresponds to local model selection and considers only local optimality; pruning corresponds to global model selection and considers global optimality.
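As a toy illustration of the retraction step (not the author's code; the `to_leaf` helper, the English feature names, and the 3/6 label counts are made up), collapsing an over-fine subtree back into a leaf simply means labelling the new leaf with the majority class of the samples that reach it:

```python
def to_leaf(labels_at_node):
    # the new leaf takes the majority class of the training
    # samples that reach the pruned node
    return max(set(labels_at_node), key=labels_at_node.count)

# hypothetical dict tree whose left subtree splits 3 'yes' / 6 'no' too finely
tree = {'house': {0: {'job': {0: 'no', 1: 'yes'}}, 1: 'yes'}}
tree['house'][0] = to_leaf(['yes'] * 3 + ['no'] * 6)
print(tree)  # {'house': {0: 'no', 1: 'yes'}}
```

Whether the collapse is kept would, in practice, be decided by comparing validation accuracy before and after.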

Feature selection:

If there are many features, select among them at the start of learning, keeping only those with enough discriminative power on the training data (a person's name, for example, would not be kept as a feature).

3. Typical Algorithms

Typical decision tree algorithms include ID3, C4.5, and CART.

In December 2006 the authoritative data mining conference ICDM (the IEEE International Conference on Data Mining) ranked C4.5 first among the ten most influential data mining algorithms. C4.5 is a classification decision tree algorithm whose core is ID3. The classification rules it produces are easy to understand and fairly accurate, but building the tree requires many sequential scans and sorts of the dataset, which makes the algorithm inefficient in practice.

The advantages of decision tree algorithms are:

(1) high classification accuracy;

(2) simple generated patterns;

(3) good robustness to noisy data.

They are therefore among the most widely used inductive inference algorithms today and receive broad attention from data mining researchers.

4. Basic Idea

1) The tree starts as a single node representing all training samples.

2) If the samples all belong to the same class, the node becomes a leaf labelled with that class.

3) Otherwise, the algorithm selects the attribute with the most classification power as the current node of the tree.

4) The training samples are partitioned into subsets according to the values of the current decision attribute, with one branch per value. The previous steps are then repeated recursively on each subset, building a decision tree over each partition. Once an attribute appears at a node, it need not be considered again in any of that node's descendants.

5) The recursive partitioning stops only when one of the following holds:

① All samples at the node belong to the same class.

② No attributes remain for further partitioning. In this case the node is converted into a leaf by majority vote and labelled with the most frequent class among its samples; the node's class distribution may also be stored.

③ A branch has no samples satisfying its test; a leaf is then created with the majority class of the samples. [3]

5. Construction Method

The input to decision tree construction is a set of class-labelled examples; the result is a binary or multiway tree. An internal (non-leaf) node of a binary tree usually represents a logical test, e.g. one of the form a = aj, where a is an attribute and aj one of its values; the edges are the outcomes of the test. In a multiway tree (ID3), internal nodes are attributes and there is one edge per attribute value. The leaves of the tree carry class labels. [3]

Improper data representation, noise, or duplicated subtrees produced during generation can all make the resulting tree too large, so simplifying the decision tree is an indispensable step. Finding an optimal tree mainly means solving three optimization problems: ① generate the fewest leaf nodes; ② minimize the depth of every leaf; ③ generate a tree with both the fewest leaves and minimal leaf depth.

6. Basic Algorithm

II. The ID3 Decision Tree

ID3 is a decision tree algorithm grounded in Occam's razor: do more with less. ID3, short for Iterative Dichotomiser 3, was invented by Ross Quinlan. Its premise, following Occam's razor, is that smaller decision trees are preferable to larger ones; even so, being a heuristic, it does not always produce the smallest possible tree.

In information theory, the smaller the expected information, the larger the information gain and the higher the purity. The core idea of ID3 is to use information gain as the attribute-selection measure, splitting on the attribute whose split yields the largest gain. The algorithm performs a top-down greedy search through the space of possible decision trees.

1. Information Entropy

Entropy measures the uncertainty of a random variable: the larger the entropy, the more uncertain the variable. Let X be a discrete random variable taking finitely many values, with distribution:

$$P(X = x_i) = p_i,\quad i = 1,2,\dots,n$$

The entropy of X is then defined as:

$$H(X) = -\sum_{i=1}^{n} p_i \log_2 p_i \quad (\text{with } 0\log 0 = 0 \text{ when } p_i = 0)$$
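As a quick numerical check (a sketch, not part of the example code later in this article), the definition can be evaluated directly; for 9 "yes" and 6 "no" samples out of 15, as in the loan dataset used below, it reproduces the entropy value printed there:

```python
from math import log2

def entropy(probs):
    # H(X) = -sum(p_i * log2(p_i)), with 0*log(0) taken as 0
    return -sum(p * log2(p) for p in probs if p > 0)

# 9 'yes' vs 6 'no' out of 15 samples
print(entropy([9 / 15, 6 / 15]))  # ≈ 0.9710
```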

2. Conditional Entropy

The conditional entropy H(Y|X) measures the uncertainty of random variable Y given random variable X:

$$H(Y|X) = \sum_{i=1}^{n} p_i\, H(Y|X = x_i),\quad p_i = P(X = x_i)$$

3. Information Gain

The information gain g(D,A) of feature A with respect to training set D is the difference between the empirical entropy H(D) of D and the empirical conditional entropy H(D|A) of D given A:

$$g(D,A) = H(D) - H(D|A)$$

Features with larger information gain have stronger classification power.

4. Summary

Given a training set D and a feature A:

the empirical entropy H(D) measures the uncertainty in classifying D;

the empirical conditional entropy H(D|A) measures the uncertainty in classifying D once A is known;

$g(D,A) = H(D) - H(D|A)$ measures how much feature A reduces the uncertainty in classifying D.

5. Classifying with a Decision Tree

  1. Build a decision tree from the sample data and derive the corresponding classification model. This is effectively a process of extracting knowledge and distilling rules from the data.

  2. Use the finished tree to classify a dataset: each unseen tuple is routed from the root along some path to a leaf, which yields the tuple's class (or class distribution).

III. An ID3 Decision Tree Example

Before writing any code, we annotate the dataset's attributes.

  • Age (年龄): 0 = young, 1 = middle-aged, 2 = elderly;
  • Has a job (有工作): 0 = no, 1 = yes;
  • Owns a house (有自己的房子): 0 = no, 1 = yes;
  • Credit rating (信贷情况): 0 = fair, 1 = good, 2 = excellent;
  • Class (grant the loan?): no / yes.

1. The Dataset

dataSet = [[0, 0, 0, 0, 'no'],
           [0, 0, 0, 1, 'no'],
           [0, 1, 0, 1, 'yes'],
           [0, 1, 1, 0, 'yes'],
           [0, 0, 0, 0, 'no'],
           [1, 0, 0, 0, 'no'],
           [1, 0, 0, 1, 'no'],
           [1, 1, 1, 1, 'yes'],
           [1, 0, 1, 2, 'yes'],
           [1, 0, 1, 2, 'yes'],
           [2, 0, 1, 2, 'yes'],
           [2, 0, 1, 1, 'yes'],
           [2, 1, 0, 1, 'yes'],
           [2, 1, 0, 2, 'yes'],
           [2, 0, 0, 0, 'no']]
labels = ['年龄', '有工作', '有自己的房子', '信贷情况']

2. Computing the Empirical (Shannon) Entropy

$$P(X = x_i) = p_i,\quad i = 1,2,\dots,n$$

$$H(X) = -\sum_{i=1}^{n} p_i \log_2 p_i \quad (\text{with } 0\log 0 = 0 \text{ when } p_i = 0)$$

from math import log

def calcShannonEnt(dataSet):
    # number of samples
    numEntries = len(dataSet)
    # count occurrences of each label
    label_counts = {}
    for featVec in dataSet:
        current_label = featVec[-1]
        if current_label not in label_counts:
            label_counts[current_label] = 0
        label_counts[current_label] += 1
    # empirical entropy
    shannon_ent = 0
    for key in label_counts:
        prob = float(label_counts[key]) / numEntries
        shannon_ent -= prob * log(prob, 2)
    return shannon_ent

# Output:
# 0.9709505944546686

3. Computing the Information Gain

$$g(D,A) = H(D) - H(D|A)$$

def splitDataSet(data_set, axis, value):
    ret_dataset = []
    for feat_vec in data_set:
        if feat_vec[axis] == value:
            reduced_feat_vec = feat_vec[:axis]
            reduced_feat_vec.extend(feat_vec[axis + 1:])
            ret_dataset.append(reduced_feat_vec)
    return ret_dataset


def chooseBestFeatureToSplit(dataSet):
    # number of features
    num_features = len(dataSet[0]) - 1
    # Shannon entropy of the whole dataset
    base_entropy = calcShannonEnt(dataSet)
    # best information gain so far
    best_info_gain = 0.0
    # index of the best feature
    best_feature = -1
    # iterate over all features
    for i in range(num_features):
        # collect the i-th feature of every sample
        feat_list = [exampel[i] for exampel in dataSet]
        # a set keeps each value only once
        unique_val = set(feat_list)
        # empirical conditional entropy
        new_entropy = 0.0
        for value in unique_val:
            # sub_dataset: the subset after splitting on this value
            sub_dataset = splitDataSet(dataSet, i, value)
            # probability (weight) of the subset
            prob = len(sub_dataset) / float(len(dataSet))
            # accumulate the empirical conditional entropy
            new_entropy += prob * calcShannonEnt(sub_dataset)
        # information gain
        info_gain = base_entropy - new_entropy
        print("Information gain of feature %d: %.3f" % (i, info_gain))
        if info_gain > best_info_gain:
            # update the best gain and remember the feature index
            best_info_gain = info_gain
            best_feature = i
    print("Best feature index: " + str(best_feature))
    print()
    return best_feature

# Output:
# Information gain of feature 0: 0.083
# Information gain of feature 1: 0.324
# Information gain of feature 2: 0.420
# Information gain of feature 3: 0.363
# Best feature index: 2

4. Generating the Tree

import operator

def majority_cnt(class_list):
    class_count = {}
    # count occurrences of each class in class_list
    for vote in class_list:
        if vote not in class_count:
            class_count[vote] = 0
        class_count[vote] += 1
    # sort by count, descending
    sorted_class_count = sorted(class_count.items(), key=operator.itemgetter(1), reverse=True)
    return sorted_class_count[0][0]


def creat_tree(dataSet, labels, featLabels):
    # class labels (grant the loan: yes or no)
    class_list = [exampel[-1] for exampel in dataSet]
    # stop if all classes are identical
    if class_list.count(class_list[0]) == len(class_list):
        return class_list[0]
    # all features used up: return the most frequent class
    if len(dataSet[0]) == 1:
        return majority_cnt(class_list)
    # choose the best feature
    best_feature = chooseBestFeatureToSplit(dataSet)
    # its label
    best_feature_label = labels[best_feature]
    featLabels.append(best_feature_label)
    # grow the tree keyed by the best feature's label
    my_tree = {best_feature_label: {}}
    # remove the used label
    del(labels[best_feature])
    # all values of the best feature in the training set
    feat_value = [exampel[best_feature] for exampel in dataSet]
    # deduplicate them
    unique_vls = set(feat_value)
    for value in unique_vls:
        my_tree[best_feature_label][value] = creat_tree(splitDataSet(dataSet, best_feature, value),
                                                       labels, featLabels)
    return my_tree

# Output:
# {'有自己的房子': {0: {'有工作': {0: 'no', 1: 'yes'}}, 1: 'yes'}}

5. Computing the Tree's Depth and Leaf Count

def get_num_leaves(my_tree):
    num_leaves = 0
    first_str = next(iter(my_tree))
    second_dict = my_tree[first_str]
    for key in second_dict.keys():
        if type(second_dict[key]).__name__ == 'dict':
            num_leaves += get_num_leaves(second_dict[key])
        else:
            num_leaves += 1
    return num_leaves


def get_tree_depth(my_tree):
    max_depth = 0  # depth of the tree
    # in Python 3, myTree.keys() returns a dict_keys view rather than a list,
    # so myTree.keys()[0] no longer works; use next(iter(myTree)) or list(myTree.keys())[0]
    first_str = next(iter(my_tree))
    second_dict = my_tree[first_str]  # the next nested dict
    for key in second_dict.keys():
        # a dict value means an internal node; anything else is a leaf
        if type(second_dict[key]).__name__ == 'dict':
            this_depth = 1 + get_tree_depth(second_dict[key])
        else:
            this_depth = 1
        if this_depth > max_depth:
            max_depth = this_depth  # update the depth
    return max_depth

# Output:
# 2
# 3

6. Predicting Unseen Data

def classify(input_tree, feat_labels, test_vec):
    # root feature of the (sub)tree
    first_str = next(iter(input_tree))
    # the nested dict beneath it
    second_dict = input_tree[first_str]
    feat_index = feat_labels.index(first_str)
    for key in second_dict.keys():
        if test_vec[feat_index] == key:
            if type(second_dict[key]).__name__ == 'dict':
                class_label = classify(second_dict[key], feat_labels, test_vec)
            else:
                class_label = second_dict[key]
    return class_label

# test
testVec = [0, 1, 1, 1]
result = classify(myTree, featLabels, testVec)
if result == 'yes':
    print('Grant the loan')
if result == 'no':
    print('Deny the loan')

# Output:
# Grant the loan

7. Storing and Loading the Tree (in Binary)

import pickle

def storeTree(input_tree, filename):
    # save the tree
    with open(filename, 'wb') as fw:
        pickle.dump(input_tree, fw)


def grabTree(filename):
    # load the tree
    with open(filename, 'rb') as fr:
        return pickle.load(fr)

8. Complete Code

from math import log
import operator
import pickle


def calcShannonEnt(dataSet):
    # number of samples
    numEntries = len(dataSet)
    # count occurrences of each label
    label_counts = {}
    for featVec in dataSet:
        current_label = featVec[-1]
        if current_label not in label_counts:
            label_counts[current_label] = 0
        label_counts[current_label] += 1
    # empirical entropy
    shannon_ent = 0
    for key in label_counts:
        prob = float(label_counts[key]) / numEntries
        shannon_ent -= prob * log(prob, 2)
    return shannon_ent


def splitDataSet(data_set, axis, value):
    ret_dataset = []
    for feat_vec in data_set:
        if feat_vec[axis] == value:
            reduced_feat_vec = feat_vec[:axis]
            reduced_feat_vec.extend(feat_vec[axis + 1:])
            ret_dataset.append(reduced_feat_vec)
    return ret_dataset


def chooseBestFeatureToSplit(dataSet):
    # number of features
    num_features = len(dataSet[0]) - 1
    # Shannon entropy of the whole dataset
    base_entropy = calcShannonEnt(dataSet)
    best_info_gain = 0.0
    best_feature = -1
    for i in range(num_features):
        # collect the i-th feature of every sample
        feat_list = [exampel[i] for exampel in dataSet]
        unique_val = set(feat_list)
        # empirical conditional entropy
        new_entropy = 0.0
        for value in unique_val:
            sub_dataset = splitDataSet(dataSet, i, value)
            prob = len(sub_dataset) / float(len(dataSet))
            new_entropy += prob * calcShannonEnt(sub_dataset)
        info_gain = base_entropy - new_entropy
        print("Information gain of feature %d: %.3f" % (i, info_gain))
        if info_gain > best_info_gain:
            best_info_gain = info_gain
            best_feature = i
    print("Best feature index: " + str(best_feature))
    print()
    return best_feature


def majority_cnt(class_list):
    class_count = {}
    # count occurrences of each class
    for vote in class_list:
        if vote not in class_count:
            class_count[vote] = 0
        class_count[vote] += 1
    # sort by count, descending
    sorted_class_count = sorted(class_count.items(), key=operator.itemgetter(1), reverse=True)
    return sorted_class_count[0][0]


def creat_tree(dataSet, labels, featLabels):
    # class labels (grant the loan: yes or no)
    class_list = [exampel[-1] for exampel in dataSet]
    # stop if all classes are identical
    if class_list.count(class_list[0]) == len(class_list):
        return class_list[0]
    # all features used up: return the most frequent class
    if len(dataSet[0]) == 1:
        return majority_cnt(class_list)
    best_feature = chooseBestFeatureToSplit(dataSet)
    best_feature_label = labels[best_feature]
    featLabels.append(best_feature_label)
    my_tree = {best_feature_label: {}}
    # remove the used label
    del(labels[best_feature])
    feat_value = [exampel[best_feature] for exampel in dataSet]
    unique_vls = set(feat_value)
    for value in unique_vls:
        my_tree[best_feature_label][value] = creat_tree(splitDataSet(dataSet, best_feature, value),
                                                       labels, featLabels)
    return my_tree


def get_num_leaves(my_tree):
    num_leaves = 0
    first_str = next(iter(my_tree))
    second_dict = my_tree[first_str]
    for key in second_dict.keys():
        if type(second_dict[key]).__name__ == 'dict':
            num_leaves += get_num_leaves(second_dict[key])
        else:
            num_leaves += 1
    return num_leaves


def get_tree_depth(my_tree):
    max_depth = 0
    first_str = next(iter(my_tree))
    second_dict = my_tree[first_str]
    for key in second_dict.keys():
        if type(second_dict[key]).__name__ == 'dict':
            this_depth = 1 + get_tree_depth(second_dict[key])
        else:
            this_depth = 1
        if this_depth > max_depth:
            max_depth = this_depth
    return max_depth


def classify(input_tree, feat_labels, test_vec):
    first_str = next(iter(input_tree))
    second_dict = input_tree[first_str]
    feat_index = feat_labels.index(first_str)
    for key in second_dict.keys():
        if test_vec[feat_index] == key:
            if type(second_dict[key]).__name__ == 'dict':
                class_label = classify(second_dict[key], feat_labels, test_vec)
            else:
                class_label = second_dict[key]
    return class_label


def storeTree(input_tree, filename):
    # save the tree
    with open(filename, 'wb') as fw:
        pickle.dump(input_tree, fw)


def grabTree(filename):
    # load the tree
    with open(filename, 'rb') as fr:
        return pickle.load(fr)


if __name__ == "__main__":
    # dataset
    dataSet = [[0, 0, 0, 0, 'no'],
               [0, 0, 0, 1, 'no'],
               [0, 1, 0, 1, 'yes'],
               [0, 1, 1, 0, 'yes'],
               [0, 0, 0, 0, 'no'],
               [1, 0, 0, 0, 'no'],
               # [1, 0, 0, 0, 'yes'],
               [1, 0, 0, 1, 'no'],
               [1, 1, 1, 1, 'yes'],
               [1, 0, 1, 2, 'yes'],
               [1, 0, 1, 2, 'yes'],
               [2, 0, 1, 2, 'yes'],
               [2, 0, 1, 1, 'yes'],
               [2, 1, 0, 1, 'yes'],
               [2, 1, 0, 2, 'yes'],
               [2, 0, 0, 0, 'no']]
    # feature labels
    labels = ['年龄', '有工作', '有自己的房子', '信贷情况']
    print(dataSet)
    print()
    print(calcShannonEnt(dataSet))
    print()
    featLabels = []
    myTree = creat_tree(dataSet, labels, featLabels)
    print(myTree)
    print(get_tree_depth(myTree))
    print(get_num_leaves(myTree))
    # test data
    testVec = [0, 1, 1, 1]
    result = classify(myTree, featLabels, testVec)
    if result == 'yes':
        print('Grant the loan')
    if result == 'no':
        print('Deny the loan')
    # save the tree
    storeTree(myTree, 'classifierStorage.txt')
    # load it back
    myTree2 = grabTree('classifierStorage.txt')
    print(myTree2)
    testVec2 = [1, 0]  # [owns a house, has a job], in featLabels order
    result2 = classify(myTree2, featLabels, testVec2)
    if result2 == 'yes':
        print('Grant the loan')
    if result2 == 'no':
        print('Deny the loan')
# Output:
# [[0, 0, 0, 0, 'no'], [0, 0, 0, 1, 'no'], [0, 1, 0, 1, 'yes'], [0, 1, 1, 0, 'yes'], [0, 0, 0, 0, 'no'], [1, 0, 0, 0, 'no'], [1, 0, 0, 1, 'no'], [1, 1, 1, 1, 'yes'], [1, 0, 1, 2, 'yes'], [1, 0, 1, 2, 'yes'], [2, 0, 1, 2, 'yes'], [2, 0, 1, 1, 'yes'], [2, 1, 0, 1, 'yes'], [2, 1, 0, 2, 'yes'], [2, 0, 0, 0, 'no']]
#
# 0.9709505944546686
#
# Information gain of feature 0: 0.083
# Information gain of feature 1: 0.324
# Information gain of feature 2: 0.420
# Information gain of feature 3: 0.363
# Best feature index: 2
#
# Information gain of feature 0: 0.252
# Information gain of feature 1: 0.918
# Information gain of feature 2: 0.474
# Best feature index: 1
#
# {'有自己的房子': {0: {'有工作': {0: 'no', 1: 'yes'}}, 1: 'yes'}}
# 2
# 3
# Grant the loan
# {'有自己的房子': {0: {'有工作': {0: 'no', 1: 'yes'}}, 1: 'yes'}}
# Grant the loan

IV. The C4.5 Decision Tree

C4.5 builds on and optimizes ID3: the node-splitting criterion is replaced by the information gain ratio, continuous values can be handled, missing values can be dealt with, and pruning is supported.

Information gain ratio

The gain ratio normalizes the information gain with a "split information" value. The split information, analogous to Info(D), is defined as:

$$SplitInfo_A(D) = -\sum_{j=1}^{v} \frac{|D_j|}{|D|} \log_2\!\left(\frac{|D_j|}{|D|}\right)$$

This value represents the information generated by splitting the training set D into the v partitions corresponding to the v outcomes of the test on attribute A. The gain ratio is then defined as:

$$GainRatio(A) = \frac{Gain(A)}{SplitInfo_A(D)}$$

where Gain(A) is the g(D,A) introduced in the ID3 section above.

The attribute with the largest gain ratio is chosen as the splitting attribute.

When an attribute has many values, its information gain grows, but so does the attribute's own entropy, so the resulting gain ratio is not necessarily large. This largely avoids ID3's bias toward attributes with many values.
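The two formulas above can be sketched in a few lines (illustrative only, not part of the examples below; the 0.420 gain and the 9/6 branch sizes for the "owns a house" feature are taken from the ID3 example above):

```python
from math import log2

def split_info(branch_sizes):
    # SplitInfo_A(D) = -sum(|Dj|/|D| * log2(|Dj|/|D|))
    total = sum(branch_sizes)
    return -sum(n / total * log2(n / total) for n in branch_sizes if n)

def gain_ratio(gain, branch_sizes):
    # GainRatio(A) = Gain(A) / SplitInfo_A(D)
    return gain / split_info(branch_sizes)

# 'owns a house' splits the 15 loan samples into branches of 9 and 6,
# with information gain 0.420 (see the ID3 example)
print(gain_ratio(0.420, [9, 6]))  # ≈ 0.433
```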

Tree construction is otherwise the same as for ID3 above, so it is not repeated here.

V. The CART Decision Tree

CART (Classification And Regression Trees) comes in two forms: classification trees and regression trees. As the names suggest, classification trees handle classification problems and regression trees handle regression problems; classification and regression are two major directions in machine learning. A classification problem outputs the class of a feature vector, while a regression problem outputs a predicted value for it.

A classification tree is similar to the ID3 and C4.5 trees: all of them handle classification. The difference lies in the splitting method: a classification tree uses the Gini index to make binary splits, as the figure shows.

A regression tree handles regression: it fits the known data and predicts the target value for data whose target is unknown. In the figure, s is a split point, x a feature, and y the target variable; Figure 2 shows the feature space partitioned by the split points s, with y the output value on each cell. The key questions for a regression tree are how to choose split points, how to partition the dataset with them, and how to predict the value of y.

The Gini index

The purity of a dataset D can be measured by its Gini value:

$$Gini(D) = \sum_{k=1}^{|y|} \sum_{k' \neq k} p_k p_{k'} = 1 - \sum_{k=1}^{|y|} p_k^2$$

Intuitively, Gini(D) is the probability that two samples drawn at random from D carry different class labels, so the smaller Gini(D), the purer D.

The Gini index of attribute a is defined as

$$Gini\_index(D,a) = \sum_{v=1}^{V} \frac{|D^v|}{|D|}\, Gini(D^v)$$

From the candidate attribute set A we then choose the attribute that minimizes the post-split Gini index: $a_* = \arg\min_{a \in A} Gini\_index(D,a)$.
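These two formulas can be checked directly (a toy sketch, not the CART code used later; the 6 vs 3+6 "owns a house" split of the loan labels is borrowed from the ID3 example above):

```python
def gini(labels):
    # Gini(D) = 1 - sum(p_k^2)
    n = len(labels)
    return 1 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def gini_index(branches):
    # Gini_index(D, a) = sum(|D^v|/|D| * Gini(D^v))
    total = sum(len(b) for b in branches)
    return sum(len(b) / total * gini(b) for b in branches)

# splitting the 15 loan labels on 'owns a house':
# one branch of 6 'yes', one branch of 3 'yes' + 6 'no'
print(gini_index([['yes'] * 6, ['yes'] * 3 + ['no'] * 6]))  # ≈ 0.267
```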

VI. Handling Continuous and Missing Values

1. Continuous Values

Because a continuous attribute no longer has a finite number of possible values, nodes cannot be split directly on those values; continuous-attribute discretization comes to the rescue. The simplest strategy is bi-partition, which is exactly the mechanism adopted by the C4.5 decision tree algorithm [Quinlan, 1993].
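The bi-partition idea can be sketched in a couple of lines (an illustration with made-up attribute values, not C4.5 itself): candidate thresholds are the midpoints between adjacent distinct sorted values, and each candidate is then scored by information gain as usual:

```python
def candidate_splits(values):
    # midpoints between adjacent distinct sorted values
    vs = sorted(set(values))
    return [(a + b) / 2 for a, b in zip(vs, vs[1:])]

# made-up continuous attribute values
print(candidate_splits([0.697, 0.774, 0.634, 0.608]))
```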

Concrete code appears later in the CART regression tree example.

2. Missing Values

I have not prepared code for this part, so instead here is an extremely detailed blog post:

https://blog.csdn.net/u012328159/article/details/79413610

Those interested can also consult the treatment in the "watermelon book" (Zhou Zhihua's *Machine Learning*).

VII. A CART Classification Tree Example

1. The Dataset

We again use the Titanic dataset from last time, so it gets no further analysis here.

The difference from last time: to obtain a binary tree, after discretizing every feature the node tests split samples into "1" versus "not 1" only.

2. Imports

import csv
import operator
import copy
import numpy as np

3. Reading the Dataset

def loadDataset(filename):
    with open(filename, 'r') as f:
        lines = csv.reader(f)
        data_set = list(lines)
    if filename != 'titanic.csv':
        for i in range(len(data_set)):
            del(data_set[i][0])
    # tidy up the columns
    for i in range(len(data_set)):
        del(data_set[i][0])
        del(data_set[i][2])
        data_set[i][4] += data_set[i][5]
        del(data_set[i][5])
        del(data_set[i][5])
        del(data_set[i][6])
        del(data_set[i][-1])
    category = data_set[0]
    del(data_set[0])
    # convert the data types
    for data in data_set:
        data[0] = int(data[0])
        data[1] = int(data[1])
        if data[3] != '':
            data[3] = float(data[3])
        else:
            data[3] = None
        data[4] = float(data[4])
        data[5] = float(data[5])
    # fill missing values, recode, and discretize
    for data in data_set:
        if data[3] is None:
            data[3] = 28
        # male: 1, female: 0
        if data[2] == 'male':
            data[2] = 1
        else:
            data[2] = 0
        # age: the plan was < 25 -> 0, 25-30 -> 1, >= 31 -> 2
        if data[3] < 60:  # but a cut at 60 tested with the best accuracy (?!)
            data[3] = 0
        else:
            data[3] = 1
        # sibsp & parch: below 2 -> 0, otherwise 1
        if data[4] < 2:
            data[4] = 0
        else:
            data[4] = 1
        # fare: threshold 64
        if data[-1] < 64:
            data[-1] = 0
        else:
            data[-1] = 1
    return data_set, category

4. Computing the Gini Index

The Gini index is computed according to the formula above.

def gini(data, i):
    num = len(data)
    label_counts = [0, 0, 0, 0]
    p_count = [0, 0, 0, 0]
    gini_count = [0, 0, 0, 0]
    # count samples per value of feature i
    for d in data:
        label_counts[d[i]] += 1
    # count survivors (class 1) per value
    for l in range(len(label_counts)):
        for d in data:
            if label_counts[l] != 0 and d[0] == 1 and d[i] == l:
                p_count[l] += 1
    print(label_counts)
    print(p_count)
    # Gini value of each branch: 2p(1-p) for two classes
    for l in range(len(label_counts)):
        if label_counts[l] != 0:
            gini_count[l] = 2 * (p_count[l] / label_counts[l]) * (1 - p_count[l] / label_counts[l])
    # weighted sum over the branches
    gini_p = 0
    for l in range(len(gini_count)):
        gini_p += (label_counts[l] / num) * gini_count[l]
    print(gini_p)
    return gini_p

5. Selecting the Splitting Attribute for a Node

def get_best_feature(data, category):
    if len(category) == 2:
        return 1, category[1]
    feature_num = len(category) - 1
    feature_gini = []
    for i in range(1, feature_num + 1):
        feature_gini.append(gini(data, i))
    # index of the smallest Gini index
    min_idx = 0
    for i in range(len(feature_gini)):
        if feature_gini[i] < feature_gini[min_idx]:
            min_idx = i
    print(feature_gini)
    print(category)
    print(min_idx + 1)
    print(category[min_idx + 1])
    return min_idx + 1, category[min_idx + 1]

6. Tree Nodes

Rather than building the binary tree with dicts (already done for ID3), here we build a linked tree of node objects.

class Node(object):
    def __init__(self, item):
        self.name = item
        self.lchild = None
        self.rchild = None

7. Generating the Tree

This involves a pre-pruning step, which is discussed again later.

def majority_cnt(class_list):
    class_count = {}
    # count occurrences of each class in class_list
    for vote in class_list:
        if vote not in class_count:
            class_count[vote] = 0
        class_count[vote] += 1
    # sort by count, descending
    sorted_class_count = sorted(class_count.items(), key=operator.itemgetter(1), reverse=True)
    return sorted_class_count[0][0]


def creat_tree(data, labels, feature_labels=[]):
    # three stopping conditions
    # class labels (survived or died)
    class_list = [exampel[0] for exampel in data]
    if class_list == []:
        return Node(0)
    # stop if all classes are identical
    if class_list.count(class_list[0]) == len(class_list):
        return Node(class_list[0])
    # all features used up: most frequent class
    if len(data[0]) == 1:
        return Node(majority_cnt(class_list))
    # the best feature and its label
    best_feature_num, best_feature_label = get_best_feature(data, labels)
    feature_labels.append(best_feature_label)
    node = Node(best_feature_label)
    ldata = []
    rdata = []
    for d in data:
        if d[best_feature_num] == 1:
            del(d[best_feature_num])
            ldata.append(d)
        else:
            del(d[best_feature_num])
            rdata.append(d)
    labels2 = copy.deepcopy(labels)
    del(labels2[best_feature_num])
    tree = node
    tree.lchild = creat_tree(ldata, labels2, feature_labels)
    tree.rchild = creat_tree(rdata, labels2, feature_labels)
    return tree

8. Traversing the Tree

def breadth_travel(tree):
    """Breadth-first traversal."""
    queue = [tree]
    while queue:
        cur_node = queue.pop(0)
        print(cur_node.name, end=" ")
        if cur_node.lchild is not None:
            queue.append(cur_node.lchild)
        if cur_node.rchild is not None:
            queue.append(cur_node.rchild)
    print()

9. Prediction

def prediction(t_tree, test, labels):
    result = []
    for data in test:
        l = copy.deepcopy(labels)
        tree = t_tree
        for i in range(len(labels)):
            # a 0/1 name means we reached a leaf
            if tree.name == 1 or tree.name == 0:
                result.append(tree.name)
                break
            if len(data) == 1:
                result.append(0)
                break
            # find the feature this node tests
            j = 1
            while j < len(data) - 1:
                if tree.name == l[j]:
                    break
                j += 1
            if data[j] == 1:
                tree = tree.lchild
            else:
                tree = tree.rchild
            del(l[j])
            del(data[j])
    return result

10. Computing the Accuracy

result = prediction(my_tree, test_set, category)
counts = 0
for i in range(len(test_set)):
    if test_set[i][0] == result[i]:
        counts += 1
accuracy = counts / len(test_set)
print(accuracy)

11. Complete Code

import csv
import operator
import copy
import numpy as np

# 0  PassengerId: passenger ID                               irrelevant
# 1  Survived: whether the passenger survived, 0 = no, 1 = yes
# 2  Pclass: cabin class (1/2/3)
# 3  Name: passenger name                                    irrelevant
# 4  Sex: gender
# 5  Age: age
# 6  SibSp: number of siblings/spouses aboard
# 7  Parch: number of parents/children aboard
# 8  Ticket: ticket number                                   irrelevant
# 9  Fare: ticket price
# 10 Cabin: cabin number                                     irrelevant
# 11 Embarked: port of embarkation                           irrelevant


def loadDataset(filename):
    with open(filename, 'r') as f:
        lines = csv.reader(f)
        data_set = list(lines)
    if filename != 'titanic.csv':
        for i in range(len(data_set)):
            del(data_set[i][0])
    # tidy up the columns
    for i in range(len(data_set)):
        del(data_set[i][0])
        del(data_set[i][2])
        data_set[i][4] += data_set[i][5]
        del(data_set[i][5])
        del(data_set[i][5])
        del(data_set[i][6])
        del(data_set[i][-1])
    category = data_set[0]
    del(data_set[0])
    # convert the data types
    for data in data_set:
        data[0] = int(data[0])
        data[1] = int(data[1])
        if data[3] != '':
            data[3] = float(data[3])
        else:
            data[3] = None
        data[4] = float(data[4])
        data[5] = float(data[5])
    # fill missing values, recode, and discretize
    for data in data_set:
        if data[3] is None:
            data[3] = 28
        # male: 1, female: 0
        if data[2] == 'male':
            data[2] = 1
        else:
            data[2] = 0
        # age: the plan was < 25 -> 0, 25-30 -> 1, >= 31 -> 2
        if data[3] < 60:  # but a cut at 60 tested with the best accuracy (?!)
            data[3] = 0
        else:
            data[3] = 1
        # sibsp & parch: below 2 -> 0, otherwise 1
        if data[4] < 2:
            data[4] = 0
        else:
            data[4] = 1
        # fare: threshold 64
        if data[-1] < 64:
            data[-1] = 0
        else:
            data[-1] = 1
    return data_set, category


def gini(data, i):
    num = len(data)
    label_counts = [0, 0, 0, 0]
    p_count = [0, 0, 0, 0]
    gini_count = [0, 0, 0, 0]
    # count samples per value of feature i
    for d in data:
        label_counts[d[i]] += 1
    # count survivors (class 1) per value
    for l in range(len(label_counts)):
        for d in data:
            if label_counts[l] != 0 and d[0] == 1 and d[i] == l:
                p_count[l] += 1
    print(label_counts)
    print(p_count)
    # Gini value of each branch: 2p(1-p) for two classes
    for l in range(len(label_counts)):
        if label_counts[l] != 0:
            gini_count[l] = 2 * (p_count[l] / label_counts[l]) * (1 - p_count[l] / label_counts[l])
    # weighted sum over the branches
    gini_p = 0
    for l in range(len(gini_count)):
        gini_p += (label_counts[l] / num) * gini_count[l]
    print(gini_p)
    return gini_p


def get_best_feature(data, category):
    if len(category) == 2:
        return 1, category[1]
    feature_num = len(category) - 1
    feature_gini = []
    for i in range(1, feature_num + 1):
        feature_gini.append(gini(data, i))
    min_idx = 0
    for i in range(len(feature_gini)):
        if feature_gini[i] < feature_gini[min_idx]:
            min_idx = i
    print(feature_gini)
    print(category)
    print(min_idx + 1)
    print(category[min_idx + 1])
    return min_idx + 1, category[min_idx + 1]


def majority_cnt(class_list):
    class_count = {}
    # count occurrences of each class
    for vote in class_list:
        if vote not in class_count:
            class_count[vote] = 0
        class_count[vote] += 1
    # sort by count, descending
    sorted_class_count = sorted(class_count.items(), key=operator.itemgetter(1), reverse=True)
    return sorted_class_count[0][0]


class Node(object):
    def __init__(self, item):
        self.name = item
        self.lchild = None
        self.rchild = None


def creat_tree(data, labels, feature_labels=[]):
    # three stopping conditions
    class_list = [exampel[0] for exampel in data]
    if class_list == []:
        return Node(0)
    # stop if all classes are identical
    if class_list.count(class_list[0]) == len(class_list):
        return Node(class_list[0])
    # all features used up: most frequent class
    if len(data[0]) == 1:
        return Node(majority_cnt(class_list))
    best_feature_num, best_feature_label = get_best_feature(data, labels)
    feature_labels.append(best_feature_label)
    node = Node(best_feature_label)
    ldata = []
    rdata = []
    for d in data:
        if d[best_feature_num] == 1:
            del(d[best_feature_num])
            ldata.append(d)
        else:
            del(d[best_feature_num])
            rdata.append(d)
    labels2 = copy.deepcopy(labels)
    del(labels2[best_feature_num])
    tree = node
    tree.lchild = creat_tree(ldata, labels2, feature_labels)
    tree.rchild = creat_tree(rdata, labels2, feature_labels)
    return tree


def breadth_travel(tree):
    """Breadth-first traversal."""
    queue = [tree]
    while queue:
        cur_node = queue.pop(0)
        print(cur_node.name, end=" ")
        if cur_node.lchild is not None:
            queue.append(cur_node.lchild)
        if cur_node.rchild is not None:
            queue.append(cur_node.rchild)
    print()


def prediction(t_tree, test, labels):
    result = []
    for data in test:
        l = copy.deepcopy(labels)
        tree = t_tree
        for i in range(len(labels)):
            # a 0/1 name means we reached a leaf
            if tree.name == 1 or tree.name == 0:
                result.append(tree.name)
                break
            if len(data) == 1:
                result.append(0)
                break
            # find the feature this node tests
            j = 1
            while j < len(data) - 1:
                if tree.name == l[j]:
                    break
                j += 1
            if data[j] == 1:
                tree = tree.lchild
            else:
                tree = tree.rchild
            del(l[j])
            del(data[j])
    return result


if __name__ == "__main__":
    test_set, category = loadDataset('titanic_test.csv')
    train_set, category = loadDataset('titanic_train.csv')
    print(category)
    print(train_set)
    print()
    print(test_set)
    my_tree = creat_tree(train_set, category)
    print(my_tree)
    breadth_travel(my_tree)
    print(category)
    print(test_set)
    test = copy.deepcopy(test_set)
    result = prediction(my_tree, test_set, category)
    print(len(test_set))
    print(result)
    counts = 0
    for i in range(len(test_set)):
        if test_set[i][0] == result[i]:
            counts += 1
    print(counts)
    accuracy = counts / len(test_set)
    print(accuracy)

VIII. The CART Regression Tree

1. How It Works

A CART regression tree predicts continuous targets. Let X and Y be the input and output variables, with Y continuous. In the input space of the training data, we recursively split each region into two subregions, decide the output value on each subregion, and thereby build a binary decision tree.

Choosing the optimal splitting variable j and split point s: iterate over the variables j and, for each fixed j, scan the split points s, choosing the pair (j, s) that minimizes the expression below, where Rm are the regions being carved out of the input space and cm is the fixed output value on region Rm:

$$\min_{j,s}\left[\min_{c_1}\sum_{x_i \in R_1(j,s)} (y_i - c_1)^2 + \min_{c_2}\sum_{x_i \in R_2(j,s)} (y_i - c_2)^2\right]$$

With the chosen pair (j, s), partition the region and determine the corresponding output values.

Repeat the procedure on the two subregions until the input space is divided into M regions R1, R2, …, RM, which yields the decision tree.

Once the partition of the input space is fixed, the tree's prediction error on the training data can be expressed as the squared error, and the optimal output value on each cell is the one that minimizes that squared error.
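Concretely (a toy sketch with made-up target values, independent of the code below), the squared-error-minimizing constant on a cell is the mean of the targets in that cell, and a candidate split is scored by the sum of the two cells' squared errors:

```python
def best_constant(ys):
    # the squared-error-minimizing output value on a region is the mean
    return sum(ys) / len(ys)

def squared_error(ys):
    # total squared error against the optimal constant
    c = best_constant(ys)
    return sum((y - c) ** 2 for y in ys)

# made-up target values on the two sides of a candidate split
left, right = [4.5, 4.8, 5.1], [6.0, 6.4]
print(squared_error(left) + squared_error(right))  # ≈ 0.26
```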

2. The Dataset

This time we use the forestfires dataset; its columns are:

# 1. X — x-axis spatial coordinate within the Montesinho park map: 1 to 9
# 2. Y — y-axis spatial coordinate within the Montesinho park map: 2 to 9
# 3. month — month of the year: 'jan' to 'dec'
# 4. day — day of the week: 'mon' to 'sun'
# 5. FFMC — FFMC index from the FWI system: 18.7 to 96.20
# 6. DMC — DMC index from the FWI system: 1.1 to 291.3
# 7. DC — DC index from the FWI system: 7.9 to 860.6
# 8. ISI — ISI index from the FWI system: 0.0 to 56.10
# 9. temp — temperature in degrees Celsius: 2.2 to 33.30
# 10. RH — relative humidity in %: 15.0 to 100
# 11. wind — wind speed in km/h: 0.40 to 9.40
# 12. rain — outside rain in mm/m2: 0.0 to 6.4
# 13. area — burned area of the forest in ha: 0.00 to 1090.84
#     (this output variable is heavily skewed toward 0.0, hence the log transform)

3. Imports

import csv
import copy
import calendar  # used to convert month names to numbers
from math import log1p
from math import sqrt
from graphviz import Digraph

4. Reading and Preprocessing the Dataset

day is an irrelevant attribute and is removed; area, per the dataset description, gets a log transform; month is converted from a string to a number.

def loadDataset(filename):
    with open(filename, 'r') as f:
        lines = csv.reader(f)
        data_set = list(lines)
    for i in range(len(data_set)):
        del(data_set[i][0])
    # tidy up: drop the day column
    for i in range(len(data_set)):
        del(data_set[i][3])
    category = data_set[0]
    del(data_set[0])
    for i in range(len(data_set)):
        for j in range(len(data_set[0])):
            if j == 2:
                # month name -> month number
                data_set[i][j] = str.capitalize(data_set[i][j])
                data_set[i][j] = list(calendar.month_abbr).index(data_set[i][j])
            data_set[i][j] = float(data_set[i][j])
            if j == 11:
                # log transform of the heavily skewed burned area
                data_set[i][j] = log1p(data_set[i][j] + 1)
    return data_set, category

5. Separating area from the Features

def split_data(data):
    data_set = copy.deepcopy(data)
    data_mat = []
    label_mat = []
    for i in range(len(data_set)):
        label_mat.append(data_set[i][-1])
        del(data_set[i][-1])
        data_mat.append(data_set[i])
    print(data_mat)
    print(label_mat)
    return data_mat, label_mat

6. Computing the Error

def mse(data_set, split_value, result):
    # split the target values by whether the feature value is <= split_value
    left_num = 0
    left_sum = 0
    left_list = []
    right_num = 0
    right_sum = 0
    right_list = []
    for i in range(len(data_set)):
        if data_set[i] <= split_value:
            left_num += 1
            left_sum += result[i]
            left_list.append(result[i])
        else:
            right_num += 1
            right_sum += result[i]
            right_list.append(result[i])
    # optimal constants: the mean on each side
    c1 = left_sum / left_num
    c2 = right_sum / right_num
    # total squared error of the split
    m = 0
    for i in range(len(left_list)):
        m += pow((left_list[i] - c1), 2)
    for i in range(len(right_list)):
        m += pow((right_list[i] - c2), 2)
    return m

7. Choosing the Split Value and Feature for a Node

def get_best_split_value(data, result):
    # distinct feature values, sorted
    data_set = sorted(set(data))
    length = len(data_set)
    if length == 1:
        return float("Inf"), float("Inf")
    print(data_set)
    # candidate split points: midpoints of adjacent distinct values
    split_value = []
    for i in range(length - 1):
        split_value.append((data_set[i + 1] + data_set[i]) / 2)
    # score every candidate by its squared error
    m = []
    for i in range(len(split_value)):
        m.append(mse(data, split_value[i], result))
    min_mse = 0
    for i in range(len(m)):
        if m[i] < m[min_mse]:
            min_mse = i
    print(m)
    return split_value[min_mse], m[min_mse]


def get_best_feature(data, category):
    length = len(category) - 1
    data_set, result = split_data(data)
    feature_mse = []
    split_feature_value = []
    feature_values = []
    # collect the values of each feature column
    for i in range(length):
        feature_mse.append(0)
        split_feature_value.append(0)
        feature_values.append([])
        for j in range(len(data_set)):
            feature_values[i].append(data_set[j][i])
    # best split point and error for each feature
    for i in range(length):
        split_feature_value[i], feature_mse[i] = get_best_split_value(feature_values[i], result)
    min_f = 0
    for i in range(length):
        if feature_mse[i] < feature_mse[min_f]:
            min_f = i
    print(feature_mse)
    return min_f, split_feature_value[min_f]

8. Tree Nodes

class Node(object):
    def __init__(self, category, item):
        self.name = category
        self.elem = item
        self.lchild = None
        self.rchild = None

9. Computing the Regression Value at a Leaf

def leaf_value(data):
    # mean target value of the samples at the leaf
    total = 0
    for i in range(len(data)):
        total += data[i][-1]
    return total / len(data)

10. Generating the Tree

def creat_tree(data, labels, feature_labels=[]):
    # stopping conditions
    if len(labels) == 1:
        return Node('result', leaf_value(data))
    # pre-pruning: stop once a node holds under 5% of the training samples
    if len(data) < 0.05 * len(train_set):
        return Node('result', leaf_value(data))
    # the best feature and its split value
    best_feature_num, best_feature_value = get_best_feature(data, labels)
    feature_labels.append(labels[best_feature_num])
    node = Node(labels[best_feature_num], best_feature_value)
    ldata = []
    rdata = []
    for d in data:
        if d[best_feature_num] <= best_feature_value:
            del(d[best_feature_num])
            ldata.append(d)
        else:
            del(d[best_feature_num])
            rdata.append(d)
    labels2 = copy.deepcopy(labels)
    del(labels2[best_feature_num])
    tree = node
    tree.lchild = creat_tree(ldata, labels2, feature_labels)
    tree.rchild = creat_tree(rdata, labels2, feature_labels)
    return tree

11. Traversing the Tree

def breadth_travel(tree):
    """Breadth-first traversal."""
    queue = [tree]
    while queue:
        cur_node = queue.pop(0)
        print(cur_node.name, end=" ")
        print(cur_node.elem, end=" ")
        if cur_node.lchild is not None:
            queue.append(cur_node.lchild)
        if cur_node.rchild is not None:
            queue.append(cur_node.rchild)
    print()

12. Prediction

def prediction(t_tree, test, labels):
    result = []
    test_mat, x = split_data(test)
    for data in test_mat:
        l = copy.deepcopy(labels)
        tree = t_tree
        for i in range(len(labels)):
            # a 'result' node is a leaf
            if tree.name == "result":
                result.append(tree.elem)
                break
            # find the feature this node tests
            j = 0
            while j < len(l):
                if tree.name == l[j]:
                    break
                j += 1
            if data[j] <= tree.elem:
                tree = tree.lchild
            else:
                tree = tree.rchild
            del(l[j])
            del(data[j])
    return result

13. Converting the Linked Tree to a Dict Tree

def tree_to_dict(tree, tree_dict):
    # a node with no children becomes a plain value
    if tree.lchild is None and tree.rchild is None:
        tree_dict[tree.name + str(tree.elem)] = str(tree.elem)
        return tree_dict[tree.name + str(tree.elem)]
    tree_dict[tree.name + str(tree.elem)] = {}
    if tree.lchild is not None:
        tree_to_dict(tree.lchild, tree_dict[tree.name + str(tree.elem)])
    if tree.rchild is not None:
        tree_to_dict(tree.rchild, tree_dict[tree.name + str(tree.elem)])
    return tree_dict

14. Visualizing the Tree

def plot_model(tree, name):
    g = Digraph("G", filename=name, format='png', strict=False)
    first_label = list(tree.keys())[0]
    g.node("0", first_label)
    _sub_plot(g, tree, "0")
    g.view()


root = "0"


def _sub_plot(g, tree, inc):
    global root
    first_label = list(tree.keys())[0]
    ts = tree[first_label]
    if type(ts).__name__ != 'dict':
        root = str(int(root) + 1)
        g.node(root, str(tree[first_label]))
        g.edge(inc, root, str(ts))
        return
    for i in ts.keys():
        if isinstance(tree[first_label][i], dict):
            root = str(int(root) + 1)
            g.node(root, list(tree[first_label][i].keys())[0])
            g.edge(inc, root, str(i))
            _sub_plot(g, tree[first_label][i], root)
        else:
            root = str(int(root) + 1)
            g.node(root, str(tree[first_label][i]))
            g.edge(inc, root, str(i))

15. Complete Code

import csv
import copy
import calendar
from math import log1p
from math import sqrt
from graphviz import Digraph

# 1. X — x-axis spatial coordinate within the Montesinho park map: 1 to 9
# 2. Y — y-axis spatial coordinate within the Montesinho park map: 2 to 9
# 3. month — month of the year: 'jan' to 'dec'
# 4. day — day of the week: 'mon' to 'sun'
# 5. FFMC — FFMC index from the FWI system: 18.7 to 96.20
# 6. DMC — DMC index from the FWI system: 1.1 to 291.3
# 7. DC — DC index from the FWI system: 7.9 to 860.6
# 8. ISI — ISI index from the FWI system: 0.0 to 56.10
# 9. temp — temperature in degrees Celsius: 2.2 to 33.30
# 10. RH — relative humidity in %: 15.0 to 100
# 11. wind — wind speed in km/h: 0.40 to 9.40
# 12. rain — outside rain in mm/m2: 0.0 to 6.4
# 13. area — burned area of the forest in ha: 0.00 to 1090.84
#     (this output variable is heavily skewed toward 0.0, hence the log transform)


def loadDataset(filename):
    with open(filename, 'r') as f:
        lines = csv.reader(f)
        data_set = list(lines)
    for i in range(len(data_set)):
        del(data_set[i][0])
    # tidy up: drop the day column
    for i in range(len(data_set)):
        del(data_set[i][3])
    category = data_set[0]
    del(data_set[0])
    for i in range(len(data_set)):
        for j in range(len(data_set[0])):
            if j == 2:
                # month name -> month number
                data_set[i][j] = str.capitalize(data_set[i][j])
                data_set[i][j] = list(calendar.month_abbr).index(data_set[i][j])
            data_set[i][j] = float(data_set[i][j])
            if j == 11:
                # log transform of the heavily skewed burned area
                data_set[i][j] = log1p(data_set[i][j] + 1)
    return data_set, category


def split_data(data):
    data_set = copy.deepcopy(data)
    data_mat = []
    label_mat = []
    for i in range(len(data_set)):
        label_mat.append(data_set[i][-1])
        del(data_set[i][-1])
        data_mat.append(data_set[i])
    print(data_mat)
    print(label_mat)
    return data_mat, label_mat


def mse(data_set, split_value, result):
    # split the target values by whether the feature value is <= split_value
    left_num = 0
    left_sum = 0
    left_list = []
    right_num = 0
    right_sum = 0
    right_list = []
    for i in range(len(data_set)):
        if data_set[i] <= split_value:
            left_num += 1
            left_sum += result[i]
            left_list.append(result[i])
        else:
            right_num += 1
            right_sum += result[i]
            right_list.append(result[i])
    # optimal constants: the mean on each side
    c1 = left_sum / left_num
    c2 = right_sum / right_num
    # total squared error of the split
    m = 0
    for i in range(len(left_list)):
        m += pow((left_list[i] - c1), 2)
    for i in range(len(right_list)):
        m += pow((right_list[i] - c2), 2)
    return m


def get_best_split_value(data, result):
    # distinct feature values, sorted
    data_set = sorted(set(data))
    length = len(data_set)
    if length == 1:
        return float("Inf"), float("Inf")
    print(data_set)
    # candidate split points: midpoints of adjacent distinct values
    split_value = []
    for i in range(length - 1):
        split_value.append((data_set[i + 1] + data_set[i]) / 2)
    m = []
    for i in range(len(split_value)):
        m.append(mse(data, split_value[i], result))
    min_mse = 0
    for i in range(len(m)):
        if m[i] < m[min_mse]:
            min_mse = i
    print(m)
    return split_value[min_mse], m[min_mse]


def get_best_feature(data, category):
    length = len(category) - 1
    data_set, result = split_data(data)
    feature_mse = []
    split_feature_value = []
    feature_values = []
    # collect the values of each feature column
    for i in range(length):
        feature_mse.append(0)
        split_feature_value.append(0)
        feature_values.append([])
        for j in range(len(data_set)):
            feature_values[i].append(data_set[j][i])
    # best split point and error for each feature
    for i in range(length):
        split_feature_value[i], feature_mse[i] = get_best_split_value(feature_values[i], result)
    min_f = 0
    for i in range(length):
        if feature_mse[i] < feature_mse[min_f]:
            min_f = i
    print(feature_mse)
    return min_f, split_feature_value[min_f]


class Node(object):
    def __init__(self, category, item):
        self.name = category
        self.elem = item
        self.lchild = None
        self.rchild = None


def leaf_value(data):
    # mean target value of the samples at the leaf
    total = 0
    for i in range(len(data)):
        total += data[i][-1]
    return total / len(data)


def creat_tree(data, labels, feature_labels=[]):
    # stopping conditions
    if len(labels) == 1:
        return Node('result', leaf_value(data))
    # pre-pruning: stop once a node holds under 5% of the training samples
    if len(data) < 0.05 * len(train_set):
        return Node('result', leaf_value(data))
    best_feature_num, best_feature_value = get_best_feature(data, labels)
    feature_labels.append(labels[best_feature_num])
    node = Node(labels[best_feature_num], best_feature_value)
    ldata = []
    rdata = []
    for d in data:
        if d[best_feature_num] <= best_feature_value:
            del(d[best_feature_num])
            ldata.append(d)
        else:
            del(d[best_feature_num])
            rdata.append(d)
    labels2 = copy.deepcopy(labels)
    del(labels2[best_feature_num])
    tree = node
    tree.lchild = creat_tree(ldata, labels2, feature_labels)
    tree.rchild = creat_tree(rdata, labels2, feature_labels)
    return tree


def breadth_travel(tree):
    """Breadth-first traversal."""
    queue = [tree]
    while queue:
        cur_node = queue.pop(0)
        print(cur_node.name, end=" ")
        print(cur_node.elem, end=" ")
        if cur_node.lchild is not None:
            queue.append(cur_node.lchild)
        if cur_node.rchild is not None:
            queue.append(cur_node.rchild)
    print()


def prediction(t_tree, test, labels):
    result = []
    test_mat, x = split_data(test)
    for data in test_mat:
        l = copy.deepcopy(labels)
        tree = t_tree
        for i in range(len(labels)):
            # a 'result' node is a leaf
            if tree.name == "result":
                result.append(tree.elem)
                break
            j = 0
            while j < len(l):
                if tree.name == l[j]:
                    break
                j += 1
            if data[j] <= tree.elem:
                tree = tree.lchild
            else:
                tree = tree.rchild
            del(l[j])
            del(data[j])
    return result


def tree_to_dict(tree, tree_dict):
    # a node with no children becomes a plain value
    if tree.lchild is None and tree.rchild is None:
        tree_dict[tree.name + str(tree.elem)] = str(tree.elem)
        return tree_dict[tree.name + str(tree.elem)]
    tree_dict[tree.name + str(tree.elem)] = {}
    if tree.lchild is not None:
        tree_to_dict(tree.lchild, tree_dict[tree.name + str(tree.elem)])
    if tree.rchild is not None:
        tree_to_dict(tree.rchild, tree_dict[tree.name + str(tree.elem)])
    return tree_dict


def plot_model(tree, name):
    g = Digraph("G", filename=name, format='png', strict=False)
    first_label = list(tree.keys())[0]
    g.node("0", first_label)
    _sub_plot(g, tree, "0")
    g.view()


root = "0"


def _sub_plot(g, tree, inc):
    global root
    first_label = list(tree.keys())[0]
    ts = tree[first_label]
    if type(ts).__name__ != 'dict':
        root = str(int(root) + 1)
        g.node(root, str(tree[first_label]))
        g.edge(inc, root, str(ts))
        return
    for i in ts.keys():
        if isinstance(tree[first_label][i], dict):
            root = str(int(root) + 1)
            g.node(root, list(tree[first_label][i].keys())[0])
            g.edge(inc, root, str(i))
            _sub_plot(g, tree[first_label][i], root)
        else:
            root = str(int(root) + 1)
            g.node(root, str(tree[first_label][i]))
            g.edge(inc, root, str(i))
type(ts).__name__ != 'dict':root = str(int(root) + 1)g.node(root, str(tree[first_label]))g.edge(inc, root, str(ts))returnfor i in ts.keys():if isinstance(tree[first_label][i], dict):root = str(int(root) + 1)g.node(root, list(tree[first_label][i].keys())[0])g.edge(inc, root, str(i))_sub_plot(g, tree[first_label][i], root)else:root = str(int(root) + 1)g.node(root, str(tree[first_label][i]))g.edge(inc, root, str(i))if __name__ == "__main__":test_set, category = loadDataset('forestfires_test.csv')train_set, category = loadDataset('forestfires_train.csv')print(category)print(train_set)print(test_set)# a, b = get_best_feature(train_set, category)# print(a)# print(b)my_tree = creat_tree(train_set, category)breadth_travel(my_tree)result = prediction(my_tree, test_set, category)sme = 0for i in range(len(result)):sme += pow(abs(result[i]-test_set[i][-1]), 2)print(sqrt(sme/len(result)))tree_dict = {}tree_dict = tree_to_dict(my_tree, tree_dict)print(tree_dict)plot_model(tree_dict, "forestfires.gv")
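The heart of this regression tree is the split search done by mse() and get_best_split_value(): try the midpoint between every pair of adjacent unique feature values, and score each threshold by the squared error around the mean of each side. A condensed, self-contained sketch of that search (best_split is a hypothetical name and the toy data is made up for illustration):

```python
def best_split(values, targets):
    """Return (threshold, sse): the midpoint between adjacent unique values
    that minimizes the squared error around each side's mean."""
    candidates = sorted(set(values))
    best = (None, float("inf"))
    for a, b in zip(candidates, candidates[1:]):
        s = (a + b) / 2
        left = [t for v, t in zip(values, targets) if v <= s]
        right = [t for v, t in zip(values, targets) if v > s]
        c1, c2 = sum(left) / len(left), sum(right) / len(right)
        sse = sum((t - c1) ** 2 for t in left) + sum((t - c2) ** 2 for t in right)
        if sse < best[1]:
            best = (s, sse)
    return best

# The target jumps from ~1 to ~5 between x = 2 and x = 3,
# so the best threshold is x = 2.5.
print(best_split([1, 2, 3, 4], [1.0, 1.1, 5.0, 5.1]))
```

Unlike the code above, this sketch never sorts the feature list itself, so the pairing between each feature value and its target is preserved.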

九、Pruning

Since the post-pruning code I wrote never actually removed anything from the tree, I will only point to the pruning methods introduced in Zhou Zhihua's 《机器学习》 (Machine Learning) here, without attaching the related code.
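The idea can still be illustrated. Below is a minimal, self-contained sketch of reduced-error post-pruning on the nested-dict tree format used by the ID3 code later in this post; prune, classify, and majority_label are hypothetical helper names, and the feature names 'A'/'B' are made up for the example:

```python
def classify(tree, feat_labels, vec):
    # Walk a nested-dict tree of the form {'feature': {value: subtree_or_label}}.
    while isinstance(tree, dict):
        feat = next(iter(tree))
        tree = tree[feat][vec[feat_labels.index(feat)]]
    return tree

def majority_label(rows):
    # Most common true label among (vector, label) pairs.
    ys = [y for _, y in rows]
    return max(set(ys), key=ys.count)

def prune(tree, rows, feat_labels):
    # Bottom-up reduced-error pruning: first prune each branch using the
    # validation rows routed to it, then collapse this node into a leaf
    # whenever that does not lower accuracy on the rows reaching it.
    if not isinstance(tree, dict) or not rows:
        return tree
    feat = next(iter(tree))
    idx = feat_labels.index(feat)
    for v in list(tree[feat]):
        branch = [(x, y) for x, y in rows if x[idx] == v]
        tree[feat][v] = prune(tree[feat][v], branch, feat_labels)
    keep = sum(classify(tree, feat_labels, x) == y for x, y in rows)
    leaf = majority_label(rows)
    collapse = sum(leaf == y for _, y in rows)
    return leaf if collapse >= keep else tree

# On validation data where feature 'B' carries no signal, its subtree collapses:
tree = {'A': {0: {'B': {0: 'no', 1: 'yes'}}, 1: 'yes'}}
val = [([0, 0], 'no'), ([0, 1], 'no'), ([1, 0], 'yes')]
print(prune(tree, val, ['A', 'B']))  # {'A': {0: 'no', 1: 'yes'}}
```

This is the validation-set flavor of pruning described in the book: a subtree survives only if it beats the majority-vote leaf on held-out data.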

I am currently studying artificial intelligence and regularly post study notes like this one; feedback and discussion are welcome.


from math import log
import operator
import pickle


def calcShannonEnt(dataSet):
    # number of samples
    numEntries = len(dataSet)
    # occurrence count of each label
    label_counts = {}
    for featVec in dataSet:
        current_label = featVec[-1]
        if current_label not in label_counts:  # add the label on first sight
            label_counts[current_label] = 0
        label_counts[current_label] += 1  # count the label
    shannon_ent = 0  # empirical entropy
    # compute the empirical entropy
    for key in label_counts:
        prob = float(label_counts[key]) / numEntries
        shannon_ent -= prob * log(prob, 2)
    return shannon_ent


def splitDataSet(data_set, axis, value):
    ret_dataset = []
    for feat_vec in data_set:
        if feat_vec[axis] == value:
            reduced_feat_vec = feat_vec[:axis]
            reduced_feat_vec.extend(feat_vec[axis + 1:])
            ret_dataset.append(reduced_feat_vec)
    return ret_dataset


def chooseBestFeatureToSplit(dataSet):
    # number of features
    num_features = len(dataSet[0]) - 1
    # Shannon entropy of the whole data set
    base_entropy = calcShannonEnt(dataSet)
    # best information gain so far
    best_info_gain = 0.0
    # index of the best feature
    best_feature = -1
    # iterate over all features
    for i in range(num_features):
        # take the i-th feature of every sample
        feat_list = [exampel[i] for exampel in dataSet]
        # a set keeps each value only once
        unique_val = set(feat_list)
        # empirical conditional entropy
        new_entropy = 0.0
        for value in unique_val:
            # sub_dataset is the subset after splitting on this value
            sub_dataset = splitDataSet(dataSet, i, value)
            # probability of the subset
            prob = len(sub_dataset) / float(len(dataSet))
            # accumulate the conditional entropy
            new_entropy += prob * calcShannonEnt(sub_dataset)
        # information gain
        info_gain = base_entropy - new_entropy
        # print the information gain of each feature
        print("第%d个特征的信息增益为%.3f" % (i, info_gain))
        if info_gain > best_info_gain:
            best_info_gain = info_gain
            # remember the index of the feature with the largest gain
            best_feature = i
    print("最优索引值:" + str(best_feature))
    print()
    return best_feature


def majority_cnt(class_list):
    class_count = {}
    # count the occurrences of each element of class_list
    for vote in class_list:
        if vote not in class_count:
            class_count[vote] = 0
        class_count[vote] += 1
    # sort by count in descending order
    sorted_class_count = sorted(class_count.items(), key=operator.itemgetter(1), reverse=True)
    return sorted_class_count[0][0]


def creat_tree(dataSet, labels, featLabels):
    # take the class labels (grant a loan: yes or no)
    class_list = [exampel[-1] for exampel in dataSet]
    # stop if all samples belong to the same class
    if class_list.count(class_list[0]) == len(class_list):
        return class_list[0]
    # when all features are used up, return the most common class label
    if len(dataSet[0]) == 1:
        return majority_cnt(class_list)
    # choose the best feature
    best_feature = chooseBestFeatureToSplit(dataSet)
    # label of the best feature
    best_feature_label = labels[best_feature]
    featLabels.append(best_feature_label)
    # grow the tree from the label of the best feature
    my_tree = {best_feature_label: {}}
    # remove the used label
    del(labels[best_feature])
    # all values of the best feature in the training set
    feat_value = [exampel[best_feature] for exampel in dataSet]
    # drop duplicated values
    unique_vls = set(feat_value)
    for value in unique_vls:
        my_tree[best_feature_label][value] = creat_tree(splitDataSet(dataSet, best_feature, value), labels, featLabels)
    return my_tree


def get_num_leaves(my_tree):
    num_leaves = 0
    first_str = next(iter(my_tree))
    second_dict = my_tree[first_str]
    for key in second_dict.keys():
        if type(second_dict[key]).__name__ == 'dict':
            num_leaves += get_num_leaves(second_dict[key])
        else:
            num_leaves += 1
    return num_leaves


def get_tree_depth(my_tree):
    max_depth = 0       # depth of the tree
    # In Python 3, myTree.keys() returns dict_keys rather than a list, so
    # myTree.keys()[0] no longer works; use next(iter(...)) or list(myTree.keys())[0].
    firsr_str = next(iter(my_tree))
    second_dict = my_tree[firsr_str]    # the next-level dict
    for key in second_dict.keys():
        if type(second_dict[key]).__name__ == 'dict':   # a dict is an internal node, anything else is a leaf
            this_depth = 1 + get_tree_depth(second_dict[key])
        else:
            this_depth = 1
        if this_depth > max_depth:
            max_depth = this_depth      # update the depth
    return max_depth


def classify(input_tree, feat_labels, test_vec):
    # root feature of the decision tree
    first_str = next(iter(input_tree))
    # the next-level dict
    second_dict = input_tree[first_str]
    feat_index = feat_labels.index(first_str)
    for key in second_dict.keys():
        if test_vec[feat_index] == key:
            if type(second_dict[key]).__name__ == 'dict':
                class_label = classify(second_dict[key], feat_labels, test_vec)
            else:
                class_label = second_dict[key]
    return class_label


def storeTree(input_tree, filename):
    # save the tree
    with open(filename, 'wb') as fw:
        pickle.dump(input_tree, fw)


def grabTree(filename):
    # load the tree
    fr = open(filename, 'rb')
    return pickle.load(fr)


if __name__ == "__main__":
    # data set
    dataSet = [[0, 0, 0, 0, 'no'],
               [0, 0, 0, 1, 'no'],
               [0, 1, 0, 1, 'yes'],
               [0, 1, 1, 0, 'yes'],
               [0, 0, 0, 0, 'no'],
               [1, 0, 0, 0, 'no'],
               # [1, 0, 0, 0, 'yes'],
               [1, 0, 0, 1, 'no'],
               [1, 1, 1, 1, 'yes'],
               [1, 0, 1, 2, 'yes'],
               [1, 0, 1, 2, 'yes'],
               [2, 0, 1, 2, 'yes'],
               [2, 0, 1, 1, 'yes'],
               [2, 1, 0, 1, 'yes'],
               [2, 1, 0, 2, 'yes'],
               [2, 0, 0, 0, 'no']]
    # feature labels
    labels = ['年龄', '有工作', '有自己的房子', '信贷情况']
    print(dataSet)
    print()
    print(calcShannonEnt(dataSet))
    print()
    featLabels = []
    myTree = creat_tree(dataSet, labels, featLabels)
    print(myTree)
    print(get_tree_depth(myTree))
    print(get_num_leaves(myTree))
    # test data
    testVec = [0, 1, 1, 1]
    result = classify(myTree, featLabels, testVec)
    if result == 'yes':
        print('放贷')
    if result == 'no':
        print('不放贷')
    # save the tree
    storeTree(myTree, 'classifierStorage.txt')
    # load the tree
    myTree2 = grabTree('classifierStorage.txt')
    print(myTree2)
    testVec2 = [1, 0]
    result2 = classify(myTree2, featLabels, testVec)
    if result2 == 'yes':
        print('放贷')
    if result2 == 'no':
        print('不放贷')
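calcShannonEnt() above implements the empirical entropy H(D) = -Σ p_k log2(p_k). As a compact standalone check against the loan data set used in the __main__ block (6 'no' and 9 'yes' samples); entropy is a hypothetical helper name:

```python
from math import log2

def entropy(labels):
    # H(D) = -sum_k p_k * log2(p_k) over the label frequencies.
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / n) * log2(c / n) for c in counts.values())

# 6 'no' and 9 'yes', as in the loan data set above.
print(entropy(['no'] * 6 + ['yes'] * 9))  # ≈ 0.9709505944546686
```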
[[0, 0, 0, 0, 'no'], [0, 0, 0, 1, 'no'], [0, 1, 0, 1, 'yes'], [0, 1, 1, 0, 'yes'], [0, 0, 0, 0, 'no'], [1, 0, 0, 0, 'no'], [1, 0, 0, 1, 'no'], [1, 1, 1, 1, 'yes'], [1, 0, 1, 2, 'yes'], [1, 0, 1, 2, 'yes'], [2, 0, 1, 2, 'yes'], [2, 0, 1, 1, 'yes'], [2, 1, 0, 1, 'yes'], [2, 1, 0, 2, 'yes'], [2, 0, 0, 0, 'no']]

0.9709505944546686

第0个特征的信息增益为0.083
第1个特征的信息增益为0.324
第2个特征的信息增益为0.420
第3个特征的信息增益为0.363
最优索引值:2

第0个特征的信息增益为0.252
第1个特征的信息增益为0.918
第2个特征的信息增益为0.474
最优索引值:1

{'有自己的房子': {0: {'有工作': {0: 'no', 1: 'yes'}}, 1: 'yes'}}
2
3
放贷
{'有自己的房子': {0: {'有工作': {0: 'no', 1: 'yes'}}, 1: 'yes'}}
放贷
import csv
import operator
import copy
import numpy as np

# 0  PassengerId: passenger ID                              not important
# 1  Survived: whether the passenger survived, 0 = no, 1 = yes
# 2  Pclass: cabin class (1/2/3)
# 3  Name: passenger name                                   not important
# 4  Sex: sex
# 5  Age: age
# 6  SibSp: number of siblings/spouses aboard
# 7  Parch: number of parents/children aboard
# 8  Ticket: ticket number                                  not important
# 9  Fare: ticket fare
# 10 Cabin: cabin number                                    not important
# 11 Embarked: port of embarkation                          not important


def loadDataset(filename):
    with open(filename, 'r') as f:
        lines = csv.reader(f)
        data_set = list(lines)
    if filename != 'titanic.csv':
        for i in range(len(data_set)):
            del(data_set[i][0])
    # tidy the data
    for i in range(len(data_set)):
        del(data_set[i][0])
        del(data_set[i][2])
        data_set[i][4] += data_set[i][5]
        del(data_set[i][5])
        del(data_set[i][5])
        del(data_set[i][6])
        del(data_set[i][-1])
    category = data_set[0]
    del(data_set[0])
    # convert the data formats
    for data in data_set:
        data[0] = int(data[0])
        data[1] = int(data[1])
        if data[3] != '':
            data[3] = float(data[3])
        else:
            data[3] = None
        data[4] = float(data[4])
        data[5] = float(data[5])
    # fill in missing values, recode, and discretize
    for data in data_set:
        if data[3] is None:
            data[3] = 28
        # male : 1, female : 0
        if data[2] == 'male':
            data[2] = 1
        else:
            data[2] = 0
        # age: < 25 -> 0, 25 <= age < 31 -> 1, age >= 31 -> 2
        if data[3] < 60:  # but in testing a cut at 60 gave the best accuracy ???!!!
            data[3] = 0
        else:
            data[3] = 1
        # sibsp & parch: 0 if below 2, else 1
        if data[4] < 2:
            data[4] = 0
        else:
            data[4] = 1
        # fare: cut at 64
        if data[-1] < 64:
            data[-1] = 0
        else:
            data[-1] = 1
    return data_set, category


def gini(data, i):
    num = len(data)
    label_counts = [0, 0, 0, 0]
    p_count = [0, 0, 0, 0]
    gini_count = [0, 0, 0, 0]
    for d in data:
        label_counts[d[i]] += 1
    for l in range(len(label_counts)):
        for d in data:
            if label_counts[l] != 0 and d[0] == 1 and d[i] == l:
                p_count[l] += 1
    print(label_counts)
    print(p_count)
    for l in range(len(label_counts)):
        if label_counts[l] != 0:
            gini_count[l] = 2*(p_count[l]/label_counts[l])*(1 - p_count[l]/label_counts[l])
    gini_p = 0
    for l in range(len(gini_count)):
        gini_p += (label_counts[l]/num)*gini_count[l]
    print(gini_p)
    return gini_p


def get_best_feature(data, category):
    if len(category) == 2:
        return 1, category[1]
    feature_num = len(category) - 1
    data_num = len(data)
    feature_gini = []
    for i in range(1, feature_num+1):
        feature_gini.append(gini(data, i))
    min_i = 0
    for i in range(len(feature_gini)):
        if feature_gini[i] < feature_gini[min_i]:
            min_i = i
    print(feature_gini)
    print(category)
    print(min_i+1)
    print(category[min_i+1])
    return min_i+1, category[min_i + 1]


def majority_cnt(class_list):
    class_count = {}
    # count the occurrences of each element of class_list
    for vote in class_list:
        if vote not in class_count:
            class_count[vote] = 0
        class_count[vote] += 1
    # sort by count in descending order
    sorted_class_count = sorted(class_count.items(), key=operator.itemgetter(1), reverse=True)
    return sorted_class_count[0][0]


class Node(object):
    def __init__(self, item):
        self.name = item
        self.lchild = None
        self.rchild = None


def creat_tree(data, labels, feature_labels=[]):
    # three stopping conditions
    # take the class labels (survived or not)
    class_list = [exampel[0] for exampel in data]
    if class_list == []:
        return Node(0)
    # stop if all samples belong to the same class
    if class_list.count(class_list[0]) == len(class_list):
        return Node(class_list[0])
    # when all features are used up, return the most common class label
    if len(data[0]) == 1:
        return Node(majority_cnt(class_list))
    # label of the best feature
    best_feature_num, best_feature_label = get_best_feature(data, labels)
    feature_labels.append(best_feature_label)
    node = Node(best_feature_label)
    ldata = []
    rdata = []
    for d in data:
        if d[best_feature_num] == 1:
            del(d[best_feature_num])
            ldata.append(d)
        else:
            del(d[best_feature_num])
            rdata.append(d)
    labels2 = copy.deepcopy(labels)
    del(labels2[best_feature_num])
    tree = node
    tree.lchild = creat_tree(ldata, labels2, feature_labels)
    tree.rchild = creat_tree(rdata, labels2, feature_labels)
    return tree


def breadth_travel(tree):
    """Breadth-first traversal"""
    queue = [tree]
    while queue:
        cur_node = queue.pop(0)
        print(cur_node.name, end=" ")
        if cur_node.lchild is not None:
            queue.append(cur_node.lchild)
        if cur_node.rchild is not None:
            queue.append(cur_node.rchild)
    print()


def prediction(t_tree, test, labels):
    result = []
    for data in test:
        l = copy.deepcopy(labels)
        tree = t_tree
        for i in range(len(labels)):
            if tree.name == 1 or tree.name == 0:
                result.append(tree.name)
                break
            if len(data) == 1:
                result.append(0)
                break
            j = 1
            while j < len(data)-1:
                if tree.name == l[j]:
                    break
                j += 1
            if data[j] == 1:
                tree = tree.lchild
            else:
                tree = tree.rchild
            del(l[j])
            del(data[j])
    return result


if __name__ == "__main__":
    test_set, category = loadDataset('data/data43004/titanic_test.csv')
    train_set, category = loadDataset('data/data43004/titanic_train.csv')
    print(category)
    print(train_set)
    print()
    print(test_set)
    my_tree = creat_tree(train_set, category)
    print(my_tree)
    breadth_travel(my_tree)
    print(category)
    print(test_set)
    test = copy.deepcopy(test_set)
    result = prediction(my_tree, test_set, category)
    print(len(test_set))
    print(result)
    counts = 0
    for i in range(len(test_set)):
        if test_set[i][0] == result[i]:
            counts += 1
    print(counts)
    accurancy = counts/len(test_set)
    print(accurancy)
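gini(data, i) above scores a feature by the size-weighted binary Gini impurity 2p(1-p) of each of its value groups, where p is the survival rate within the group. A standalone sketch of that computation (gini_index is a hypothetical name), checked against the 'Sex' counts printed in the run below (224 women of whom 166 survived, 407 men of whom 69 survived):

```python
def gini_index(groups):
    # groups: (group_size, positives) per feature value; returns the
    # size-weighted sum of the binary Gini impurity 2 * p * (1 - p).
    n = sum(size for size, _ in groups)
    total = 0.0
    for size, pos in groups:
        if size == 0:
            continue
        p = pos / size
        total += (size / n) * 2 * p * (1 - p)
    return total

# The 'Sex' feature from the Titanic run below:
# 224 women (166 survived), 407 men (69 survived).
print(gini_index([(224, 166), (407, 69)]))  # ≈ 0.3179
```

The lowest weighted impurity wins, which is why 'Sex' is chosen as the first split in the output that follows.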
['Survived', 'Pclass', 'Sex', 'Age', 'SibSpParch', 'Fare']
(printed contents of the training set and the test set, several hundred 0/1-encoded sample rows, omitted here)
[0, 151, 133, 347]
[0, 94, 59, 82]
0.41500155234215325
[224, 407, 0, 0]
[166, 69, 0, 0]
0.31785849178876113
[610, 21, 0, 0]
[231, 4, 0, 0]
0.4651697180911803
[409, 222, 0, 0]
[128, 107, 0, 0]
0.4544191702736179
[544, 87, 0, 0]
[174, 61, 0, 0]
0.4328858107536145
[0.41500155234215325, 0.31785849178876113, 0.4651697180911803, 0.4544191702736179, 0.4328858107536145]
['Survived', 'Pclass', 'Sex', 'Age', 'SibSpParch', 'Fare']
2
Sex
[0, 81, 79, 247]
[0, 25, 9, 35]
0.27174014737861063
[388, 19, 0, 0]
[67, 2, 0, 0]
0.28117880125476435
[300, 107, 0, 0]
[41, 28, 0, 0]
0.27552611234854224
[377, 30, 0, 0]
[62, 7, 0, 0]
0.2809355292113913
[0.27174014737861063, 0.28117880125476435, 0.27552611234854224, 0.2809355292113913]
['Survived', 'Pclass', 'Age', 'SibSpParch', 'Fare']
1
Pclass
[69, 12, 0, 0]
[23, 2, 0, 0]
0.4197530864197531
[57, 24, 0, 0]
[13, 12, 0, 0]
0.3959280918345246
[58, 23, 0, 0]
[18, 7, 0, 0]
0.42674958816887854
[0.4197530864197531, 0.3959280918345246, 0.42674958816887854]
['Survived', 'Age', 'SibSpParch', 'Fare']
2
SibSpParch
[22, 2, 0, 0]
[11, 1, 0, 0]
0.5
[9, 15, 0, 0]
[6, 6, 0, 0]
0.4666666666666667
[0.5, 0.4666666666666667]
['Survived', 'Age', 'Fare']
2
Fare
[47, 10, 0, 0]
[12, 1, 0, 0]
0.3451287793952968
[49, 8, 0, 0]
[12, 1, 0, 0]
0.3486394557823129
[0.3451287793952968, 0.3486394557823129]
['Survived', 'Age', 'Fare']
1
Age
[319, 7, 0, 0]
[44, 0, 0, 0]
0.232705733023059
[243, 83, 0, 0]
[28, 16, 0, 0]
0.2312228540002622
[319, 7, 0, 0]
[44, 0, 0, 0]
0.232705733023059
[0.232705733023059, 0.2312228540002622, 0.232705733023059]
['Survived', 'Age', 'SibSpParch', 'Fare']
2
SibSpParch
[82, 1, 0, 0]
[16, 0, 0, 0]
0.3103144284454893
[77, 6, 0, 0]
[16, 0, 0, 0]
0.3054295102487874
[0.3103144284454893, 0.3054295102487874]
['Survived', 'Age', 'Fare']
2
Fare
[237, 6, 0, 0]
[28, 0, 0, 0]
0.203226198537966
[242, 1, 0, 0]
[28, 0, 0, 0]
0.20378872904125428
[0.203226198537966, 0.20378872904125428]
['Survived', 'Age', 'Fare']
1
Age
[0, 70, 54, 100]
[0, 69, 50, 47]
0.2642805177626606
[222, 2, 0, 0]
[164, 2, 0, 0]
0.3825611325611326
[109, 115, 0, 0]
[87, 79, 0, 0]
0.37758989116188957
[167, 57, 0, 0]
[112, 54, 0, 0]
0.35471725721489356
[0.2642805177626606, 0.3825611325611326, 0.37758989116188957, 0.35471725721489356]
['Survived', 'Pclass', 'Age', 'SibSpParch', 'Fare']
1
Pclass
[68, 2, 0, 0]
[67, 2, 0, 0]
0.028151260504201584
[32, 38, 0, 0]
[32, 37, 0, 0]
0.02781954887218042
[17, 53, 0, 0]
[17, 52, 0, 0]
0.028032345013477084
[0.028151260504201584, 0.02781954887218042, 0.028032345013477084]
['Survived', 'Age', 'SibSpParch', 'Fare']
2
SibSpParch
[36, 2, 0, 0]
[35, 2, 0, 0]
0.05116959064327487
[9, 29, 0, 0]
[9, 28, 0, 0]
0.050816696914700484
[0.05116959064327487, 0.050816696914700484]
['Survived', 'Age', 'Fare']
2
Fare
[154, 0, 0, 0]
[97, 0, 0, 0]
0.4662674987350312
[77, 77, 0, 0]
[55, 42, 0, 0]
0.4520155169505819
[150, 4, 0, 0]
[95, 2, 0, 0]
0.4653679653679654
[0.4662674987350312, 0.4520155169505819, 0.4653679653679654]
['Survived', 'Age', 'SibSpParch', 'Fare']
2
SibSpParch
[77, 0, 0, 0]
[42, 0, 0, 0]
0.49586776859504134
[73, 4, 0, 0]
[40, 2, 0, 0]
0.4956413449564135
[0.49586776859504134, 0.4956413449564135]
['Survived', 'Age', 'Fare']
2
Fare
[77, 0, 0, 0]
[55, 0, 0, 0]
0.40816326530612246
[77, 0, 0, 0]
[55, 0, 0, 0]
0.40816326530612246
[0.40816326530612246, 0.40816326530612246]
['Survived', 'Age', 'Fare']
1
Age
<__main__.Node object at 0x7f96247b9e10>
Sex Pclass Pclass SibSpParch SibSpParch SibSpParch SibSpParch Fare Age Fare Age Fare 1 Fare Age Age Age Fare Fare 0 Age 0 Fare Age 1 Age Age 0 Fare 0 0 0 1 0 0 0 0 0 0 0 0 1 1 0 0 0 1 0 1
['Survived', 'Pclass', 'Sex', 'Age', 'SibSpParch', 'Fare']
259
[1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1]
194
0.749034749034749
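The numbers in the training log above can be checked by hand. For each candidate feature the program prints the per-value sample counts, the per-value survivor counts, and the resulting weighted Gini index; the feature with the smallest index is chosen as the split (here `Sex`, then `Pclass`, and so on). A minimal sketch of that computation — the function name `weighted_gini` is mine, and the counts are copied from the `Pclass` lines near the top of the log (151/133/347 passengers in classes 1/2/3, of whom 94/59/82 survived):

```python
def weighted_gini(totals, positives):
    """Weighted Gini index of a split: sum over branches of
    (branch size / total size) * 2 * p * (1 - p),
    where p is the fraction of positives in the branch."""
    n = sum(totals)
    g = 0.0
    for t, p in zip(totals, positives):
        if t == 0:
            continue                      # empty branch contributes nothing
        frac = p / t                      # survivor fraction in this branch
        g += (t / n) * 2 * frac * (1 - frac)
    return g

# Counts copied from the `Pclass` lines of the log above.
print(round(weighted_gini([151, 133, 347], [94, 59, 82]), 3))  # → 0.415

# The last log lines: 194 of 259 test rows classified correctly.
print(round(194 / 259, 4))  # → 0.749
```

This closely reproduces the `0.41500...` value printed for `Pclass`, and `194 / 259` gives the final accuracy of about `0.749` at the end of the log.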
import csv
import copy
import calendar
from math import log1p
from math import sqrt
from graphviz import Digraph

# Forest-fires dataset (Montesinho park) attributes:
# 1. X - x-axis spatial coordinate within the Montesinho park map: 1 to 9
# 2. Y - y-axis spatial coordinate within the Montesinho park map: 2 to 9
# 3. month - month of the year: "jan" to "dec"
# 4. day - day of the week: "mon" to "sun"
# 5. FFMC - FFMC index from the FWI system: 18.7 to 96.20
# 6. DMC - DMC index from the FWI system: 1.1 to 291.3
# 7. DC - DC index from the FWI system: 7.9 to 860.6
# 8. ISI - ISI index from the FWI system: 0.0 to 56.10
# 9. temp - temperature in degrees Celsius: 2.2 to 33.30
# 10. RH - relative humidity in %: 15.0 to 100
# 11. wind - wind speed in km/h: 0.40 to 9.40
# 12. rain - outside rain in mm/m2: 0.0 to 6.4
# 13. area - the burned area of the forest in ha: 0.00 to 1090.84
#     (this output variable is heavily skewed towards 0.0,
#      so a log transform is applied)


def loadDataset(filename):
    with open(filename, 'r') as f:
        lines = csv.reader(f)
        data_set = list(lines)
    # Clean up the data: drop the id column and the day-of-week column
    for i in range(len(data_set)):
        del(data_set[i][0])
    for i in range(len(data_set)):
        del(data_set[i][3])
    category = data_set[0]
    del(data_set[0])
    for i in range(len(data_set)):
        for j in range(len(data_set[0])):
            if j == 2:
                # Convert the month abbreviation ("jan".."dec") to its number
                data_set[i][j] = str.capitalize(data_set[i][j])
                data_set[i][j] = list(calendar.month_abbr).index(data_set[i][j])
            data_set[i][j] = float(data_set[i][j])
            if j == 11:
                # Log-transform the burned area; log1p(x) computes log(1 + x)
                data_set[i][j] = log1p(data_set[i][j])
    return data_set, category


def split_data(data):
    # Split each row into its features and its label (the last column)
    data_set = copy.deepcopy(data)
    data_mat = []
    label_mat = []
    for i in range(len(data_set)):
        label_mat.append(data_set[i][-1])
        del(data_set[i][-1])
        data_mat.append(data_set[i])
    print(data_mat)
    print(label_mat)
    return data_mat, label_mat


def mse(data_set, split_value, result):
    # Total squared error of a split: samples with feature value <= split_value
    # go left, the rest go right; the error of each branch is measured
    # around its mean (c1 / c2)
    left_num = 0
    left_sum = 0
    left_list = []
    right_num = 0
    right_sum = 0
    right_list = []
    for i in range(len(data_set)):
        if data_set[i] <= split_value:
            left_num += 1
            left_sum += result[i]
            left_list.append(result[i])
        else:
            right_num += 1
            right_sum += result[i]
            right_list.append(result[i])
    c1 = left_sum/left_num
    c2 = right_sum/right_num
    m = 0
    for i in range(len(left_list)):
        m += pow((left_list[i]-c1), 2)
    for i in range(len(right_list)):
        m += pow((right_list[i]-c2), 2)
    return m


def get_best_split_value(data, result):
    # Candidate thresholds are the midpoints between consecutive distinct
    # feature values; return the one with the smallest squared error
    data_set = set(data)
    data_set = list(data_set)
    data_set.sort()  # sort the distinct values, not `data` itself,
                     # so that data[i] stays aligned with result[i]
    length = len(data_set)
    if length == 1:
        return float("Inf"), float("Inf")
    print(data_set)
    split_value = []
    for i in range(length-1):
        split_value.append((data_set[i+1] + data_set[i])/2)
    m = []
    for i in range(len(split_value)):
        m.append(mse(data, split_value[i], result))
    min_mse = 0
    for i in range(len(m)):
        if m[i] < m[min_mse]:
            min_mse = i
    print(m)
    return split_value[min_mse], m[min_mse]


def get_best_feature(data, category):
    # Find the (feature, threshold) pair with the smallest squared error
    length = len(category)-1
    data_set, result = split_data(data)
    feature_mse = []
    split_feature_value = []
    feature_values = []
    for i in range(length):
        feature_mse.append(0)
        split_feature_value.append(0)
        feature_values.append([])
        for j in range(len(data_set)):
            feature_values[i].append(data_set[j][i])
    for i in range(length):
        split_feature_value[i], feature_mse[i] = get_best_split_value(feature_values[i], result)
    min_f = 0
    for i in range(length):
        if feature_mse[i] < feature_mse[min_f]:
            min_f = i
    print(feature_mse)
    return min_f, split_feature_value[min_f]


class Node(object):
    def __init__(self, category, item):
        self.name = category   # feature name, or "result" for a leaf
        self.elem = item       # split threshold, or the leaf's predicted value
        self.lchild = None
        self.rchild = None


def leaf_value(data):
    # A leaf predicts the mean label of the samples that reach it
    total = 0
    for i in range(len(data)):
        total += data[i][-1]
    return total/len(data)


def creat_tree(data, labels, feature_labels=[]):
    # Stopping conditions: only the label column is left, or the node
    # holds fewer than 5% of the training samples
    if len(labels) == 1:
        return Node('result', leaf_value(data))
    if len(data) < 0.05*len(train_set):
        return Node('result', leaf_value(data))
    # Pick the best feature and threshold for this node
    best_feature_num, best_feature_value = get_best_feature(data, labels)
    feature_labels.append(labels[best_feature_num])
    node = Node(labels[best_feature_num], best_feature_value)
    ldata = []
    rdata = []
    for d in data:
        if d[best_feature_num] <= best_feature_value:
            del(d[best_feature_num])
            ldata.append(d)
        else:
            del(d[best_feature_num])
            rdata.append(d)
    labels2 = copy.deepcopy(labels)
    del(labels2[best_feature_num])
    tree = node
    tree.lchild = creat_tree(ldata, labels2, feature_labels)
    tree.rchild = creat_tree(rdata, labels2, feature_labels)
    return tree


def breadth_travel(tree):
    """Breadth-first traversal of the tree"""
    queue = [tree]
    while queue:
        cur_node = queue.pop(0)
        print(cur_node.name, end=" ")
        print(cur_node.elem, end=" ")
        if cur_node.lchild is not None:
            queue.append(cur_node.lchild)
        if cur_node.rchild is not None:
            queue.append(cur_node.rchild)
    print()


def prediction(t_tree, test, labels):
    result = []
    test_mat, x = split_data(test)
    for data in test_mat:
        l = copy.deepcopy(labels)
        tree = t_tree
        for i in range(len(labels)):
            if tree.name == "result":
                result.append(tree.elem)
                break
            # Find the column of the current node's feature
            # (the original `while j:` never ran because j starts at 0)
            j = 0
            while j < len(l):
                if tree.name == l[j]:
                    break
                j += 1
            if data[j] <= tree.elem:
                tree = tree.lchild
            else:
                tree = tree.rchild
            del(l[j])
            del(data[j])
    return result


def tree_to_dict(tree, tree_dict):
    # Convert the Node tree into nested dicts for graphviz plotting
    if tree.lchild == None and tree.rchild == None:
        tree_dict[tree.name+str(tree.elem)] = str(tree.elem)
        return tree_dict[tree.name+str(tree.elem)]
    tree_dict[tree.name+str(tree.elem)] = {}
    if tree.lchild != None:
        tree_to_dict(tree.lchild, tree_dict[tree.name+str(tree.elem)])
    if tree.rchild != None:
        tree_to_dict(tree.rchild, tree_dict[tree.name+str(tree.elem)])
    return tree_dict


root = "0"


def plot_model(tree, name):
    g = Digraph("G", filename=name, format='png', strict=False)
    first_label = list(tree.keys())[0]
    g.node("0", first_label)
    _sub_plot(g, tree, "0")
    g.view()


def _sub_plot(g, tree, inc):
    global root
    first_label = list(tree.keys())[0]
    ts = tree[first_label]
    if type(ts).__name__ != 'dict':
        root = str(int(root) + 1)
        g.node(root, str(tree[first_label]))
        g.edge(inc, root, str(ts))
        return
    for i in ts.keys():
        if isinstance(tree[first_label][i], dict):
            root = str(int(root) + 1)
            g.node(root, list(tree[first_label][i].keys())[0])
            g.edge(inc, root, str(i))
            _sub_plot(g, tree[first_label][i], root)
        else:
            root = str(int(root) + 1)
            g.node(root, str(tree[first_label][i]))
            g.edge(inc, root, str(i))


if __name__ == "__main__":
    test_set, category = loadDataset('data/data43017/test.csv')
    train_set, category = loadDataset('data/data43017/train.csv')
    print(category)
    print(train_set)
    print(test_set)
    my_tree = creat_tree(train_set, category)
    breadth_travel(my_tree)
    result = prediction(my_tree, test_set, category)
    # Root-mean-square error on the test set
    sse = 0
    for i in range(len(result)):
        sse += pow(abs(result[i]-test_set[i][-1]), 2)
    print(sqrt(sse/len(result)))
    tree_dict = {}
    tree_dict = tree_to_dict(my_tree, tree_dict)
    print(tree_dict)
    plot_model(tree_dict, "forestfires.gv")
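The regression tree above chooses, for each feature, the threshold that minimizes the total squared error of the two branches around their means — the job of `mse` and `get_best_split_value`. As a compact sanity check of that criterion on toy data, here is a standalone sketch (the names `best_split` and `sse` are mine, not from the listing above):

```python
def best_split(xs, ys):
    """Return (threshold, error): the midpoint between consecutive distinct
    x values that minimizes the summed squared error of both branches
    around their means -- the same criterion as mse()/get_best_split_value()."""
    def sse(group):
        c = sum(group) / len(group)           # branch mean (c1/c2 above)
        return sum((y - c) ** 2 for y in group)
    values = sorted(set(xs))
    best_t, best_err = None, float("inf")     # infinite-error sentinel,
    for a, b in zip(values, values[1:]):      # as in get_best_split_value
        t = (a + b) / 2                       # candidate midpoint threshold
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        err = sse(left) + sse(right)
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

print(best_split([1, 2, 3, 4], [1.0, 1.0, 5.0, 5.0]))  # → (2.5, 0.0)
```

The threshold 2.5 cleanly separates the two label levels, so it yields zero error and is selected; a column with a single distinct value leaves the infinite-error sentinel in place, mirroring the `float("Inf")` return in `get_best_split_value`.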
