1. What is a decision tree

A classification decision tree is a tree-structured model that describes how instances are classified. A decision tree consists of nodes and directed edges. Nodes come in two types: internal nodes and leaf nodes. An internal node represents a test on a feature (attribute), and a leaf node represents a class.

A decision tree, also called a judgment tree, is a predictive model expressed as a tree structure (either a binary or a multi-way tree).

An instance is classified by sorting it from the root node down to some leaf node.

The leaf node it reaches gives the class to which the instance belongs.

Each internal node of the tree specifies a test on one attribute of the instance, and each branch descending from that node corresponds to one possible value of that attribute.

2. Decision tree structure

决策树结构.png

3. Types of decision trees

Classification tree: a decision tree for a discrete (categorical) target variable.

Regression tree: a decision tree for a continuous target variable.

4. Decision tree algorithms (greedy algorithms)

Supervised learning.

Non-parametric learning algorithms.

The tree is constructed recursively, top-down.

At each step, the algorithm makes the choice that is best (locally optimal) in the current state.

A decision tree learning algorithm typically works by recursively selecting the optimal feature and splitting the training data on that feature, so that each resulting subset is classified as well as possible.

Among the decision tree algorithms, ID3 uses information gain as the attribute-selection measure, C4.5 uses the information gain ratio, and CART uses the Gini index.
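For reference, the three measures can be written as follows (standard definitions added here for completeness; they are not spelled out in the original text). For a dataset D with K classes, class proportions $p_k$, and a candidate attribute A whose values v partition D into subsets $D_v$:

$$\mathrm{Gain}(D, A) = H(D) - H(D \mid A), \qquad \mathrm{GainRatio}(D, A) = \frac{\mathrm{Gain}(D, A)}{-\sum_{v} \frac{|D_v|}{|D|} \log_2 \frac{|D_v|}{|D|}}, \qquad \mathrm{Gini}(D) = 1 - \sum_{k=1}^{K} p_k^2$$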

5. The decision tree learning process

Feature selection.

Decision tree generation: a recursive procedure, corresponding to local optimization of the model.

Decision tree pruning: shrinks the tree to mitigate overfitting, corresponding to global model selection (a pruning sketch follows this list).
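As a concrete illustration of pruning, here is a minimal sketch using scikit-learn's cost-complexity post-pruning. It is my own example rather than a method described in this article; the ccp_alpha parameter it relies on exists only in scikit-learn 0.22 and later, and the value 0.02 is arbitrary.

# Minimal post-pruning sketch with cost-complexity pruning (scikit-learn >= 0.22)
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# fully grown tree vs. a tree post-pruned with a (hypothetical) alpha of 0.02
unpruned = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.02).fit(X_train, y_train)

# the pruned tree has fewer nodes and often generalizes better on unseen data
print(unpruned.tree_.node_count, unpruned.score(X_test, y_test))
print(pruned.tree_.node_count, pruned.score(X_test, y_test))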

6. Advantages and disadvantages of decision trees

Advantages:

(1) Fast: the computational cost is relatively small, and the tree is easy to convert into classification rules. Walking from the root down to a leaf, the split conditions along the path uniquely determine a classification predicate.

(2) Accurate and interpretable: the mined classification rules are accurate and easy to understand; a decision tree clearly shows which fields matter most, i.e. it produces human-readable rules.

(3) Can handle both continuous and categorical fields.

(4) Requires no domain knowledge or parameter assumptions.

(5) Suitable for high-dimensional data.

Disadvantages:

(1) For data where the classes contain unequal numbers of samples, information gain is biased toward features with more distinct values.

(2) Prone to overfitting.

(3) Ignores correlations between attributes.

5.2 Mathematical background for decision trees

1. Information theory:

If an event has k possible outcomes with probabilities $p_1, p_2, \ldots, p_k$, then the amount of information I obtained once the event has occurred is:

$$I = -\sum_{i=1}^{k} p_i \log_2 p_i$$

2. Entropy:

Given a sample set S containing positive and negative examples of some target concept, the entropy of S relative to this Boolean classification is:

$$\mathrm{Entropy}(S) = -p_{\oplus} \log_2 p_{\oplus} - p_{\ominus} \log_2 p_{\ominus}$$

where $p_{\oplus}$ denotes the proportion of positive examples and $p_{\ominus}$ the proportion of negative examples.

3. Conditional entropy:

Suppose the random variables (X, Y) have the joint distribution $P(X = x_i, Y = y_j) = p_{ij}$, $i = 1, 2, \ldots, n$; $j = 1, 2, \ldots, m$.

The conditional entropy H(Y|X) measures the uncertainty of the random variable Y when the random variable X is known. It is defined as the expectation, over X, of the entropy of the conditional distribution of Y given X:

$$H(Y \mid X) = \sum_{i=1}^{n} P(X = x_i)\, H(Y \mid X = x_i)$$
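These quantities are easy to compute directly. The following is a small sketch of my own (not code from the article) that evaluates H(Y), H(Y|X), and the resulting information gain on a toy table whose last column is the class label; the function names entropy and conditional_entropy are illustrative.

from collections import Counter
from math import log

def entropy(labels):
    # H(Y) = -sum p_k * log2(p_k) over the class labels
    n = len(labels)
    return -sum((c / n) * log(c / n, 2) for c in Counter(labels).values())

def conditional_entropy(rows, feat_idx):
    # H(Y|X) = sum_i P(X = x_i) * H(Y | X = x_i), where X is column feat_idx
    n = len(rows)
    groups = {}
    for row in rows:
        groups.setdefault(row[feat_idx], []).append(row[-1])
    return sum(len(g) / n * entropy(g) for g in groups.values())

# toy data: [outlook, windy, play?]
data = [['sunny', 'yes', 'no'], ['sunny', 'no', 'no'],
        ['rain', 'no', 'yes'], ['rain', 'yes', 'no'],
        ['overcast', 'no', 'yes'], ['overcast', 'yes', 'yes']]

labels = [row[-1] for row in data]
print(entropy(labels))                                  # H(Y)
print(conditional_entropy(data, 0))                     # H(Y | outlook)
print(entropy(labels) - conditional_entropy(data, 0))   # information gain of outlook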

5.3 The Hunt algorithm

Hunt's algorithm builds the decision tree recursively.

(1) If all records in dataset D belong to the same class, the node is marked as a leaf node labeled with that class.

(2) If D contains records belonging to more than one class, an attribute test condition is chosen to partition the records into smaller subsets. A child node is created for each outcome of the test condition, and the records of D are distributed to the child nodes according to the test results. Steps (1) and (2) are then applied to each child node, recursing further on their children, until the procedure terminates.
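A minimal Python sketch of this recursive scheme follows (my own illustration, not code from the article; the majority-class stopping rule used when no attributes remain is an added assumption):

def hunt(records, attributes):
    # records:    list of rows, last element of each row is the class label
    # attributes: list of column indices still available for splitting
    labels = [r[-1] for r in records]
    # Case 1: all records share one class -> leaf node with that class
    if len(set(labels)) == 1:
        return labels[0]
    # Stopping rule (assumption): no attributes left -> majority-class leaf
    if not attributes:
        return max(set(labels), key=labels.count)
    # Case 2: pick an attribute (here simply the first one; ID3 / C4.5 / CART
    # would pick the best one by information gain / gain ratio / Gini index)
    a = attributes[0]
    tree = {a: {}}
    for v in set(r[a] for r in records):
        subset = [r for r in records if r[a] == v]
        tree[a][v] = hunt(subset, attributes[1:])
    return tree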

5.4 The ID3 algorithm

1. Entropy of the classification system

$$\mathrm{Entropy}(S) = -\sum_{i=1}^{c} p_i \log_2 p_i$$

2. Conditional entropy

$$\mathrm{Entropy}(S \mid A) = \sum_{v \in \mathrm{Values}(A)} \frac{|S_v|}{|S|}\, \mathrm{Entropy}(S_v)$$

3. Definition of the information gain Gain(S, A)

$$\mathrm{Gain}(S, A) = \mathrm{Entropy}(S) - \sum_{v \in \mathrm{Values}(A)} \frac{|S_v|}{|S|}\, \mathrm{Entropy}(S_v)$$

4. Attribute selection measure

Use information gain: the attribute with the highest information gain is chosen as the test attribute of the current node.

5. Shortcomings of the algorithm

When ID3 is used to build a decision tree, classification accuracy suffers noticeably if the numbers of distinct values taken by the attributes are very unevenly distributed, because information gain favors attributes with many values.

ID3 itself provides no way to handle continuous attributes.

ID3 cannot handle datasets containing missing values, so missing values must be dealt with during preprocessing before mining.

ID3 only generates the tree (it has no pruning step), so the trees it produces are prone to overfitting.

6. Algorithm flow

ID3(Examples, Target_attribute, Attributes)

Examples is the set of training examples. Target_attribute is the attribute whose value the tree is to predict. Attributes is the list of attributes, other than the target attribute, that may be tested by the learned decision tree. The procedure returns a decision tree that correctly classifies the given Examples.

Create a Root node for the tree.

If all Examples are positive, return the single-node tree Root with label = +.

If all Examples are negative, return the single-node tree Root with label = -.

If Attributes is empty, return the single-node tree Root with label = the most common value of Target_attribute in Examples.

Otherwise:

A ← the attribute in Attributes that best classifies Examples (i.e. the attribute with the highest information gain).

7. Python implementation

Computing the Shannon entropy in Python:

from math import log

def calcShanNonEnt(dataSet):
    # the last column of each row is the class label
    numEntries = len(dataSet)
    labelCounts = {}
    # count how often each class label appears
    for featVec in dataSet:
        currentLabel = featVec[-1]
        if currentLabel not in labelCounts.keys():
            labelCounts[currentLabel] = 0
        labelCounts[currentLabel] += 1
    # H = -sum p_i * log2(p_i)
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries
        shannonEnt -= prob * log(prob, 2)
    return shannonEnt

# example
dataset = [[1], [2], [3], [3]]
sne = calcShanNonEnt(dataset)
print(sne)   # 1.5
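Building on calcShanNonEnt, the following is a minimal sketch (my own, not from the article) of ID3-style attribute selection by information gain; splitDataSet and chooseBestFeatureToSplit are illustrative helper names.

def splitDataSet(dataSet, axis, value):
    # return the rows whose feature `axis` equals `value`, with that feature removed
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            retDataSet.append(featVec[:axis] + featVec[axis + 1:])
    return retDataSet

def chooseBestFeatureToSplit(dataSet):
    # return the index of the feature with the highest information gain (ID3 criterion)
    numFeatures = len(dataSet[0]) - 1          # last column is the class label
    baseEntropy = calcShanNonEnt(dataSet)
    bestGain, bestFeature = 0.0, -1
    for i in range(numFeatures):
        values = set(featVec[i] for featVec in dataSet)
        newEntropy = 0.0                       # conditional entropy H(Y | feature i)
        for value in values:
            subSet = splitDataSet(dataSet, i, value)
            prob = len(subSet) / float(len(dataSet))
            newEntropy += prob * calcShanNonEnt(subSet)
        gain = baseEntropy - newEntropy        # information gain of feature i
        if gain > bestGain:
            bestGain, bestFeature = gain, i
    return bestFeature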

Introduction to sklearn.tree parameters and usage suggestions

class sklearn.tree.DecisionTreeClassifier(criterion='gini', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, class_weight=None, presort=False)
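A brief gloss of the most commonly tuned parameters (my own summary, not from the original article): criterion selects the impurity measure ('gini' or 'entropy'); max_depth, min_samples_split, min_samples_leaf, and max_leaf_nodes are pre-pruning controls that limit how far the tree grows; max_features restricts how many features are considered at each split; class_weight compensates for imbalanced classes; random_state makes the tie-breaking between equally good splits reproducible. Note that this signature is from an older scikit-learn release: presort and min_impurity_split have since been deprecated and removed.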

# Examples
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

clf = DecisionTreeClassifier(random_state=0)
iris = load_iris()

# 10-fold cross-validation on the full iris dataset
cross_val_score(clf, iris.data, iris.target, cv=10)

# hold out 30% of the data as a test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.3)
res = clf.fit(X_train, y_train)
pre = clf.predict(X_test)          # predicted labels
sco = clf.score(X_test, y_test)    # mean accuracy on the test set
print(y_test)
print(pre)
print(sco)

clf.apply(X_train)                 # index of the leaf each training sample falls into
clf.apply(X_test)
clf.decision_path(X_train)         # sparse node-indicator matrix of the paths taken
type(clf.decision_path(X_train))
X_train.shape
clf.feature_importances_           # impurity-based feature importances

from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)
clf.feature_importances_
clf.get_params()                   # all hyper-parameters of the estimator
clf.predict_log_proba(X_test)      # log of the class probabilities
clf.predict_proba(X_test)          # class probabilities

A DecisionTreeClassifier example with the tree depth limited to 4:

from itertools import product
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier

# use the iris data (features 0 and 2 only)
iris = datasets.load_iris()
X = iris.data[:, [0, 2]]
y = iris.target

# train the model, limiting the maximum depth of the tree to 4
clf = DecisionTreeClassifier(max_depth=4)
clf.fit(X, y)

# Plot the decision surface on a mesh over the feature space
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, .1),
                     np.arange(y_min, y_max, .1))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=.4)
plt.scatter(X[:, 0], X[:, 1], c=y, alpha=.8)
plt.show()

output_12_0.png

This plot compares the decision surfaces learned by a decision tree classifier (first column), by a random forest classifier (second column), by an extra-trees classifier (third column), and by an AdaBoost classifier (fourth column).

print(__doc__)

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

from sklearn import clone
from sklearn.datasets import load_iris
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
                              AdaBoostClassifier)
from sklearn.tree import DecisionTreeClassifier

# Parameters
n_classes = 3
n_estimators = 30
cmap = plt.cm.RdYlBu
plot_step = 0.02
plot_step_coarser = 0.5
RANDOM_SEED = 13

# Load data
iris = load_iris()

plot_idx = 1

models = [DecisionTreeClassifier(max_depth=None),
          RandomForestClassifier(n_estimators=n_estimators),
          ExtraTreesClassifier(n_estimators=n_estimators),
          AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                             n_estimators=n_estimators)]

for pair in ([0, 1], [0, 2], [2, 3]):
    for model in models:
        # print(pair, model)
        # only take the two corresponding features
        X = iris.data[:, pair]
        y = iris.target

        # Shuffle
        idx = np.arange(X.shape[0])
        np.random.seed(RANDOM_SEED)
        np.random.shuffle(idx)
        X = X[idx]
        y = y[idx]

        # Standardize
        mean = X.mean(axis=0)
        std = X.std(axis=0)
        X = (X - mean) / std

        # Train
        clf = clone(model)
        clf = model.fit(X, y)

        scores = clf.score(X, y)
        # Create a title for each column and the console by using str() and
        # slicing away useless parts of the string
        model_title = str(type(model)).split(".")[-1][:-2][:-len('Classifier')]

        model_details = model_title
        if hasattr(model, "estimators_"):
            model_details += " with {} estimators".format(len(model.estimators_))
        print(model_details + " with features", pair,
              "has a score of", scores)

        plt.subplot(3, 4, plot_idx)
        if plot_idx <= len(models):
            # Add a title at the top of each column
            plt.title(model_title)

        # Now plot the decision boundary using a fine mesh as input to a
        # filled contour plot
        x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
        y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
        xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
                             np.arange(y_min, y_max, plot_step))

        # Plot either a single DecisionTreeClassifier or alpha blend the
        # decision surfaces of the ensemble of classifiers
        if isinstance(model, DecisionTreeClassifier):
            Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
            Z = Z.reshape(xx.shape)
            cs = plt.contourf(xx, yy, Z, cmap=cmap)
        else:
            # Choose alpha blend level with respect to the number of estimators
            # that are in use (noting that AdaBoost can use fewer estimators
            # than its maximum if it achieves a good enough fit early on)
            estimator_alpha = 1.0 / len(model.estimators_)
            print(len(model.estimators_))
            for tree in model.estimators_:
                Z = tree.predict(np.c_[xx.ravel(), yy.ravel()])
                Z = Z.reshape(xx.shape)
                cs = plt.contourf(xx, yy, Z, alpha=estimator_alpha, cmap=cmap)

        # Build a coarser grid to plot a set of ensemble classifications
        # to show how these are different to what we see in the decision
        # surfaces. These points are regularly spaced and do not have a
        # black outline
        xx_coarser, yy_coarser = np.meshgrid(
            np.arange(x_min, x_max, plot_step_coarser),
            np.arange(y_min, y_max, plot_step_coarser))
        Z_points_coarser = model.predict(np.c_[xx_coarser.ravel(),
                                               yy_coarser.ravel()]
                                         ).reshape(xx_coarser.shape)
        cs_points = plt.scatter(xx_coarser, yy_coarser, s=15,
                                c=Z_points_coarser, cmap=cmap,
                                edgecolors="none")

        # Plot the training points
        plt.scatter(X[:, 0], X[:, 1], c=y,
                    cmap=ListedColormap(['r', 'y', 'b']),
                    edgecolor='k', s=20)
        plot_idx += 1

Output:

Automatically created module for IPython interactive environment

DecisionTree with features [0, 1] has a score of 0.9266666666666666

RandomForest with 30 estimators with features [0, 1] has a score of 0.9266666666666666

30

ExtraTrees with 30 estimators with features [0, 1] has a score of 0.9266666666666666

30

AdaBoost with 30 estimators with features [0, 1] has a score of 0.84

30

DecisionTree with features [0, 2] has a score of 0.9933333333333333

RandomForest with 30 estimators with features [0, 2] has a score of 0.9933333333333333

30

ExtraTrees with 30 estimators with features [0, 2] has a score of 0.9933333333333333

30

AdaBoost with 30 estimators with features [0, 2] has a score of 0.9933333333333333

30

DecisionTree with features [2, 3] has a score of 0.9933333333333333

RandomForest with 30 estimators with features [2, 3] has a score of 0.9933333333333333

30

ExtraTrees with 30 estimators with features [2, 3] has a score of 0.9933333333333333

30

AdaBoost with 30 estimators with features [2, 3] has a score of 0.9933333333333333

30

output_14_1.png

A comparison of several classifiers in scikit-learn on synthetic datasets.

The point of this example is to illustrate the nature of the decision boundaries of different classifiers.

Particularly in high-dimensional spaces, data can more easily be separated linearly, and the simplicity of classifiers such as naive Bayes and linear SVMs may lead to better generalization than is achieved by other classifiers.

print(__doc__)

# Code source: Gaël Varoquaux
#              Andreas Müller
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
# classifiers
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

h = .02  # step size in the mesh

names = ['Nearest Neighbors', 'Linear SVM', 'RBF SVM', 'Gaussian Process',
         'Decision Tree', 'Random Forest', 'Neural Net', 'AdaBoost',
         'Naive Bayes', 'QDA']

classifiers = [
    KNeighborsClassifier(3),
    SVC(kernel="linear", C=0.025),   # C is the penalty parameter
    SVC(gamma=2, C=1),               # kernel: rbf (default), gamma: kernel coefficient
    GaussianProcessClassifier(1.0 * RBF(1.0)),
    DecisionTreeClassifier(max_depth=5),
    RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),
    MLPClassifier(alpha=1),          # multilayer perceptron
    AdaBoostClassifier(),
    GaussianNB(),                    # Gaussian naive Bayes
    QuadraticDiscriminantAnalysis()
    ]

X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
                           random_state=1, n_clusters_per_class=1)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)
linearly_separable = (X, y)

datasets = [make_moons(noise=0.3, random_state=0),
            make_circles(noise=0.2, factor=0.5, random_state=1),
            linearly_separable
            ]

figure = plt.figure(figsize=(27, 9))
i = 1
# iterate over datasets
for ds_cnt, ds in enumerate(datasets):
    # preprocess dataset, split into training and test part
    X, y = ds
    X = StandardScaler().fit_transform(X)
    X_train, X_test, y_train, y_test = \
        train_test_split(X, y, test_size=.4, random_state=42)

    x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
    y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                         np.arange(y_min, y_max, h))

    # just plot the dataset first
    cm = plt.cm.RdBu
    cm_bright = ListedColormap(['#FF0000', '#0000FF'])
    ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
    if ds_cnt == 0:
        ax.set_title("Input data")
    # Plot the training points
    ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright,
               edgecolors='k')
    # and testing points
    ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6,
               edgecolors='k')
    ax.set_xlim(xx.min(), xx.max())
    ax.set_ylim(yy.min(), yy.max())
    ax.set_xticks(())
    ax.set_yticks(())
    i += 1

    # iterate over classifiers
    for name, clf in zip(names, classifiers):
        ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
        clf.fit(X_train, y_train)
        score = clf.score(X_test, y_test)

        # Plot the decision boundary. For that, we will assign a color to each
        # point in the mesh [x_min, x_max] x [y_min, y_max].
        if hasattr(clf, 'decision_function'):
            Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
        else:
            Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]

        # Put the result into a color plot
        Z = Z.reshape(xx.shape)
        ax.contourf(xx, yy, Z, cmap=cm, alpha=.8)

        # Plot also the training points
        ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright,
                   edgecolors='k')
        # and testing points
        ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright,
                   edgecolors='k', alpha=.6)
        ax.set_xlim(xx.min(), xx.max())
        ax.set_ylim(yy.min(), yy.max())
        ax.set_xticks(())
        ax.set_yticks(())
        if ds_cnt == 0:
            ax.set_title(name)
        ax.text(xx.max() - .3, yy.min() + .3, ('%.2f' % score).lstrip('0'),
                size=15, horizontalalignment='right')
        i += 1

plt.tight_layout()
plt.show()

Classifier comparison.png

This example fits an AdaBoost decision stump on a non-linearly separable classification dataset composed of two "Gaussian quantiles" clusters and plots the decision boundary and the decision scores.

print(__doc__)

# Author: Noel Dawe
#
# License: BSD 3 clause

import numpy as np
import matplotlib.pyplot as plt

from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_gaussian_quantiles

# Construct dataset
X1, y1 = make_gaussian_quantiles(cov=2.,
                                 n_samples=200, n_features=2,
                                 n_classes=2, random_state=1)
X2, y2 = make_gaussian_quantiles(mean=(3, 3), cov=1.5,
                                 n_samples=300, n_features=2,
                                 n_classes=2, random_state=1)
X = np.concatenate((X1, X2))
y = np.concatenate((y1, -y2 + 1))

# Create and fit an AdaBoosted decision tree (decision stumps, max_depth=1)
bdt = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                         algorithm='SAMME',
                         n_estimators=200)
bdt.fit(X, y)

plot_colors = 'br'
plot_step = .02
class_names = 'AB'

plt.figure(figsize=(10, 5))

# Plot the decision boundaries
plt.subplot(121)
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
                     np.arange(y_min, y_max, plot_step))
Z = bdt.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.Paired)
plt.axis("tight")

# Plot the training points
for i, n, c in zip(range(2), class_names, plot_colors):
    idx = np.where(y == i)
    plt.scatter(X[idx, 0], X[idx, 1],
                c=c, cmap=plt.cm.Paired,
                s=20, edgecolor='k',
                label=("Class %s" % n))
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.legend(loc='upper right')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Decision Boundary')

# Plot the two-class decision scores
twoclass_output = bdt.decision_function(X)
plot_range = (twoclass_output.min(), twoclass_output.max())
plt.subplot(122)
for i, n, c in zip(range(2), class_names, plot_colors):
    plt.hist(twoclass_output[y == i],
             bins=10,
             range=plot_range,
             facecolor=c,
             label=('Class %s' % n),
             alpha=.5,
             edgecolor='k')
x1, x2, y1, y2 = plt.axis()
plt.axis((x1, x2, y1, y2 * 1.2))
plt.legend(loc='upper right')
plt.ylabel('Samples')
plt.xlabel('Score')
plt.title('Decision Scores')

plt.tight_layout()
plt.subplots_adjust(wspace=0.35)
plt.show()

Output:

Automatically created module for IPython interactive environment

Two-class AdaBoost.png
