ML — RF & XGBoost: Binary classification on the Titanic dataset (predicting whether a passenger survived) using the Random Forest and XGBoost algorithms, respectively

Contents

Output

Design approach

Core code


Output

Design approach

Core code

from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

# Random forest with default parameters
rfc = RandomForestClassifier()
rfc.fit(X_train, y_train)
rfc.score(X_test, y_test)    # accuracy on the held-out test set

# XGBoost with default parameters
xgbc = XGBClassifier()
xgbc.fit(X_train, y_train)
xgbc.score(X_test, y_test)   # accuracy on the held-out test set
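The snippet above assumes that X_train, X_test, y_train and y_test already exist. Below is a minimal sketch of one way to prepare them from the Kaggle Titanic training file; the file name, the Pclass/Age/Sex feature subset and the DictVectorizer encoding are illustrative assumptions, not details taken from the original post.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction import DictVectorizer

titanic = pd.read_csv('titanic_train.csv')        # assumed file name
X = titanic[['Pclass', 'Age', 'Sex']].copy()      # assumed feature subset
y = titanic['Survived']
X['Age'].fillna(X['Age'].mean(), inplace=True)    # fill missing ages with the mean

# One-hot encode the categorical columns via DictVectorizer
vec = DictVectorizer(sparse=False)
X = vec.fit_transform(X.to_dict(orient='records'))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=33)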
class RandomForestClassifier(ForestClassifier):
    """A random forest classifier.

    A random forest is a meta estimator that fits a number of decision tree
    classifiers on various sub-samples of the dataset and use averaging to
    improve the predictive accuracy and control over-fitting.
    The sub-sample size is always the same as the original input sample size
    but the samples are drawn with replacement if `bootstrap=True` (default).

    Read more in the :ref:`User Guide <forest>`.

    Parameters
    ----------
    n_estimators : integer, optional (default=10)
        The number of trees in the forest.

    criterion : string, optional (default="gini")
        The function to measure the quality of a split. Supported criteria are
        "gini" for the Gini impurity and "entropy" for the information gain.
        Note: this parameter is tree-specific.

    max_features : int, float, string or None, optional (default="auto")
        The number of features to consider when looking for the best split:
        - If int, then consider `max_features` features at each split.
        - If float, then `max_features` is a percentage and
          `int(max_features * n_features)` features are considered at each split.
        - If "auto", then `max_features=sqrt(n_features)`.
        - If "sqrt", then `max_features=sqrt(n_features)` (same as "auto").
        - If "log2", then `max_features=log2(n_features)`.
        - If None, then `max_features=n_features`.
        Note: the search for a split does not stop until at least one valid
        partition of the node samples is found, even if it requires to
        effectively inspect more than ``max_features`` features.

    max_depth : integer or None, optional (default=None)
        The maximum depth of the tree. If None, then nodes are expanded until
        all leaves are pure or until all leaves contain less than
        min_samples_split samples.

    min_samples_split : int, float, optional (default=2)
        The minimum number of samples required to split an internal node:
        - If int, then consider `min_samples_split` as the minimum number.
        - If float, then `min_samples_split` is a percentage and
          `ceil(min_samples_split * n_samples)` are the minimum
          number of samples for each split.
        .. versionchanged:: 0.18
           Added float values for percentages.

    min_samples_leaf : int, float, optional (default=1)
        The minimum number of samples required to be at a leaf node:
        - If int, then consider `min_samples_leaf` as the minimum number.
        - If float, then `min_samples_leaf` is a percentage and
          `ceil(min_samples_leaf * n_samples)` are the minimum
          number of samples for each node.
        .. versionchanged:: 0.18
           Added float values for percentages.

    min_weight_fraction_leaf : float, optional (default=0.)
        The minimum weighted fraction of the sum total of weights (of all
        the input samples) required to be at a leaf node. Samples have
        equal weight when sample_weight is not provided.

    max_leaf_nodes : int or None, optional (default=None)
        Grow trees with ``max_leaf_nodes`` in best-first fashion.
        Best nodes are defined as relative reduction in impurity.
        If None then unlimited number of leaf nodes.

    min_impurity_split : float,
        Threshold for early stopping in tree growth. A node will split
        if its impurity is above the threshold, otherwise it is a leaf.
        .. deprecated:: 0.19
           ``min_impurity_split`` has been deprecated in favor of
           ``min_impurity_decrease`` in 0.19 and will be removed in 0.21.
           Use ``min_impurity_decrease`` instead.

    min_impurity_decrease : float, optional (default=0.)
        A node will be split if this split induces a decrease of the impurity
        greater than or equal to this value.
        The weighted impurity decrease equation is the following::
            N_t / N * (impurity - N_t_R / N_t * right_impurity
                                - N_t_L / N_t * left_impurity)
        where ``N`` is the total number of samples, ``N_t`` is the number of
        samples at the current node, ``N_t_L`` is the number of samples in the
        left child, and ``N_t_R`` is the number of samples in the right child.
        ``N``, ``N_t``, ``N_t_R`` and ``N_t_L`` all refer to the weighted sum,
        if ``sample_weight`` is passed.
        .. versionadded:: 0.19

    bootstrap : boolean, optional (default=True)
        Whether bootstrap samples are used when building trees.

    oob_score : bool (default=False)
        Whether to use out-of-bag samples to estimate the generalization accuracy.

    n_jobs : integer, optional (default=1)
        The number of jobs to run in parallel for both `fit` and `predict`.
        If -1, then the number of jobs is set to the number of cores.

    random_state : int, RandomState instance or None, optional (default=None)
        If int, random_state is the seed used by the random number generator;
        If RandomState instance, random_state is the random number generator;
        If None, the random number generator is the RandomState instance used
        by `np.random`.

    verbose : int, optional (default=0)
        Controls the verbosity of the tree building process.

    warm_start : bool, optional (default=False)
        When set to ``True``, reuse the solution of the previous call to fit
        and add more estimators to the ensemble, otherwise, just fit a whole
        new forest.

    class_weight : dict, list of dicts, "balanced", "balanced_subsample" or None, optional (default=None)
        Weights associated with classes in the form ``{class_label: weight}``.
        If not given, all classes are supposed to have weight one. For
        multi-output problems, a list of dicts can be provided in the same
        order as the columns of y.
        Note that for multioutput (including multilabel) weights should be
        defined for each class of every column in its own dict. For example,
        for four-class multilabel classification weights should be
        [{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1, 1: 1}, {0: 1, 1: 1}] instead of
        [{1:1}, {2:5}, {3:1}, {4:1}].
        The "balanced" mode uses the values of y to automatically adjust
        weights inversely proportional to class frequencies in the input data
        as ``n_samples / (n_classes * np.bincount(y))``
        The "balanced_subsample" mode is the same as "balanced" except that
        weights are computed based on the bootstrap sample for every tree grown.
        For multi-output, the weights of each column of y will be multiplied.
        Note that these weights will be multiplied with sample_weight (passed
        through the fit method) if sample_weight is specified.

    Attributes
    ----------
    estimators_ : list of DecisionTreeClassifier
        The collection of fitted sub-estimators.

    classes_ : array of shape = [n_classes] or a list of such arrays
        The classes labels (single output problem), or a list of arrays of
        class labels (multi-output problem).

    n_classes_ : int or list
        The number of classes (single output problem), or a list containing the
        number of classes for each output (multi-output problem).

    n_features_ : int
        The number of features when ``fit`` is performed.

    n_outputs_ : int
        The number of outputs when ``fit`` is performed.

    feature_importances_ : array of shape = [n_features]
        The feature importances (the higher, the more important the feature).

    oob_score_ : float
        Score of the training dataset obtained using an out-of-bag estimate.

    oob_decision_function_ : array of shape = [n_samples, n_classes]
        Decision function computed with out-of-bag estimate on the training
        set. If n_estimators is small it might be possible that a data point
        was never left out during the bootstrap. In this case,
        `oob_decision_function_` might contain NaN.

    Examples
    --------
    >>> from sklearn.ensemble import RandomForestClassifier
    >>> from sklearn.datasets import make_classification
    >>>
    >>> X, y = make_classification(n_samples=1000, n_features=4,
    ...                            n_informative=2, n_redundant=0,
    ...                            random_state=0, shuffle=False)
    >>> clf = RandomForestClassifier(max_depth=2, random_state=0)
    >>> clf.fit(X, y)
    RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
                max_depth=2, max_features='auto', max_leaf_nodes=None,
                min_impurity_decrease=0.0, min_impurity_split=None,
                min_samples_leaf=1, min_samples_split=2,
                min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,
                oob_score=False, random_state=0, verbose=0, warm_start=False)
    >>> print(clf.feature_importances_)
    [ 0.17287856  0.80608704  0.01884792  0.00218648]
    >>> print(clf.predict([[0, 0, 0, 0]]))
    [1]

    Notes
    -----
    The default values for the parameters controlling the size of the trees
    (e.g. ``max_depth``, ``min_samples_leaf``, etc.) lead to fully grown and
    unpruned trees which can potentially be very large on some data sets. To
    reduce memory consumption, the complexity and size of the trees should be
    controlled by setting those parameter values.
    The features are always randomly permuted at each split. Therefore,
    the best found split may vary, even with the same training data,
    ``max_features=n_features`` and ``bootstrap=False``, if the improvement
    of the criterion is identical for several splits enumerated during the
    search of the best split. To obtain a deterministic behaviour during
    fitting, ``random_state`` has to be fixed.

    References
    ----------
    .. [1] L. Breiman, "Random Forests", Machine Learning, 45(1), 5-32, 2001.

    See also
    --------
    DecisionTreeClassifier, ExtraTreesClassifier
    """
    def __init__(self,
                 n_estimators=10,
                 criterion="gini",
                 max_depth=None,
                 min_samples_split=2,
                 min_samples_leaf=1,
                 min_weight_fraction_leaf=0.,
                 max_features="auto",
                 max_leaf_nodes=None,
                 min_impurity_decrease=0.,
                 min_impurity_split=None,
                 bootstrap=True,
                 oob_score=False,
                 n_jobs=1,
                 random_state=None,
                 verbose=0,
                 warm_start=False,
                 class_weight=None):
        super(RandomForestClassifier, self).__init__(
            base_estimator=DecisionTreeClassifier(),
            n_estimators=n_estimators,
            estimator_params=("criterion", "max_depth", "min_samples_split",
                              "min_samples_leaf", "min_weight_fraction_leaf",
                              "max_features", "max_leaf_nodes",
                              "min_impurity_decrease", "min_impurity_split",
                              "random_state"),
            bootstrap=bootstrap,
            oob_score=oob_score,
            n_jobs=n_jobs,
            random_state=random_state,
            verbose=verbose,
            warm_start=warm_start,
            class_weight=class_weight)

        self.criterion = criterion
        self.max_depth = max_depth
        self.min_samples_split = min_samples_split
        self.min_samples_leaf = min_samples_leaf
        self.min_weight_fraction_leaf = min_weight_fraction_leaf
        self.max_features = max_features
        self.max_leaf_nodes = max_leaf_nodes
        self.min_impurity_decrease = min_impurity_decrease
        self.min_impurity_split = min_impurity_split
class XGBClassifier(XGBModel, XGBClassifierBase):
    # pylint: disable=missing-docstring,too-many-arguments,invalid-name
    __doc__ = "Implementation of the scikit-learn API for XGBoost classification.\n\n" \
        + '\n'.join(XGBModel.__doc__.split('\n')[2:])

    def __init__(self, max_depth=3, learning_rate=0.1, n_estimators=100,
                 silent=True, objective="binary:logistic", booster='gbtree',
                 n_jobs=1, nthread=None, gamma=0, min_child_weight=1,
                 max_delta_step=0, subsample=1, colsample_bytree=1,
                 colsample_bylevel=1, reg_alpha=0, reg_lambda=1,
                 scale_pos_weight=1, base_score=0.5, random_state=0,
                 seed=None, missing=None, **kwargs):
        super(XGBClassifier, self).__init__(max_depth, learning_rate,
                                            n_estimators, silent, objective, booster,
                                            n_jobs, nthread, gamma, min_child_weight,
                                            max_delta_step, subsample,
                                            colsample_bytree, colsample_bylevel,
                                            reg_alpha, reg_lambda,
                                            scale_pos_weight, base_score,
                                            random_state, seed, missing, **kwargs)

    def fit(self, X, y, sample_weight=None, eval_set=None, eval_metric=None,
            early_stopping_rounds=None, verbose=True, xgb_model=None,
            sample_weight_eval_set=None, callbacks=None):
        # pylint: disable = attribute-defined-outside-init,arguments-differ
        """
        Fit gradient boosting classifier

        Parameters
        ----------
        X : array_like
            Feature matrix
        y : array_like
            Labels
        sample_weight : array_like
            Weight for each instance
        eval_set : list, optional
            A list of (X, y) pairs to use as a validation set for early-stopping
        sample_weight_eval_set : list, optional
            A list of the form [L_1, L_2, ..., L_n], where each L_i is a list of
            instance weights on the i-th validation set.
        eval_metric : str, callable, optional
            If a str, should be a built-in evaluation metric to use. See
            doc/parameter.rst. If callable, a custom evaluation metric. The call
            signature is func(y_predicted, y_true) where y_true will be a
            DMatrix object such that you may need to call the get_label
            method. It must return a str, value pair where the str is a name
            for the evaluation and value is the value of the evaluation
            function. This objective is always minimized.
        early_stopping_rounds : int, optional
            Activates early stopping. Validation error needs to decrease at
            least every <early_stopping_rounds> round(s) to continue training.
            Requires at least one item in evals. If there's more than one,
            will use the last. If early stopping occurs, the model will have
            three additional fields: bst.best_score, bst.best_iteration and
            bst.best_ntree_limit (bst.best_ntree_limit is the ntree_limit parameter
            default value in predict method if not any other value is specified).
            (Use bst.best_ntree_limit to get the correct value if num_parallel_tree
            and/or num_class appears in the parameters)
        verbose : bool
            If `verbose` and an evaluation set is used, writes the evaluation
            metric measured on the validation set to stderr.
        xgb_model : str
            file name of stored xgb model or 'Booster' instance Xgb model to be
            loaded before training (allows training continuation).
        callbacks : list of callback functions
            List of callback functions that are applied at end of each iteration.
            It is possible to use predefined callbacks by using :ref:`callback_api`.
            Example:
            .. code-block:: python
                [xgb.callback.reset_learning_rate(custom_rates)]
        """
        evals_result = {}
        self.classes_ = np.unique(y)
        self.n_classes_ = len(self.classes_)

        xgb_options = self.get_xgb_params()

        if callable(self.objective):
            obj = _objective_decorator(self.objective)
            # Use default value. Is it really not used ?
            xgb_options["objective"] = "binary:logistic"
        else:
            obj = None

        if self.n_classes_ > 2:
            # Switch to using a multiclass objective in the underlying XGB instance
            xgb_options["objective"] = "multi:softprob"
            xgb_options['num_class'] = self.n_classes_

        feval = eval_metric if callable(eval_metric) else None
        if eval_metric is not None:
            if callable(eval_metric):
                eval_metric = None
            else:
                xgb_options.update({"eval_metric": eval_metric})

        self._le = XGBLabelEncoder().fit(y)
        training_labels = self._le.transform(y)

        if eval_set is not None:
            if sample_weight_eval_set is None:
                sample_weight_eval_set = [None] * len(eval_set)
            evals = list(
                DMatrix(eval_set[i][0], label=self._le.transform(eval_set[i][1]),
                        missing=self.missing, weight=sample_weight_eval_set[i],
                        nthread=self.n_jobs)
                for i in range(len(eval_set)))
            nevals = len(evals)
            eval_names = ["validation_{}".format(i) for i in range(nevals)]
            evals = list(zip(evals, eval_names))
        else:
            evals = ()

        self._features_count = X.shape[1]

        if sample_weight is not None:
            train_dmatrix = DMatrix(X, label=training_labels, weight=sample_weight,
                                    missing=self.missing, nthread=self.n_jobs)
        else:
            train_dmatrix = DMatrix(X, label=training_labels,
                                    missing=self.missing, nthread=self.n_jobs)

        self._Booster = train(xgb_options, train_dmatrix, self.n_estimators,
                              evals=evals,
                              early_stopping_rounds=early_stopping_rounds,
                              evals_result=evals_result, obj=obj, feval=feval,
                              verbose_eval=verbose, xgb_model=xgb_model,
                              callbacks=callbacks)

        self.objective = xgb_options["objective"]
        if evals_result:
            for val in evals_result.items():
                evals_result_key = list(val[1].keys())[0]
                evals_result[val[0]][evals_result_key] = val[1][evals_result_key]
            self.evals_result_ = evals_result

        if early_stopping_rounds is not None:
            self.best_score = self._Booster.best_score
            self.best_iteration = self._Booster.best_iteration
            self.best_ntree_limit = self._Booster.best_ntree_limit

        return self

    def predict(self, data, output_margin=False, ntree_limit=None, validate_features=True):
        """
        Predict with `data`.

        .. note:: This function is not thread safe.
          For each booster object, predict can only be called from one thread.
          If you want to run prediction using multiple thread, call ``xgb.copy()``
          to make copies of model object and then call ``predict()``.

        .. note:: Using ``predict()`` with DART booster
          If the booster object is DART type, ``predict()`` will perform dropouts,
          i.e. only some of the trees will be evaluated. This will produce incorrect
          results if ``data`` is not the training data. To obtain correct results on
          test sets, set ``ntree_limit`` to a nonzero value, e.g.
          .. code-block:: python
              preds = bst.predict(dtest, ntree_limit=num_round)

        Parameters
        ----------
        data : DMatrix
            The dmatrix storing the input.
        output_margin : bool
            Whether to output the raw untransformed margin value.
        ntree_limit : int
            Limit number of trees in the prediction; defaults to best_ntree_limit
            if defined (i.e. it has been trained with early stopping), otherwise 0
            (use all trees).
        validate_features : bool
            When this is True, validate that the Booster's and data's feature_names
            are identical. Otherwise, it is assumed that the feature_names are the same.

        Returns
        -------
        prediction : numpy array
        """
        test_dmatrix = DMatrix(data, missing=self.missing, nthread=self.n_jobs)
        if ntree_limit is None:
            ntree_limit = getattr(self, "best_ntree_limit", 0)
        class_probs = self.get_booster().predict(test_dmatrix,
                                                 output_margin=output_margin,
                                                 ntree_limit=ntree_limit,
                                                 validate_features=validate_features)
        if output_margin:
            # If output_margin is active, simply return the scores
            return class_probs

        if len(class_probs.shape) > 1:
            column_indexes = np.argmax(class_probs, axis=1)
        else:
            column_indexes = np.repeat(0, class_probs.shape[0])
            column_indexes[class_probs > 0.5] = 1
        return self._le.inverse_transform(column_indexes)

    def predict_proba(self, data, ntree_limit=None, validate_features=True):
        """
        Predict the probability of each `data` example being of a given class.

        .. note:: This function is not thread safe
            For each booster object, predict can only be called from one thread.
            If you want to run prediction using multiple thread, call ``xgb.copy()``
            to make copies of model object and then call predict

        Parameters
        ----------
        data : DMatrix
            The dmatrix storing the input.
        ntree_limit : int
            Limit number of trees in the prediction; defaults to best_ntree_limit
            if defined (i.e. it has been trained with early stopping), otherwise 0
            (use all trees).
        validate_features : bool
            When this is True, validate that the Booster's and data's feature_names
            are identical. Otherwise, it is assumed that the feature_names are the same.

        Returns
        -------
        prediction : numpy array
            a numpy array with the probability of each data example being of a given class.
        """
        test_dmatrix = DMatrix(data, missing=self.missing, nthread=self.n_jobs)
        if ntree_limit is None:
            ntree_limit = getattr(self, "best_ntree_limit", 0)
        class_probs = self.get_booster().predict(test_dmatrix,
                                                 ntree_limit=ntree_limit,
                                                 validate_features=validate_features)
        if self.objective == "multi:softprob":
            return class_probs
        else:
            classone_probs = class_probs
            classzero_probs = 1.0 - classone_probs
            return np.vstack((classzero_probs, classone_probs)).transpose()

    def evals_result(self):
        """
        Return the evaluation results.

        If **eval_set** is passed to the `fit` function, you can call
        ``evals_result()`` to get evaluation results for all passed **eval_sets**.
        When **eval_metric** is also passed to the `fit` function, the
        **evals_result** will contain the **eval_metrics** passed to the `fit` function.

        Returns
        -------
        evals_result : dictionary

        Example
        -------
        .. code-block:: python
            param_dist = {'objective':'binary:logistic', 'n_estimators':2}
            clf = xgb.XGBClassifier(**param_dist)
            clf.fit(X_train, y_train,
                    eval_set=[(X_train, y_train), (X_test, y_test)],
                    eval_metric='logloss',
                    verbose=True)
            evals_result = clf.evals_result()

        The variable **evals_result** will contain

        .. code-block:: python
            {'validation_0': {'logloss': ['0.604835', '0.531479']},
             'validation_1': {'logloss': ['0.41965', '0.17686']}}
        """
        if self.evals_result_:
            evals_result = self.evals_result_
        else:
            raise XGBoostError('No results.')
        return evals_result
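The fit/evals_result interface shown above supports early stopping against a validation set. A hedged sketch, reusing the earlier X_train/X_test split; the metric, learning rate and round counts are illustrative only:

xgbc = XGBClassifier(n_estimators=200, learning_rate=0.1, max_depth=3)
xgbc.fit(X_train, y_train,
         eval_set=[(X_test, y_test)],     # validation set watched during boosting
         eval_metric='logloss',
         early_stopping_rounds=10,        # stop when logloss stops improving for 10 rounds
         verbose=False)

print(xgbc.best_iteration, xgbc.best_ntree_limit)            # set because early stopping ran
print(xgbc.evals_result()['validation_0']['logloss'][-3:])   # last few validation scores
print(xgbc.predict_proba(X_test)[:3])                        # class probabilities for 3 passengers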

