Random Forest

  • 1. Building a random forest model with the Boston dataset
  • 2. Splitting the dataset
  • 3. Building the formula relating predictors to the response
  • 4. Model training
  • 5. Finding a suitable ntree
  • 6. Inspecting and plotting variable importance
  • 7. Partial Dependence Plot (PDP)
  • 8. Prediction results on the training set

1. Building a random forest model with the Boston dataset

library(rio)
library(ggplot2)
library(magrittr)
library(randomForest)
library(tidyverse)
library(skimr)
library(DataExplorer)
library(caret)
library(varSelRF)
library(pdp)
library(iml)
data("boston")as.data.frame(boston)
skim(boston)#数据鸟瞰
plot_missing(boston)#数据缺失
#na.roughfix() #填补缺失
hist(boston$lstat,breaks = 50)

Data overview:
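The commented-out na.roughfix() above is randomForest's quick imputation helper: it fills numeric NAs with the column median and factor NAs with the most frequent level. A minimal sketch of how it would be applied if plot_missing() had revealed gaps:

# Quick median/mode imputation from the randomForest package
# (only needed if the data actually contain NAs)
boston_filled <- na.roughfix(boston)
anyNA(boston_filled)  # should be FALSE afterwards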

2. Splitting the dataset

######################################
# 1. Split the dataset
set.seed(123)
trains <- createDataPartition(y = boston$lstat, p = 0.70, list = FALSE)
traindata <- boston[trains,]
testdata <- boston[-trains,]
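A quick sanity check that createDataPartition() produced the intended 70/30 split:

# Row counts of the two partitions
nrow(traindata)                   # ~70% of the data
nrow(testdata)                    # remaining ~30%
nrow(traindata) / nrow(boston)    # proportion actually assigned to training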

3. Building the formula relating predictors to the response

# Build the formula: response ~ predictors
colnames(boston)  # columns 1:15 are predictors; lstat (column 16) is the response
form_reg <- as.formula(paste0("lstat ~ ", paste(colnames(traindata)[1:15], collapse = " + ")))
form_reg


The resulting formula:
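Since lstat is the response and every other column is a predictor, the same formula can be written more compactly; a sketch of two equivalent constructions:

# Equivalent formula: "." expands to all columns except the response
form_reg2 <- lstat ~ .
# Or build it by excluding the response column by name
form_reg3 <- reformulate(setdiff(colnames(traindata), "lstat"), response = "lstat")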

4. Model training

#### 2.1 Selecting the optimal mtry (mtry = 11 gives the best % Var explained here)
# By default mtry is the square root of the number of variables (classification)
# or one third of them (regression)
set.seed(123)
n <- ncol(boston) - 5   # candidate mtry values: 1 to 11
errRate <- c(1)         # initialize the error-rate vector
for (i in 1:n) {
  rf_train <- randomForest(form_reg, data = traindata,
                           ntree = 1000,       # number of trees
                           p = 0.8,            # note: not a randomForest argument; absorbed by '...' and ignored
                           mtry = i,           # number of variables tried at each split
                           importance = TRUE)  # compute variable importance
  errRate[i] <- mean(rf_train$mse)  # mean OOB MSE across the forest
  print(rf_train)
}
m <- which.min(errRate)
print(m)

Results:
Call:
randomForest(formula = form_reg, data = traindata, ntree = 1000, p = 0.8, mtry = i, importance = T)
Type of random forest: regression
Number of trees: 1000
No. of variables tried at each split: 1

      Mean of squared residuals: 13.35016
                % Var explained: 72.5

Call:
randomForest(formula = form_reg, data = traindata, ntree = 1000, p = 0.8, mtry = i, importance = T)
Type of random forest: regression
Number of trees: 1000
No. of variables tried at each split: 2

      Mean of squared residuals: 11.0119
                % Var explained: 77.31

Call:
randomForest(formula = form_reg, data = traindata, ntree = 1000, p = 0.8, mtry = i, importance = T)
Type of random forest: regression
Number of trees: 1000
No. of variables tried at each split: 3

      Mean of squared residuals: 10.51724
                % Var explained: 78.33

Call:
randomForest(formula = form_reg, data = traindata, ntree = 1000, p = 0.8, mtry = i, importance = T)
Type of random forest: regression
Number of trees: 1000
No. of variables tried at each split: 4

      Mean of squared residuals: 10.41254
                % Var explained: 78.55

Call:
randomForest(formula = form_reg, data = traindata, ntree = 1000, p = 0.8, mtry = i, importance = T)
Type of random forest: regression
Number of trees: 1000
No. of variables tried at each split: 5

      Mean of squared residuals: 10.335
                % Var explained: 78.71

Call:
randomForest(formula = form_reg, data = traindata, ntree = 1000, p = 0.8, mtry = i, importance = T)
Type of random forest: regression
Number of trees: 1000
No. of variables tried at each split: 6

      Mean of squared residuals: 10.22917
                % Var explained: 78.93

Call:
randomForest(formula = form_reg, data = traindata, ntree = 1000, p = 0.8, mtry = i, importance = T)
Type of random forest: regression
Number of trees: 1000
No. of variables tried at each split: 7

      Mean of squared residuals: 10.25744
                % Var explained: 78.87

Call:
randomForest(formula = form_reg, data = traindata, ntree = 1000, p = 0.8, mtry = i, importance = T)
Type of random forest: regression
Number of trees: 1000
No. of variables tried at each split: 8

      Mean of squared residuals: 10.11666
                % Var explained: 79.16

Call:
randomForest(formula = form_reg, data = traindata, ntree = 1000, p = 0.8, mtry = i, importance = T)
Type of random forest: regression
Number of trees: 1000
No. of variables tried at each split: 9

      Mean of squared residuals: 10.09725
                % Var explained: 79.2

Call:
randomForest(formula = form_reg, data = traindata, ntree = 1000, p = 0.8, mtry = i, importance = T)
Type of random forest: regression
Number of trees: 1000
No. of variables tried at each split: 10

      Mean of squared residuals: 10.09231
                % Var explained: 79.21

Call:
randomForest(formula = form_reg, data = traindata, ntree = 1000, p = 0.8, mtry = i, importance = T)
Type of random forest: regression
Number of trees: 1000
No. of variables tried at each split: 11

      Mean of squared residuals: 10.12222
                % Var explained: 79.15


The results show that mtry = 11 gives the smallest mean OOB error, i.e. the highest accuracy.
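As an alternative to the manual loop, the randomForest package provides tuneRF(), which grows forests at successively scaled mtry values and stops when the OOB error no longer improves; a minimal sketch on the same training data:

set.seed(123)
# tuneRF() takes predictors and response separately, not a formula
x_train <- traindata[, setdiff(colnames(traindata), "lstat")]
tuned <- tuneRF(x = x_train, y = traindata$lstat,
                ntreeTry = 1000,    # trees per candidate forest
                stepFactor = 1.5,   # multiply/divide mtry by this each step
                improve = 0.01,     # minimum relative OOB improvement to continue
                trace = TRUE, plot = TRUE)
tuned  # matrix of (mtry, OOB error) pairs tried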

5. Finding a suitable ntree

#### Find a suitable ntree
set.seed(123)
rf_train <- randomForest(form_reg, data = traindata, mtry = 11, ntree = 500,
                         importance = TRUE, proximity = TRUE)
plot(rf_train, main = "ERROR & TREES")  # plot model error against the number of trees

Output:
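Besides reading the elbow off the plot, the per-tree OOB error is stored in rf_train$mse, so the stabilization point can also be located numerically:

# OOB MSE after each additional tree; pick the smallest
which.min(rf_train$mse)   # number of trees with the lowest OOB MSE
min(rf_train$mse)         # the corresponding error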

6. Inspecting and plotting variable importance

#### Variable importance
importance <- importance(rf_train)

##### Plotting method 1
barplot(rf_train$importance[, 1], main = "Bar chart of input variable importance")
box()

Importance plot:

##### Plotting method 2
varImpPlot(rf_train, main = "Variable Importance plot")
varImpPlot(rf_train, main = "Variable Importance plot", type = 1)  # type 1: %IncMSE
varImpPlot(rf_train, sort = TRUE, n.var = nrow(rf_train$importance),
           main = "Variable Importance plot", type = 2)            # type 2: IncNodePurity
hist(treesize(rf_train))  # number of nodes in each tree of the forest
max(treesize(rf_train))
min(treesize(rf_train))

"%IncMSE" (increase in mean squared error) works by randomly permuting each predictor in turn: the more important a predictor is, the more the model's prediction error grows once its values are shuffled. "IncNodePurity" (increase in node purity) is measured via the residual sum of squares and reflects how much each variable reduces the heterogeneity of observations at the nodes it splits. Both are valid measures of predictor importance, and under both, larger values mean greater importance, but the rankings they produce can differ somewhat.
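Because the two measures can rank variables differently, it is worth putting both orderings side by side; a short sketch using the importance matrix computed above:

imp <- importance(rf_train)  # columns: %IncMSE, IncNodePurity
# Rank variables under each measure and compare the two orderings
data.frame(
  variable     = rownames(imp),
  rank_IncMSE  = rank(-imp[, "%IncMSE"]),
  rank_NodePur = rank(-imp[, "IncNodePurity"])
)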

7. Partial Dependence Plot (PDP)

A partial dependence plot shows whether the relationship between the target and a feature is linear, monotonic, or more complex.
Drawback: a partial dependence function can realistically display at most two features at a time. This is not a flaw of PDP itself but of two-dimensional media (paper or screens), and of our inability to picture more than three dimensions.

partialPlot(x = rf_train, pred.data = traindata, x.var = cmedv)

PDP plot:

rf_train %>%
  partial(pred.var = c("cmedv", "age"), chull = TRUE, progress = TRUE) %>%
  autoplot(contour = TRUE, legend.title = "SOS", option = "B", direction = -1) +
  theme_bw() +
  theme(text = element_text(size = 12, family = "serif"))

Interaction plot:

# Scatterplot of the response against a predictor
plot(lstat ~ cmedv, data = traindata)
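The iml package loaded at the top is not used above, but it offers a model-agnostic route to the same partial dependence curve; a minimal sketch, assuming the rf_train and traindata objects from earlier:

# Wrap the fitted forest in iml's model-agnostic Predictor container
predictor <- Predictor$new(rf_train,
                           data = traindata[, setdiff(colnames(traindata), "lstat")],
                           y = traindata$lstat)
# Partial dependence of the prediction on cmedv
effect <- FeatureEffect$new(predictor, feature = "cmedv", method = "pdp")
plot(effect)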

8. Prediction results on the training set

# Visualize predictions on the training set
trainpred <- predict(rf_train, newdata = traindata)  # training-set predictions
plot(x = traindata$lstat, y = trainpred,
     xlab = "Actual", ylab = "Predicted",
     main = "Random forest: actual vs. predicted")
trainlinmod <- lm(trainpred ~ traindata$lstat)  # fit a linear model to the predictions
abline(trainlinmod, col = "blue", lwd = 2.5, lty = "solid")
abline(a = 0, b = 1, col = "red", lwd = 2.5, lty = "dashed")
legend("topleft", legend = c("Model", "Base"), col = c("blue", "red"),
       lwd = 2.5, lty = c("solid", "dashed"))

# Predictions on the test set
testpred <- predict(rf_train, newdata = testdata)
# Test-set error metrics
defaultSummary(data.frame(obs = testdata$lstat, pred = testpred))
# Visualize predictions on the test set
plot(x = testdata$lstat, y = testpred,
     xlab = "Actual", ylab = "Predicted",
     main = "Random forest: actual vs. predicted")
testlinmod <- lm(testpred ~ testdata$lstat)
abline(testlinmod, col = "blue", lwd = 2.5, lty = "solid")
abline(a = 0, b = 1, col = "red", lwd = 2.5, lty = "dashed")
legend("topleft", legend = c("Model", "Base"), col = c("blue", "red"),
       lwd = 2.5, lty = c("solid", "dashed"))
