Taking the Titanic passenger survival prediction task as an example, let's get further acquainted with the Ray Tune hyperparameter tuning tool.

The goal of the Titanic dataset is to predict, from passenger information, whether each passenger survived after the Titanic struck an iceberg and sank.

The base code for this example draws on the following two articles:

  • 1-1: An Example Workflow for Modeling Structured Data (from a good PyTorch tutorial)

  • How to use Tune with PyTorch

You may also want to review the previous post: Hyperparameter Tuning with PyTorch + Ray Tune.

The original code from the tutorial is as follows:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch.utils.data import Dataset,DataLoader,TensorDataset
from sklearn.metrics import accuracy_score
import datetime

dftrain_raw = pd.read_csv('train.csv')
dftest_raw = pd.read_csv('test.csv')
dftrain_raw.head(10)

def preprocessing(dfdata):
    dfresult = pd.DataFrame()

    # Pclass
    dfPclass = pd.get_dummies(dfdata['Pclass'])
    dfPclass.columns = ['Pclass_' + str(x) for x in dfPclass.columns]
    dfresult = pd.concat([dfresult, dfPclass], axis=1)

    # Sex
    dfSex = pd.get_dummies(dfdata['Sex'])
    dfresult = pd.concat([dfresult, dfSex], axis=1)

    # Age
    dfresult['Age'] = dfdata['Age'].fillna(0)
    dfresult['Age_null'] = pd.isna(dfdata['Age']).astype('int32')

    # SibSp, Parch, Fare
    dfresult['SibSp'] = dfdata['SibSp']
    dfresult['Parch'] = dfdata['Parch']
    dfresult['Fare'] = dfdata['Fare']

    # Cabin
    dfresult['Cabin_null'] = pd.isna(dfdata['Cabin']).astype('int32')

    # Embarked
    dfEmbarked = pd.get_dummies(dfdata['Embarked'], dummy_na=True)
    dfEmbarked.columns = ['Embarked_' + str(x) for x in dfEmbarked.columns]
    dfresult = pd.concat([dfresult, dfEmbarked], axis=1)

    return dfresult

x_train = preprocessing(dftrain_raw).values
y_train = dftrain_raw[['Survived']].values
x_test = preprocessing(dftest_raw).values
y_test = dftest_raw[['Survived']].values

print("x_train.shape =", x_train.shape)
print("x_test.shape =", x_test.shape)
print("y_train.shape =", y_train.shape)
print("y_test.shape =", y_test.shape)

dl_train = DataLoader(TensorDataset(torch.tensor(x_train).float(), torch.tensor(y_train).float()),
                      shuffle=True, batch_size=8)
dl_valid = DataLoader(TensorDataset(torch.tensor(x_test).float(), torch.tensor(y_test).float()),
                      shuffle=False, batch_size=8)

def create_net():
    net = nn.Sequential()
    net.add_module("linear1", nn.Linear(15, 20))
    net.add_module("relu1", nn.ReLU())
    net.add_module("linear2", nn.Linear(20, 15))
    net.add_module("relu2", nn.ReLU())
    net.add_module("linear3", nn.Linear(15, 1))
    net.add_module("sigmoid", nn.Sigmoid())
    return net

net = create_net()
print(net)

loss_func = nn.BCELoss()
optimizer = torch.optim.Adam(params=net.parameters(), lr=0.01)
metric_func = lambda y_pred, y_true: accuracy_score(y_true.data.numpy(), y_pred.data.numpy() > 0.5)
metric_name = "accuracy"

epochs = 10
log_step_freq = 30

dfhistory = pd.DataFrame(columns=["epoch", "loss", metric_name, "val_loss", "val_" + metric_name])
print("Start Training...")
nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
print("==========" * 8 + "%s" % nowtime)

for epoch in range(1, epochs + 1):

    # 1. Training loop -------------------------------------------------
    net.train()
    loss_sum = 0.0
    metric_sum = 0.0
    step = 1

    for step, (features, labels) in enumerate(dl_train, 1):
        # Zero the gradients
        optimizer.zero_grad()

        # Forward pass: compute the loss
        predictions = net(features)
        loss = loss_func(predictions, labels)
        metric = metric_func(predictions, labels)

        # Backward pass: compute the gradients
        loss.backward()
        optimizer.step()

        # Print batch-level logs
        loss_sum += loss.item()
        metric_sum += metric.item()
        if step % log_step_freq == 0:
            print(("[step = %d] loss: %.3f, " + metric_name + ": %.3f") %
                  (step, loss_sum / step, metric_sum / step))

    # 2. Validation loop -------------------------------------------------
    net.eval()
    val_loss_sum = 0.0
    val_metric_sum = 0.0
    val_step = 1

    for val_step, (features, labels) in enumerate(dl_valid, 1):
        # Disable gradient computation
        with torch.no_grad():
            predictions = net(features)
            val_loss = loss_func(predictions, labels)
            val_metric = metric_func(predictions, labels)
        val_loss_sum += val_loss.item()
        val_metric_sum += val_metric.item()

    # 3. Record the logs -------------------------------------------------
    info = (epoch, loss_sum / step, metric_sum / step,
            val_loss_sum / val_step, val_metric_sum / val_step)
    dfhistory.loc[epoch - 1] = info

    # Print epoch-level logs
    print(("\nEPOCH = %d, loss = %.3f," + metric_name +
           " = %.3f, val_loss = %.3f, " + "val_" + metric_name + " = %.3f") % info)
    nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    print("\n" + "==========" * 8 + "%s" % nowtime)

print('Finished Training...')

The output looks like this:

x_train.shape = (712, 15)
x_test.shape = (179, 15)
y_train.shape = (712, 1)
y_test.shape = (179, 1)
Sequential(
  (linear1): Linear(in_features=15, out_features=20, bias=True)
  (relu1): ReLU()
  (linear2): Linear(in_features=20, out_features=15, bias=True)
  (relu2): ReLU()
  (linear3): Linear(in_features=15, out_features=1, bias=True)
  (sigmoid): Sigmoid()
)
Start Training...
================================================================================2021-01-07 10:57:41
[step = 30] loss: 0.706, accuracy: 0.613
[step = 60] loss: 0.641, accuracy: 0.673

EPOCH = 1, loss = 0.653, accuracy = 0.660, val_loss = 0.595, val_accuracy = 0.745

================================================================================2021-01-07 10:57:42
[step = 30] loss: 0.593, accuracy: 0.700
[step = 60] loss: 0.591, accuracy: 0.700

EPOCH = 2, loss = 0.574, accuracy = 0.719, val_loss = 0.472, val_accuracy = 0.772

================================================================================2021-01-07 10:57:42
[step = 30] loss: 0.574, accuracy: 0.767
[step = 60] loss: 0.537, accuracy: 0.769

EPOCH = 3, loss = 0.525, accuracy = 0.772, val_loss = 0.454, val_accuracy = 0.804

================================================================================2021-01-07 10:57:42
[step = 30] loss: 0.523, accuracy: 0.783
[step = 60] loss: 0.503, accuracy: 0.794

EPOCH = 4, loss = 0.506, accuracy = 0.792, val_loss = 0.473, val_accuracy = 0.804

================================================================================2021-01-07 10:57:42
[step = 30] loss: 0.492, accuracy: 0.796
[step = 60] loss: 0.482, accuracy: 0.802

EPOCH = 5, loss = 0.485, accuracy = 0.796, val_loss = 0.429, val_accuracy = 0.799

================================================================================2021-01-07 10:57:42
[step = 30] loss: 0.461, accuracy: 0.808
[step = 60] loss: 0.461, accuracy: 0.798

EPOCH = 6, loss = 0.471, accuracy = 0.792, val_loss = 0.454, val_accuracy = 0.799

================================================================================2021-01-07 10:57:42
[step = 30] loss: 0.467, accuracy: 0.779
[step = 60] loss: 0.490, accuracy: 0.762

EPOCH = 7, loss = 0.464, accuracy = 0.787, val_loss = 0.401, val_accuracy = 0.810

================================================================================2021-01-07 10:57:42
[step = 30] loss: 0.444, accuracy: 0.804
[step = 60] loss: 0.444, accuracy: 0.808

EPOCH = 8, loss = 0.442, accuracy = 0.808, val_loss = 0.446, val_accuracy = 0.777

================================================================================2021-01-07 10:57:42
[step = 30] loss: 0.460, accuracy: 0.775
[step = 60] loss: 0.458, accuracy: 0.792

EPOCH = 9, loss = 0.440, accuracy = 0.802, val_loss = 0.429, val_accuracy = 0.810

================================================================================2021-01-07 10:57:42
[step = 30] loss: 0.436, accuracy: 0.808
[step = 60] loss: 0.438, accuracy: 0.806

EPOCH = 10, loss = 0.439, accuracy = 0.803, val_loss = 0.418, val_accuracy = 0.810

================================================================================2021-01-07 10:57:43
Finished Training...
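As an aside, the first layer nn.Linear(15, 20) matches the 15 feature columns that preprocessing() produces: 3 Pclass dummies, 2 Sex dummies, Age plus Age_null, SibSp, Parch, Fare, Cabin_null, and 4 Embarked dummies (dummy_na=True adds a column for missing values). A small sketch of the pd.get_dummies behavior behind that count, using made-up toy values:

```python
import pandas as pd

# Toy frame with the same categorical columns preprocessing() encodes;
# the values here are made up for illustration.
df = pd.DataFrame({
    "Pclass": [1, 2, 3, 1],
    "Embarked": ["S", "C", "Q", None],
})

# Three passenger classes -> three dummy columns
pclass = pd.get_dummies(df["Pclass"])
print(list(pclass.columns))   # [1, 2, 3]

# dummy_na=True adds one extra column for missing values: S/C/Q + NaN -> 4
embarked = pd.get_dummies(df["Embarked"], dummy_na=True)
print(embarked.shape[1])      # 4
```

Summing all the encoded groups (3 + 2 + 2 + 3 + 1 + 4) gives the 15 input features seen in x_train.shape above.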

Recalling the previous post, to tune with Ray Tune we need to do the following:

  • wrap the data loading and training process in functions;
  • make some network parameters configurable;
  • add checkpointing (optional);
  • define the hyperparameter search space.
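Before diving into the full rewrite, it may help to see what a search space means conceptually: each trial draws one candidate value per hyperparameter. Below is a plain-Python sketch (no Ray involved) that mimics what tune.choice does, using the same candidate values this example tunes over:

```python
import random

# Candidate values per hyperparameter, mirroring the tune.choice(...)
# calls used in this example's config.
search_space = {
    "l1": [8, 16, 32, 64],
    "l2": [8, 16, 32, 64],
    "lr": [0.1, 0.01, 0.001, 0.0001],
    "batch_size": [2, 4, 8, 16],
}

def sample_config(space, seed=None):
    """Draw one candidate per hyperparameter, like one Ray Tune trial does."""
    rng = random.Random(seed)
    return {name: rng.choice(values) for name, values in space.items()}

config = sample_config(search_space, seed=0)
print(config)  # one sampled trial config drawn from the candidates above
```

With num_samples=10, Ray Tune repeats this sampling ten times and trains one model per sampled config.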

Let's see how to modify the code to meet these requirements:

import os
import pandas as pd
import torch
from torch import nn
from functools import partial
from torch.utils.data import random_split
from torch.utils.data import DataLoader, TensorDataset
from sklearn.metrics import accuracy_score

from ray import tune
from ray.tune import CLIReporter
from ray.tune.schedulers import ASHAScheduler


# Data preprocessing
def preprocessing(dfdata):
    dfresult = pd.DataFrame()

    # Pclass
    dfPclass = pd.get_dummies(dfdata['Pclass'])
    dfPclass.columns = ['Pclass_' + str(x) for x in dfPclass.columns]
    dfresult = pd.concat([dfresult, dfPclass], axis=1)

    # Sex
    dfSex = pd.get_dummies(dfdata['Sex'])
    dfresult = pd.concat([dfresult, dfSex], axis=1)

    # Age
    dfresult['Age'] = dfdata['Age'].fillna(0)
    dfresult['Age_null'] = pd.isna(dfdata['Age']).astype('int32')

    # SibSp, Parch, Fare
    dfresult['SibSp'] = dfdata['SibSp']
    dfresult['Parch'] = dfdata['Parch']
    dfresult['Fare'] = dfdata['Fare']

    # Cabin
    dfresult['Cabin_null'] = pd.isna(dfdata['Cabin']).astype('int32')

    # Embarked
    dfEmbarked = pd.get_dummies(dfdata['Embarked'], dummy_na=True)
    dfEmbarked.columns = ['Embarked_' + str(x) for x in dfEmbarked.columns]
    dfresult = pd.concat([dfresult, dfEmbarked], axis=1)

    return dfresult


# Data loading
def load_data(train_dir='F:/Data_Analysis/Visualization/7_ray_tune/train.csv',
              test_dir='F:/Data_Analysis/Visualization/7_ray_tune/test.csv'):
    dftrain_raw = pd.read_csv(train_dir)
    dftest_raw = pd.read_csv(test_dir)
    x_train = preprocessing(dftrain_raw).values
    y_train = dftrain_raw[['Survived']].values
    x_test = preprocessing(dftest_raw).values
    y_test = dftest_raw[['Survived']].values
    trainset = TensorDataset(torch.tensor(x_train).float(), torch.tensor(y_train).float())
    testset = TensorDataset(torch.tensor(x_test).float(), torch.tensor(y_test).float())
    return trainset, testset


# Neural network model
class Net(nn.Module):
    def __init__(self, l1=20, l2=15):
        super(Net, self).__init__()
        self.net = nn.Sequential(
            nn.Linear(15, l1),
            nn.ReLU(),
            nn.Linear(l1, l2),
            nn.ReLU(),
            nn.Linear(l2, 1),
            nn.Sigmoid()
        )

    def forward(self, x):
        x = self.net(x)
        return x


metric_func = lambda y_pred, y_true: accuracy_score(y_true.data.numpy(), y_pred.data.numpy() > 0.5)


# Training function
def train_titanic(config, checkpoint_dir=None, train_dir=None, test_dir=None):
    net = Net(config["l1"], config["l2"])       # configurable parameters
    device = "cpu"
    net.to(device)

    loss_func = nn.BCELoss()
    optimizer = torch.optim.Adam(params=net.parameters(), lr=config["lr"])      # configurable parameter

    if checkpoint_dir:
        # Restore model state and optimizer state
        model_state, optimizer_state = torch.load(os.path.join(checkpoint_dir, "checkpoint"))
        net.load_state_dict(model_state)
        optimizer.load_state_dict(optimizer_state)

    trainset, testset = load_data(train_dir, test_dir)
    train_abs = int(len(trainset) * 0.8)
    train_subset, val_subset = random_split(trainset, [train_abs, len(trainset) - train_abs])

    trainloader = torch.utils.data.DataLoader(
        train_subset,
        batch_size=int(config["batch_size"]),       # configurable parameter
        shuffle=True,
        num_workers=4)
    valloader = torch.utils.data.DataLoader(
        val_subset,
        batch_size=int(config["batch_size"]),       # configurable parameter
        shuffle=True,
        num_workers=4)

    for epoch in range(1, 11):
        net.train()
        loss_sum = 0.0
        metric_sum = 0.0
        step = 1
        for step, (features, labels) in enumerate(trainloader, 1):
            # Zero the gradients
            optimizer.zero_grad()
            # Forward pass: compute the loss
            predictions = net(features)
            loss = loss_func(predictions, labels)
            metric = metric_func(predictions, labels)
            # Backward pass: compute the gradients
            loss.backward()
            optimizer.step()

        net.eval()
        val_loss_sum = 0.0
        val_metric_sum = 0.0
        val_step = 1
        for val_step, (features, labels) in enumerate(valloader, 1):
            # Disable gradient computation
            with torch.no_grad():
                predictions = net(features)
                val_loss = loss_func(predictions, labels)
                val_metric = metric_func(predictions, labels)
            val_loss_sum += val_loss.item()
            val_metric_sum += val_metric.item()

        with tune.checkpoint_dir(epoch) as checkpoint_dir:
            path = os.path.join(checkpoint_dir, "checkpoint")
            torch.save((net.state_dict(), optimizer.state_dict()), path)

        # Report the average loss and average accuracy
        tune.report(loss=(val_loss_sum / val_step), accuracy=(val_metric_sum / val_step))

    print("Finished Training")


# Test-set evaluation
def test_accuracy(net, device="cpu"):
    trainset, testset = load_data()
    testloader = DataLoader(testset, shuffle=False, batch_size=4, num_workers=6)
    loss_func = nn.BCELoss()

    net.eval()
    test_loss_sum = 0.0
    test_metric_sum = 0.0
    test_step = 1
    for test_step, (features, labels) in enumerate(testloader, 1):
        # Disable gradient computation
        with torch.no_grad():
            predictions = net(features)
            test_loss = loss_func(predictions, labels)
            test_metric = metric_func(predictions, labels)
        test_loss_sum += test_loss.item()
        test_metric_sum += test_metric.item()
    return test_loss_sum / test_step, test_metric_sum / test_step


# Main program
def main(num_samples=10, max_num_epochs=10):
    checkpoint_dir = 'F:/Data_Analysis/Visualization/7_ray_tune/checkpoints/'
    train_dir = 'F:/Data_Analysis/Visualization/7_ray_tune/train.csv'
    test_dir = 'F:/Data_Analysis/Visualization/7_ray_tune/test.csv'
    load_data(train_dir, test_dir)

    # Define the hyperparameter search space
    config = {
        "l1": tune.choice([8, 16, 32, 64]),
        "l2": tune.choice([8, 16, 32, 64]),
        "lr": tune.choice([0.1, 0.01, 0.001, 0.0001]),
        "batch_size": tune.choice([2, 4, 8, 16])
    }

    # ASHAScheduler stops bad trials early based on the given criteria
    scheduler = ASHAScheduler(
        metric="loss",
        mode="min",
        max_t=max_num_epochs,
        grace_period=1,
        reduction_factor=2)

    # Print a trial report on the command line
    reporter = CLIReporter(
        # parameter_columns=["l1", "l2", "lr", "batch_size"],
        metric_columns=["loss", "accuracy", "training_iteration"])

    # Run the tuning
    result = tune.run(
        partial(train_titanic, checkpoint_dir=checkpoint_dir, train_dir=train_dir, test_dir=test_dir),
        # Specify training resources
        resources_per_trial={"cpu": 6},
        config=config,
        num_samples=num_samples,
        scheduler=scheduler,
        progress_reporter=reporter)

    # Find the best trial
    best_trial = result.get_best_trial("loss", "min", "last")
    # Print the best trial's parameter configuration
    print("Best trial config: {}".format(best_trial.config))
    print("Best trial final validation loss: {}".format(best_trial.last_result["loss"]))
    print("Best trial final validation accuracy: {}".format(best_trial.last_result["accuracy"]))

    # Evaluate the model with the best hyperparameter combination on the test set
    best_trained_model = Net(best_trial.config["l1"], best_trial.config["l2"])
    device = "cpu"
    best_trained_model.to(device)

    best_checkpoint_dir = best_trial.checkpoint.value
    model_state, optimizer_state = torch.load(os.path.join(best_checkpoint_dir, "checkpoint"))
    best_trained_model.load_state_dict(model_state)

    test_loss, test_acc = test_accuracy(best_trained_model, device)
    print("Best trial test set loss: {}".format(test_loss))
    print("Best trial test set accuracy: {}".format(test_acc))


if __name__ == "__main__":
    # You can change the number of GPUs per trial here:
    main(num_samples=10, max_num_epochs=10)
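A side note on the scheduler settings: with grace_period=1, reduction_factor=2, and max_t=10, ASHA evaluates trials at geometrically spaced rungs. A simplified sketch of how those milestones fall out of the parameters (the real scheduler also manages promotion quotas and brackets):

```python
# Simplified sketch of the rung milestones ASHA derives from grace_period
# and reduction_factor; a trial can only be stopped early at a rung.
def asha_milestones(grace_period, reduction_factor, max_t):
    rungs = []
    t = grace_period
    while t <= max_t:
        rungs.append(t)
        t *= reduction_factor
    return rungs

# With the settings used above, trials can be stopped after these epochs:
print(asha_milestones(grace_period=1, reduction_factor=2, max_t=10))  # [1, 2, 4, 8]
```

This is why grace_period matters: no trial is killed before it has trained for at least that many reported iterations.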

The output is as follows:
