Table of Contents:

  • 1. Data Preprocessing
    • 1.1 Load, Clean & Split the Data
    • 1.2 Label Distribution
  • 2. Baseline Model
    • 2.1 LightGBM Modeling
    • 2.2 Performance with Default Parameters
  • 3. Defining the Parameter Space
    • 3.* Sampling from the Parameter Space
  • 4. Random Search
    • 4.1 Cross-Validating LightGBM
    • 4.2 Objective Function
    • 4.3 Running Random Search
    • 4.4 Random Search Results
  • 5. Bayesian Optimization
    • 5.1 Objective Function
    • 5.2 Domain Space
      • 5.2.1 Learning Rate Distribution
      • 5.2.2 Number of Leaves Distribution
      • 5.2.3 boosting_type
      • 5.2.4 Full Parameter Space
        • 5.2.4.* Inspecting a Sample from the Space
    • 5.3 Preparing for Bayesian Optimization
    • 5.4 Bayesian Optimization Results
      • 5.4.1 Saving the Results
      • 5.4.2 Performance on the Test Set
  • 6. Random Search vs. Bayesian Optimization
    • 6.1 Visualizing the Search Process
    • 6.2 Learning Rate Comparison
    • 6.3 Boosting Type Comparison
    • 6.4 Numeric Parameter Comparison
  • 7. Parameter Evolution During Bayesian Optimization
    • 7.1 Boosting Type over Iterations
    • 7.2 Learning Rate, Number of Leaves, etc. over Iterations
    • 7.3 reg_alpha, reg_lambda over Iterations
    • 7.4 Loss over Iterations: Random vs. Bayesian
    • 7.5 Saving the Results

Using the Caravan Insurance Challenge dataset, we run a GBDT (LightGBM) classification task and compare Bayesian optimization against random search for hyperparameter tuning.

1. Data Preprocessing

1.1 Load, Clean & Split the Data

import pandas as pd
import numpy as np

data = pd.read_csv('caravan-insurance-challenge.csv')
data.head()

# The ORIGIN column marks the official train/test split
train = data[data['ORIGIN'] == 'train']
test = data[data['ORIGIN'] == 'test']

# Extract the target (CARAVAN) as flat integer arrays
train_labels = np.array(train['CARAVAN'].astype(np.int32)).reshape((-1,))
test_labels = np.array(test['CARAVAN'].astype(np.int32)).reshape((-1,))

# Drop the split marker and the target from the features
train = train.drop(['ORIGIN', 'CARAVAN'], axis = 1)
test = test.drop(['ORIGIN', 'CARAVAN'], axis = 1)

features = np.array(train)
test_features = np.array(test)
labels = train_labels[:]

print('Train shape:', train.shape)
print('Test shape:', test.shape)
train.head()

1.2 Label Distribution

import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

plt.hist(labels, edgecolor = 'k')
plt.xlabel('Label'); plt.ylabel('Count'); plt.title('Count of Labels')


The classes are heavily imbalanced, so we evaluate with the ROC curve; from here on, the goal is to make the AUC as large as possible.
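
A quick sanity check of the imbalance (a sketch using the labels array built above):

positive_rate = labels.mean()
print('Positive rate: {:.2%} ({} of {})'.format(positive_rate, labels.sum(), len(labels)))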

2. Baseline Model

2.1 LightGBM Modeling

import lightgbm as lgb
model = lgb.LGBMClassifier()
model

LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0, importance_type='split', learning_rate=0.1, max_depth=-1, min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0, n_estimators=100, n_jobs=-1, num_leaves=31, objective=None, random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True, subsample=1.0, subsample_for_bin=200000, subsample_freq=0)

2.2 Performance with Default Parameters

This baseline is the score to beat; everything that follows tries to push the AUC as far above it as possible.

from sklearn.metrics import roc_auc_score
from timeit import default_timer as timer

start = timer()
model.fit(features, labels)
train_time = timer() - start

predictions = model.predict_proba(test_features)[:, 1]
auc = roc_auc_score(test_labels, predictions)

print('The baseline score on the test set is {:.4f}.'.format(auc))
print('The baseline training time is {:.4f} seconds.'.format(train_time))

The baseline score on the test set is 0.7092.
The baseline training time is 0.3402 seconds.

3. Defining the Parameter Space

RandomizedSearchCV has no early-stopping support, so we write the random search loop ourselves.

Some parameters, such as the learning rate, are sampled on a log scale: their effect compounds multiplicatively across boosting rounds, so the usual practice is to draw them from a log distribution. Other parameters (here, subsample) can only be chosen conditionally on the value of another parameter.
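
A minimal illustration of the difference, purely for intuition (the threshold and sample size here are arbitrary):

# Uniform sampling concentrates on the large end of [0.005, 0.2];
# log-uniform gives every order of magnitude equal probability mass.
uniform_lr = np.random.uniform(0.005, 0.2, 10000)
loguniform_lr = np.exp(np.random.uniform(np.log(0.005), np.log(0.2), 10000))
print('Fraction below 0.05 (uniform):     {:.2f}'.format(np.mean(uniform_lr < 0.05)))
print('Fraction below 0.05 (log-uniform): {:.2f}'.format(np.mean(loguniform_lr < 0.05)))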

import random

param_grid = {
    'class_weight': [None, 'balanced'],
    'boosting_type': ['gbdt', 'goss', 'dart'],
    'num_leaves': list(range(30, 150)),
    'learning_rate': list(np.logspace(np.log(0.005), np.log(0.2), base = np.exp(1), num = 800)),
    'subsample_for_bin': list(range(20000, 300000, 20000)),
    'min_child_samples': list(range(20, 500, 5)),
    'reg_alpha': list(np.linspace(0, 1)),
    'reg_lambda': list(np.linspace(0, 1)),
    'colsample_bytree': list(np.linspace(0.6, 1, 10))
}

# subsample depends on boosting_type (goss does not support it), so it lives outside the grid
subsample_dist = list(np.linspace(0.5, 1, 100))

# Distribution of the learning rate
plt.hist(param_grid['learning_rate'], color = 'r', edgecolor = 'k')
plt.xlabel('Learning Rate'); plt.ylabel('Count'); plt.title('Learning Rate Distribution', size = 18)

# Distribution of the number of leaves
plt.hist(param_grid['num_leaves'], color = 'm', edgecolor = 'k')
plt.xlabel('Number of Leaves'); plt.ylabel('Count'); plt.title('Number of Leaves Distribution')

3.* Sampling from the Parameter Space

{key: random.sample(value, 2) for key, value in param_grid.items()}

params = {key: random.sample(value, 1)[0] for key, value in param_grid.items()}
params['subsample'] = random.sample(subsample_dist, 1)[0] if params['boosting_type'] != 'goss' else 1.0
params

{'class_weight': 'balanced', 'boosting_type': 'gbdt',
'num_leaves': 149, 'learning_rate': 0.024474734290096542,
'subsample_for_bin': 200000, 'min_child_samples': 110,
'reg_alpha': 0.8163265306122448, 'reg_lambda': 0.26530612244897955,
'colsample_bytree': 0.6888888888888889, 'subsample': 0.8282828282828283}

4. Random Search

4.1 Cross-Validating LightGBM

# Create a LightGBM dataset
train_set = lgb.Dataset(features, label = labels)

r = lgb.cv(params, train_set, num_boost_round = 10000, nfold = 10, metrics = 'auc',
           early_stopping_rounds = 80, verbose_eval = False, seed = 42)
# early_stopping_rounds = 80: stop if 80 consecutive rounds bring no improvement

r_best = np.max(r['auc-mean'])  # Highest score
r_best_std = r['auc-stdv'][np.argmax(r['auc-mean'])]  # Standard deviation of the best score

print('The maximum ROC AUC on the validation set was {:.5f} with std of {:.5f}.'.format(r_best, r_best_std))
print('The ideal number of iterations was {}.'.format(np.argmax(r['auc-mean']) + 1))

The maximum ROC AUC on the validation set was 0.75553 with std of 0.03082.
The ideal number of iterations was 73.

# Dataframe to hold the search results (Max_evals = 200 is defined in 4.2 below)
random_results = pd.DataFrame(columns = ['loss', 'params', 'iteration', 'estimators', 'time'],
                              index = list(range(Max_evals)))

4.2 Objective Function

We use AUC as the objective; since optimizers minimize, the loss is defined as 1 - AUC.

Max_evals = 200
N_folds = 3

def random_objective(params, iteration, n_folds = N_folds):
    start = timer()
    cv_results = lgb.cv(params, train_set, num_boost_round = 10000, nfold = n_folds,
                        early_stopping_rounds = 80, metrics = 'auc', seed = 42)
    end = timer()
    best_score = np.max(cv_results['auc-mean'])
    loss = 1 - best_score
    n_estimators = int(np.argmax(cv_results['auc-mean']) + 1)
    return [loss, params, iteration, n_estimators, end - start]

4.3 Running Random Search

random.seed(42)

for i in range(Max_evals):
    params = {key: random.sample(value, 1)[0] for key, value in param_grid.items()}
    if params['boosting_type'] == 'goss':
        params['subsample'] = 1.0
    else:
        params['subsample'] = random.sample(subsample_dist, 1)[0]
    results_list = random_objective(params, i)
    random_results.loc[i, :] = results_list

random_results.sort_values('loss', ascending = True, inplace = True)
random_results.reset_index(inplace = True, drop = True)
random_results.head()

4.4 Random Search Results

random_results.loc[0, 'params']

{'class_weight': None, 'boosting_type': 'dart', 'num_leaves': 112,
'learning_rate': 0.020631460653340816, 'subsample_for_bin': 160000,
'min_child_samples': 220, 'reg_alpha': 0.9795918367346939,
'reg_lambda': 0.08163265306, 'colsample_bytree': 0.6, 'subsample': 0.7929292929292929}

best_random_params = random_results.loc[0, 'params'].copy()
best_random_estimators = int(random_results.loc[0, 'estimators'])

best_random_model = lgb.LGBMClassifier(n_estimators = best_random_estimators, n_jobs = -1,
                                       objective = 'binary', **best_random_params, random_state = 42)
best_random_model.fit(features, labels)
predictions = best_random_model.predict_proba(test_features)[:, 1]

print('The best model from random search scores {:.4f} on the test data.'.format(roc_auc_score(test_labels, predictions)))
print('This was achieved using {} search iterations.'.format(random_results.loc[0, 'iteration']))

The best model from random search scores 0.7179 on the test data.
This was achieved using 38 search iterations.

5. Bayesian Optimization

5.1 Objective Function

import csv
from hyperopt import STATUS_OK
from timeit import default_timer as timer

def objective(params, n_folds = N_folds):
    global ITERATION
    ITERATION += 1

    # Unpack the conditional subsample from the nested boosting_type dict
    subsample = params['boosting_type'].get('subsample', 1.0)
    params['boosting_type'] = params['boosting_type']['boosting_type']
    params['subsample'] = subsample

    # quniform returns floats; these parameters must be integers
    for parameter_name in ['num_leaves', 'subsample_for_bin', 'min_child_samples']:
        params[parameter_name] = int(params[parameter_name])

    start = timer()
    cv_results = lgb.cv(params, train_set, num_boost_round = 10000, nfold = n_folds,
                        early_stopping_rounds = 80, metrics = 'auc', seed = 42)
    run_time = timer() - start

    best_score = np.max(cv_results['auc-mean'])
    loss = 1 - best_score
    n_estimators = int(np.argmax(cv_results['auc-mean']) + 1)

    # Append this trial to the CSV log (and close the handle so the row is flushed)
    of_connection = open(out_file, 'a')
    writer = csv.writer(of_connection)
    writer.writerow([loss, params, ITERATION, n_estimators, run_time])
    of_connection.close()

    return {'loss': loss, 'params': params, 'iteration': ITERATION,
            'estimators': n_estimators, 'train_time': run_time, 'status': STATUS_OK}

5.2 Domain Space

5.2.1 Learning Rate Distribution

from hyperopt import hp
from hyperopt.pyll.stochastic import sample

learning_rate = {'learning_rate': hp.loguniform('learning_rate', np.log(0.005), np.log(0.2))}

learning_rate_dist = []
for _ in range(10000):
    learning_rate_dist.append(sample(learning_rate)['learning_rate'])

plt.figure(figsize = (8, 6))
sns.kdeplot(learning_rate_dist, color = 'r', linewidth = 2, shade = True)
plt.title('Learning Rate Distribution', size = 18)
plt.xlabel('Learning Rate', size = 16)
plt.ylabel('Density', size = 16)

5.2.2 Number of Leaves Distribution

The effect of quniform (uniform sampling over a quantized integer grid):

num_leaves = {'num_leaves': hp.quniform('num_leaves', 30, 150, 1)}
num_leaves_dist = []
for _ in range(10000):
    num_leaves_dist.append(sample(num_leaves)['num_leaves'])

plt.figure(figsize = (8, 6))
sns.kdeplot(num_leaves_dist, linewidth = 2, shade = True)
plt.title('Number of Leaves Distribution', size = 18); plt.xlabel('Number of Leaves', size = 16); plt.ylabel('Density', size = 16)

5.2.3 boosting_type

# Note: when a space is passed to fmin, each hp.uniform needs a unique label;
# the combined space in 5.2.4 therefore uses 'gdbt_subsample' and 'dart_subsample'.
boosting_type = {'boosting_type': hp.choice('boosting_type',
                     [{'boosting_type': 'gbdt', 'subsample': hp.uniform('subsample', 0.5, 1)},
                      {'boosting_type': 'dart', 'subsample': hp.uniform('subsample', 0.5, 1)},
                      {'boosting_type': 'goss', 'subsample': 1.0}])}
params = sample(boosting_type)
params

{'boosting_type': {'boosting_type': 'gbdt', 'subsample': 0.659771523544347}}

subsample = params['boosting_type'].get('subsample', 1.0)
params['boosting_type'] = params['boosting_type']['boosting_type']
params['subsample'] = subsample
params

{'boosting_type': 'gbdt', 'subsample': 0.659771523544347}
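
Since this unpacking recurs (it reappears inside the objective function and in 5.2.4.* below), it can be wrapped in a small helper; this is a hypothetical sketch, not part of the original notebook:

def unpack_boosting(params):
    # Hypothetical helper: hoist the conditional subsample out of the nested dict
    nested = params['boosting_type']
    params['subsample'] = nested.get('subsample', 1.0)
    params['boosting_type'] = nested['boosting_type']
    return params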

5.2.4 Full Parameter Space

space = {
    'class_weight': hp.choice('class_weight', [None, 'balanced']),
    'boosting_type': hp.choice('boosting_type',
        [{'boosting_type': 'gbdt', 'subsample': hp.uniform('gdbt_subsample', 0.5, 1)},
         {'boosting_type': 'dart', 'subsample': hp.uniform('dart_subsample', 0.5, 1)},
         {'boosting_type': 'goss', 'subsample': 1.0}]),
    'num_leaves': hp.quniform('num_leaves', 30, 150, 1),
    'learning_rate': hp.loguniform('learning_rate', np.log(0.01), np.log(0.2)),
    'subsample_for_bin': hp.quniform('subsample_for_bin', 20000, 300000, 20000),
    'min_child_samples': hp.quniform('min_child_samples', 20, 500, 5),
    'reg_alpha': hp.uniform('reg_alpha', 0.0, 1.0),
    'reg_lambda': hp.uniform('reg_lambda', 0.0, 1.0),
    'colsample_bytree': hp.uniform('colsample_by_tree', 0.6, 1.0)
}

5.2.4.* Inspecting a Sample from the Space

x = sample(space)
subsample = x['boosting_type'].get('subsample', 1.0)
x['boosting_type'] = x['boosting_type']['boosting_type']
x['subsample'] = subsample
x

{'boosting_type': 'goss',
'class_weight': 'balanced',
'colsample_bytree': 0.6765996025430209,
'learning_rate': 0.13232409656402305,
'min_child_samples': 330.0,
'num_leaves': 103.0,
'reg_alpha': 0.5849415659238283,
'reg_lambda': 0.4787001151843524,
'subsample_for_bin': 100000.0,
'subsample': 1.0}
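
Note that quniform returns floats (min_child_samples: 330.0, num_leaves: 103.0, subsample_for_bin: 100000.0 above), while LightGBM expects integers for these parameters; that is why the objective function casts them with int(). A quick check on the sample above:

for name in ['num_leaves', 'min_child_samples', 'subsample_for_bin']:
    x[name] = int(x[name])
x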

5.3 Preparing for Bayesian Optimization

from hyperopt import tpe
from hyperopt import Trials

tpe_algorithm = tpe.suggest
bayes_trials = Trials()

# Log every trial to a CSV file as we go
out_file = 'gbm_trials.csv'
of_connection = open(out_file, 'w')
writer = csv.writer(of_connection)
writer.writerow(['loss', 'params', 'iteration', 'estimators', 'train_time'])
of_connection.close()

5.4 Bayesian Optimization Results

from hyperopt import fmin

# Global variable used by the objective to count iterations
global ITERATION
ITERATION = 0

# Run optimization
best = fmin(fn = objective, space = space, algo = tpe.suggest, max_evals = Max_evals,
            trials = bayes_trials, rstate = np.random.RandomState(42))

# Sort the trials with lowest loss (highest AUC) first
bayes_trials_results = sorted(bayes_trials.results, key = lambda x: x['loss'])
bayes_trials_results[:1]

[{'loss': 0.23670902556787576,
'params': {'boosting_type': 'dart',
'class_weight': None,
'colsample_bytree': 0.6777142263201398,
'learning_rate': 0.10896162558676845,
'min_child_samples': 200,
'num_leaves': 50,
'reg_alpha': 0.75201502515923,
'reg_lambda': 0.2500317899561674,
'subsample_for_bin': 220000,
'subsample': 0.8299430626318801},
'iteration': 109,
'estimators': 39,
'train_time': 135.7437369420004,
'status': 'ok'}]
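
One caveat worth knowing: the best dict returned by fmin encodes each hp.choice parameter as the index of the chosen branch, not the value itself. If you want actual values from best rather than from the trials log, hyperopt provides space_eval (a sketch, assuming space and best from above):

from hyperopt import space_eval
print(space_eval(space, best))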

5.4.1 Saving the Results

results = pd.read_csv('gbm_trials.csv')
results.sort_values('loss', ascending = True, inplace = True)
results.reset_index(inplace = True, drop = True)
print(results.shape)
results.head()

import ast
# For safety, convert the stored string back to a dict with ast.literal_eval()
# rather than the more dangerous eval()
ast.literal_eval(results.loc[0, 'params'])

{'boosting_type': 'dart',
'class_weight': None,
'colsample_bytree': 0.6777142263201398,
'learning_rate': 0.10896162558676845,
'min_child_samples': 200,
'num_leaves': 50,
'reg_alpha': 0.75201502515923,
'reg_lambda': 0.2500317899561674,
'subsample_for_bin': 220000,
'subsample': 0.8299430626318801}
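
The reason for preferring literal_eval: it only accepts Python literals (numbers, strings, lists, dicts, booleans, None), so a malicious string raises an error instead of executing. A commented illustration:

# ast.literal_eval("__import__('os').system('...')")  # raises ValueError
# eval() on the same string would execute it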

5.4.2 Performance on the Test Set

best_bayes_estimators = int(results.loc[0, 'estimators'])
best_bayes_params = ast.literal_eval(results.loc[0, 'params']).copy()

best_bayes_model = lgb.LGBMClassifier(n_estimators = best_bayes_estimators, n_jobs = -1,
                                      objective = 'binary', **best_bayes_params, random_state = 42)
best_bayes_model.fit(features, labels)

LGBMClassifier(boosting_type='dart', class_weight=None,
colsample_bytree=0.6777142263201398, importance_type='split',
learning_rate=0.10896162558676845, max_depth=-1,
min_child_samples=200, min_child_weight=0.001, min_split_gain=0.0,
n_estimators=39, n_jobs=-1, num_leaves=50, objective='binary',
random_state=42, reg_alpha=0.75201502515923,
reg_lambda=0.2500317899561674, silent=True,
subsample=0.8299430626318801, subsample_for_bin=220000,
subsample_freq=0)

preds = best_bayes_model.predict_proba(test_features)[:, 1]
print('The best model from Bayes optimization scores {:.4f} AUC ROC on the test set.'.format(roc_auc_score(test_labels, preds)))
print('This was achieved after {} search iterations.'.format(results.loc[0, 'iteration']))

The best model from Bayes optimization scores 0.7275 AUC ROC on the test set.
This was achieved after 109 search iterations.

6. Random Search vs. Bayesian Optimization

best_random_params['method'] = 'random search'
best_bayes_params['method'] = 'Bayesian optimization'

# DataFrame.append is deprecated in recent pandas; pd.concat is the equivalent
best_params = pd.concat([pd.DataFrame(best_bayes_params, index = [0]),
                         pd.DataFrame(best_random_params, index = [0])], ignore_index = True)
best_params

6.1 Visualizing the Search Process

random_params = pd.DataFrame(columns = list(random_results.loc[0, 'params'].keys()),
                             index = list(range(len(random_results))))
for i, params in enumerate(random_results['params']):
    random_params.loc[i, :] = list(params.values())

random_params['loss'] = random_results['loss']
random_params['iteration'] = random_results['iteration']
random_params.head()

bayes_params = pd.DataFrame(columns = list(ast.literal_eval(results.loc[0, 'params']).keys()),
                            index = list(range(len(results))))
for i, params in enumerate(results['params']):
    bayes_params.loc[i, :] = list(ast.literal_eval(params).values())

bayes_params['loss'] = results['loss']
bayes_params['iteration'] = results['iteration']
bayes_params.head()

6.2 Learning Rate Comparison

plt.figure(figsize = (20, 8))
plt.rcParams['font.size'] = 18

sns.kdeplot(learning_rate_dist, label = 'Sampling Distribution', linewidth = 2)
sns.kdeplot(random_params['learning_rate'], label = 'Random Search', linewidth = 2)
sns.kdeplot(bayes_params['learning_rate'], label = 'Bayes Optimization', linewidth=2)
plt.legend()
plt.xlabel('Learning Rate')
plt.ylabel('Density')
plt.title('Learning Rate Distribution')

6.3 Boosting Type Comparison

fig, axs = plt.subplots(1, 2, sharey = True, sharex = True)

random_params['boosting_type'].value_counts().plot.bar(ax = axs[0], figsize = (14, 6),
                                                       color = 'orange', title = 'Random Search Boosting Type')
bayes_params['boosting_type'].value_counts().plot.bar(ax = axs[1], figsize = (14, 6),
                                                      color = 'green', title = 'Bayes Optimization Boosting Type')

print('Random Search boosting type percentages:')
print(100 * random_params['boosting_type'].value_counts() / len(random_params))

print('Bayes Optimization boosting type percentages:')
print(100 * bayes_params['boosting_type'].value_counts() / len(bayes_params))

Random Search boosting type percentages:
dart 36.5
gbdt 33.0
goss 30.5
Name: boosting_type, dtype: float64

Bayes Optimization boosting type percentages:
dart 54.5
gbdt 29.0
goss 16.5
Name: boosting_type, dtype: float64

6.4 Numeric Parameter Comparison

for i, hyper in enumerate(random_params.columns):
    if hyper not in ['class_weight', 'boosting_type', 'iteration', 'subsample', 'metric', 'verbose']:
        plt.figure(figsize = (14, 6))
        # Overlay the sampling distribution for everything except the loss itself
        if hyper != 'loss':
            sns.kdeplot([sample(space[hyper]) for _ in range(1000)], label = 'Sampling Distribution')
        sns.kdeplot(random_params[hyper], label = 'Random Search')
        sns.kdeplot(bayes_params[hyper], label = 'Bayes Optimization')
        plt.legend(loc = 1)
        plt.title('{} Distribution'.format(hyper))
        plt.xlabel('{}'.format(hyper))
        plt.ylabel('Density')




7. Parameter Evolution During Bayesian Optimization

7.1 Boosting Type over Iterations

bayes_params['boosting_int'] = bayes_params['boosting_type'].replace({'gbdt':1,'goss':2,'dart':3})
plt.plot(bayes_params['iteration'], bayes_params['boosting_int'], 'ro')
plt.yticks([1, 2, 3], ['gbdt', 'goss', 'dart'])
plt.xlabel('Iteration')
plt.title('Boosting Type over Search')

7.2 Learning Rate, Number of Leaves, etc. over Iterations

plt.figure(figsize = (14, 14))
colors = ['red', 'blue', 'orange', 'green']

for i, hyper in enumerate(['colsample_bytree', 'learning_rate', 'min_child_samples', 'num_leaves']):
    plt.subplot(2, 2, i + 1)
    sns.regplot(x = 'iteration', y = hyper, data = bayes_params, color = colors[i])
    # plt.xlabel('Iteration')
    # plt.ylabel('{}'.format(hyper))
    plt.title('{} over Search'.format(hyper))
plt.tight_layout()

7.3 reg_alpha, reg_lambda over Iterations

fig, axes = plt.subplots(1, 3, figsize = (18, 6))

for i, hyper in enumerate(['reg_alpha', 'reg_lambda', 'subsample_for_bin']):
    sns.regplot(x = 'iteration', y = hyper, data = bayes_params, ax = axes[i])
    axes[i].set(title = '{} over Search'.format(hyper))
plt.tight_layout()

7.4 Loss over Iterations: Random vs. Bayesian

scores = pd.DataFrame({'ROC AUC': 1 - random_params['loss'],
                       'iteration': random_params['iteration'],
                       'search': 'random'})
# DataFrame.append is deprecated in recent pandas; pd.concat is the equivalent
scores = pd.concat([scores,
                    pd.DataFrame({'ROC AUC': 1 - bayes_params['loss'],
                                  'iteration': bayes_params['iteration'],
                                  'search': 'Bayes'})])
scores['ROC AUC'] = scores['ROC AUC'].astype(np.float32)
scores['iteration'] = scores['iteration'].astype(np.int32)
scores.head()

plt.figure(figsize = (18, 6))

plt.subplot(1, 2, 1)
plt.hist(1 - random_results['loss'].astype(np.float32), label = 'Random Search', edgecolor = 'k')
plt.xlabel('Validation ROC AUC')
plt.ylabel('Count')
plt.title('Random Search Validation Scores')
plt.xlim(0.73, 0.765)

plt.subplot(1, 2, 2)
plt.hist(1 - bayes_params['loss'], label = 'Bayes Optimization', edgecolor = 'k')
plt.xlabel('Validation ROC AUC')
plt.ylabel('Count')
plt.title('Bayes Optimization Validation Scores')
plt.xlim(0.73, 0.765)

sns.lmplot(x = 'iteration', y = 'ROC AUC', hue = 'search', data = scores, height = 8)
plt.xlabel('Iteration')
plt.ylabel('ROC AUC')
plt.title('ROC AUC versus Iteration')

7.5 Saving the Results

import json

with open('trials.json', 'w') as f:
    f.write(json.dumps(bayes_trials.results))

bayes_params.to_csv('bayes_params.csv', index = False)
random_params.to_csv('random_params.csv', index = False)
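
The JSON file can be reloaded later to resume the analysis (a sketch):

with open('trials.json', 'r') as f:
    saved_results = json.load(f)
print('Loaded {} trials.'.format(len(saved_results)))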
