Logistic Regression


Learning Objectives:

  • Reframe the median house value predictor model (built in the earlier exercises) as a binary classification model
  • Compare the effectiveness of logistic regression and linear regression for a binary classification problem

As in the earlier exercises, we're working with the California housing data set, but this time we'll turn it into a binary classification problem by predicting whether a city block is a high-cost city block. We'll also revert to the default features for now.

Framing the Problem as Binary Classification

The target of our data set is median_house_value, which is a numeric (continuous-valued) feature. We can create a boolean label by applying a threshold to this continuous value.

Given features describing a city block, we want to predict whether it is a high-cost city block. To prepare the targets for training and evaluation, we define a classification threshold on the median house value: the 75th percentile (approximately 265,000). All house values above the threshold are labeled 1, and all others are labeled 0.
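For a concrete picture of this labeling rule, here is a minimal pandas sketch (assuming the california_housing_dataframe loaded in the Setup section below; is_high is a throwaway name for illustration):

threshold = california_housing_dataframe["median_house_value"].quantile(0.75)  # roughly 265000
is_high = (california_housing_dataframe["median_house_value"] > threshold).astype(float)  # 1.0 above the threshold, else 0.0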

Setup

Run the cells below to load the data and prepare the input features and targets.

from __future__ import print_function

import math

from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset

tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format

california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")

california_housing_dataframe = california_housing_dataframe.reindex(
    np.random.permutation(california_housing_dataframe.index))

Note how the code below differs slightly from the previous exercises. Instead of using median_house_value as the target, we create a new binary target, median_house_value_is_high.

def preprocess_features(california_housing_dataframe):
  """Prepares input features from California housing data set.

  Args:
    california_housing_dataframe: A Pandas DataFrame expected to contain data
      from the California housing data set.
  Returns:
    A DataFrame that contains the features to be used for the model, including
    synthetic features.
  """
  selected_features = california_housing_dataframe[
    ["latitude",
     "longitude",
     "housing_median_age",
     "total_rooms",
     "total_bedrooms",
     "population",
     "households",
     "median_income"]]
  processed_features = selected_features.copy()
  # Create a synthetic feature.
  processed_features["rooms_per_person"] = (
    california_housing_dataframe["total_rooms"] /
    california_housing_dataframe["population"])
  return processed_features

def preprocess_targets(california_housing_dataframe):
  """Prepares target features (i.e., labels) from California housing data set.

  Args:
    california_housing_dataframe: A Pandas DataFrame expected to contain data
      from the California housing data set.
  Returns:
    A DataFrame that contains the target feature.
  """
  output_targets = pd.DataFrame()
  # Create a boolean categorical feature representing whether the
  # median_house_value is above a set threshold.
  output_targets["median_house_value_is_high"] = (
    california_housing_dataframe["median_house_value"] > 265000).astype(float)
  return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))

# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))

# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())

print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
Training examples summary:
  latitude longitude housing_median_age total_rooms total_bedrooms population households median_income rooms_per_person
count 12000.0 12000.0 12000.0 12000.0 12000.0 12000.0 12000.0 12000.0 12000.0
mean 35.6 -119.6 28.7 2635.9 536.8 1426.5 499.5 3.9 2.0
std 2.1 2.0 12.6 2190.6 416.7 1161.6 382.4 1.9 1.3
min 32.5 -124.3 1.0 8.0 1.0 8.0 1.0 0.5 0.1
25% 33.9 -121.8 18.0 1464.8 297.0 793.0 282.0 2.6 1.5
50% 34.2 -118.5 29.0 2125.5 433.0 1167.0 408.0 3.5 1.9
75% 37.7 -118.0 37.0 3138.2 644.0 1715.0 603.0 4.8 2.3
max 42.0 -114.3 52.0 37937.0 6445.0 35682.0 6082.0 15.0 55.2
Validation examples summary:
  latitude longitude housing_median_age total_rooms total_bedrooms population households median_income rooms_per_person
count 5000.0 5000.0 5000.0 5000.0 5000.0 5000.0 5000.0 5000.0 5000.0
mean 35.6 -119.6 28.4 2662.2 545.7 1436.9 505.3 3.9 2.0
std 2.1 2.0 12.5 2154.3 432.8 1114.3 389.6 1.9 0.9
min 32.6 -124.3 1.0 2.0 2.0 3.0 2.0 0.5 0.0
25% 33.9 -121.8 18.0 1452.8 295.0 779.0 281.0 2.6 1.5
50% 34.3 -118.5 29.0 2131.5 435.0 1167.0 410.0 3.6 1.9
75% 37.7 -118.0 37.0 3173.5 661.0 1739.2 608.2 4.8 2.3
max 41.9 -114.5 52.0 28258.0 4952.0 12427.0 4616.0 15.0 26.5
Training targets summary:
  median_house_value_is_high
count 12000.0
mean 0.3
std 0.4
min 0.0
25% 0.0
50% 0.0
75% 1.0
max 1.0
Validation targets summary:
  median_house_value_is_high
count 5000.0
mean 0.2
std 0.4
min 0.0
25% 0.0
50% 0.0
75% 0.0
max 1.0

How Would Linear Regression Fare?

To see why logistic regression is effective, let's first train a naive model that uses linear regression. This model will use labels with values in the set {0, 1} and will try to predict a continuous value that is as close as possible to 0 or 1. Furthermore, we wish to interpret the output as a probability, so it would be ideal if the output falls within the range (0, 1). We would then apply a threshold of 0.5 to determine the label.
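For intuition, here is a self-contained sketch (with made-up values, not actual model output) of that final thresholding step; note that nothing constrains a LinearRegressor's outputs to the range (0, 1):

import numpy as np

raw_predictions = np.array([-0.2, 0.1, 0.45, 0.7, 1.3])  # regression outputs are unbounded
predicted_labels = (raw_predictions >= 0.5).astype(int)   # yields [0, 0, 0, 1, 1]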

Run the cells below to train the linear regression model using LinearRegressor.

def construct_feature_columns(input_features):
  """Construct the TensorFlow Feature Columns.

  Args:
    input_features: The names of the numerical input features to use.
  Returns:
    A set of feature columns
  """
  return set([tf.feature_column.numeric_column(my_feature)
              for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
  """Feeds batches of the California housing data into the model.

  Args:
    features: pandas DataFrame of features
    targets: pandas DataFrame of targets
    batch_size: Size of batches to be passed to the model
    shuffle: True or False. Whether to shuffle the data.
    num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
  Returns:
    Tuple of (features, labels) for next data batch
  """

  # Convert pandas data into a dict of np arrays.
  features = {key: np.array(value) for key, value in dict(features).items()}

  # Construct a dataset, and configure batching/repeating.
  ds = Dataset.from_tensor_slices((features, targets))  # warning: 2GB limit
  ds = ds.batch(batch_size).repeat(num_epochs)

  # Shuffle the data, if specified.
  if shuffle:
    ds = ds.shuffle(10000)

  # Return the next batch of data.
  features, labels = ds.make_one_shot_iterator().get_next()
  return features, labels
def train_linear_regressor_model(
    learning_rate,
    steps,
    batch_size,
    training_examples,
    training_targets,
    validation_examples,
    validation_targets):
  """Trains a linear regression model.

  In addition to training, this function also prints training progress information,
  as well as a plot of the training and validation loss over time.

  Args:
    learning_rate: A `float`, the learning rate.
    steps: A non-zero `int`, the total number of training steps. A training step
      consists of a forward and backward pass using a single batch.
    batch_size: A non-zero `int`, the batch size.
    training_examples: A `DataFrame` containing one or more columns from
      `california_housing_dataframe` to use as input features for training.
    training_targets: A `DataFrame` containing exactly one column from
      `california_housing_dataframe` to use as target for training.
    validation_examples: A `DataFrame` containing one or more columns from
      `california_housing_dataframe` to use as input features for validation.
    validation_targets: A `DataFrame` containing exactly one column from
      `california_housing_dataframe` to use as target for validation.

  Returns:
    A `LinearRegressor` object trained on the training data.
  """

  periods = 10
  steps_per_period = steps / periods

  # Create a linear regressor object.
  my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
  my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
  linear_regressor = tf.estimator.LinearRegressor(
      feature_columns=construct_feature_columns(training_examples),
      optimizer=my_optimizer)

  # Create input functions.
  training_input_fn = lambda: my_input_fn(training_examples,
                                          training_targets["median_house_value_is_high"],
                                          batch_size=batch_size)
  predict_training_input_fn = lambda: my_input_fn(training_examples,
                                                  training_targets["median_house_value_is_high"],
                                                  num_epochs=1,
                                                  shuffle=False)
  predict_validation_input_fn = lambda: my_input_fn(validation_examples,
                                                    validation_targets["median_house_value_is_high"],
                                                    num_epochs=1,
                                                    shuffle=False)

  # Train the model, but do so inside a loop so that we can periodically assess
  # loss metrics.
  print("Training model...")
  print("RMSE (on training data):")
  training_rmse = []
  validation_rmse = []
  for period in range(0, periods):
    # Train the model, starting from the prior state.
    linear_regressor.train(
        input_fn=training_input_fn,
        steps=steps_per_period)
    # Take a break and compute predictions.
    training_predictions = linear_regressor.predict(input_fn=predict_training_input_fn)
    training_predictions = np.array([item['predictions'][0] for item in training_predictions])
    validation_predictions = linear_regressor.predict(input_fn=predict_validation_input_fn)
    validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
    # Compute training and validation loss.
    training_root_mean_squared_error = math.sqrt(
        metrics.mean_squared_error(training_predictions, training_targets))
    validation_root_mean_squared_error = math.sqrt(
        metrics.mean_squared_error(validation_predictions, validation_targets))
    # Occasionally print the current loss.
    print("  period %02d : %0.2f" % (period, training_root_mean_squared_error))
    # Add the loss metrics from this period to our list.
    training_rmse.append(training_root_mean_squared_error)
    validation_rmse.append(validation_root_mean_squared_error)
  print("Model training finished.")

  # Output a graph of loss metrics over periods.
  plt.ylabel("RMSE")
  plt.xlabel("Periods")
  plt.title("Root Mean Squared Error vs. Periods")
  plt.tight_layout()
  plt.plot(training_rmse, label="training")
  plt.plot(validation_rmse, label="validation")
  plt.legend()

  return linear_regressor
linear_regressor = train_linear_regressor_model(
    learning_rate=0.000001,
    steps=200,
    batch_size=20,
    training_examples=training_examples,
    training_targets=training_targets,
    validation_examples=validation_examples,
    validation_targets=validation_targets)

Task 1: Can We Calculate LogLoss for These Predictions?

Examine the predictions and decide whether or not we can use them to calculate LogLoss.

LinearRegressor uses the L2 loss, which doesn't do a great job of penalizing misclassifications when the output is interpreted as a probability. For example, there should be a huge difference between whether a negative example is classified as positive with probability 0.9 versus 0.9999, but L2 loss doesn't strongly differentiate these cases.

In contrast, LogLoss penalizes these "confidence errors" much more heavily. Remember that LogLoss is defined as:

$$LogLoss = \sum_{(x,y)\in D} -y \cdot \log(y_{pred}) - (1 - y) \cdot \log(1 - y_{pred})$$
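To make the contrast concrete, here is a small worked check (an illustration added here, not part of the original exercise) for a negative example (true label 0) predicted as positive with probability 0.9 versus 0.9999:

import numpy as np

for p in (0.9, 0.9999):
    l2 = (0 - p) ** 2      # 0.81 vs. roughly 1.0: L2 barely notices the difference
    ll = -np.log(1 - p)    # roughly 2.30 vs. 9.21: LogLoss punishes the confident error
    print("p=%.4f  L2=%.2f  LogLoss=%.2f" % (p, l2, ll))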

But first, we need to obtain the prediction values. We could use LinearRegressor.predict to obtain these.

Given the predictions and the targets, can we calculate LogLoss?

Solution

predict_validation_input_fn = lambda: my_input_fn(validation_examples,
                                                  validation_targets["median_house_value_is_high"],
                                                  num_epochs=1,
                                                  shuffle=False)

validation_predictions = linear_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])

_ = plt.hist(validation_predictions)
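The histogram makes the problem visible: LogLoss is undefined for values outside (0, 1), and nothing keeps LinearRegressor predictions inside that range. As a diagnostic only (a sketch added here, not part of the original solution), the predictions could be clipped before calling sklearn:

# Clipping is a hack that makes metrics.log_loss accept the values;
# the principled fix is a model that outputs probabilities (see Task 2).
clipped_predictions = np.clip(validation_predictions, 1e-7, 1 - 1e-7)
print("LogLoss on clipped linear predictions: %0.2f" % metrics.log_loss(
    validation_targets, clipped_predictions))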

Task 2: Train a Logistic Regression Model and Calculate LogLoss on the Validation Set

Using logistic regression is as simple as swapping LinearRegressor for LinearClassifier. Complete the code below.

NOTE: When running train() and predict() on a LinearClassifier model, you can access the real-valued predicted probabilities via the "probabilities" key in the returned dict, e.g., predictions["probabilities"]. Sklearn's log_loss function is handy for calculating LogLoss using these probabilities.
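As a minimal usage sketch of that note (assuming the trained linear_classifier and the predict_validation_input_fn that the solution below constructs):

predictions = linear_classifier.predict(input_fn=predict_validation_input_fn)
# Each prediction dict holds a 'probabilities' array: [P(class 0), P(class 1)].
probabilities = np.array([item['probabilities'][1] for item in predictions])
print("Validation LogLoss: %0.2f" % metrics.log_loss(validation_targets, probabilities))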

def train_linear_classifier_model(
    learning_rate,
    steps,
    batch_size,
    training_examples,
    training_targets,
    validation_examples,
    validation_targets):
  """Trains a linear classification model.

  In addition to training, this function also prints training progress information,
  as well as a plot of the training and validation loss over time.

  Args:
    learning_rate: A `float`, the learning rate.
    steps: A non-zero `int`, the total number of training steps. A training step
      consists of a forward and backward pass using a single batch.
    batch_size: A non-zero `int`, the batch size.
    training_examples: A `DataFrame` containing one or more columns from
      `california_housing_dataframe` to use as input features for training.
    training_targets: A `DataFrame` containing exactly one column from
      `california_housing_dataframe` to use as target for training.
    validation_examples: A `DataFrame` containing one or more columns from
      `california_housing_dataframe` to use as input features for validation.
    validation_targets: A `DataFrame` containing exactly one column from
      `california_housing_dataframe` to use as target for validation.

  Returns:
    A `LinearClassifier` object trained on the training data.
  """

  periods = 10
  steps_per_period = steps / periods

  # Create a linear classifier object.
  my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
  my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
  linear_classifier = tf.estimator.LinearClassifier(
      feature_columns=construct_feature_columns(training_examples),
      optimizer=my_optimizer)

  # Create input functions.
  training_input_fn = lambda: my_input_fn(training_examples,
                                          training_targets["median_house_value_is_high"],
                                          batch_size=batch_size)
  predict_training_input_fn = lambda: my_input_fn(training_examples,
                                                  training_targets["median_house_value_is_high"],
                                                  num_epochs=1,
                                                  shuffle=False)
  predict_validation_input_fn = lambda: my_input_fn(validation_examples,
                                                    validation_targets["median_house_value_is_high"],
                                                    num_epochs=1,
                                                    shuffle=False)

  # Train the model, but do so inside a loop so that we can periodically assess
  # loss metrics.
  print("Training model...")
  print("LogLoss (on training data):")
  training_log_losses = []
  validation_log_losses = []
  for period in range(0, periods):
    # Train the model, starting from the prior state.
    linear_classifier.train(
        input_fn=training_input_fn,
        steps=steps_per_period)
    # Take a break and compute predictions.
    training_probabilities = linear_classifier.predict(input_fn=predict_training_input_fn)
    training_probabilities = np.array([item['probabilities'] for item in training_probabilities])
    validation_probabilities = linear_classifier.predict(input_fn=predict_validation_input_fn)
    validation_probabilities = np.array([item['probabilities'] for item in validation_probabilities])
    # Compute training and validation loss.
    training_log_loss = metrics.log_loss(training_targets, training_probabilities)
    validation_log_loss = metrics.log_loss(validation_targets, validation_probabilities)
    # Occasionally print the current loss.
    print("  period %02d : %0.2f" % (period, training_log_loss))
    # Add the loss metrics from this period to our list.
    training_log_losses.append(training_log_loss)
    validation_log_losses.append(validation_log_loss)
  print("Model training finished.")

  # Output a graph of loss metrics over periods.
  plt.ylabel("LogLoss")
  plt.xlabel("Periods")
  plt.title("LogLoss vs. Periods")
  plt.tight_layout()
  plt.plot(training_log_losses, label="training")
  plt.plot(validation_log_losses, label="validation")
  plt.legend()

  return linear_classifier
linear_classifier = train_linear_classifier_model(
    learning_rate=0.000005,
    steps=500,
    batch_size=20,
    training_examples=training_examples,
    training_targets=training_targets,
    validation_examples=validation_examples,
    validation_targets=validation_targets)

Task 3: Calculate Accuracy and Plot a ROC Curve for the Validation Set

A few of the metrics useful for classification are the model accuracy, the ROC curve, and the area under the ROC curve (AUC). We'll examine these metrics.

LinearClassifier.evaluate calculates useful metrics like accuracy and AUC.

evaluation_metrics = linear_classifier.evaluate(input_fn=predict_validation_input_fn)

print("AUC on the validation set: %0.2f" % evaluation_metrics['auc'])
print("Accuracy on the validation set: %0.2f" % evaluation_metrics['accuracy'])
AUC on the validation set: 0.72
Accuracy on the validation set: 0.75

You may use the class probabilities, such as those calculated by LinearClassifier.predict, and Sklearn's roc_curve to obtain the true positive and false positive rates needed to plot a ROC curve.

validation_probabilities = linear_classifier.predict(input_fn=predict_validation_input_fn)
# Get just the probabilities for the positive class
validation_probabilities = np.array([item['probabilities'][1] for item in validation_probabilities])

false_positive_rate, true_positive_rate, thresholds = metrics.roc_curve(
    validation_targets, validation_probabilities)
plt.plot(false_positive_rate, true_positive_rate, label="our model")
plt.plot([0, 1], [0, 1], label="random classifier")
_ = plt.legend(loc=2)
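As a cross-check (an optional addition, not in the original exercise), sklearn can also compute the area under this curve directly from the same probabilities:

print("AUC via sklearn: %0.2f" % metrics.roc_auc_score(
    validation_targets, validation_probabilities))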

See if you can tune the learning settings of the model trained in Task 2 to improve AUC.

Often, certain metrics improve at the detriment of others, and you'll need to find settings that achieve a good trade-off.

Verify whether all metrics improve at the same time.

Solution

One possible solution that works is to just train for longer, as long as we don't overfit.

We can do this by increasing the number of steps, the batch size, or both.

All metrics improve at the same time, so our loss metric is a good proxy for both AUC and accuracy.

Notice how it takes many, many more iterations just to squeeze a few more units of AUC. This commonly happens, but even this small gain is often worth the cost.

linear_classifier = train_linear_classifier_model(
    learning_rate=0.000003,
    steps=20000,
    batch_size=500,
    training_examples=training_examples,
    training_targets=training_targets,
    validation_examples=validation_examples,
    validation_targets=validation_targets)

evaluation_metrics = linear_classifier.evaluate(input_fn=predict_validation_input_fn)

print("AUC on the validation set: %0.2f" % evaluation_metrics['auc'])
print("Accuracy on the validation set: %0.2f" % evaluation_metrics['accuracy'])

Reference for the Machine Learning Crash Course programming exercises:

https://developers.google.com/machine-learning/crash-course/exercises
