Using Deep Neural Networks for regression problems might seem like overkill (and quite often it is), but in cases where you have a significant amount of high-dimensional data they can outperform any other ML model.

When you learn about Neural Networks you usually start with an image classification problem like the MNIST dataset — this is an obvious choice, as advanced tasks with high-dimensional data are where DNNs really thrive.

Surprisingly, when you try to apply what you learned on MNIST to a regression task, you might struggle for a while before your super-advanced DNN model is any better than a basic Random Forest Regressor. Sometimes you might never reach that moment…

In this guide, I listed some key tips and tricks learned while using DNNs for regression problems. The data is a set of nearly 50 features describing 25k properties in Warsaw. I described the feature selection process in my previous article: feature-selection-and-error-analysis-while-working-with-spatial-data, so now we will focus on creating the best possible model predicting property price per m2 using the selected features.

The code and data source used for this article can be found on GitHub.


1. Getting started

When training a Deep Neural Network I usually follow these key steps:


  • A) Choose a default architecture — number of layers, number of neurons, activation function

  • B) Regularize model


  • C) Adjust network architecture


  • D) Adjust the learning rate and number of epochs

  • E) Extract the optimal model using Callbacks

Usually creating the final model takes a few runs through all of these steps, but an important thing to remember is: DO ONE THING AT A TIME. Don't try to change the architecture, regularization, and learning rate at the same time, as you will not know what really worked and will probably spend hours going in circles.

Before you start building any DNNs for regression tasks there are 3 key things you must remember:


  • Standardize your data to make training more efficient (see the short sketch after this list)

  • Use the ReLU activation function for all hidden layers — you will get nowhere with the default sigmoid activation

  • Use a linear activation function for the single-neuron output layer
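As a minimal sketch of the standardization step (assuming scikit-learn is available and X_train / X_test are the feature matrices used throughout this article):

from sklearn.preprocessing import StandardScaler

# fit the scaler on the training set only, then apply the same
# transformation to the test set to avoid data leakage
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)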

Another important task is selecting the loss function. Mean Squared Error and Mean Absolute Error are the two most common choices. As my goal is to minimize the average percentage error and maximize the share of all buildings within a 5% error, I chose MAE, as it penalizes outliers less and is easier to interpret — it pretty much tells you how many $/m2, on average, each offer is off from the actual value.

There is also a loss function directly linked to my goal — Mean Absolute Percentage Error — but after testing it against MAE I found the training to be less efficient.
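Both losses can be passed to model.compile by name; a quick sketch of the two options discussed here (model stands for any already-built Keras model):

from tensorflow import keras

# option used in this article: Mean Absolute Error
model.compile(optimizer=keras.optimizers.Adam(), loss='mean_absolute_error')

# alternative tied directly to the percentage-error goal
model.compile(optimizer=keras.optimizers.Adam(), loss='mean_absolute_percentage_error')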

2. Base DNN model

We start with a basic network with 5 hidden layers and a decreasing number of neurons in every second layer.


tf.keras.backend.clear_session()
tf.random.set_seed(60)

model = keras.models.Sequential([
    keras.layers.Dense(512, input_dim=X_train.shape[1], activation='relu'),
    keras.layers.Dense(512, input_dim=X_train.shape[1], activation='relu'),
    keras.layers.Dense(units=256, activation='relu'),
    keras.layers.Dense(units=256, activation='relu'),
    keras.layers.Dense(units=128, activation='relu'),
    keras.layers.Dense(units=1, activation="linear"),
], name="Initial_model")

model.summary()

We use the Adam optimizer and start by training each model for 200 epochs — this part of the model configuration will be kept constant up to point 7.

optimizer = keras.optimizers.Adam()

# note: warm_start is a scikit-learn argument, not a Keras one, so it is dropped here
model.compile(optimizer=optimizer, loss='mean_absolute_error')

history = model.fit(X_train, y_train,
                    epochs=200, batch_size=1024,
                    validation_data=(X_test, y_test),
                    verbose=1)

Initial model learning curve (starting from epoch 10)

Our first model turned out to be quite a failure: we have horrendous overfitting on the training data, and our validation loss actually starts increasing after epoch 100.
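All learning-curve figures in this article start from epoch 10 to hide the huge initial losses; a minimal sketch of such a plot, assuming matplotlib and the history object returned by model.fit above:

import matplotlib.pyplot as plt

# skip the first 10 epochs so the y-axis is not dominated by the initial loss
start = 10
plt.plot(history.history['loss'][start:], label='Training loss')
plt.plot(history.history['val_loss'][start:], label='Validation loss')
plt.xlabel('Epoch (offset by 10)')
plt.ylabel('MAE')
plt.legend()
plt.show()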

3. Regularization with Dropout

Dropout is probably the best answer to DNN regularization and works with all network sizes and architectures. Dropout randomly drops a portion of a layer's neurons during each training pass, which forces the remaining neurons to be more versatile — this decreases overfitting, as a single neuron can no longer map one specific instance, because it will not always be there during training.

I advise reading the original paper as it describes the idea very well and does not require years of academic experience to understand it — Dropout: A Simple Way to Prevent Neural Networks from Overfitting


tf.keras.backend.clear_session()
tf.random.set_seed(60)

model = keras.models.Sequential([
    keras.layers.Dense(512, input_dim=X_train.shape[1], activation='relu'),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(512, activation='relu'),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(units=256, activation='relu'),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(units=256, activation='relu'),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(units=128, activation='relu'),
    keras.layers.Dense(units=1, activation="linear"),
], name="Dropout")

The (0.x) after Dropout specifies what share of neurons you want to drop, which translates into how much you want to regularize. I usually start with a dropout of around 0.3–0.5 in the largest layer and then reduce its strength in deeper layers. The idea behind this approach is that neurons in deeper layers tend to have more specific tasks, so dropping too many of them would increase bias too much.

Dropout model learning curve (starting from epoch 10)

Analyzing the learning curve of the modified model, we can see that we are going in the right direction. First of all, we improved on the validation loss of the previous model (marked by the grey threshold line); secondly, we seem to have replaced overfitting with a slight underfit.

4. Tackling dying/exploding neurons with Batch Normalization

When working with several layers with ReLU activation, we run a significant risk of dying neurons having a negative effect on performance. This can lead to the underfitting we saw in the previous model: we might not actually be using a large share of our neurons, because their outputs have been reduced to 0.
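One way to check for this is to count the units whose output is zero for every sample; a rough sketch, assuming the trained model and X_test from above (the layer index is illustrative):

import numpy as np
from tensorflow import keras

# outputs of the first hidden layer (after ReLU) for the whole test set
activation_model = keras.Model(inputs=model.input, outputs=model.layers[0].output)
activations = activation_model.predict(X_test)

# a unit is "dead" if it never activates on any test sample
dead_share = np.mean(np.all(activations == 0, axis=0))
print(f"Share of dead units in the first hidden layer: {dead_share:.1%}")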

Batch Normalization is one of the best ways to handle this issue — when applied, we normalize the activation outputs of each layer for each batch to reduce the effect of extreme activations on parameter training, which in turn reduces the risk of vanishing/exploding gradients. The original paper describing the solution is harder to read than the one referenced previously, but I would still suggest giving it a try — Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

tf.keras.backend.clear_session()
tf.random.set_seed(60)

model = keras.models.Sequential([
    keras.layers.Dense(512, input_dim=X_train.shape[1], activation='relu'),
    keras.layers.BatchNormalization(),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(512, activation='relu'),
    keras.layers.BatchNormalization(),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(units=256, activation='relu'),
    keras.layers.BatchNormalization(),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(units=256, activation='relu'),
    keras.layers.BatchNormalization(),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(units=128, activation='relu'),
    keras.layers.Dense(units=1, activation="linear"),
], name="Batchnorm")

BatchNorm model learning curve (starting from epoch 10)

Adding Batch Normalization helped us bring some of the neurons back to life, which increased our model's variance, turning the underfit into a slight overfit — training neural networks is often a game of cat and mouse, balancing bias against variance.

Another piece of good news is that we are still improving in terms of validation error.

5. Changing the activation function to Leaky ReLU

The Leaky ReLU activation function is a slight modification of ReLU that allows some negative activations to leak through, further reducing the risk of dying neurons. Leaky ReLU usually takes longer to train, which is why we will train this model for another 100 epochs.
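For reference, Leaky ReLU is just ReLU with a small slope for negative inputs; a two-line sketch (Keras' LeakyReLU layer defaults to alpha=0.3, the alpha below is only illustrative):

import numpy as np

def leaky_relu(x, alpha=0.3):
    # pass positive values through unchanged, scale negative ones by alpha
    return np.where(x > 0, x, alpha * x)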

Leaky ReLU activation
tf.keras.backend.clear_session()
tf.random.set_seed(60)

model = keras.models.Sequential([
    keras.layers.Dense(512, input_dim=X_train.shape[1]),
    keras.layers.LeakyReLU(),
    keras.layers.BatchNormalization(),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(512),
    keras.layers.LeakyReLU(),
    keras.layers.BatchNormalization(),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(units=256),
    keras.layers.LeakyReLU(),
    keras.layers.BatchNormalization(),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(units=256),
    keras.layers.LeakyReLU(),
    keras.layers.BatchNormalization(),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(units=128),
    keras.layers.LeakyReLU(),
    keras.layers.Dense(units=1, activation="linear"),
], name="LeakyRELU")

Leaky ReLU model learning curve (starting from epoch 10)

It seems Leaky ReLU reduced the overfitting and gave us a healthier learning curve, where we can see potential for improvement even after 300 epochs. We nearly reached the lowest error of the previous model, but we managed to do so without overfitting, which leaves us room to increase variance.

6. Expanding the network with an additional hidden layer of 1024 neurons

At this point, I am happy enough with the basic model to make the network larger by adding another hidden layer with 1024 neurons. The new layer also gets the highest dropout rate. I also experimented with the dropout rates of the lower layers, since the overall architecture changed.

tf.keras.backend.clear_session()
tf.random.set_seed(60)

model = keras.models.Sequential([
    keras.layers.Dense(1024, input_dim=X_train.shape[1]),
    keras.layers.LeakyReLU(),
    keras.layers.BatchNormalization(),
    keras.layers.Dropout(0.4),
    keras.layers.Dense(512),
    keras.layers.LeakyReLU(),
    keras.layers.BatchNormalization(),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(512),
    keras.layers.LeakyReLU(),
    keras.layers.BatchNormalization(),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(units=256),
    keras.layers.LeakyReLU(),
    keras.layers.BatchNormalization(),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(units=256),
    keras.layers.LeakyReLU(),
    keras.layers.BatchNormalization(),
    keras.layers.Dropout(0.01),
    keras.layers.Dense(units=128),
    keras.layers.LeakyReLU(),
    keras.layers.Dropout(0.05),
    keras.layers.Dense(units=1, activation="linear"),
], name="Larger_network")

Larger network model learning curve (starting from epoch 10)

Expanding the network architecture seems to be a step in the right direction: we increased variance slightly, getting a learning curve that is close to the optimal balance. We also managed to get our validation loss nearly on par with the overfitted BatchNorm model.

7. Improving training efficiency with Learning Rate Decay

Once we are happy with the network architecture, the learning rate is the most important hyperparameter that needs tuning. I decided to use learning rate decay, which lets me train the model faster at the beginning and then decrease the learning rate in later epochs to make training more precise.

optimizer = keras.optimizers.Adam(lr=0.005, decay=5e-4)

Selecting the right starting rate and decay can be challenging and takes some trial and error. In my case it turned out that the default Adam learning rate in Keras, which is 0.001, was a bit too low to start with. I started with a learning rate of 0.005 and, over 400 epochs, decreased it to 0.001.
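Under the hood, the legacy Keras decay argument rescales the learning rate as lr / (1 + decay * iterations), where iterations counts batch updates rather than epochs. A back-of-the-envelope sketch of the schedule above, assuming roughly 20k training rows and batch_size=1024, i.e. about 20 updates per epoch (these counts are approximations, not values from the original code):

initial_lr = 0.005
decay = 5e-4
steps_per_epoch = 20         # ~20k rows / batch size of 1024

for epoch in (0, 100, 200, 400):
    iterations = epoch * steps_per_epoch
    lr = initial_lr / (1 + decay * iterations)
    print(f"epoch {epoch:3d}: lr ~ {lr:.4f}")
# prints roughly 0.0050, 0.0025, 0.0017, 0.0010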

Learning rate decay over 400 epochs

Learning rate decay model learning curve (starting from epoch 10)

Tuning the learning rate helped us finally improve our validation error, while still keeping the learning curve healthy without too much risk of overfitting — there might even be some room to train the model for another 100 epochs.

8. Stopping the training at the best epoch using Callbacks

The last task remaining before choosing our best model is to use callbacks to stop training at the optimal epoch. This allows us to retrieve the model from the exact epoch where we reached the minimal error. The big advantage of this solution is that you do not really need to worry whether you want to train for 300 or 600 epochs — if your model starts overfitting, the callback will get you back to the optimal epoch.

from tensorflow.keras.callbacks import ModelCheckpoint

# save the weights of every epoch that improves on the best val_loss so far
checkpoint_name = 'Weights/Weights-{epoch:03d}--{val_loss:.5f}.hdf5'
checkpoint = ModelCheckpoint(checkpoint_name, monitor='val_loss',
                             verbose=1, save_best_only=True, mode='auto')
callbacks_list = [checkpoint]

You need to define your callbacks: checkpoint_name specifies where and how you want to save the weights for each epoch, and checkpoint specifies how the callback should behave — I advise monitoring val_loss and saving only when an epoch makes some progress on it.

history = model.fit(X_train, y_train,
                    epochs=500, batch_size=1024,
                    validation_data=(X_test, y_test),
                    callbacks=callbacks_list,
                    verbose=1)

Then all you need to do is to add callbacks while fitting your model.

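Once training is done, the weights of the best epoch can be loaded back from the newest checkpoint file; a sketch assuming the Weights directory defined above (picking the file by modification time is just one convenient option):

import glob
import os

# the newest checkpoint corresponds to the last improvement in val_loss
weight_files = sorted(glob.glob('Weights/Weights-*.hdf5'), key=os.path.getmtime)
model.load_weights(weight_files[-1])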

Callbacks model learning curve (starting from epoch 10)

Using callbacks allowed us to retrieve the optimal model trained at epoch 468 — the next 30 epochs did not bring any improvement as we started to overfit the training set.

9. Model evolution summary

Comparing validation loss between models

It took us 7 steps to get to the desired model output. We managed to improve at nearly every step, with a plateau between the batch_norm and 1024_layer models, when our key goal was to reduce overfitting. To be honest, refining these 7 steps probably took me 70 steps, so bear in mind that training DNNs is an iterative process, and don't be put off if your improvement stagnates for a few hours.
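A small sketch of how the comparison behind the chart above can be produced, assuming each run's History object was kept under an illustrative name (these variables are placeholders, not from the original code):

histories = {
    'initial': history_initial,
    'dropout': history_dropout,
    'batch_norm': history_batchnorm,
    'leaky_relu': history_leakyrelu,
    '1024_layer': history_larger,
    'lr_decay': history_lrdecay,
}

for name, hist in histories.items():
    # lowest validation MAE reached by each model
    print(f"{name:>12}: {min(hist.history['val_loss']):.1f}")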

10. DNN vs Random Forest

Finally, how did our best DNN perform compared to the base Random Forest Regressor trained on the same data in the previous article?

On the two key KPIs, our Random Forest scored as follows:

  • Share of forecasts within 5% absolute error = 44.6%
  • Mean percentage error = 8.8%

Our best Deep Neural Network scored:


  • Share of forecasts within 5% absolute error = 43.3% (-1.3 p.p.)
  • Mean percentage error = 9.1% (+0.3 p.p.)
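For completeness, a minimal sketch of how these two KPIs can be computed from model predictions (assuming NumPy arrays y_test and y_pred = model.predict(X_test).flatten(); the full evaluation code referenced in the article lives on GitHub):

import numpy as np

percentage_error = np.abs(y_pred - y_test) / y_test

# share of forecasts within 5% absolute error
share_within_5 = np.mean(percentage_error <= 0.05)
# mean percentage error
mean_pct_error = np.mean(percentage_error)

print(f"within 5%: {share_within_5:.1%}, mean percentage error: {mean_pct_error:.1%}")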

Can we cry now? How is it possible that, after hours of meticulous training, our advanced neural network did not beat a Random Forest? To be honest, there are two key reasons:

  • A sample size of 25k records is still quite small for training DNNs. I chose to give this architecture a try as I am gathering new data every month, and I am confident that within a few months I will reach samples closer to 100k, which should give the DNN the needed edge.
  • The Random Forest model was quite overfitted, and I am not confident that it would generalize well to other properties despite its high performance on the validation set — at this point, I would probably still use the DNN model in production as the more reliable one.

To summarize — I would advise against starting to solve a regression problem with a DNN. Unless you are working with hundreds of thousands of samples on a really complex project, a Random Forest Regressor will usually get you initial results much faster — if those prove promising, you can proceed to a DNN. Training an efficient DNN takes more time, and if your data sample is not large enough it might never reach Random Forest performance.
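For comparison, a baseline Random Forest Regressor like the one referenced here takes only a few lines with scikit-learn (the hyperparameters below are defaults/illustrative, not the exact setup from the previous article):

from sklearn.ensemble import RandomForestRegressor

# quick tabular-data baseline to benchmark the DNN against
rf = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=60)
rf.fit(X_train, y_train)
rf_pred = rf.predict(X_test)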

[1]: Nitish Srivastava et al. (June 14, 2014). Dropout: A Simple Way to Prevent Neural Networks from Overfitting

[2]: Sergey Ioffe, Christian Szegedy (March 2, 2015). Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

Translated from: https://towardsdatascience.com/training-neural-networks-for-price-prediction-with-tensorflow-8aafe0c55198

