The Pursuit of Happiness for the Confined Artificial Neural Network

Motivation

Dust always finds its way to the forlorn places, the long forgotten shelves, the old books, and all the neglected space that the world has moved on from. It must be no different in the virtual realm of hard drives. Old files inside forgotten folders surely must have layers of invisible dust covering them.

When I finally picked up this project again, it felt like dust had accumulated all over it. Not from neglect though, I just ended up taking so many detours to finally arrive back to where I left it.

It was at first implemented fully in NumPy and meant to be part of my Neural Network from scratch project. But the scope of that project was already large enough. And after the end of that project, I went through what I interpret to be a self-inflicted credibility crisis. Who am I to claim the position of being able to teach anyone anything? This drove me down a path of frenetic differentiations on my whiteboard, for days untangling my path between the rows and columns of Jacobians, and for days calling PyTorch Autograd to testify as I proved my worth to myself.

Most importantly, I ended up proving to myself that there has to be some higher purpose to my endeavor. Which brought me back to writing. Not because I think I deserve to teach. Rather because I saw something I wanted to change. I saw that people from different backgrounds are eager to learn about Artificial Intelligence but feel intimidated by the math involved. People in general are quite happy to learn if only the material is made accessible. So I wrote in my attempt to make neural networks’ math reader-friendly and accessible. I wrote about all the functions we are going to use inside our neural network in this project. And I explained how to differentiate them, what their input and output look like and mean, and how to implement them from scratch or using PyTorch.

It only felt right to return to this project after having covered the fundamentals it requires. This is my ongoing journey to make learning engaging. Because life is already hard enough, and you don’t have to undergo any more suffering than necessary just to learn.

We will create our own dataset and build a neural network to classify happiness based on what it has. Which will also turn out to be a profound look inwards and a reflection upon what it means for us to pursue happiness. All my cheers and hopes that you enjoy the journey.

Creating The Happiness in Confinement Dataset

“I have been years seeking the ideal place. And I have come to the conclusion that the only way I can possibly find it is to be it.” — Alan Watts

More often than not, to feel happy is a choice one makes. When we are neither happy nor sad, it falls upon us to define our own terms for joy. These terms are different from one person to another. And while some of us argue that true happiness is unconditional, many others carry long lists of conditions they have yet to meet in their pursuit of happiness.

For our project’s neural network, life is simple enough to allow the following conditions to be the only stepping stones towards joy:

  • An ideal interval of temperatures for its tea.
  • Fast internet connection.
  • Books that are interesting.

If at least two among these three terms are satisfied, the neural network should output a state of happiness. More specifically, we will define the ideal tea temperature to be greater than or equal to 30°C and lower than 60°C.

The chosen threshold of 30°C is arbitrary. In contrast, the threshold of 60°C is supported by this research which found that preference for hot drinks higher than 60°C is associated with peptic disease.

We will define fast internet speed as greater than or equal to 20 Mbps. Again, this is an arbitrary choice; you are welcome to choose a different threshold.

Concerning books, we will say that there are two categories: books that the neural network likes will be labeled 1, while books that it does not like will be labeled 0.

Expressing how we feel is not an easy task. Many feelings intertwine at every moment. And each feeling has its own spectrum of sub-feelings. It is truly fascinating to look at attempts to discretize the continuous space of feelings. However, for the sake of simplicity, we will assume that our neural network regards happiness as binary: it is either happy or unhappy. We label unhappiness with 0 and happiness with 1.
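
To make the rule concrete, here is a minimal sketch of how a happiness label could be derived from the three conditions above (the 30°C, 60°C, and 20 Mbps thresholds come from the text; the function itself is only illustrative):

```python
def is_happy(tea_temp, internet_speed, likes_book):
    """Return 1 (happy) if at least two of the three conditions hold, else 0."""
    ideal_tea = 30 <= tea_temp < 60    # ideal tea temperature in [30, 60) °C
    fast_net = internet_speed >= 20    # fast internet: at least 20 Mbps
    good_book = likes_book == 1        # liked books are labeled 1
    return int(ideal_tea + fast_net + good_book >= 2)
```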

With these guidelines in mind, we can implement code that generates our dataset. The following code randomly generates 2000 cold, hot, and burning tea temperatures, then 3000 slow and fast internet speed measures, followed by another 3000 disliked and liked books, and finally 500 unhappy and happy labels.

A main concern when creating a dataset is to make it balanced. We have to ensure that each combination of features is equally represented. With three tea temperature ranges, two internet speed categories, and two kinds of books, there are 12 possible combinations. We will create 500 instances for each combination. Then, depending on whether an instance has a majority of ideal attributes, we will assign the corresponding happiness label. Consequently, our dataset will have 4 columns (three features plus the happiness label) and 6000 rows of instances.

The following code can be divided into two main parts: a column-wise concatenation that builds each combination's block of features, followed by a row-wise concatenation that stacks all the blocks. In the first part, the 12 combinations of features are created with 500 rows each. In the second part, a final concatenation of all the rows gives us a full dataset with 6000 rows. You can find the detailed implementation here.
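
The linked notebook holds the original implementation; as a rough sketch of the same idea (the sampling ranges below are assumptions, only the 30°C, 60°C, and 20 Mbps thresholds come from the text), each of the 12 combinations could be built as a block of 500 rows and then stacked:

```python
import numpy as np

rng = np.random.default_rng(0)
rows = []
tea_ranges = [(0, 30), (30, 60), (60, 100)]   # cold, ideal, burning (°C); assumed bounds
net_ranges = [(1, 20), (20, 100)]             # slow, fast (Mbps); assumed bounds
# 3 tea ranges x 2 internet ranges x 2 book labels = 12 combinations of 500 rows each
for tea_low, tea_high in tea_ranges:
    for net_low, net_high in net_ranges:
        for book in (0, 1):
            tea = rng.uniform(tea_low, tea_high, size=(500, 1))
            net = rng.uniform(net_low, net_high, size=(500, 1))
            books = np.full((500, 1), book)
            # happy (1) when at least two of the three ideal conditions hold
            ideal_count = ((tea >= 30) & (tea < 60)).astype(int) + (net >= 20).astype(int) + books
            happy = (ideal_count >= 2).astype(int)
            rows.append(np.hstack([tea, net, books, happy]))  # columns side by side
data = np.vstack(rows)  # stack the 12 blocks: 6000 rows x 4 columns
```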

Splitting the Dataset

We will now split our dataset into training, validation, and testing sets. The method train_test_split() from Scikit-learn will provide the added benefit of shuffling the rows of our data.
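
A minimal sketch of that two-step split, continuing from the data array above (the 70/15/15 proportions are an assumption, not from the original):

```python
from sklearn.model_selection import train_test_split

X, y = data[:, :3], data[:, 3].astype(int)

# train_test_split shuffles the rows by default
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.3, random_state=42)
X_valid, X_test, y_valid, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=42)
```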

Standardizing

To standardize our dataset, we will use the class StandardScaler from Scikit-learn. We have to be careful to fit StandardScaler on the training set only. We also don’t want to standardize our one-hot-encoded categorical columns. As a result, the following code standardizes the first two columns (tea temperature and internet speed), then concatenates the standardized output with the last two columns (books and happiness).
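
A sketch of that step, assuming the happiness labels are kept in a separate y array so only the book column needs to be concatenated back:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
scaler.fit(X_train[:, :2])   # fit on the training set only, and only on the numeric columns

def standardize(X):
    # standardized tea temperature and internet speed, book column left untouched
    return np.hstack([scaler.transform(X[:, :2]), X[:, 2:]])

X_train_std = standardize(X_train)
X_valid_std = standardize(X_valid)
X_test_std = standardize(X_test)
```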

Converting NumPy to PyTorch DataLoader

There are only a few steps left before we can consider our data fully prepared:

  • We have to make our dataset compatible with the input that our neural network can take.
  • We have to be able to load our neural network with mini-batches from the training set for training, and from the validation and testing sets for evaluation.

We will begin by converting our data from NumPy arrays to PyTorch tensors. After this, we will use each tensor to create a TensorDataset. Finally, we will convert each TensorDataset to a DataLoader with specific sizes for the mini-batches.
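
A minimal sketch of those three steps (the mini-batch size of 64 is an assumption):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

def to_loader(X, y, batch_size, shuffle):
    dataset = TensorDataset(torch.tensor(X, dtype=torch.float32),
                            torch.tensor(y, dtype=torch.long))
    return DataLoader(dataset, batch_size=batch_size, shuffle=shuffle)

train_loader = to_loader(X_train_std, y_train, batch_size=64, shuffle=True)
valid_loader = to_loader(X_valid_std, y_valid, batch_size=64, shuffle=False)
test_loader = to_loader(X_test_std, y_test, batch_size=64, shuffle=False)
```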

The Happiness in Confinement Dataset is now ready. And we are ready to finally meet the Confined Neural Network.

The Confined Neural Network

Architecture: 3 inputs, linear layer with a ReLU activation, linear layer with a Softmax activation, 2 outputs.

The Confined Neural Network will have an input layer, followed by a linear layer and a ReLU activation, followed by another linear layer and a Softmax activation. The first output neuron will store the probability that the input describes an unhappy state. The second neuron will store the equivalent probability for the happy state.

The following code implements this architecture as a class called Network:
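
The original class lives in the linked notebook; a minimal sketch consistent with the architecture above might look like this (the hidden layer size is an assumption, and, as the note below explains, Softmax is left out of the module):

```python
import torch.nn as nn

class Network(nn.Module):
    def __init__(self, hidden_size=8):            # hidden size is an assumed value
        super().__init__()
        self.linear1 = nn.Linear(3, hidden_size)  # 3 inputs: tea, internet, books
        self.relu = nn.ReLU()
        self.linear2 = nn.Linear(hidden_size, 2)  # 2 outputs: unhappy, happy

    def forward(self, x):
        # no Softmax here; CrossEntropyLoss applies it during training (see the note below)
        return self.linear2(self.relu(self.linear1(x)))
```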

Note: You might have noticed that the Network class does not include any Softmax activation. The reason is that in PyTorch, CrossEntropyLoss applies Softmax internally before computing its negative log loss, which we address in the next section.

Training

Before we can proceed with training the neural network, we have to choose a learning rate and a number of epochs. We also have to define an optimization algorithm and a loss function. The loss function will be cross-entropy loss. And for now, we start with a stochastic gradient descent optimizer.
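
A sketch of that setup (the learning rate and the number of epochs are assumed values):

```python
import torch

learning_rate = 0.01   # assumed value
epochs = 100           # assumed value

model = Network()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
```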

Perhaps the most exciting part here is that we will visualize the training of our neural network using TensorBoard. As an avid PyTorch user, it came as great news for me to know that I can still take advantage of TensorBoard for visualization. Even greater was my excitement to learn that there is an extension of TensorBoard that integrates it within Colab notebooks.

If you are interested in learning how to setup TensorBoard within a Colab notebook, I highly recommend you check the section titled TensorBoard in my notebook. The following code trains our neural network and visualizes the progress of the training and validation losses:
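
The full version is in the notebook; below is a condensed sketch of the same loop, assuming the TensorFlow tf.summary API is what writes the logs that TensorBoard reads (the log directory names are illustrative):

```python
import torch
import tensorflow as tf

# summary file writers, one per log directory
train_summary_writer = tf.summary.create_file_writer('logs/train')
valid_summary_writer = tf.summary.create_file_writer('logs/valid')

for epoch in range(epochs):
    # training pass
    model.train()
    train_loss = 0.0
    for X_batch, y_batch in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(X_batch), y_batch)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()
    with train_summary_writer.as_default():
        tf.summary.scalar('loss', train_loss / len(train_loader), step=epoch)

    # validation pass
    model.eval()
    valid_loss = 0.0
    with torch.no_grad():
        for X_batch, y_batch in valid_loader:
            valid_loss += criterion(model(X_batch), y_batch).item()
    with valid_summary_writer.as_default():
        tf.summary.scalar('loss', valid_loss / len(valid_loader), step=epoch)
```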

You will notice in the code above that we are using an object called summary to call the method scalar. The basic idea behind using TensorBoard is to first specify some log directories. Inside those directories, we create the files that are going to be read by TensorBoard. The objects that write these files are called summary file writers. In the code above, train_summary_writer and valid_summary_writer are both summary file writers. By calling the method scalar we write our loss values for each epoch into the appropriate summary file. This file is then read by TensorBoard and conveniently displayed with an interactive interface.

TensorBoard: training and validation losses w.r.t epochs.

Some improvements to try:

  • Increase the number of epochs.
  • Increase the size of the hidden layer.
  • Replace SGD with Adam optimization.

You can find the detailed implementation of these improvements here.

Evaluation

In this section, we focus on evaluating our model using different metrics.

  • We start by making predictions on a small batch from the test set to inspect our model’s performance.
  • Next, we make predictions on the full test set and aggregate both the output probabilities and the predictions.
  • We then plot precision-recall curves for our classes using TensorBoard.
  • We implement our own confusion matrix method to not only return the usual matrix but also the indices of the instances where the model was wrong.
  • We calculate the precision, recall, and accuracy of our model.
  • Finally, we use the indices returned from our confusion matrix method to inspect the incorrect cases and gain insight on the neural network’s weaknesses.

Making Predictions

Besides making predictions on the test set, we are also interested in checking our model’s output and understanding what goes on under the hood. To achieve both goals, we implement the following steps:

  1. We create a DataLoader with a batch size of 3 for the test set. We are limiting the batch size to 3 so that we can inspect without feeling overwhelmed.
  2. We feedforward 3 instances to our network and store the result in a variable called linout denoting the second linear output of the model.
  3. We calculate the probability of each instance using the PyTorch method softmax. The output of softmax is stored inside a variable called prob.
  4. We want our model to predict the class that has the highest probability. The method max can return both the highest probability and its index. We are only interested in the index, which we store in the variable pred (see the sketch below).

Our results look very promising. The first three predictions are correct, and the softmax probabilities show that the model is very confident about the correct output.

More for the purpose of learning than actual need, let’s plot the precision-recall curves for our classes. We apply the above steps for the rest of the test set, then we concatenate all the probabilities and all the predictions of each batch. With these probabilities and predictions, we can easily plot precision-recall curves for our classes with the method add_pr_curve. I relied on this official PyTorch tutorial for the implementation of the following plots:
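
A sketch of that step, assuming torch.utils.tensorboard is used for the add_pr_curve call, as in the tutorial (tag names and log directory are illustrative):

```python
import torch
import torch.nn.functional as F
from torch.utils.tensorboard import SummaryWriter

# aggregate probabilities and labels over the full test set
all_probs, all_labels = [], []
with torch.no_grad():
    for X_batch, y_batch in test_loader:
        all_probs.append(F.softmax(model(X_batch), dim=1))
        all_labels.append(y_batch)
test_probs = torch.cat(all_probs)
test_labels = torch.cat(all_labels)

writer = SummaryWriter('logs/pr_curves')
for class_index, class_name in enumerate(['unhappy', 'happy']):
    writer.add_pr_curve(class_name,
                        test_labels == class_index,   # binary ground truth for this class
                        test_probs[:, class_index])   # predicted probability of this class
writer.close()
```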

Precision recall curves for each class.

Once again, our results are ideal. We are getting the precision-recall curves of a perfect classifier. A kind reminder that this is a fun and feel-good project, and that the purpose here is to learn and practice while feeling entertained.

Confusion Matrix

It is already very insightful to compute a confusion matrix, but we will go a little further. Our method will return the usual confusion matrix with true positives, false positives, true negatives, and false negatives. In addition, it will also return the indices of the false cases for later inspection.
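
The linked notebook has the actual implementation; a sketch of the idea, returning both the counts and the indices of the misclassified instances, might look like:

```python
import torch

def confusion_matrix(preds, labels):
    """Return TP, FP, TN, FN counts plus the indices of false positives and false negatives."""
    preds, labels = torch.as_tensor(preds), torch.as_tensor(labels)
    tp = int(((preds == 1) & (labels == 1)).sum())
    tn = int(((preds == 0) & (labels == 0)).sum())
    fp_idx = torch.nonzero((preds == 1) & (labels == 0)).flatten()
    fn_idx = torch.nonzero((preds == 0) & (labels == 1)).flatten()
    return tp, len(fp_idx), tn, len(fn_idx), fp_idx, fn_idx
```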

To avoid overwhelming the article with code, here is a link to the implementation of the confusion matrix method. The output of the method is:

Confusion matrix of the neural network.

Precision, Recall, and Accuracy

From the results of the confusion matrix method, we can easily calculate the precision, recall, and accuracy of our neural network.
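
Using the sketch above, the three metrics follow directly from the counts:

```python
test_preds = test_probs.argmax(dim=1)   # predicted class for every test instance
tp, fp, tn, fn, fp_idx, fn_idx = confusion_matrix(test_preds, test_labels)

precision = tp / (tp + fp)
recall = tp / (tp + fn)
accuracy = (tp + tn) / (tp + fp + tn + fn)
```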

  • The precision of our model is 0.9877
  • The recall of our model is 0.9678
  • The accuracy of our model is 0.9817

Incorrect Cases

We can only improve once we reflect upon our flaws. So it is interesting to look at the instances where the model made mistakes. We already have the indices of the incorrect cases. However, if we were to use them on the test set, we would get the standardized values, and we would not be able to draw any conclusions from looking at standardized data.

We first need to convert instances back to their initial scale. Thankfully, Scikit-learn provides a method called inverse_transform for its StandardScaler. We will use this method to reverse the scaling back to normal.
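
A sketch of that reversal, reusing the scaler fitted earlier and the indices returned by the confusion matrix sketch (variable names are illustrative):

```python
import numpy as np

# gather the misclassified rows of the standardized test set
wrong_idx = np.concatenate([fp_idx.numpy(), fn_idx.numpy()]).astype(int)
wrong_rows = X_test_std[wrong_idx]

# undo the scaling on the two numeric columns; the book column was never scaled
wrong_original = np.hstack([scaler.inverse_transform(wrong_rows[:, :2]),
                            wrong_rows[:, 2:]])
```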

As we examine the false positives and false negatives, it becomes quickly evident that they are edge cases. In each instance, there was at least one attribute with a value close to the defined thresholds.

Some cases that live close to their defined thresholds.

In other instances, the decision of the neural network is skewed by an extreme value of the tea temperature. The combination of an internet speed that is just above the good threshold, a good book, but a very high or very low tea temperature, resulted in some unhappy predictions.

Some cases where an extreme tea temperature overwhelmed the just-right internet speed and a good book.

We could argue that these are minor mistakes and that we have already reached excellent evaluation scores. But it would be a wrong and weak argument driven by the lassitude accumulated throughout our journey here. Ultimately, a model’s worth is only truly evaluated on edge cases. To improve our model, we could train it using more edge cases. This, however, is only an iteration of what we already did. So I leave it to you as an exercise, if you would like to replicate and improve upon my work: join the happy playground.

As for me, I stop here. Because although the neural network is still flawed, these are the kind of flaws to reflect upon. Flaws from which we can draw invaluable insights about ourselves, and how we take for granted what we have.

Conclusion

“Live with a steady superiority over life - don’t be afraid of misfortune, and do not yearn for happiness; it is, after all, all the same: the bitter doesn’t last forever, and the sweet never fills the cup to overflowing.” — Aleksandr Solzhenitsyn

When was the last time you realized you have been mistakenly unhappy? Did it ever dawn upon you that although you barely had what you need, you had something, and that is plenty more than nothing? You would not want to fall into the sad predicament of the wrong pessimist. Just as much as you would not want to float away unawares with ignorant bliss.

Life has to be about way more than happiness. We are beings aware of our mortality, prone to illness, subject to aging, exposed to the arbitrary whims of our circumstances. The pursuit has to be towards meaning rather than happiness. Meaning that sustains us throughout our hardships. Because we are also beings strong enough to bear our tragic conditions of existence.

Source: https://towardsdatascience.com/the-pursuit-of-happiness-for-the-confined-artificial-neural-network-bd39f04c6313
