Artificial Neural Networks: An Intuitive Approach (Part 1)

Contents

  1. Artificial Neural Network
  2. Activation functions
  3. Loss functions

Artificial Neural Network

The human brain is the most sophisticated of all supercomputers. An artificial neural network (ANN) is a technique designed to simulate the way the human brain analyzes and processes information. Just as a human brain learns through experience, so does an ANN. An ANN has self-learning capabilities, i.e., as more and more data becomes available, it can improve its predictive/modelling capabilities.

Artificial neural networks are designed to function like the human brain, with neuron nodes interconnected like a web.

An ANN has hundreds or thousands of artificial neurons called processing units, which are interconnected by nodes. These processing units are composed of input and output units. The input units receive various forms and structures of information based on an internal weighting system, and the neural network attempts to learn about the information presented to produce one output report.

Just like humans need a set of rules and guidelines to process information into a result, ANNs are programmed with a set of learning rules called backpropagation (backward propagation of error) to improve their output results.

An ANN initially goes through a training phase where it learns to recognize patterns in data, whether visually, aurally, or textually. During this supervised phase, the network compares its actual output produced with what it was meant to produce — the desired output. The difference between both outcomes is adjusted using backpropagation. This means that the network works backward, going from the output unit to the input units to adjust the weight of its connections between the units until the difference between the actual and desired outcome produces the lowest possible error.

Let us dive deep into what exactly an ANN structure is!

ANN Structure

[Figure: Perceptron]

The above structure represents an ANN in its most basic form, also called a Perceptron.

A set of inputs denoted as {x1, x2, …, xm} is fed in, each through its own connection with a weight denoted as {w1, w2, …, wm}. Every connection has a weight attached, which may have either a positive or a negative value. The neuron sums all the signals it receives, with each signal multiplied by its associated weight on the connection.

This output is then passed through a transfer/activation function, g(y), which is normally non-linear, to give the final output.

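To make this concrete, here is a minimal sketch of a perceptron's forward pass in Python. The input values, the weights, and the choice of a sigmoid as g(y) are illustrative assumptions, not values from the article.

    import numpy as np

    def sigmoid(y):
        # One common non-linear transfer/activation function g(y).
        return 1.0 / (1.0 + np.exp(-y))

    def perceptron_forward(x, w, b=0.0):
        # Sum the signals: each input x_i multiplied by its
        # associated connection weight w_i (plus an optional bias).
        y = np.dot(w, x) + b
        # Pass the sum through the activation to get the final output.
        return sigmoid(y)

    x = np.array([0.5, -1.2, 3.0])   # inputs {x1, x2, x3} (assumed)
    w = np.array([0.4, 0.1, -0.6])   # weights {w1, w2, w3} (assumed)
    print(perceptron_forward(x, w))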

The back-propagation ANN is a feed-forward neural network structure that takes the input to the network and multiplies it by the weights on the connections between neurons or nodes, summing the products before passing them through a threshold function to produce an output. The back-propagation algorithm works by minimizing the error between the output and the target (actual) by propagating the error back into the network. The weights on each of the connections between the neurons are changed according to the size of the initial error. The input data are then fed forward again, producing a new output and error. The process is repeated until an acceptably small error is obtained. Each of the neurons uses a transfer/activation function and is fully connected to nodes on the next layer. Once the error reaches the desired value, the training is stopped. The final model is thus a function that internally represents the output in terms of the inputs at that point. A more detailed discussion of the back-propagation algorithm will be carried out in upcoming articles.

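As a rough illustration of that loop (feed the inputs forward, measure the error against the desired output, adjust the weights, repeat), here is a sketch of training a single sigmoid neuron by gradient descent. The data, learning rate, and stopping threshold are all assumptions for demonstration.

    import numpy as np

    def sigmoid(y):
        return 1.0 / (1.0 + np.exp(-y))

    # Illustrative training data: two inputs per sample, AND-like targets.
    X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
    t = np.array([0.0, 0.0, 0.0, 1.0])

    w, b, lr = np.zeros(2), 0.0, 0.5  # learning rate is assumed

    for epoch in range(10000):
        out = sigmoid(X @ w + b)       # feed forward: actual output
        err = out - t                  # difference from desired output
        # Work backward: gradient of the squared error w.r.t. each weight.
        grad = err * out * (1.0 - out)
        w -= lr * (X.T @ grad) / len(X)
        b -= lr * grad.mean()
        if np.mean(err ** 2) < 1e-3:   # stop at an acceptably small error
            break

    print(np.round(sigmoid(X @ w + b), 2))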

Activation functions:

(Kindly re-read this topic once you have covered all the posts, as it contains terms that will be explained later; they are covered here because they add to the understanding now as well.)

Let us take the example of binary classification. What would the activation function be? Any guesses?

[Figure: Binary classification / sigmoid function]

The above model is an exact replica of the logistic regression model. The sigmoid/logistic function is used in this case.

Activation functions are mathematical equations that determine the output of a neural network. The function is attached to each neuron in the network, and determines whether it should be activated (“fired”) or not, based on whether each neuron's input is relevant for the model's prediction. Activation functions also help normalize the output of each neuron to a range between 0 and 1 or between -1 and 1.

An additional aspect of activation functions is that they must be computationally efficient because they are calculated across thousands or even millions of neurons for each data sample. Modern neural networks use a technique called backpropagation to train the model, which places an increased computational strain on the activation function and its derivative function.

Linear activation functions:

Binary Step Function

A binary step function is a threshold-based activation function. If the input value is above a certain threshold, the neuron is activated and sends exactly the same signal to the next layer; below the threshold, it is not activated.

The problem with a step function is that it does not allow multi-value outputs — for example, it cannot support classifying the inputs into one of several categories.

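A minimal sketch of a binary step activation; the threshold of 0 is an assumed, common choice:

    import numpy as np

    def binary_step(x, threshold=0.0):
        # 1 if the input exceeds the threshold, else 0 -- the same
        # signal is sent no matter how far past the threshold x is.
        return np.where(x > threshold, 1.0, 0.0)

    print(binary_step(np.array([-2.0, -0.1, 0.3, 5.0])))  # [0. 0. 1. 1.]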

Linear Activation Function

A linear activation function takes the form:

A = cx

It takes the inputs, multiplies them by the weights for each neuron, and creates an output signal proportional to the input. In one sense, a linear function is better than a step function because it allows multiple outputs, not just yes and no.

However, a linear activation function has two major problems:

1. Not possible to use backpropagation (gradient descent) to train the model — the derivative of the function is a constant, and has no relation to the input, X. So it’s not possible to go back and understand which weights in the input neurons can provide a better prediction.

2. All layers of the neural network collapse into one — with linear activation functions, no matter how many layers in the neural network, the last layer will be a linear function of the first layer (because a linear combination of linear functions is still a linear function). So a linear activation function turns the neural network into just one layer, as the sketch below demonstrates.

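A quick numerical sketch of point 2 above; the weight matrices here are arbitrary illustrative values:

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 3))   # first linear layer (illustrative)
    W2 = rng.normal(size=(2, 4))   # second linear layer (illustrative)
    x = rng.normal(size=3)

    two_layers = W2 @ (W1 @ x)     # two stacked linear layers
    one_layer = (W2 @ W1) @ x      # the single collapsed linear layer
    print(np.allclose(two_layers, one_layer))  # True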

A neural network with a linear activation function is simply a linear regression model. It has limited power and a limited ability to handle complex, varying parameters of the input data.

Note: Backpropagation will be covered in depth later

Non-Linear Activation Functions

Modern neural network models use non-linear activation functions. They allow the model to create complex mappings between the network’s inputs and outputs, which are essential for learning and modeling complex data, such as images, video, audio, and data sets which are non-linear or have high dimensionality.

Almost any process imaginable can be represented as a functional computation in a neural network, provided that the activation function is non-linear.

Non-linear functions address the problems of a linear activation function:

  1. They allow backpropagation because they have a derivative function which is related to the inputs.
  2. They allow “stacking” of multiple layers of neurons to create a deep neural network. Multiple hidden layers of neurons are needed to learn complex data sets with high levels of accuracy.

Nonlinear Activation Functions and How to Choose Them

[Figure: Sigmoid]

Sigmoid / Logistic

Advantages

  • Smooth gradient, preventing “jumps” in output values.

  • Output values bound between 0 and 1, normalizing the output of each neuron.

  • Clear predictions — For X above 2 or below -2, tends to bring the Y value (the prediction) to the edge of the curve, very close to 1 or 0. This enables clear predictions.

Disadvantages

  • Vanishing gradient (Will be covered in depth later) — for very high or very low values of X, there is almost no change to the prediction, causing a vanishing gradient problem. This can result in the network refusing to learn further, or being too slow to reach an accurate prediction.

  • Outputs not zero centered.

  • Computationally expensive

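A short sketch of the sigmoid and its derivative, showing how the gradient vanishes for very high or very low inputs; the sample points are arbitrary:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sigmoid_grad(x):
        s = sigmoid(x)
        return s * (1.0 - s)  # derivative of the sigmoid

    for x in [-10.0, -2.0, 0.0, 2.0, 10.0]:
        # The gradient peaks at 0.25 near x = 0 and vanishes far from it.
        print(x, round(sigmoid(x), 4), round(sigmoid_grad(x), 6))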

TanH / Hyperbolic Tangent

Advantages

  • Zero centered — making it easier to model inputs that have strongly negative, neutral, and strongly positive values.

  • Otherwise like the Sigmoid function.

Disadvantages

  • Like the Sigmoid function.
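
A quick sketch contrasting tanh's zero-centered outputs with the sigmoid's; the sample points are arbitrary:

    import numpy as np

    x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    # tanh outputs are centered on 0 and bound between -1 and 1,
    # while sigmoid outputs are bound between 0 and 1.
    print(np.round(np.tanh(x), 3))                # zero centered
    print(np.round(1.0 / (1.0 + np.exp(-x)), 3))  # not zero centered
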
[Figure: ReLU]

ReLU (Rectified Linear Unit)

Advantages

  • Computationally efficient — allows the network to converge very quickly

  • Non-linear — although it looks like a linear function, ReLU has a derivative function and allows for backpropagation

Disadvantages

  • The Dying ReLU problem — when inputs approach zero, or are negative, the gradient of the function becomes zero, so the network cannot perform backpropagation and cannot learn.

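A minimal ReLU sketch, including the gradient that becomes zero for negative inputs (the dying ReLU issue noted above); the sample inputs are arbitrary:

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    def relu_grad(x):
        # Zero gradient for negative inputs: those neurons stop learning.
        return np.where(x > 0, 1.0, 0.0)

    x = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
    print(relu(x))       # [0.  0.  0.  0.5 3. ]
    print(relu_grad(x))  # [0. 0. 0. 1. 1.]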

[Figure: Leaky ReLU]

Leaky ReLU

Advantages

  • Prevents dying ReLU problem — this variation of ReLU has a small positive slope in the negative area, so it does enable backpropagation, even for negative input values

  • Otherwise like ReLU.

Disadvantages

  • Results not consistent — leaky ReLU does not provide consistent predictions for negative input values.

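A sketch of leaky ReLU; the slope of 0.01 is a common illustrative choice, not one fixed by the article:

    import numpy as np

    def leaky_relu(x, slope=0.01):
        # The small positive slope in the negative region keeps the
        # gradient non-zero, so backpropagation can still update weights.
        return np.where(x > 0, x, slope * x)

    print(leaky_relu(np.array([-3.0, -0.5, 0.5, 3.0])))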

[Figure: Parametric ReLU]

Parametric ReLU

Advantages

  • Allows the negative slope to be learned — unlike leaky ReLU, this function provides the slope of the negative part of the function as an argument. It is, therefore, possible to perform backpropagation and learn the most appropriate value of α.

  • Otherwise like ReLU.

Disadvantages

  • May perform differently for different problems.

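A sketch of parametric ReLU; α is shown here as a plain argument, and the second function gives the gradient with respect to α that backpropagation would use to learn it (all values are illustrative):

    import numpy as np

    def prelu(x, alpha):
        # Same shape as leaky ReLU, but alpha is a learnable parameter.
        return np.where(x > 0, x, alpha * x)

    def prelu_grad_alpha(x):
        # Gradient of the output w.r.t. alpha: non-zero only for x < 0,
        # which is what lets backpropagation learn the negative slope.
        return np.where(x > 0, 0.0, x)

    x = np.array([-2.0, 1.5])
    print(prelu(x, alpha=0.25))  # [-0.5  1.5]
    print(prelu_grad_alpha(x))   # [-2.  0.]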

[Figure: Softmax]

Softmax

Advantages

  • Able to handle multiple classes, where other activation functions can handle only one class — normalizes the outputs for each class between 0 and 1, and divides by their sum, giving the probability of the input value being in a specific class.

  • Useful for output neurons — typically Softmax is used only for the output layer, for neural networks that need to classify inputs into multiple categories.

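A numerically stable softmax sketch; the class scores are arbitrary:

    import numpy as np

    def softmax(logits):
        # Subtracting the max is a standard numerical-stability trick.
        exps = np.exp(logits - np.max(logits))
        # Dividing by the sum makes the outputs a probability
        # distribution over the classes (each in [0, 1], summing to 1).
        return exps / exps.sum()

    probs = softmax(np.array([2.0, 1.0, 0.1]))
    print(np.round(probs, 3), probs.sum())  # [0.659 0.242 0.099] 1.0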

[Figure: Swish]

Swish

Swish is a new, self-gated activation function discovered by researchers at Google. According to their paper, it performs better than ReLU at a similar level of computational efficiency. In experiments on ImageNet with identical models running ReLU and Swish, the new function achieved top-1 classification accuracy 0.6–0.9% higher.

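Swish in sketch form: x multiplied by its own sigmoid, which is what "self-gated" refers to; the sample inputs are arbitrary:

    import numpy as np

    def swish(x):
        # Self-gated: the input x is scaled by its own sigmoid.
        return x / (1.0 + np.exp(-x))

    print(np.round(swish(np.array([-3.0, -1.0, 0.0, 1.0, 3.0])), 3))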

Finding the best weights/coefficients (The loss function)

A loss function is a method of evaluating how well a specific algorithm models the given data. If predictions deviate too much from the actual values, the loss function will cough up a very large number. We therefore define a goodness metric (optimization function) to measure how good the fit (for regression problems) or the separation (for classification problems) is.

Ideal properties of a loss function

  1. Robust: The result does not drastically explode due to the presence of outliers.
  2. Non-ambiguous: Multiple coefficient values should not give the same error.
  3. Sparse: Should use as little data as possible.
  4. Convexity: It should be convex.
[Figure: Convexity of a loss function]
[Figure: Loss functions cheat sheet]

Regression loss functions:

As can be seen in the diagram below, the regression losses are simple and self-explanatory. The squared loss (L2) is less robust than the absolute loss (L1) due to the presence of the squared term; L2 loss is more easily differentiable than L1; and Huber loss is more robust and still differentiable, as it combines the best of the L1 and L2 losses.

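A sketch of the three regression losses discussed; the Huber delta of 1.0 and the data (with one outlier) are assumed for illustration:

    import numpy as np

    def l2_loss(y_true, y_pred):
        return np.mean((y_true - y_pred) ** 2)   # squared loss (MSE)

    def l1_loss(y_true, y_pred):
        return np.mean(np.abs(y_true - y_pred))  # absolute loss (MAE)

    def huber_loss(y_true, y_pred, delta=1.0):
        # Quadratic for small residuals (like L2), linear for large
        # ones (like L1): differentiable everywhere yet robust.
        r = np.abs(y_true - y_pred)
        return np.mean(np.where(r <= delta,
                                0.5 * r ** 2,
                                delta * (r - 0.5 * delta)))

    y_true = np.array([1.0, 2.0, 3.0, 100.0])  # last point is an outlier
    y_pred = np.array([1.1, 1.9, 3.2, 3.0])
    print(l2_loss(y_true, y_pred), l1_loss(y_true, y_pred),
          huber_loss(y_true, y_pred))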

[Figure: Common regression loss functions]

Classification loss functions:

Binary classification:

Exponential Loss:

Logistic Loss:

[Figure: Logistic loss function]

Binary Hinge loss:

[Figure: Hinge loss]

To better understand the concept of hinge loss, let us take actual and predicted values, and let us choose the margin as K = 0.20.

The table above illustrates hinge loss for a hypothetical SVM (support vector machine). The goal is binary classification. Items can be class -1 or +1 (for example, male/female, or live/die, etc.). An SVM classifier accepts predictor values and emits a value between -1.0 and +1.0, for example +0.3872 or -0.4548. (Actually, that's not entirely true, but assume it is — the following explanation doesn't change.)

If the computed output value is any positive value, the prediction is class +1 and vice versa.

But an SVM has a notion of a margin. Suppose the margin is 0.2 and a set of actual and computed values is as shown in the table. Here's what's going on:

For item [0], the actual is +1 and the computed is +0.55, so this is a correct prediction, and because the computed value is greater than the margin of 0.2, there is no hinge loss error.

For item [1], the actual is +1 and the computed is +0.25, so the same situation occurs.

For item [3], the actual is +1 and the computed is -0.25, so the classification is wrong and there's a large hinge loss.

For item [6], the actual is -1 and the computed is -0.05, so the classification is correct, but there is a moderate hinge loss because the computed value is too close to zero.

For item [7], the actual is -1 and the computed is +0.25, so the classification is wrong and there's a large hinge loss. Notice the symmetry with item [3].

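A sketch reproducing this walkthrough in code. The form max(0, K - actual * computed) is assumed as the hinge loss here, since it is consistent with the narrative; only the items described in the text are included:

    def hinge_loss(actual, computed, margin=0.2):
        # Zero loss only when the prediction is correct AND clears the
        # margin; the loss grows as computed drifts toward the wrong side.
        return max(0.0, margin - actual * computed)

    # (actual, computed) pairs taken from the walkthrough above.
    items = {0: (+1, +0.55), 1: (+1, +0.25), 3: (+1, -0.25),
             6: (-1, -0.05), 7: (-1, +0.25)}
    for i, (a, c) in items.items():
        print(f"item [{i}]: hinge loss = {hinge_loss(a, c):.2f}")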

Multiclass Classification:

Hinge Loss / Multi-class SVM Loss

In simple terms, the score of the correct category should be greater than the sum of the scores of all incorrect categories by some safety margin (usually one). Hence hinge loss is used for maximum-margin classification, most notably for SVMs. Although not differentiable, it is a convex function, which makes it easy to work with the usual convex optimizers used in the machine learning domain.

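A sketch of the multi-class SVM loss in its common per-class form, summing max(0, s_j - s_correct + margin) over the incorrect classes; the scores and margin are illustrative:

    import numpy as np

    def multiclass_svm_loss(scores, correct_idx, margin=1.0):
        # Penalize every incorrect class whose score comes within
        # `margin` of (or exceeds) the correct class's score.
        margins = np.maximum(0.0, scores - scores[correct_idx] + margin)
        margins[correct_idx] = 0.0  # the correct class adds no loss
        return margins.sum()

    scores = np.array([3.2, 5.1, -1.7])  # illustrative class scores
    print(multiclass_svm_loss(scores, correct_idx=0))  # 2.9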

[Figure: Cross entropy loss]
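
The formula in the figure is not reproduced here, but a standard multi-class cross-entropy can be sketched as follows; the label and predicted probabilities are illustrative:

    import numpy as np

    def cross_entropy(y_true, y_prob, eps=1e-12):
        # -sum over classes of true probability times log of predicted
        # probability; clipping avoids taking log(0).
        y_prob = np.clip(y_prob, eps, 1.0)
        return -np.sum(y_true * np.log(y_prob))

    y_true = np.array([0.0, 1.0, 0.0])  # one-hot label (class 1)
    y_prob = np.array([0.1, 0.8, 0.1])  # predicted probabilities
    print(round(cross_entropy(y_true, y_prob), 4))  # 0.2231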

Gradient Descent:

For the mathematical intuition and understanding of gradient descent, kindly go through the link below (an excellent article on gradient descent).

Translated from: https://medium.com/analytics-vidhya/artificial-neural-networks-an-intuitive-approach-part-1-890efac210f0
