Hypothesis and Inference

Data science frequently involves forming and testing hypotheses.

What will we do with all this statistics and probability theory? The science part of data science frequently involves forming and testing hypotheses about our data and the processes that generate it.

Statistical Hypothesis Testing

Often, as data scientists, we’ll want to test whether a certain hypothesis is likely to be true. For our purposes, hypotheses are assertions like “this coin is fair” or “data scientists prefer Python to R” or “people are more likely to navigate away from the page without ever reading the content if we pop up an irritating interstitial advertisement with a tiny, hard-to-find close button” that can be translated into statistics about data. Under various assumptions, those statistics can be thought of as observations of random variables from known distributions, which allows us to make statements about how likely those assumptions are to hold.

In the classical setup, we have a null hypothesis H0 that represents some default position, and some alternative hypothesis H1 that we’d like to compare it with. We use statistics to decide whether we can reject H0 as false or not. This will probably make more sense with an example.

Example: Flipping a Coin

Imagine we have a coin and we want to test whether it’s fair. We’ll make the assumption that the coin has some probability p of landing heads, and so our null hypothesis is that the coin is fair — that is, that p = 0.5. We’ll test this against the alternative hypothesis p ≠ 0.5.

In particular, our test will involve flipping the coin some number n times and counting the number of heads X. Each coin flip is a Bernoulli trial, which means that X is a Binomial(n, p) random variable, which (as we saw in the previous article) we can approximate using the normal distribution:
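
A minimal sketch of that approximation in Python (the helper name normal_approximation_to_binomial is my own assumption, not necessarily what the original used):

```python
from typing import Tuple
import math

def normal_approximation_to_binomial(n: int, p: float) -> Tuple[float, float]:
    """Return mu and sigma for the normal approximation to a Binomial(n, p) variable."""
    mu = p * n
    sigma = math.sqrt(p * (1 - p) * n)
    return mu, sigma
```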

Whenever a random variable follows a normal distribution, we can use normal_cdf to figure out the probability that its realized value lies within (or outside) a particular interval:
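
A sketch of those interval helpers, built on the normal_cdf from the previous article (normal_probability_above and normal_probability_between are referenced later in the text; normal_probability_outside is an assumed companion):

```python
import math

def normal_cdf(x: float, mu: float = 0, sigma: float = 1) -> float:
    """Cumulative distribution function of the normal distribution."""
    return (1 + math.erf((x - mu) / math.sqrt(2) / sigma)) / 2

# the normal cdf is the probability the variable is below a threshold
normal_probability_below = normal_cdf

# it's above the threshold if it's not below it
def normal_probability_above(lo: float, mu: float = 0, sigma: float = 1) -> float:
    return 1 - normal_cdf(lo, mu, sigma)

# it's between if it's less than hi, but not less than lo
def normal_probability_between(lo: float, hi: float,
                               mu: float = 0, sigma: float = 1) -> float:
    return normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)

# it's outside if it's not between
def normal_probability_outside(lo: float, hi: float,
                               mu: float = 0, sigma: float = 1) -> float:
    return 1 - normal_probability_between(lo, hi, mu, sigma)
```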

We can also do the reverse — find either the nontail region or the (symmetric) interval around the mean that accounts for a certain level of likelihood. For example, if we want to find an interval centered at the mean and containing 60% probability, then we find the cutoffs where the upper and lower tails each contain 20% of the probability (leaving 60%):
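
One way to sketch this, assuming an inverse_normal_cdf helper (binary search over the normal_cdf above); the bound-function names are assumptions:

```python
def inverse_normal_cdf(p: float, mu: float = 0, sigma: float = 1,
                       tolerance: float = 0.00001) -> float:
    """Find an approximate z with normal_cdf(z, mu, sigma) == p via binary search."""
    if mu != 0 or sigma != 1:
        # standardize, search the standard normal, then rescale
        return mu + sigma * inverse_normal_cdf(p, tolerance=tolerance)
    low_z, hi_z = -10.0, 10.0
    while hi_z - low_z > tolerance:
        mid_z = (low_z + hi_z) / 2
        if normal_cdf(mid_z) < p:
            low_z = mid_z
        else:
            hi_z = mid_z
    return mid_z

def normal_upper_bound(probability: float, mu: float = 0, sigma: float = 1) -> float:
    """Return the z for which P(Z <= z) == probability."""
    return inverse_normal_cdf(probability, mu, sigma)

def normal_lower_bound(probability: float, mu: float = 0, sigma: float = 1) -> float:
    """Return the z for which P(Z >= z) == probability."""
    return inverse_normal_cdf(1 - probability, mu, sigma)

def normal_two_sided_bounds(probability: float, mu: float = 0, sigma: float = 1):
    """Return the symmetric (about the mean) bounds that contain the given probability."""
    tail_probability = (1 - probability) / 2
    upper_bound = normal_lower_bound(tail_probability, mu, sigma)  # tail_probability above
    lower_bound = normal_upper_bound(tail_probability, mu, sigma)  # tail_probability below
    return lower_bound, upper_bound

# e.g. the central 60% of a standard normal lies roughly between -0.84 and 0.84
```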

In particular, let’s say that we choose to flip the coin n = 1000 times. If our hypothesis of fairness is true, X should be distributed approximately normally with mean 500 and standard deviation 15.8:
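
Using the sketches above (mu_0 and sigma_0 are the names the article uses later):

```python
mu_0, sigma_0 = normal_approximation_to_binomial(1000, 0.5)
print(mu_0, sigma_0)   # 500.0, roughly 15.8
```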

We need to make a decision about significance — how willing we are to make a type 1 error (“false positive”), in which we reject H0 even though it’s true. For reasons lost to the annals of history, this willingness is often set at 5% or 1%. Let’s choose 5%. Consider the test that rejects H0 if X falls outside the bounds given by:
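
With normal_two_sided_bounds as sketched above, those bounds come out to roughly 469 and 531:

```python
lower_bound, upper_bound = normal_two_sided_bounds(0.95, mu_0, sigma_0)
print(lower_bound, upper_bound)   # roughly 469, 531
```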

Assuming p really equals 0.5 (i.e., H0 is true), there is just a 5% chance we observe an X that lies outside this interval, which is the exact significance we wanted. Said differently, if H0 is true, then, approximately 19 times out of 20, this test will give the correct result.

We are also often interested in the power of a test, which is the probability of not making a type 2 error, in which we fail to reject H0 even though it’s false. In order to measure this, we have to specify what exactly H0 being false means. (Knowing merely that p is not 0.5 doesn’t give you a ton of information about the distribution of X.) In particular, let’s check what happens if p is really 0.55, so that the coin is slightly biased toward heads.

In that case, we can calculate the power of the test with:
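
A sketch of that calculation, continuing with the helpers above:

```python
# 95% bounds based on the assumption that p is really 0.5
lo, hi = normal_two_sided_bounds(0.95, mu_0, sigma_0)

# actual mu and sigma if p were really 0.55
mu_1, sigma_1 = normal_approximation_to_binomial(1000, 0.55)

# a type 2 error means we fail to reject the null hypothesis,
# which happens when X is still inside our original interval
type_2_probability = normal_probability_between(lo, hi, mu_1, sigma_1)
power = 1 - type_2_probability   # roughly 0.887
```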

Imagine instead that our null hypothesis was that the coin is not biased toward heads, or that p ≤ 0.5. In that case we want a one-sided test that rejects the null hypothesis when X is much larger than 500 but not when X is smaller than 500. So a 5%-significance test involves using normal_probability_below to find the cutoff below which 95% of the probability lies:
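
A sketch of that one-sided version, using normal_upper_bound (the inverse-CDF helper above) to find the cutoff and normal_probability_below for the type 2 probability; mu_1 and sigma_1 come from the previous sketch:

```python
hi = normal_upper_bound(0.95, mu_0, sigma_0)
# hi is roughly 526 (< 531, since we need more probability in the upper tail)

type_2_probability = normal_probability_below(hi, mu_1, sigma_1)
power = 1 - type_2_probability   # roughly 0.936
```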

This is a more powerful test, since it no longer rejects H0 when X is below 469 (which is very unlikely to happen if H1 is true) and instead rejects H0 when X is between 526 and 531 (which is somewhat likely to happen if H1 is true).

p-Values

An alternative way of thinking about the preceding test involves p-values. Instead of choosing bounds based on some probability cutoff, we compute the probability — assuming H0 is true — that we would see a value at least as extreme as the one we actually observed.

For our two-sided test of whether the coin is fair, we compute:
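
A sketch of that computation (the name two_sided_p_value is an assumption):

```python
def two_sided_p_value(x: float, mu: float = 0, sigma: float = 1) -> float:
    """How likely is a value at least as extreme as x (in either direction)
    if the values come from an N(mu, sigma) distribution?"""
    if x >= mu:
        # x is above the mean, so the tail is everything greater than x
        return 2 * normal_probability_above(x, mu, sigma)
    else:
        # x is below the mean, so the tail is everything less than x
        return 2 * normal_probability_below(x, mu, sigma)
```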

If we were to see 530 heads, we would compute:
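
With the sketched two_sided_p_value (and 529.5 rather than 530, as the tip below explains):

```python
two_sided_p_value(529.5, mu_0, sigma_0)   # roughly 0.062
```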

Tip: Why did we use 529.5 instead of 530? This is what’s called a continuity correction. It reflects the fact that normal_probability_between(529.5, 530.5, mu_0, sigma_0) is a better estimate of the probability of seeing 530 heads than normal_probability_between(530, 531, mu_0, sigma_0) is. Correspondingly, normal_probability_above(529.5, mu_0, sigma_0) is a better estimate of the probability of seeing at least 530 heads. You may have noticed that we also used this in the computation above.

One way to convince yourself that this is a sensible estimate is with a simulation:
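
For instance, a sketch of such a simulation:

```python
import random

extreme_value_count = 0
for _ in range(100000):
    num_heads = sum(1 if random.random() < 0.5 else 0   # count the heads
                    for _ in range(1000))                # in 1000 flips
    if num_heads >= 530 or num_heads <= 470:             # and count how often
        extreme_value_count += 1                         # the total is "extreme"

# a p-value of ~0.062 suggests roughly 6,200 extreme values out of 100,000 runs
print(extreme_value_count / 100000)
```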

Since the p-value is greater than our 5% significance, we don’t reject the null. If we instead saw 532 heads, the p-value would be:
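
With the same sketch:

```python
two_sided_p_value(531.5, mu_0, sigma_0)   # roughly 0.0463
```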

which is smaller than the 5% significance, which means we would reject the null. It’s the exact same test as before. It’s just a different way of approaching the statistics.

Similarly, we would have:
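
For one-sided tests the corresponding p-values are just the single-tail probabilities; a sketch (the alias names are assumptions):

```python
upper_p_value = normal_probability_above
lower_p_value = normal_probability_below
```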

For our one-sided test, if we saw 525 heads we would compute:
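
With the sketched helpers and the continuity correction:

```python
upper_p_value(524.5, mu_0, sigma_0)   # roughly 0.061
```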

which means we wouldn’t reject the null. If we saw 527 heads, the computation would be:
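
Again with the sketched helpers:

```python
upper_p_value(526.5, mu_0, sigma_0)   # roughly 0.047
```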

and we would reject the null.

Note: Make sure your data is roughly normally distributed before using normal_probability_above to compute p-values. The annals of bad data science are filled with examples of people opining that the chance of some observed event occurring at random is one in a million, when what they really mean is “the chance, assuming the data is distributed normally,” which is pretty meaningless if the data isn’t. There are various statistical tests for normality, but even plotting the data is a good start.

Confidence Intervals

We’ve been testing hypotheses about the value of the heads probability p, which is a parameter of the unknown “heads” distribution. When this is the case, a third approach is to construct a confidence interval around the observed value of the parameter.

For example, we can estimate the probability of the unfair coin by looking at the average value of the Bernoulli variables corresponding to each flip — 1 if heads, 0 if tails. If we observe 525 heads out of 1,000 flips, then we estimate p equals 0.525.

How confident can we be about this estimate? Well, if we knew the exact value of p, the central limit theorem (recall “The Central Limit Theorem” in the previous article) tells us that the average of those Bernoulli variables should be approximately normal, with mean p and standard deviation:
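
That standard deviation is sqrt(p * (1 − p) / n); as a small helper (the name bernoulli_average_sigma is my own):

```python
import math

def bernoulli_average_sigma(p: float, n: int) -> float:
    """Standard deviation of the average of n Bernoulli(p) variables."""
    return math.sqrt(p * (1 - p) / n)
```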

Here we don’t know p, so instead we use our estimate:
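
A sketch of that plug-in estimate for 525 heads out of 1,000:

```python
p_hat = 525 / 1000                               # 0.525
mu = p_hat
sigma = math.sqrt(p_hat * (1 - p_hat) / 1000)    # roughly 0.0158
```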

This is not entirely justified, but people seem to do it anyway. Using the normal approximation, we conclude that we are “95% confident” that the following interval contains the true parameter p:
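
Using normal_two_sided_bounds from the earlier sketch:

```python
normal_two_sided_bounds(0.95, mu, sigma)   # roughly (0.4940, 0.5560)
```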

Tip: This is a statement about the interval, not about p. You should understand it as the assertion that if you were to repeat the experiment many times, 95% of the time the “true” parameter (which is the same every time) would lie within the observed confidence interval (which might be different every time).

In particular, we do not conclude that the coin is unfair, since 0.5 falls within our confidence interval.

If instead we’d seen 540 heads, then we’d have:
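
With the same approach:

```python
p_hat = 540 / 1000
mu = p_hat
sigma = math.sqrt(p_hat * (1 - p_hat) / 1000)   # roughly 0.0158
normal_two_sided_bounds(0.95, mu, sigma)        # roughly (0.5091, 0.5709)
```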

Here, “fair coin” doesn’t lie in the confidence interval. (The “fair coin” hypothesis doesn’t pass a test that you’d expect it to pass 95% of the time if it were true.)

P-hacking

A procedure that erroneously rejects the null hypothesis only 5% of the time will — by definition — 5% of the time erroneously reject the null hypothesis:
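
A sketch of that demonstration (the helper names run_experiment and reject_fairness are my own):

```python
import random
from typing import List

def run_experiment() -> List[bool]:
    """Flip a fair coin 1000 times; True = heads, False = tails."""
    return [random.random() < 0.5 for _ in range(1000)]

def reject_fairness(experiment: List[bool]) -> bool:
    """Reject using the 5% significance bounds (469, 531) from before."""
    num_heads = sum(experiment)
    return num_heads < 469 or num_heads > 531

random.seed(0)
experiments = [run_experiment() for _ in range(1000)]
num_rejections = sum(reject_fairness(experiment) for experiment in experiments)

print(num_rejections)   # in the neighborhood of 50 rejections out of 1000
```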

What this means is that if you’re setting out to find “significant” results, you usually can. Test enough hypotheses against your data set, and one of them will almost certainly appear significant. Remove the right outliers, and you can probably get your p-value below 0.05. (We did something vaguely similar in “Correlation” in the previous article; did you notice?)

This is sometimes called P-hacking and is in some ways a consequence of the “inference from p-values framework.” A good article criticizing this approach is “The Earth Is Round.”

If you want to do good science, you should determine your hypotheses before looking at the data, you should clean your data without the hypotheses in mind, and you should keep in mind that p-values are not substitutes for common sense. (An alternative approach is “Bayesian Inference”.)

Example: Running an A/B Test

One of your primary responsibilities at DataSciencester is experience optimization, which is a euphemism for trying to get people to click on advertisements. One of your advertisers has developed a new energy drink targeted at data scientists, and the VP of Advertisements wants your help choosing between advertisement A (“tastes great!”) and advertisement B (“less bias!”).

Being a scientist, you decide to run an experiment by randomly showing site visitors one of the two advertisements and tracking how many people click on each one.

If 990 out of 1,000 A-viewers click their ad while only 10 out of 1,000 B-viewers click their ad, you can be pretty confident that A is the better ad. But what if the differences are not so stark? Here’s where you’d use statistical inference.

Let’s say that N_A people see ad A, and that n_A of them click it. We can think of each ad view as a Bernoulli trial where p_A is the probability that someone clicks ad A. Then (if N_A is large, which it is here) we know that n_A/N_A is approximately a normal random variable with mean p_A and standard deviation sigma_A = sqrt(p_A * (1 − p_A) / N_A).

Similarly, n_B/N_B is approximately a normal random variable with mean p_B and standard deviation sigma_B = sqrt(p_B * (1 − p_B) / N_B).

If we assume those two normals are independent (which seems reasonable, since the individual Bernoulli trials ought to be), then their difference should also be normal, with mean p_B − p_A and standard deviation sqrt(sigma_A^2 + sigma_B^2).

This means we can test the null hypothesis that p_A and p_B are the same (that is, that p_A − p_B is zero), using the statistic:
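
A sketch of that statistic in code (the helper names estimated_parameters and a_b_test_statistic are assumptions):

```python
import math
from typing import Tuple

def estimated_parameters(N: int, n: int) -> Tuple[float, float]:
    """Estimate p and its standard deviation from n clicks out of N views."""
    p = n / N
    sigma = math.sqrt(p * (1 - p) / N)
    return p, sigma

def a_b_test_statistic(N_A: int, n_A: int, N_B: int, n_B: int) -> float:
    """z-statistic for p_B - p_A under the null hypothesis that p_A == p_B."""
    p_A, sigma_A = estimated_parameters(N_A, n_A)
    p_B, sigma_B = estimated_parameters(N_B, n_B)
    return (p_B - p_A) / math.sqrt(sigma_A ** 2 + sigma_B ** 2)
```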

which should approximately be a standard normal.

For example, if “tastes great” gets 200 clicks out of 1,000 views and “less bias” gets 180 clicks out of 1,000 views, the statistic equals:
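
With the sketch above, that comes out to roughly:

```python
z = a_b_test_statistic(1000, 200, 1000, 180)   # roughly -1.14
```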

The probability of seeing such a large difference if the means were actually equal would be:
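
Using the two_sided_p_value sketch from earlier:

```python
two_sided_p_value(z)   # roughly 0.254
```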

which is large enough that you can’t conclude there’s much of a difference. On the other hand, if “less bias” only got 150 clicks, we’d have:
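
With the same sketches:

```python
z = a_b_test_statistic(1000, 200, 1000, 150)   # roughly -2.94
two_sided_p_value(z)                           # roughly 0.003
```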

which means there’s only a 0.003 probability you’d see such a large difference if the ads were equally effective.

Bayesian Inference

The procedures we’ve looked at have involved making probability statements about our tests: “there’s only a 3% chance you’d observe such an extreme statistic if our null hypothesis were true.”

An alternative approach to inference involves treating the unknown parameters themselves as random variables. The analyst (that’s you) starts with a prior distribution for the parameters and then uses the observed data and Bayes’s Theorem to get an updated posterior distribution for the parameters. Rather than making probability judgments about the tests, you make probability judgments about the parameters themselves.

For example, when the unknown parameter is a probability (as in our coin-flipping example), we often use a prior from the Beta distribution, which puts all its probability between 0 and 1:
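
A sketch of the Beta density (the names B and beta_pdf are assumptions; B is the normalizing constant):

```python
import math

def B(alpha: float, beta: float) -> float:
    """A normalizing constant so that the total probability is 1."""
    return math.gamma(alpha) * math.gamma(beta) / math.gamma(alpha + beta)

def beta_pdf(x: float, alpha: float, beta: float) -> float:
    if x <= 0 or x >= 1:   # no weight outside of [0, 1]
        return 0
    return x ** (alpha - 1) * (1 - x) ** (beta - 1) / B(alpha, beta)
```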

Generally speaking, this distribution centers its weight at:
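
alpha / (alpha + beta)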

and the larger alpha and beta are, the “tighter” the distribution is.

For example, if alpha and beta are both 1, it’s just the uniform distribution (centered at 0.5, very dispersed). If alpha is much larger than beta, most of the weight is near 1. And if alpha is much smaller than beta, most of the weight is near zero. The image below shows several different Beta distributions.

So let’s say we assume a prior distribution on p. Maybe we don’t want to take a stand on whether the coin is fair, and we choose alpha and beta to both equal 1. Or maybe we have a strong belief that it lands heads 55% of the time, and we choose alpha equals 55, beta equals 45.

Then we flip our coin a bunch of times and see h heads and t tails. Bayes’s Theorem (and some mathematics that’s too tedious for us to go through here) tells us that the posterior distribution for p is again a Beta distribution but with parameters alpha + h and beta + t.

Let’s say you flip the coin 10 times and see only 3 heads.

If you started with the uniform prior (in some sense refusing to take a stand about the coin’s fairness), your posterior distribution would be a Beta(4, 8), centered around 0.33. Since you considered all probabilities equally likely, your best guess is something pretty close to the observed probability.

If you started with a Beta(20, 20) (expressing the belief that the coin was roughly fair), your posterior distribution would be a Beta(23, 27), centered around 0.46, indicating a revised belief that maybe the coin is slightly biased toward tails.

And if you started with a Beta(30, 10) (expressing a belief that the coin was biased to flip 75% heads), your posterior distribution would be a Beta(33, 17), centered around 0.66. In that case you’d still believe in a heads bias, but less strongly than you did initially. These three different posteriors are plotted in the image below.

If you flipped the coin more and more times, the prior would matter less and less until eventually you’d have (nearly) the same posterior distribution no matter which prior you started with.

For example, no matter how biased you initially thought the coin was, it would be hard to maintain that belief after seeing 1,000 heads out of 2,000 flips (unless you are a lunatic who picks something like a Beta(1000000,1) prior).

What’s interesting is that this allows us to make probability statements about hypotheses: “Based on the prior and the observed data, there is only a 5% likelihood the coin’s heads probability is between 49% and 51%.” This is philosophically very different from a statement like “if the coin were fair we would expect to observe data so extreme only 5% of the time.”

Using Bayesian inference to test hypotheses is considered somewhat controversial — in part because its mathematics can get somewhat complicated, and in part because of the subjective nature of choosing a prior. We won’t use it any further, but it’s good to know about.

I hope you found this article useful. Thank you for reading this far. If you have any questions and/or suggestions, let me know in the comments. You can also get in touch with me directly through email & LinkedIn.

References and Further Reading

Linear Algebra for Data Science

Statistics for Data Science

Probability for Data Science

Translated from: https://medium.com/@ravivarmathotakura/hypothesis-and-inference-for-data-science-ec76a532e1d4
