Python: Hamiltonian Monte Carlo from Scratch

If your experience with Markov Chain Monte Carlo (MCMC) methods has been anything like mine, it’s been an endless search for answers. Every time you get an answer to one question, a dozen more pop up! You’re continuously learning how much you don’t know. I recently accepted a full-time role as a data scientist at an organization that makes extensive use of Bayesian statistics and, by extension, MCMC methods. So I decided to go to war with this branch of mathematics once and for all! For a detailed review of Metropolis, Metropolis-Hastings, and Hamiltonian Monte Carlo, please visit my public Google Colab Notebook :)

I’ll revisit the key ideas behind Metropolis-Hastings (MH) before diving into the new material because Hamiltonian Monte Carlo (HMC) is an extension of MH.

Metropolis-Hastings

Metropolis-Hastings is a glorified random walk. You need four elements: a starting point, a target distribution, a proposal distribution, and an impartial judge (a random event). The target distribution can be any distribution you’d like to sample from; all you need is its probability density function (PDF). Likewise, the proposal distribution can be any PDF (although the math is simpler if the distribution is symmetric). The whole idea is to start at a random point drawn using the proposal distribution. Append it to an array. Feed this point into the target’s PDF to get the probability density (aka likelihood). Now, generate a disturbance via the proposal distribution, add it to the current location, assign this variable a new name, and evaluate its likelihood as well via the target’s PDF. Compare the likelihoods:

acceptance = target_PDF(proposed)/target_PDF(current)

This number will fall anywhere in the range [0, inf). If it’s less than 1, you move with probability equal to the ratio; anything at or above 1 is guaranteed movement. This is where the “impartial judge” comes in. Observe a random event (typically a number in [0,1] sampled from a uniform distribution). If this number is less than or equal to acceptance, you move to the proposed point. Else, you stay at the previous location. Append the winner to the array. And that’s the Metropolis algorithm.
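
Putting those pieces together, here’s a minimal sketch of the full Metropolis loop (my own illustration, not the notebook’s code; target_pdf, scale, and the other names are placeholders, and the proposal is a symmetric normal disturbance):

import numpy as np

def metropolis(target_pdf, start=0.0, scale=1.0, n=10_000, seed=0):
    rng = np.random.default_rng(seed)
    samples = [start]
    for _ in range(n):
        current = samples[-1]
        proposed = current + rng.normal(0, scale)        # symmetric disturbance
        acceptance = target_pdf(proposed) / target_pdf(current)
        if rng.uniform(0, 1) <= acceptance:              # the impartial judge
            samples.append(proposed)
        else:
            samples.append(current)
    return samples

For example, metropolis(lambda x: np.exp(-x**2 / 2)) samples from N(0,1) even though the density is unnormalized, because the normalizing constant cancels in the ratio.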

Things are slightly more complicated if the proposal distribution is asymmetric. If you sample movements from a normal distribution, uniform distribution, etc., this step is unnecessary. However, if you use a distribution such as the gamma, Poisson, exponential, or lognormal, then you’ll need to account for the inherent bias. In the Colab Notebook I linked at the beginning, I use a Beta proposal distribution with a -0.5 constant term, biased towards 1 (as opposed to symmetric or biased toward 0). Two things to note: First, the Beta distribution is defined on [0,1], meaning we would never observe a negative disturbance, so our proposals would only ever move in one direction. I added the -0.5 shift so that both positive and negative disturbances can be sampled. Second, however, the distribution is still biased towards 1, meaning that it will slowly creep in one direction. We need to account for this. We do so by getting the likelihoods of the disturbance and the reverse disturbance.

from scipy.stats import beta

def correct(prop, curr, a=3, b=2):
    # likelihood of the reverse disturbance (curr given prop), undoing the -0.5 shift
    x0 = curr - prop + 0.5
    # likelihood of the forward disturbance (prop given curr)
    x1 = prop - curr + 0.5
    b0 = beta.pdf(x=x0, a=a, b=b)
    b1 = beta.pdf(x=x1, a=a, b=b)
    return b0 / b1

This correction term (as you’ll see in the Colab notebook) is multiplied by the likelihood ratio; it accounts for the asymmetry of the proposal distribution. (Note: this ratio is called the Hastings ratio, and it’s the only difference between the Metropolis and MH algorithms.)
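
To make the mechanics explicit, here is a sketch of how the correction folds into the acceptance step, assuming the correct() helper above and a generic target_pdf (a placeholder name):

# Hastings ratio: likelihood ratio times q(current|proposed) / q(proposed|current)
acceptance = (target_pdf(proposed) / target_pdf(current)) * correct(prop=proposed, curr=current)
if np.random.uniform(0, 1) <= acceptance:
    current = proposed  # accept; otherwise stay put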

You can sample from most distributions using the MH algorithm. However, at the end of the day, it’s fundamentally a random walk. There’s no logic informing how large the jumps should be given the current position. When sampling one-variable distributions, this isn’t an issue. But as dimensionality increases, the region of high likelihood composes only a fraction of the total volume, whereas the moderate- and low-likelihood regions compose (exponentially) more of it. This effect imposes only a mild inefficiency on MH when sampling low-dimensional distributions. But as dimensionality increases, MH runs the risk of returning samples that aren’t representative of the target distribution. The reason is that small jumps are appropriate near the peak(s), because over-exploration of this region, relative to its likelihood, is low risk; however, as we move towards the tails, over-exploration of those areas, relative to their likelihoods, becomes a sizable risk. (For context: many samples from one tail and none from the other isn’t desirable.) So… how do we propose jumps with likelihood in mind?

Hamiltonian Monte Carlo

Physics has the answer — hurray! For the remainder of this post, we’re not going to view sampling as a random walk. Rather, we’re going to view the absolute peak (or peaks) of a distribution as a planet, and a satellite will orbit this planet, collecting samples. Hamiltonian mechanics uses differential equations relating kinetic energy to potential energy. These differential equations have really tricky exact solutions but friendly approximate solutions via leapfrog integration. Integrating them gives us the path our satellite will follow. These differential equations (the Hamiltonian equations) define the energy of a system in terms of kinetic and potential energy. When you throw a ball up into the air, its kinetic energy is converted to potential energy; as the ball falls, its potential energy is converted back to kinetic energy. The Hamiltonian equations define the relationship between position and momentum:

T = time
Q = position
P = momentum
K = kinetic energy
V = potential energy

dQ/dT = P
dP/dT = -dV/dQ

Note that the above equations are derived in the context of statistics. The physics-world versions are necessarily far more complicated.

Of note, our differential equations are dQ/dT, the change in position wrt (with respect to) time, and dP/dT, the change in momentum wrt time, which evaluates to the negative change in potential energy wrt position. We can define the potential energy in terms of the PDF itself, as the negative log of the density. And so, changes in momentum are a function of position on the PDF, namely distance from the peak. At the peak, the gradient of the PDF is very near 0. But for most of the distribution, excluding the tails, the gradient is much higher. It’s not obvious yet, but this is super helpful!
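
To pin this down, here is a small sketch (my own, under the usual convention V(Q) = -log PDF(Q)) of the potential and its gradient for a one-variable Gaussian:

import numpy as np

def V(q, mu=0.0, sigma=1.0):
    # potential energy = negative log density (up to an additive constant)
    return 0.5 * ((q - mu) / sigma) ** 2

def dV_dq(q, mu=0.0, sigma=1.0):
    # gradient of the potential wrt position; it vanishes at the peak (q = mu),
    # so momentum barely changes there, and it grows as we move into the tails
    return (q - mu) / sigma ** 2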

Momentum dictates our next position and the gradient of the PDF dictates changes to our momentum. When near the peak(s), this has the effect of shrinking our jump size. When far from the peak, the realized effect is bigger jump sizes — and this is precisely the problem MH lacked a solution to!

The algorithm

The algorithm breaks down into four parts:

  1. Set up: Take the previous position and copy it, such that you have q0 and q1. Randomly sample a momentum from N(0,1) and copy it, such that you have p0 and p1. Find the gradient of the log-PDF with respect to position, which is -(x-mu)/sigma^2 for a one-variable Gaussian.

  2. Leapfrog: Use leapfrog integration to update q1 and p1 (i.e., the Hamiltonian motion of a particle); see the sketch after this list. In practice, this is the most sensitive part; small adjustments can induce unstable behavior.

  3. MH Dance: Lastly, multiply p1 by (-1); this ensures reversibility (i.e., q1 can be reached from q0 given momentum p0 AND q0 can be reached from q1 given momentum -p1). This gives us the information we need for the Metropolis-Hastings “dance”. Keep in mind we’re using negative log probabilities, so the math is all in terms of addition and subtraction. Then accept/reject the movement.

  4. Repeat for a fixed number of iterations.
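
As promised in step 2, here is a minimal leapfrog sketch (my own illustration; grad_V is a hypothetical function returning dV/dQ at a position):

def leapfrog(q, p, grad_V, step_size=0.25, steps=6):
    # half-step for momentum, alternating full steps, then a final half-step;
    # recomputing the gradient at each new position is what makes the
    # integrator accurate and reversible
    p = p - 0.5 * step_size * grad_V(q)
    for _ in range(steps - 1):
        q = q + step_size * p
        p = p - step_size * grad_V(q)   # two adjacent half-steps merged
    q = q + step_size * p
    p = p - 0.5 * step_size * grad_V(q)
    return q, p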

You might have noticed in step 1 that we sampled momentum from a Gaussian. What’s that all about? Using our satellite metaphor from before, Hamiltonian motion guides our orbital path around a specific likelihood (visiting positions associated with the same likelihood). However, we’re not interested in exploring just one likelihood; we’d like to explore them all. To accomplish this, we sample fresh momentum “kicks” at each iteration, which can cause our satellite to jump to higher or fall to lower orbits (aka visit different likelihoods).

The code

import numpy as np
import random
import scipy.stats as st
import matplotlib.pyplot as plt

def normal(x, mu, sigma):
    numerator = np.exp(-1 * ((x - mu) ** 2) / (2 * sigma ** 2))
    denominator = sigma * np.sqrt(2 * np.pi)
    return numerator / denominator

def neg_log_prob(x, mu, sigma):
    return -1 * np.log(normal(x=x, mu=mu, sigma=sigma))

def HMC(mu=0.0, sigma=1.0, path_len=1, step_size=0.25, initial_position=0.0, epochs=1_000):
    # setup
    steps = int(path_len / step_size)  # path_len and step_size are tricky parameters to tune...
    samples = [initial_position]
    momentum_dist = st.norm(0, 1)

    def dVdQ(q):
        # gradient of the log-PDF wrt position (equivalently -dV/dQ) for a 1-D Gaussian
        return -1 * (q - mu) / (sigma ** 2)

    # generate samples
    for e in range(epochs):
        q0 = np.copy(samples[-1])
        q1 = np.copy(q0)
        p0 = momentum_dist.rvs()
        p1 = np.copy(p0)

        # leapfrog integration begin (gradient recomputed at the current position each step)
        for s in range(steps):
            p1 += step_size * dVdQ(q1) / 2  # half-step: as potential energy increases, kinetic energy decreases
            q1 += step_size * p1            # position moves as a function of momentum
            p1 += step_size * dVdQ(q1) / 2  # second half-step "leapfrog" update to momentum
        # leapfrog integration end
        p1 = -1 * p1  # flip momentum for reversibility

        # metropolis acceptance
        q0_nlp = neg_log_prob(x=q0, mu=mu, sigma=sigma)
        q1_nlp = neg_log_prob(x=q1, mu=mu, sigma=sigma)
        p0_nlp = neg_log_prob(x=p0, mu=0, sigma=1)
        p1_nlp = neg_log_prob(x=p1, mu=0, sigma=1)

        # account for negatives AND log(probabilities)...
        target = q0_nlp - q1_nlp      # log( P(q1) / P(q0) )
        adjustment = p0_nlp - p1_nlp  # log( P(p1) / P(p0) )
        acceptance = target + adjustment

        event = np.log(random.uniform(0, 1))
        if event <= acceptance:
            samples.append(q1)
        else:
            samples.append(q0)
    return samples

mu = 0
sigma = 1
trial = HMC(mu=mu, sigma=sigma, path_len=1.5, step_size=0.25)

lines = np.linspace(-6, 6, 10_000)
normal_curve = [normal(x=l, mu=mu, sigma=sigma) for l in lines]
plt.plot(lines, normal_curve)
plt.hist(trial, density=True, bins=20)
plt.show()

And let’s take a look at performance!

[Figure: N(0,1) HMC sampler output, histogram of samples overlaid on the true density]

Extensions

If you’d like to run my code, please hop onto the Google Colab link from the beginning. You’ll only have viewer access, not editor access, but you can copy it to your own Google Drive for experimentation, download the iPython Notebook file, etc. If you play around with the code, you’ll notice that path length and step size are very sensitive hyper-parameters. The slightest adjustment might result in nonsensical samples. This is the curse of gradient-based MCMC samplers. However, there’s hope: newer methods, such as the No-U-Turn Sampler (NUTS), build upon HMC by dynamically choosing path length and step size.

But why is this a problem? Well, from our hypothetical satellite-orbit example, the Hamiltonian equations guide the motion of the satellite, “orbiting” a particular likelihood, meaning it proposes new locations near similar likelihoods. But the orbital path around any given likelihood varies in length: at higher likelihoods, the circumference is quite small, and it’s larger when orbiting lower likelihoods (closer to the tails). We can visually confirm this by simply looking at the Gaussian distribution. What happens when we make a full 360° orbit around a given likelihood? We propose the same point we’re currently at. This is called a “U-turn” in HMC lingo, meaning that we come back to where we’ve just been. Dynamic adjustments to path length and step size control the extent of integration (the orbital path). And as you’ll see if you play with the code a bit, these hyper-parameters are quite sensitive, hence you’ll likely use NUTS over HMC in practice.
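
For intuition, here is a tiny sketch (mine, not from the original article) of the U-turn check NUTS uses to decide when to stop extending a trajectory: once either end’s momentum points back against the displacement between the two ends, the path has doubled back.

import numpy as np

def u_turn(q_minus, q_plus, p_minus, p_plus):
    # trajectory endpoints (q_minus, q_plus) and their momenta; stop when
    # continuing in either direction would shrink the distance between ends
    dq = q_plus - q_minus
    return np.dot(dq, p_minus) < 0 or np.dot(dq, p_plus) < 0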

I’ve got to give credit where it’s due. Colin Carroll, a software engineer at Google, wrote a more scalable implementation of HMC. (Mine can only be used to sample one-variable Gaussians.) That scalability came at a price: interpretability. I found that to really “get” the concept, I had to dumb it down and build one from scratch. Hopefully, you’ve enjoyed the read!

Thank you for reading — If you think my content is alright, please subscribe! :)

Translated from: https://towardsdatascience.com/python-hamiltonian-monte-carlo-from-scratch-955dba96a42d
