Paper Information

Title:
DGM: A deep learning algorithm for solving partial differential equations

Authors:
Justin Sirignano and Konstantinos Spiliopoulos

Journal/Conference: Journal of Computational Physics

Year: 2018

Paper link: https://arxiv.org/abs/1708.07469

Code:

Background

Motivation

  • High-dimensional partial differential equations (PDEs) are used in physics, engineering, and finance. Their numerical solution has been a longstanding challenge.
  • Mesh-based methods quickly become computationally intractable when the dimension d grows even moderately large, since the number of grid points explodes exponentially. The paper proposes to solve high-dimensional PDEs with a mesh-free deep learning algorithm.
  • The method is similar in spirit to the Galerkin method, but with several key changes based on ideas from machine learning.
  • The Galerkin method is a widely used computational method which seeks a reduced-form solution to a PDE as a linear combination of basis functions.
  • DGM (Deep Galerkin Method) is a natural merger of Galerkin methods and machine learning: the linear combination of basis functions is replaced by a neural network.

Problem Background

Proposed Method

Approximation Power of Neural Networks for PDEs

$$
\begin{array}{ll}
\partial_t u(t,x) + \mathcal{L} u(t,x) = 0, & (t,x) \in [0,T] \times \Omega \\
u(0,x) = u_0(x), & x \in \Omega \\
u(t,x) = g(t,x), & (t,x) \in [0,T] \times \partial\Omega
\end{array}
$$

The error function:
$$
J(f) = \left\| \partial_t f + \mathcal{L} f \right\|_{2,[0,T]\times\Omega}^{2} + \left\| f - g \right\|_{2,[0,T]\times\partial\Omega}^{2} + \left\| f(0,\cdot) - u_0 \right\|_{2,\Omega}^{2}
$$

This paper explores several new innovations:

  • First, we focus on high-dimensional PDEs and apply deep learning advances of the past decade to this problem.
  • Secondly, to avoid ever forming a mesh, we sample a sequence of random spatial points.
  • Thirdly, the algorithm incorporates a new computational scheme for the efficient computation of neural network gradients arising from the second derivatives of high-dimensional PDEs.

DGM

A Monte Carlo Method for Fast Computation of Second Derivatives

Second-derivative terms can be expensive to compute in high dimensions: for the network architecture of Section 4.2 of the paper, evaluating the second derivatives costs $\mathcal{O}(d^2 \times N)$, where $d$ is the spatial dimension of $x$ and $N$ is the batch size, versus $\mathcal{O}(d \times N)$ for first derivatives; for $d = 200$ that is a 200-fold gap. The true cost is higher still, because the gradient-based optimizer must then differentiate the loss, which contains these second derivatives, with respect to $\theta$ (effectively a third-order derivative). For these reasons, the second-derivative term is approximated with a Monte Carlo method.

Suppose the second-order term of $\mathcal{L} f(t,x;\theta)$ has the form $\frac{1}{2}\sum_{i,j=1}^{d} \rho_{i,j}\,\sigma_i(x)\sigma_j(x)\,\frac{\partial^2 f}{\partial x_i \partial x_j}(t,x;\theta)$, where $[\rho_{i,j}]_{i,j=1}^{d}$ is a positive-definite matrix, and define $\sigma(x) = (\sigma_1(x), \ldots, \sigma_d(x))$. What does $\sigma$ represent? For example, when a PDE arises from taking the expectation of a stochastic differential equation, $\sigma$ is the diffusion coefficient. The second-order term is computed via the identity:
$$
\sum_{i,j=1}^{d} \rho_{i,j}\,\sigma_i(x)\sigma_j(x)\,\frac{\partial^2 f}{\partial x_i \partial x_j}(t,x;\theta)
= \lim_{\Delta \to 0} \mathbb{E}\left[\sum_{i=1}^{d} \frac{\frac{\partial f}{\partial x_i}\left(t, x + \sigma(x) W_\Delta; \theta\right) - \frac{\partial f}{\partial x_i}(t, x; \theta)}{\Delta}\,\sigma_i(x) W_\Delta^{i}\right]
$$
where $W_t \in \mathbb{R}^d$ is a Brownian motion and $\Delta \in \mathbb{R}_+$ is the step size. Replacing the second-order term by this finite-$\Delta$ approximation introduces an $\mathcal{O}(\sqrt{\Delta})$ error.
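This identity is easy to check numerically. Below is a minimal sketch (an illustration, not the paper's code) that takes $d = 5$, $\rho = I$, $\sigma \equiv 1$, and $f(x) = \sum_i \sin(x_i)$, so the left-hand side reduces to $\sum_i \partial^2 f / \partial x_i^2 = -\sum_i \sin(x_i)$:

```python
# Numerical check of the Monte Carlo second-derivative identity
# (illustrative sketch). Toy setting: d = 5, rho = I, sigma_i(x) = 1,
# f(x) = sum_i sin(x_i), so the exact value is -sum_i sin(x_i).
import numpy as np

rng = np.random.default_rng(0)
d, delta, n_samples = 5, 1e-4, 200_000
x = rng.uniform(0.0, 1.0, size=d)

def grad_f(x):            # f_xi(x) = cos(x_i)
    return np.cos(x)

W = rng.normal(0.0, np.sqrt(delta), size=(n_samples, d))   # W_Delta ~ N(0, delta * I)
fd = (grad_f(x + W) - grad_f(x)) / delta                   # finite-difference quotient
mc = np.mean(np.sum(fd * W, axis=1))                       # E[ sum_i (...) * W^i ]

exact = -np.sin(x).sum()                                   # sum_i d^2 f / dx_i^2
print(f"Monte Carlo: {mc:.4f}   exact: {exact:.4f}")       # agree up to O(sqrt(delta)) + MC noise
```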

Define the operator containing only first-order derivatives:
$$
\mathcal{L}_1 f(t_n, x_n; \theta_n) := \mathcal{L} f(t_n, x_n; \theta_n) - \frac{1}{2}\sum_{i,j=1}^{d} \rho_{i,j}\,\sigma_i(x_n)\sigma_j(x_n)\,\frac{\partial^2 f}{\partial x_i \partial x_j}(t_n, x_n; \theta_n)
$$

Recall the loss terms defined earlier for DGM:
$$
\begin{aligned}
G_1(\theta_n, s_n) &:= \left(\frac{\partial f}{\partial t}(t_n, x_n; \theta_n) + \mathcal{L} f(t_n, x_n; \theta_n)\right)^2 \\
G_2(\theta_n, s_n) &:= \left(f(\tau_n, z_n; \theta_n) - g(\tau_n, z_n)\right)^2 \\
G_3(\theta_n, s_n) &:= \left(f(0, w_n; \theta_n) - u_0(w_n)\right)^2 \\
G(\theta_n, s_n) &:= G_1(\theta_n, s_n) + G_2(\theta_n, s_n) + G_3(\theta_n, s_n)
\end{aligned}
$$
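To make the sampled objective concrete, here is a minimal PyTorch sketch (an illustration, not the paper's code) of $G = G_1 + G_2 + G_3$ for a simple 1D heat equation $u_t = u_{xx}$ on $[0,1] \times [0,1]$; `net`, `u0`, and `g` are placeholder names, and the sine initial condition and zero boundary data are arbitrary choices for the sketch:

```python
# Minimal sketch of the sampled DGM loss G = G1 + G2 + G3 for u_t = u_xx
# on [0,1] x [0,1] (illustrative; not the paper's code).
import torch

net = torch.nn.Sequential(                       # f(t, x; theta), input (t, x)
    torch.nn.Linear(2, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 1),
)
u0 = lambda x: torch.sin(torch.pi * x)           # initial condition (example choice)
g  = lambda t, x: torch.zeros_like(t)            # boundary data (example choice)

def dgm_loss(n=256):
    # G1: PDE residual at random interior points (t_n, x_n)
    t = torch.rand(n, 1, requires_grad=True)
    x = torch.rand(n, 1, requires_grad=True)
    f = net(torch.cat([t, x], dim=1))
    f_t = torch.autograd.grad(f.sum(), t, create_graph=True)[0]
    f_x = torch.autograd.grad(f.sum(), x, create_graph=True)[0]
    f_xx = torch.autograd.grad(f_x.sum(), x, create_graph=True)[0]
    G1 = ((f_t - f_xx) ** 2).mean()
    # G2: boundary condition at random points (tau_n, z_n), z_n in {0, 1}
    tau = torch.rand(n, 1)
    z = torch.randint(0, 2, (n, 1)).float()
    G2 = ((net(torch.cat([tau, z], dim=1)) - g(tau, z)) ** 2).mean()
    # G3: initial condition at random points (0, w_n)
    w = torch.rand(n, 1)
    G3 = ((net(torch.cat([torch.zeros_like(w), w], dim=1)) - u0(w)) ** 2).mean()
    return G1 + G2 + G3

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):                             # SGD over fresh random batches
    opt.zero_grad(); loss = dgm_loss(); loss.backward(); opt.step()
```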

In the DGM algorithm, the gradient $\nabla_\theta G_1(\theta_n, s_n)$ is approximated by the estimator $\tilde{G}_1$ below, with a fixed step size $\Delta > 0$:

$$
\begin{aligned}
\tilde{G}_1(\theta_n, s_n) := \; & 2\left(\frac{\partial f}{\partial t}(t_n, x_n; \theta_n) + \mathcal{L}_1 f(t_n, x_n; \theta_n) + \frac{1}{2}\sum_{i=1}^{d} \frac{\frac{\partial f}{\partial x_i}\left(t_n, x_n + \sigma(x_n) W_\Delta; \theta_n\right) - \frac{\partial f}{\partial x_i}(t_n, x_n; \theta_n)}{\Delta}\,\sigma_i(x_n) W_\Delta^{i}\right) \\
\times \; & \nabla_\theta\left(\frac{\partial f}{\partial t}(t_n, x_n; \theta_n) + \mathcal{L}_1 f(t_n, x_n; \theta_n) + \frac{1}{2}\sum_{i=1}^{d} \frac{\frac{\partial f}{\partial x_i}\left(t_n, x_n + \sigma(x_n) \tilde{W}_\Delta; \theta_n\right) - \frac{\partial f}{\partial x_i}(t_n, x_n; \theta_n)}{\Delta}\,\sigma_i(x_n) \tilde{W}_\Delta^{i}\right)
\end{aligned}
$$
where $W_\Delta \in \mathbb{R}^d$ is a normal random variable with $\mathbb{E}[W_\Delta] = 0$ and $\operatorname{Cov}[(W_\Delta)_i, (W_\Delta)_j] = \rho_{i,j}\Delta$, and $\tilde{W}_\Delta$ is an independent copy with the same distribution as $W_\Delta$. $\tilde{G}_1(\theta_n, s_n)$ is a Monte Carlo estimator of $\nabla_\theta G_1(\theta_n, s_n)$ with an $\mathcal{O}(\sqrt{\Delta})$ bias. This bias can be reduced with the following estimator:
$$
\tilde{G}_1(\theta_n, s_n) := \tilde{G}_{1,a}(\theta_n, s_n) + \tilde{G}_{1,b}(\theta_n, s_n)
$$
$$
\begin{aligned}
\tilde{G}_{1,a}(\theta_n, s_n) := \; & \left(\frac{\partial f}{\partial t}(t_n, x_n; \theta_n) + \mathcal{L}_1 f(t_n, x_n; \theta_n) + \frac{1}{2}\sum_{i=1}^{d} \frac{\frac{\partial f}{\partial x_i}\left(t_n, x_n + \sigma(x_n) W_\Delta; \theta_n\right) - \frac{\partial f}{\partial x_i}(t_n, x_n; \theta_n)}{\Delta}\,\sigma_i(x_n) W_\Delta^{i}\right) \\
\times \; & \nabla_\theta\left(\frac{\partial f}{\partial t}(t_n, x_n; \theta_n) + \mathcal{L}_1 f(t_n, x_n; \theta_n) + \frac{1}{2}\sum_{i=1}^{d} \frac{\frac{\partial f}{\partial x_i}\left(t_n, x_n + \sigma(x_n) \tilde{W}_\Delta; \theta_n\right) - \frac{\partial f}{\partial x_i}(t_n, x_n; \theta_n)}{\Delta}\,\sigma_i(x_n) \tilde{W}_\Delta^{i}\right)
\end{aligned}
$$
and $\tilde{G}_{1,b}$ is the antithetic counterpart, built from $-W_\Delta$ and $-\tilde{W}_\Delta$:

$$
\begin{aligned}
\tilde{G}_{1,b}(\theta_n, s_n) := \; & \left(\frac{\partial f}{\partial t}(t_n, x_n; \theta_n) + \mathcal{L}_1 f(t_n, x_n; \theta_n) - \frac{1}{2}\sum_{i=1}^{d} \frac{\frac{\partial f}{\partial x_i}\left(t_n, x_n - \sigma(x_n) W_\Delta; \theta_n\right) - \frac{\partial f}{\partial x_i}(t_n, x_n; \theta_n)}{\Delta}\,\sigma_i(x_n) W_\Delta^{i}\right) \\
\times \; & \nabla_\theta\left(\frac{\partial f}{\partial t}(t_n, x_n; \theta_n) + \mathcal{L}_1 f(t_n, x_n; \theta_n) - \frac{1}{2}\sum_{i=1}^{d} \frac{\frac{\partial f}{\partial x_i}\left(t_n, x_n - \sigma(x_n) \tilde{W}_\Delta; \theta_n\right) - \frac{\partial f}{\partial x_i}(t_n, x_n; \theta_n)}{\Delta}\,\sigma_i(x_n) \tilde{W}_\Delta^{i}\right)
\end{aligned}
$$
The combined estimator above has only an $\mathcal{O}(\Delta)$ bias.
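To show how the Monte Carlo term plugs into training, here is a hedged PyTorch sketch of one residual factor for the heat-equation example above, with $\rho = 1$ and $\sigma \equiv \sqrt{2}$ so that $\frac{1}{2}\sigma^2 f_{xx} = f_{xx}$ and $\mathcal{L}_1 f = 0$; the exact second derivative is replaced by the finite-difference term, so only first derivatives of the network are ever formed:

```python
# Sketch: heat-equation residual u_t - u_xx with the second derivative replaced
# by the Monte Carlo term (illustrative, not the paper's code; `net` is a
# (t, x) -> f network like the one in the previous sketch).
import math
import torch

def mc_residual(net, t, x, delta=1e-4):
    # t and x must be created with requires_grad=True
    f = net(torch.cat([t, x], dim=1))
    f_t = torch.autograd.grad(f.sum(), t, create_graph=True)[0]
    f_x = torch.autograd.grad(f.sum(), x, create_graph=True)[0]
    W = torch.randn_like(x) * math.sqrt(delta)                    # W_Delta ~ N(0, delta)
    x_s = (x + math.sqrt(2.0) * W).detach().requires_grad_(True)  # x + sigma * W_Delta
    f_s = net(torch.cat([t, x_s], dim=1))
    f_x_s = torch.autograd.grad(f_s.sum(), x_s, create_graph=True)[0]
    mc_term = 0.5 * (f_x_s - f_x) / delta * (math.sqrt(2.0) * W)  # ~ f_xx, 1st derivs only
    return f_t - mc_term
```

In the full $\tilde{G}_1$, the two residual factors use independent draws $W_\Delta$ and $\tilde{W}_\Delta$ so that their product remains a valid estimate of $\nabla_\theta G_1$.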

  • The computational cost of evaluating the second-derivative term with this Monte Carlo scheme is $\mathcal{O}(d \times N)$, the same order as for first derivatives, instead of $\mathcal{O}(d^2 \times N)$ for exact evaluation.

The modified algorithm is therefore computationally less expensive than the original algorithm.

Relevant literature

  • Recently, Raissi [41, 42] developed physics-informed deep learning models. These estimate deep neural network models which merge data observations with PDE models. This allows for the estimation of physical models from limited data by leveraging a priori knowledge that the physical dynamics should obey a class of PDEs. Their approach solves PDEs in one and two spatial dimensions using deep neural networks.
  • [33] developed an algorithm for the solution of a discrete-time version of a class of free boundary PDEs. Their algorithm, commonly called the "Longstaff–Schwartz method", uses dynamic programming and approximates the solution using a separate function approximator at each discrete time (typically a linear combination of basis functions). The DGM algorithm directly solves the PDE and uses a single function approximator for all space and all time.

Numerical Experiments

The Free Boundary PDE

The solution of the free boundary PDE (the value function of an American option) satisfies:

$$
\begin{aligned}
0 &= \frac{\partial u}{\partial t}(t,x) + \mu(x) \cdot \frac{\partial u}{\partial x}(t,x) + \frac{1}{2}\sum_{i,j=1}^{d} \rho_{i,j}\,\sigma(x_i)\sigma(x_j)\,\frac{\partial^2 u}{\partial x_i \partial x_j}(t,x) - r u(t,x), \quad \forall\,\{(t,x) : u(t,x) > g(x)\} \\
u(t,x) &\geq g(x), \quad \forall\,(t,x) \\
u(t,x) &\in C^{1}\left(\mathbb{R}_+ \times \mathbb{R}^d\right), \quad \forall\,\{(t,x) : u(t,x) = g(x)\} \\
u(T,x) &= g(x), \quad \forall\, x
\end{aligned}
$$

Implementation details for the algorithm

Training uses multiple GPUs. To accelerate training, asynchronous stochastic gradient descent is used, a parallel training method widely applied to machine learning models. Figure 1 displays the computational setup.
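A minimal Hogwild-style sketch of asynchronous SGD with `torch.multiprocessing` (an illustration of the idea on CPU shared memory; the paper's multi-GPU setup differs, and `toy_loss` is a stand-in for the DGM loss):

```python
# Hogwild-style asynchronous SGD sketch: several workers update shared
# parameters without locks, each on its own fresh random batch.
import torch
import torch.multiprocessing as mp

def toy_loss(model):
    t = torch.rand(64, 1); x = torch.rand(64, 1)     # fresh random batch
    return (model(torch.cat([t, x], dim=1)) ** 2).mean()

def worker(model, n_steps):
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    for _ in range(n_steps):
        opt.zero_grad()
        toy_loss(model).backward()
        opt.step()                                   # lock-free update of shared params

if __name__ == "__main__":
    model = torch.nn.Sequential(
        torch.nn.Linear(2, 50), torch.nn.Tanh(), torch.nn.Linear(50, 1))
    model.share_memory()                             # parameters live in shared memory
    procs = [mp.Process(target=worker, args=(model, 1000)) for _ in range(4)]
    for p in procs: p.start()
    for p in procs: p.join()
```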

A High-dimensional Free Boundary PDE with a Semi-Analytic Solution

The accuracy of the deep learning algorithm is evaluated in up to 200 dimensions.


We present in Figure 2 contour plots of the absolute error and percent error across time and space for the American option PDE in 20 dimensions.

Figure 2 reports both the absolute error and the percent error. The percent error, $\frac{|f(t,x;\theta) - u(t,x)|}{|u(t,x)|} \times 100\%$, is reported for points where $|u(t,x)| > 0.05$. The absolute error becomes relatively large in a few areas; however, the solution $u(t,x)$ also grows large in these areas, and therefore the percent error remains small.
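As a small sketch (with stand-in arrays, not the paper's data), the masked percent error can be computed as:

```python
# Percent error reported only where |u| > 0.05, as in Figure 2.
import numpy as np

rng = np.random.default_rng(0)
u_exact = rng.normal(size=(100, 100))                    # stand-in for the exact solution
f_pred = u_exact + 1e-3 * rng.normal(size=(100, 100))    # stand-in for network values

abs_err = np.abs(f_pred - u_exact)
mask = np.abs(u_exact) > 0.05                            # exclude near-zero solution values
pct_err = abs_err[mask] / np.abs(u_exact)[mask] * 100.0
print(abs_err.max(), pct_err.max())
```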

Burgers’ equation

$$
\begin{aligned}
\frac{\partial u}{\partial t}(t, x; p) &= \mathcal{L}_p u(t, x; p), \quad (t, x) \in [0, T] \times \Omega \\
u(t, x; p) &= g_p(x), \quad (t, x) \in [0, T] \times \partial\Omega \\
u(t = 0, x; p) &= h_p(x), \quad x \in \Omega
\end{aligned}
$$
A traditional approach would be to discretize the parameter space $\mathcal{P}$ and re-solve the PDE many times for many different points $p$. However, the total number of grid points (and therefore the number of PDEs that must be solved) grows exponentially with the number of dimensions, and $\mathcal{P}$ is typically high-dimensional.

DGM instead treats the parameters $p$ as additional inputs and solves the PDE once over the entire $(t, x, p)$ space.

Consider the following Burgers' equation:
$$
\frac{\partial u}{\partial t} = \nu \frac{\partial^2 u}{\partial x^2} - \alpha u \frac{\partial u}{\partial x}, \quad (t, x) \in [0,1] \times [0,1]
$$
$$
u(t, x = 0) = a, \qquad u(t, x = 1) = b, \qquad u(t = 0, x) = g(x), \quad x \in [0,1]
$$
The problem setup is $p = (\nu, \alpha, a, b) \in \mathcal{P} \subset \mathbb{R}^4$, and the network is trained over the entire space $(t, x, \nu, \alpha, a, b) \in [0,1] \times [0,1] \times [10^{-2}, 10^{-1}] \times [10^{-2}, 1] \times [-1,1] \times [-1,1]$.
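A minimal sketch (an illustration, not the paper's code) of the key idea: treat $(\nu, \alpha, a, b)$ as extra network inputs and sample them along with $(t, x)$ when forming the interior residual; boundary and initial terms follow the same pattern as $G_2$ and $G_3$ above:

```python
# Sketch: one network over (t, x, nu, alpha, a, b) for the parametric
# Burgers' equation u_t = nu * u_xx - alpha * u * u_x (interior residual only).
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(6, 128), torch.nn.Tanh(),
    torch.nn.Linear(128, 128), torch.nn.Tanh(),
    torch.nn.Linear(128, 1),
)

def interior_residual(n=512):
    t = torch.rand(n, 1, requires_grad=True)            # t in [0, 1]
    x = torch.rand(n, 1, requires_grad=True)            # x in [0, 1]
    nu = 1e-2 + (1e-1 - 1e-2) * torch.rand(n, 1)        # nu in [1e-2, 1e-1]
    alpha = 1e-2 + (1.0 - 1e-2) * torch.rand(n, 1)      # alpha in [1e-2, 1]
    a = 2 * torch.rand(n, 1) - 1                        # a in [-1, 1]
    b = 2 * torch.rand(n, 1) - 1                        # b in [-1, 1]
    u = net(torch.cat([t, x, nu, alpha, a, b], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return ((u_t - nu * u_xx + alpha * u * u_x) ** 2).mean()
```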

Figure 5 compares the deep learning solution with the exact solution for several different problem setups p. The solutions are very close; in several cases, the two solutions are visibly indistinguishable. The deep learning algorithm is able to accurately capture the shock layers and boundary layers.

Figure 6 presents the accuracy of the deep learning algorithm for different times t and different choices of ν.

Neural Network Approximation Theorem for PDEs

The paper also proves a convergence theorem: for a class of quasilinear parabolic PDEs, the neural network approximation converges to the PDE solution as the number of hidden units increases.

Summary:

  • The paper proves that the neural network converges to the solution of the partial differential equation as the number of hidden units increases.
  • Stability analysis of deep learning and machine learning algorithms for solving PDEs is also an important question. It would certainly be interesting to study machine learning algorithms that use a more direct variational formulation of the involved PDEs. These questions are left for future work.
