Paper information

Title: Learning and Meta-Learning of Stochastic Advection-Diffusion-Reaction Systems from Sparse Measurements

Authors: Xiaoli Chen (a,b), Jinqiao Duan (c), George Em Karniadakis (b,d)

Journal/Conference: computational physics

Affiliations:
a Center for Mathematical Sciences & School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan 430074, China
b Division of Applied Mathematics, Brown University, Providence, RI 02912, USA
c Department of Applied Mathematics, Illinois Institute of Technology, Chicago, IL 60616, USA
d Pacific Northwest National Laboratory, Richland, WA 99354, USA

Background

Polynomial chaos (PC), also called Wiener chaos expansion, is a non-sampling-based method to determine the evolution of uncertainty in a dynamical system when there is probabilistic uncertainty in the system parameters.

  • Arbitrary polynomial chaos: Recently, the chaos expansion has been generalized to the arbitrary polynomial chaos expansion (aPC) [1], a so-called data-driven generalization of PC. Like all polynomial chaos expansion techniques, aPC approximates the dependence of the simulation-model output on the model parameters by an expansion in an orthogonal polynomial basis. The aPC generalizes chaos expansion techniques to arbitrary distributions with arbitrary probability measures, which can be discrete, continuous, or discretized continuous, and can be specified analytically (as probability density/cumulative distribution functions), numerically as a histogram, or as raw data sets. A minimal construction sketch is given below.
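To make the data-driven construction concrete, here is a minimal sketch (Python/NumPy is an assumption; the note contains no code) that builds polynomials orthonormal with respect to the empirical measure of raw samples by Gram-Schmidt orthogonalization of the monomials:

```python
# Minimal sketch of data-driven (arbitrary) polynomial chaos: construct polynomials
# orthonormal w.r.t. the empirical measure of raw samples via Gram-Schmidt.
# NumPy-only illustration; not the paper's implementation.
import numpy as np

def apc_basis(samples, order):
    """Return coefficient vectors c_k so that p_k(x) = sum_j c_k[j] * x**j
    are orthonormal under the empirical measure of `samples`."""
    V = np.vander(samples, order + 1, increasing=True)      # columns: 1, x, x^2, ...
    basis = []
    for k in range(order + 1):
        c = np.zeros(order + 1)
        c[k] = 1.0                                           # start from the monomial x^k
        for b in basis:                                      # subtract projections on previous polynomials
            pk, pb = V @ c, V @ b
            c -= (pk @ pb / len(samples)) * b
        c /= np.sqrt(((V @ c) ** 2).mean())                  # normalize in the empirical L2 sense
        basis.append(c)
    return basis

# Example: basis orthonormal w.r.t. samples of an arbitrary (here lognormal) distribution
xi = np.random.default_rng(0).lognormal(size=10_000)
for k, c in enumerate(apc_basis(xi, order=3)):
    print(f"psi_{k} coefficients:", np.round(c, 3))
```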

Motivation

  • By constructing an appropriate objective function and applying the necessary regularization techniques, one can infer unknown parameters or space/time-dependent material properties.
  • However, in many practical problems, e.g., subsurface transport [3,4], we must deal with a mixed problem, because we typically have some measurements of the material properties and some measurements of the state variables.
  • Here, this "mixed" problem is considered for the nonlinear advection-diffusion-reaction (ADR) equation describing a concentration field, and new algorithms are proposed based on recent developments in machine learning.

Problem definition

Problem setup

  • Determine the entire stochastic diffusivity and stochastic concentration fields, as well as three (deterministic) parameters, from a few multi-fidelity measurements of the concentration field at random points in space-time.

Method

Here, we first employ the standard PINN and a stochastic version, sPINN, to solve forward and inverse problems governed by a nonlinear advection-diffusion-reaction (ADR) equation.

  • We assume we have some sparse measurements of the concentration field at random or pre-selected locations.
  • Subsequently, we attempt to optimize the hyper-parameters of sPINN using Bayesian optimization (meta-learning), and compare the results with the empirically selected hyper-parameters of sPINN.

The PINN is trained using a composite multi-fidelity network, first introduced in [2], that learns the correlations between the multi-fidelity data and predicts the unknown values.

sPINN: Physics-informed neural networks for stochastic PDEs

sPINN represents the stochasticity with arbitrary polynomial chaos [17,18] and combines this representation with the PINN.

We consider the following stochastic PDE (SPDE)
$$u_{t}+\mathscr{N}[u(x, t ; \omega) ; k(x ; \omega)]=0, \quad x \in \mathcal{D},\ t \in(0, T],\ \omega \in \Omega$$
with the initial and boundary conditions:
$$\begin{aligned} u(x, 0 ; \omega) &= u_{0}(x), & & x \in \mathcal{D} \\ \mathbb{B}_{x}[u(x, t ; \omega)] &= 0, & & x \in \partial \mathcal{D},\ t \in(0, T] \end{aligned}$$
Here $\Omega$ is the random space.


The diffusion coefficient $k(x ; \omega)$ can be approximated by:
$$k_{NN}\left(x ; \omega_{j}\right)=k_{0}(x)+\sum_{i=1}^{M} k_{i}(x) \sqrt{\lambda_{i}}\, \xi_{i, j}$$

Correspondingly, the solution $u$ at the $j$-th snapshot can be approximated by
$$u_{NN}\left(x, t ; \omega_{j}\right) \approx \sum_{\alpha=0}^{P} u_{\alpha}(x, t)\, \psi_{\alpha}\left(\xi_{j}\right)$$
where $\{\psi_{\alpha}\}_{\alpha=0}^{P}$ is the set of multivariate orthogonal polynomial basis functions, with highest polynomial order $r$.

The loss function is defined as:
$$MSE = MSE_{u} + MSE_{k} + MSE_{f}$$
where
$$\begin{aligned} MSE_{u} &=\frac{1}{N * N_{u}} \sum_{j=1}^{N} \sum_{i=1}^{N_{u}}\left[u_{NN}\left(x_{u}^{(i)}, t_{u}^{(i)} ; \omega_{j}\right)-u\left(x_{u}^{(i)}, t_{u}^{(i)} ; \omega_{j}\right)\right]^{2} \\ MSE_{k} &=\frac{1}{N * N_{k}} \sum_{j=1}^{N} \sum_{i=1}^{N_{k}}\left[k_{NN}\left(x_{k}^{(i)} ; \omega_{j}\right)-k\left(x_{k}^{(i)} ; \omega_{j}\right)\right]^{2} \\ MSE_{f} &=\frac{1}{N * N_{f}} \sum_{j=1}^{N} \sum_{i=1}^{N_{f}}\left[f_{NN}\left(x_{f}^{(i)}, t_{f}^{(i)} ; \omega_{j}\right)\right]^{2} \end{aligned}$$
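To illustrate how these pieces fit together, below is a minimal PyTorch sketch of the sPINN construction: one small MLP per expansion mode, assembled into $k_{NN}$ and $u_{NN}$ exactly as in the expansions above. The framework, the mode counts $M$ and $P$, and the layer sizes are illustrative assumptions, not the paper's settings; the outputs would then feed the three MSE terms of the loss.

```python
# Hedged sketch of the sPINN mode-network construction (not the paper's code).
import torch
import torch.nn as nn

class Modes(nn.Module):
    """One small MLP per expansion mode; inputs are (x,) or (x, t)."""
    def __init__(self, n_modes, in_dim, width=32, depth=2):
        super().__init__()
        def mlp():
            layers, d = [], in_dim
            for _ in range(depth):
                layers += [nn.Linear(d, width), nn.Tanh()]
                d = width
            layers += [nn.Linear(d, 1)]
            return nn.Sequential(*layers)
        self.nets = nn.ModuleList([mlp() for _ in range(n_modes)])

    def forward(self, inp):                              # inp: (batch, in_dim)
        return torch.cat([net(inp) for net in self.nets], dim=1)   # (batch, n_modes)

M, P = 4, 10                                             # illustrative numbers of KL / PC modes
k_modes = Modes(M + 1, in_dim=1)                         # k_0(x), k_1(x), ..., k_M(x)
u_modes = Modes(P + 1, in_dim=2)                         # u_0(x,t), ..., u_P(x,t)

def k_NN(x, xi_j, sqrt_lam):
    """k_NN(x; omega_j) = k_0(x) + sum_i k_i(x) * sqrt(lambda_i) * xi_{i,j}."""
    km = k_modes(x)                                      # (batch, M+1)
    return km[:, :1] + km[:, 1:] @ (sqrt_lam * xi_j).unsqueeze(1)

def u_NN(x, t, psi_j):
    """u_NN(x,t; omega_j) = sum_alpha u_alpha(x,t) * psi_alpha(xi_j)."""
    um = u_modes(torch.cat([x, t], dim=1))               # (batch, P+1)
    return um @ psi_j.unsqueeze(1)                       # (batch, 1)

# One stochastic snapshot j (illustrative values)
x, t = torch.rand(16, 1), torch.rand(16, 1)
xi_j = torch.randn(M)                                    # sample of the random vector
psi_j = torch.randn(P + 1)                               # psi_alpha(xi_j), from the aPC basis
print(k_NN(x, xi_j, torch.ones(M)).shape, u_NN(x, t, psi_j).shape)
```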

Method comparison

Experimental results

Results for the deterministic PDE

We consider the following nonlinear ADR equation:
$$\begin{cases} u_{t}=\nu_{1} u_{xx}-\nu_{2} u_{x}+g(u), & (x, t) \in(0, \pi) \times(0,1] \\ u(x, 0)=u_{0}(x), & x \in(0, \pi) \\ u(0, t)=1,\ u_{x}(\pi, t)=0, & t \in(0,1] \end{cases}$$
We define the residual $f=u_{t}-\nu_{1} u_{xx}+\nu_{2} u_{x}-g(u)$. The $L_{2}$ error of a function $h$ is defined as $E_{h}=\left\|h_{NN}-h_{\text{true}}\right\|_{L_{2}}$, and the relative $L_{2}$ error is defined as $E_{h}=\frac{\left\|h_{NN}-h_{\text{true}}\right\|_{L_{2}}}{\left\|h_{\text{true}}\right\|_{L_{2}}}$.
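For reference, the relative $L_{2}$ error can be computed on sampled values with a few lines of NumPy (an illustrative sketch, not from the paper):

```python
import numpy as np

def rel_l2_error(h_nn, h_true):
    """Relative L2 error E_h = ||h_NN - h_true||_2 / ||h_true||_2 over sampled values."""
    h_nn, h_true = np.asarray(h_nn), np.asarray(h_true)
    return np.linalg.norm(h_nn - h_true) / np.linalg.norm(h_true)

# Example: a uniform 1% perturbation gives a relative error of 0.01
x = np.linspace(0.0, np.pi, 101)
print(rel_l2_error(np.sin(x) * 1.01, np.sin(x)))
```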

Single-fidelity data

$u_{0}(x)=\exp(-10 x)$, $g(u)=\lambda_{1} u^{\lambda_{2}}$

We aim to infer the parameters $\nu_{1}, \nu_{2}, \lambda_{1}, \lambda_{2}$ given some sparse measurements of $u$ in addition to the initial and boundary conditions. The correct values of the unknown parameters are $\nu_{1}=1, \nu_{2}=1, \lambda_{1}=-1, \lambda_{2}=2$.

We employ the following loss function in the PINN:
$$MSE = MSE_{u} + w_{\nabla u} * MSE_{\nabla u} + MSE_{f}$$
where
$$\begin{aligned} MSE_{u} &=\frac{1}{N_{u}} \sum_{i=1}^{N_{u}}\left|u_{NN}\left(t_{u}^{i}, x_{u}^{i}\right)-u^{i}\right|^{2} \\ MSE_{\nabla u} &=\frac{1}{N_{u}} \sum_{i=1}^{N_{u}}\left|\nabla u_{NN}\left(t_{u}^{i}, x_{u}^{i}\right)-\nabla u^{i}\right|^{2} \\ MSE_{f} &=\frac{1}{N_{f}} \sum_{i=1}^{N_{f}}\left|f_{NN}\left(t_{f}^{i}, x_{f}^{i}\right)\right|^{2} \end{aligned}$$
$N_{u}=64$, $N_{f}=1089$, and $u^{i}$ is the reference solution. The errors of the inferred parameters are defined as $E_{\nu_{1}}=\frac{\nu_{1,\text{train}}-\nu_{1}}{\nu_{1}}$, $E_{\nu_{2}}=\frac{\nu_{2,\text{train}}-\nu_{2}}{\nu_{2}}$, $E_{\lambda_{1}}=\frac{\lambda_{1,\text{train}}-\lambda_{1}}{\lambda_{1}}$, and $E_{\lambda_{2}}=\frac{\lambda_{2,\text{train}}-\lambda_{2}}{\lambda_{2}}$.
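A minimal sketch of this loss in PyTorch is shown below; `u_net`, the observation tensors, and the treatment of $\nu_1, \nu_2, \lambda_1, \lambda_2$ as trainable scalars are illustrative assumptions. It shows how the gradient penalty and the PDE residual are obtained with automatic differentiation.

```python
# Hedged sketch of the single-fidelity PINN loss with the optional gradient penalty.
import torch

def pinn_loss(u_net, x_u, t_u, u_obs, du_obs, x_f, t_f,
              nu1, nu2, lam1, lam2, w_grad=1.0):
    # Data term and gradient penalty at the measurement points
    x_u = x_u.clone().requires_grad_(True)
    u_pred = u_net(torch.cat([x_u, t_u], dim=1))
    du_pred = torch.autograd.grad(u_pred.sum(), x_u, create_graph=True)[0]
    mse_u = ((u_pred - u_obs) ** 2).mean()
    mse_grad = ((du_pred - du_obs) ** 2).mean()

    # PDE residual f = u_t - nu1*u_xx + nu2*u_x - lam1*u^lam2 at collocation points
    x_f = x_f.clone().requires_grad_(True)
    t_f = t_f.clone().requires_grad_(True)
    u = u_net(torch.cat([x_f, t_f], dim=1))
    u_t = torch.autograd.grad(u.sum(), t_f, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x_f, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x_f, create_graph=True)[0]
    f = u_t - nu1 * u_xx + nu2 * u_x - lam1 * u ** lam2
    mse_f = (f ** 2).mean()

    # w_grad = 0 recovers the plain PINN loss; w_grad = 1 adds the gradient penalty
    return mse_u + w_grad * mse_grad + mse_f
```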

Four ways of sampling the training data are considered:

  • t = 0.1 and t = 0.9
  • t = 0.1, t = 0.9, and x = π
  • training data at random locations
  • training data on a regular lattice in the $x$-$t$ domain
    Both the $w_{\nabla u}=0$ and $w_{\nabla u}=1$ cases are studied.


  • The convergence is faster if we include the gradient penalty term ($w_{\nabla u}=1$).

Multi-fidelity data

In many real-world applications, the training data set is small and possibly inadequate to obtain even a rough estimate of the parameters. The paper demonstrates how to resolve this issue by resorting to supplementary data of lower fidelity, which may come from cheaper instruments of lower resolution or from computational models. We will refer to such data as "low-fidelity", and we assume that we have a large number of such data points, unlike the high-fidelity data. Here, we employ a composite network inspired by the recent work on multi-fidelity NNs in [2].

The estimator of the high-fidelity (HF) model, which uses the correlation structure to correct the low-fidelity (LF) model, can be expressed as
$$u_{HF}(x, t)=h\left(u_{LF}(x, t), x, t\right)$$
where $h$ is a correlation map to be learned, based on the correlation between the HF and LF data. We use two NNs, for the low- and high-fidelity data respectively:
$$\begin{aligned} u_{LF} &=\mathcal{NN}_{LF}\left(x_{LF}, t_{LF}, w_{LF}, b_{LF}\right) \\ u_{HF} &=\mathcal{NN}_{HF}\left(x_{HF}, t_{HF}, u_{LF}, w_{HF}, b_{HF}\right) \end{aligned}$$
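A minimal sketch of such a composite network is given below (PyTorch, with illustrative layer sizes); for brevity it uses a single nonlinear correlation network, whereas [2] combines linear and nonlinear HF sub-networks.

```python
# Hedged sketch of the composite multi-fidelity network: a low-fidelity MLP
# feeding its prediction into the high-fidelity correlation network.
import torch
import torch.nn as nn

class MultiFidelityNet(nn.Module):
    def __init__(self, width=20):
        super().__init__()
        self.nn_lf = nn.Sequential(nn.Linear(2, width), nn.Tanh(),
                                   nn.Linear(width, width), nn.Tanh(),
                                   nn.Linear(width, 1))
        # The HF network learns the correlation map h(u_LF, x, t)
        self.nn_hf = nn.Sequential(nn.Linear(3, width), nn.Tanh(),
                                   nn.Linear(width, width), nn.Tanh(),
                                   nn.Linear(width, 1))

    def forward(self, x, t):
        u_lf = self.nn_lf(torch.cat([x, t], dim=1))
        u_hf = self.nn_hf(torch.cat([x, t, u_lf], dim=1))
        return u_lf, u_hf

# Example forward pass on a batch of (x, t) points
model = MultiFidelityNet()
u_lf, u_hf = model(torch.rand(8, 1), torch.rand(8, 1))
print(u_lf.shape, u_hf.shape)
```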

The networks are trained by minimizing the mean-squared-error loss function
$$MSE = MSE_{u_{LF}} + MSE_{u_{HF}} + MSE_{f_{HF}}$$
where
$$\begin{aligned} MSE_{u_{LF}} &=\frac{1}{N_{LF}} \sum_{i=1}^{N_{LF}}\left|u_{LF}\left(t_{u_{LF}}^{i}, x_{u_{LF}}^{i}\right)-u_{LF}^{i}\right|^{2} \\ MSE_{u_{HF}} &=\frac{1}{N_{HF}} \sum_{i=1}^{N_{HF}}\left|u_{HF}\left(t_{u_{HF}}^{i}, x_{u_{HF}}^{i}\right)-u_{HF}^{i}\right|^{2} \\ MSE_{f_{HF}} &=\frac{1}{N_{f}} \sum_{i=1}^{N_{f}}\left|f_{HF}\left(t_{f_{HF}}^{i}, x_{f_{HF}}^{i}\right)\right|^{2} \end{aligned}$$

The true parameters are set to $\nu_{1}=1,\ \nu_{2}=1,\ \lambda_{1}=-1,\ \lambda_{2}=2$.

  • The low-fidelity training data are obtained from a second-order finite-difference solution with erroneous parameter values $\nu_{1}=1.25,\ \nu_{2}=1.25,\ \lambda_{1}=-0.75,\ \lambda_{2}=2.5$; we choose 64 low-fidelity points, $N_{LF}=64$.
  • The high-fidelity data are obtained from the numerical solution with $\nu_{1}=1,\ \nu_{2}=1,\ \lambda_{1}=-1,\ \lambda_{2}=2$; two cases are considered: (a) $N_{HF}=12$ and (b) $N_{HF}=6$.

As we can see, the parameter inference using the multi-fidelity PINN is much better than the single-fidelity predictions. Moreover, even with only a few HF data points, e.g. $N_{HF}=6$, the results of the multi-fidelity PINN remain quite accurate.

Results for the stochastic case

We consider the following stochastic nonlinear ADR equation:
$$\begin{cases}u_{t}=\left(k(x ; \omega) u_{x}\right)_{x}-\nu_{2} u_{x}+g(u)+f(x, t), & (x, t, \omega) \in\left(x_{0}, x_{1}\right) \times(0, T] \times \Omega \\ u(x, 0)=1-x^{2}, & x \in\left(x_{0}, x_{1}\right) \\ u\left(x_{0}, t\right)=0,\ u\left(x_{1}, t\right)=0, & t \in(0, T]\end{cases}$$
with $g(u)=\lambda_{1} u^{\lambda_{2}}$ and $f(x, t)=2$.

Forward problem

Mean-squared-error loss function:
$$MSE = MSE_{I} + MSE_{B} + MSE_{f}$$
where
$$\begin{aligned} MSE_{I} &= \frac{1}{N * N_{I}} \sum_{s=1}^{N} \sum_{i=1}^{N_{I}}\left[u_{NN}\left(x_{u}^{(i)}, 0 ; \omega_{s}\right)-u\left(x_{u}^{(i)}, 0 ; \omega_{s}\right)\right]^{2} \\ MSE_{B} &= \frac{1}{N * N_{B}} \sum_{s=1}^{N} \sum_{i=1}^{N_{B}}\left[u_{NN}\left(x_{0}, t_{u}^{(i)} ; \omega_{s}\right)-u\left(x_{0}, t_{u}^{(i)} ; \omega_{s}\right)\right]^{2} \\ &\quad+\frac{1}{N * N_{B}} \sum_{s=1}^{N} \sum_{i=1}^{N_{B}}\left[u_{NN}\left(x_{1}, t_{u}^{(i)} ; \omega_{s}\right)-u\left(x_{1}, t_{u}^{(i)} ; \omega_{s}\right)\right]^{2} \\ MSE_{f} &= \frac{1}{N * N_{f}} \sum_{s=1}^{N} \sum_{i=1}^{N_{f}}\left[f_{NN}\left(x_{f}^{(i)}, t_{f}^{(i)} ; \omega_{s}\right)-f\left(x_{f}^{(i)}, t_{f}^{(i)} ; \omega_{s}\right)\right]^{2} \end{aligned}$$

Inverse problem

We infer the stochastic process $k(x ; \omega)$ as well as the parameters $\nu_{2}, \lambda_{1}, \lambda_{2}$ and the solution $u(x, t ; \omega)$.

We minimize the following mean-squared-error loss function:
$$MSE = 10 * \left(MSE_{u} + MSE_{k}\right) + MSE_{f}$$

Meta-learning

We employ Bayesian optimization (BO) to learn the optimal structure of the NNs. We use $d_{K}$ hidden layers and $w_{K}$ neurons per layer for the neural networks of $k_{i}(x)$ $(i = 1, \ldots, M)$, and $d_{U}$ hidden layers and $w_{U}$ neurons per layer for the networks of $u_{\alpha}(x, t)$ $(\alpha = 0, 1, \ldots, P)$.
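A hedged sketch of this meta-learning step with scikit-optimize's Gaussian-process optimizer is shown below; `train_spinn` is a hypothetical stand-in for training sPINN with a given architecture and returning a validation error, and the search ranges are illustrative.

```python
# Sketch of Bayesian optimization over the sPINN architecture (not the paper's code).
from skopt import gp_minimize
from skopt.space import Integer

search_space = [
    Integer(2, 6,   name="d_K"),   # hidden layers for the k_i(x) networks
    Integer(10, 60, name="w_K"),   # neurons per layer for the k_i(x) networks
    Integer(2, 6,   name="d_U"),   # hidden layers for the u_alpha(x,t) networks
    Integer(10, 60, name="w_U"),   # neurons per layer for the u_alpha(x,t) networks
]

def train_spinn(d_K, w_K, d_U, w_U):
    # Placeholder: build and train sPINN with this architecture and return the
    # validation (relative L2) error; here replaced by a dummy surrogate so the
    # sketch runs standalone.
    return (d_K - 4) ** 2 + (w_K - 30) ** 2 / 100 + (d_U - 4) ** 2 + (w_U - 30) ** 2 / 100

def objective(params):
    d_K, w_K, d_U, w_U = params
    return float(train_spinn(d_K, w_K, d_U, w_U))

result = gp_minimize(objective, search_space, n_calls=25, random_state=0)
print("best (d_K, w_K, d_U, w_U):", result.x, "error:", result.fun)
```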
