Solving the NS Equations with PointNet

Reference: Kashefi, A., and T. Mukerji, "Physics-Informed PointNet: A Deep Learning Solver for Steady-State Incompressible Flows and Thermal Fields on Multiple Sets of Irregular Geometries," 2022.

A distinctive feature of the PointNet architecture is that it extracts a global feature once in the middle of the network via a pooling layer, and this creates a problem. Suppose we want the network's values at some $N$ points, i.e. an input of shape $(N,2)$. There are two ways to evaluate: pass the whole $(N,2)$ matrix in one call, or loop over the points and pass one $(1,2)$ matrix at a time. The two approaches give completely different solutions (a sketch demonstrating this follows the explanation below).
On the other hand, PointNet converges remarkably fast during training.
Explanation: at some intermediate layer, say of shape $(N,k)$ with $k=1024$ in the figure above, PointNet pools over the first dimension to obtain a $(1,k)$ vector, broadcasts it back to $(N,k)$, concatenates this matrix with an earlier layer, the $(N,64)$ one in the figure, to get an $(N,1088)$ feature, and then proceeds in the usual FNN fashion to produce the approximate solution. Two FNN-style modifications of PointNet are considered next.
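To make the inconsistency concrete, here is a minimal toy sketch of the pool-broadcast-concatenate step (my own illustration, not the reference implementation; the widths 64 and 1024 match the figure, everything else is made up):

import torch

torch.manual_seed(0)
N = 8
local = torch.nn.Linear(2, 64)     # stand-in for the (N, 64) layer
lift = torch.nn.Linear(64, 1024)   # stand-in for the (N, 1024) layer

def pointnet_features(x):                  # x: (N, 2)
    h_local = torch.sin(local(x))          # (N, 64) per-point features
    h = torch.sin(lift(h_local))           # (N, 1024)
    g = h.max(0, keepdim=True)[0]          # (1, 1024) global feature, pooled over the points
    g = g.expand(x.shape[0], -1)           # broadcast back to (N, 1024)
    return torch.cat([h_local, g], dim=1)  # (N, 1088) local + global

x = torch.rand(N, 2)
batch = pointnet_features(x)                                         # whole (N, 2) matrix at once
single = torch.cat([pointnet_features(x[i:i+1]) for i in range(N)])  # one (1, 2) row at a time
print((batch - single).abs().max())  # large: the pooled feature depends on the batch

The discrepancy comes entirely from the pooled feature g: it is a statistic of whatever batch is passed in, so changing the batch changes every point's output.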
First variant: pool over the second (feature) dimension to obtain an $(N,1)$ vector, concatenate it directly with the earlier $(N,64)$ features to form an $(N,65)$ matrix, and continue. This effectively avoids the evaluation inconsistency described above; in fact the construction is essentially an FNN, and in my implementation it also converges as slowly as an FNN.
Second variant: just before pooling, lift the features by one dimension so that the pooled tensor is three-dimensional, say $(N,1024,L)$, then pool over the last two dimensions to obtain an $(N,1)$ vector, concatenate with the earlier $(N,64)$ features to get $(N,65)$, and continue with the FNN machinery. I have not implemented this variant, but if PointNet converges fast precisely because it extracts a global feature, there is reason to suspect this variant will also be slow to converge. Minimal sketches of both variants follow.
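Hedged sketches of the two variants, reusing local and lift from the toy example above (the lifting layer lift3 and its width 4 are placeholders of my own choosing):

def variant1_features(x):                  # pool over the feature dimension instead
    h_local = torch.sin(local(x))          # (N, 64)
    h = torch.sin(lift(h_local))           # (N, 1024)
    g = h.mean(1, keepdim=True)            # (N, 1): no mixing across points
    return torch.cat([h_local, g], dim=1)  # (N, 65)

lift3 = torch.nn.Linear(1, 4)              # hypothetical layer creating the third dimension

def variant2_features(x):                  # lift to (N, 1024, L), then pool the last two dims
    h_local = torch.sin(local(x))          # (N, 64)
    h = torch.sin(lift(h_local))           # (N, 1024)
    h3 = torch.sin(lift3(h.unsqueeze(-1))) # (N, 1024, 4)
    g = h3.mean(dim=(1, 2)).unsqueeze(1)   # (N, 1)
    return torch.cat([h_local, g], dim=1)  # (N, 65)

Both variants pool only within each point's own feature vector, so they are consistent under pointwise evaluation, which is also why they behave like a plain FNN.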
PointNet is obtained simply by changing the network definition on top of the original PINN code; the resulting model lacks generalization ability.

Solving the CVD Problem with PointNet

For $u, v, w, T$, the NN layer widths are set to $[2,15,10,25,1]$. Because of the pooling layer, the network pools at the second-to-last hidden layer and concatenates the result with the first hidden layer, so that (width of the first hidden layer) + (width of the second-to-last hidden layer) = (width of the last hidden layer), here $15 + 10 = 25$. Interior points are sampled uniformly on a $[64,64]$ grid and boundary points on a $[100,100]$ grid. Fifteen boundary conditions were mentioned above, but here I define the approximate solution as follows:
[The defining equation of the approximate solution did not survive extraction.]
With this definition the approximate solution satisfies 7 of the boundary conditions exactly. The loss function is defined as:
$$L = \alpha_1 L_u + \alpha_2 L_v + \alpha_3 L_w + \alpha_4 L_T + \alpha_5 L_{var} + \sum_{i=1}^{5} \beta_i L_{bd_i}$$
where:
[The definitions of the individual loss terms did not survive extraction.]
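Since the individual term definitions were lost, only the assembly of the weighted sum can be illustrated; this minimal sketch mirrors how penalty_in and penalty_bd are combined in Loss_yp in the code below (all argument names are placeholders):

def total_loss(L_u, L_v, L_w, L_T, L_var, L_bd, alpha, beta):
    # L = alpha_1*L_u + ... + alpha_5*L_var + sum_i beta_i*L_bd_i
    loss = alpha[0]*L_u + alpha[1]*L_v + alpha[2]*L_w + alpha[3]*L_T + alpha[4]*L_var
    for b, l in zip(beta, L_bd):  # five weighted boundary losses
        loss = loss + b*l
    return loss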
With PointNet the loss function drops very quickly; only the results after 30 training epochs are shown here.
Note in particular: after the network is trained, the solution over the whole domain is computed by pointwise evaluation, i.e. one coordinate point is passed in per call, yielding one value of the approximate solution (if all points are passed in at once, the resulting solution looks very strange). Both results are shown, with the test grid partitioned as $[64,64]$; a sketch of the two evaluation modes follows.
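The two evaluation modes, as a sketch using the Net class listed right below (the random grid is a stand-in for the [64,64] test points, and device is defined here because Net's __init__ reads it as a global):

import torch

device = torch.device('cpu')
net = Net([2, 15, 10, 25, 1], torch.float64)      # the PointNet-style Net defined below
grid = torch.rand(64*64, 2, dtype=torch.float64)  # stand-in for the test grid
with torch.no_grad():
    sol_batch = net(grid)                         # all points in one call
    sol_point = torch.cat([net(grid[i:i+1,:]) for i in range(grid.shape[0])])  # one point per call
print((sol_batch - sol_point).abs().max())        # nonzero: the two modes disagree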

class Net(torch.nn.Module):
    def __init__(self, layers, dtype):
        super(Net, self).__init__()
        self.layers = layers
        self.layers_hid_num = len(layers) - 2
        self.device = device
        self.dtype = dtype
        fc = []
        for i in range(self.layers_hid_num - 1):
            fc.append(torch.nn.Linear(self.layers[i], self.layers[i+1]))
            fc.append(torch.nn.Linear(self.layers[i+1], self.layers[i+1]))
        fc.append(torch.nn.Linear(self.layers[self.layers_hid_num], self.layers[self.layers_hid_num]))
        fc.append(torch.nn.Linear(self.layers[-2], self.layers[-1]))
        self.fc = torch.nn.Sequential(*fc)
        for i in range(self.layers_hid_num - 1):
            self.fc[2*i].weight.data = self.fc[2*i].weight.data.type(dtype)
            self.fc[2*i].bias.data = self.fc[2*i].bias.data.type(dtype)
            self.fc[2*i + 1].weight.data = self.fc[2*i + 1].weight.data.type(dtype)
            self.fc[2*i + 1].bias.data = self.fc[2*i + 1].bias.data.type(dtype)
        self.fc[2*(self.layers_hid_num - 1)].weight.data = self.fc[2*(self.layers_hid_num - 1)].weight.data.type(dtype)
        self.fc[2*(self.layers_hid_num - 1)].bias.data = self.fc[2*(self.layers_hid_num - 1)].bias.data.type(dtype)
        self.fc[-1].weight.data = self.fc[-1].weight.data.type(dtype)
        self.fc[-1].bias.data = self.fc[-1].bias.data.type(dtype)

    def forward(self, x):
        dev = x.device
        h0 = torch.sin(self.fc[0](x))
        temp = torch.eye(x.shape[-1], self.layers[1], dtype=self.dtype, device=dev)
        h0 = torch.sin(self.fc[1](h0)) + x@temp
        tmp = h0.to(x.device)  # per-point features kept for the skip concatenation
        for i in range(1, self.layers_hid_num - 1):
            h = torch.sin(self.fc[2*i](h0))
            h = torch.sin(self.fc[2*i+1](h))
            temp = torch.eye(h0.shape[-1], self.layers[i+1], dtype=self.dtype, device=dev)
            h0 = h + h0@temp
        h0 = h0.mean(0, keepdim=True)     # global pooling over the point dimension
        #h0 = h0.max(0, keepdim=True)[0]  # max pooling, as in the original PointNet
        h0 = torch.ones(x.shape[0], 1, dtype=self.dtype, device=dev)@h0  # broadcast back to (N, k)
        h0 = torch.cat([tmp, h0], 1)      # concatenate local and global features
        temp = torch.eye(x.shape[-1], self.layers[-2], dtype=self.dtype, device=dev)
        h = torch.sin(self.fc[2*(self.layers_hid_num - 1)](h0)) + x@temp
        return self.fc[-1](h)

    def total_para(self):  # count the trainable parameters
        return sum([x.numel() for x in self.parameters()])



Solving the Obstacle Flow Problem with PointNet

import torch
import time
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
import torch.nn as nn
import torch.nn.functional as F
import itertools
import bfgs
from scipy.stats import qmc
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
#device = torch.device('cpu')
nu = 1e-2
d = 0.1
def region(x):
    x1 = x[:,0:1]
    x2 = x[:,1:2]
    box = (x1 > 1 - d) & (x1 < 1 + d) & (x2 > 0.5 - d) & (x2 < 0.5 + d)
    s = (x1 > 0) & (x1 < 2) & (x2 > 0) & (x2 < 1) & (np.invert(box))
    return s.flatten()
hr = 1e-3
bounds = np.array([0 - hr, 2 + hr, 0 - hr, 1 + hr]).reshape(2, 2)
Lx = 2.0
Ly = 1.0
def y_in(x):
    x2 = x[:,1:2]
    s = 0*x
    s[:,0:1] = 4*x2*(1 - x2)/(Ly**2)
    return s

def y_d(x):
    return y_in(x)

class Geo():
    def __init__(self, region, bounds):
        self.region = region
        self.bounds = bounds
        self.dim = self.bounds.shape[0]
    def samples(self, N):
        x = np.zeros((N, self.dim))
        m = 0
        while m < N:
            pt = np.random.uniform(0, 1, self.dim).reshape(1, -1)
            pt = pt*(self.bounds[:,1] - self.bounds[:,0]) + self.bounds[:,0]
            if self.region(pt).all():
                x[m,:] = pt.ravel()
                m += 1
        return x
    def quasi_samples(self, N):
        sampler = qmc.Sobol(d=self.dim)
        sample = sampler.random(n=2*N)
        sample = sample*(self.bounds[:,1] - self.bounds[:,0]) + self.bounds[:,0]
        sample = sample[self.region(sample),:][:N,:]
        return sample

Omega = Geo(region, bounds)
class INSET():
    def __init__(self, n_tr, mode):
        if mode == 'uniform':
            x = np.linspace(bounds[0,0], bounds[0,1], 2*n_tr)
            y = np.linspace(bounds[1,0], bounds[1,1], n_tr)
            xx0, xx1 = np.meshgrid(x, y)
            inp_s = np.hstack([xx0.reshape(-1,1), xx1.reshape(-1,1)])
            x = []
            for i in range(inp_s.shape[0]):
                ind = (inp_s[i,0] > 1 - d) & (inp_s[i,0] < 1 + d) & (inp_s[i,1] > 0.5 - d) & (inp_s[i,1] < 0.5 + d)
                if not ind:
                    x.append(inp_s[i,:])
            x = np.array(x)
            self.X = torch.tensor(x)
        else:
            self.X = torch.from_numpy(Omega.quasi_samples(n_tr*n_tr))
        self.size = self.X.shape[0]
        self.area = 2 - d**2
        self.dim = 2
        self.yd = y_d(self.X)
class BDSET():  # boundary points
    def __init__(self, nx):
        self.dim = 2
        self.hx = [(bounds[0,1] - bounds[0,0])/nx[0], (bounds[1,1] - bounds[1,0])/nx[1]]
        self.x_in = torch.zeros(nx[1], self.dim)
        self.dir_in = torch.zeros(nx[1], self.dim)
        for i in range(nx[1]):  # inlet
            self.x_in[i,0] = bounds[0,0]
            self.x_in[i,1] = bounds[1,0] + (i + 0.5)*self.hx[1]
            self.dir_in[i,0] = -1
            self.dir_in[i,1] = 0
        self.rig_in = torch.zeros_like(self.x_in)
        self.rig_in = y_in(self.x_in)
        self.x_out = torch.zeros(nx[1], self.dim)
        self.dir_out = torch.zeros(nx[1], self.dim)
        for i in range(nx[1]):  # outlet
            self.x_out[i,0] = bounds[0,1]
            self.x_out[i,1] = bounds[1,0] + (i + 0.5)*self.hx[1]
            self.dir_out[i,0] = 1
            self.dir_out[i,1] = 0
        self.x_w = torch.zeros(4*nx[0] + 2*nx[1], self.dim)
        self.dir_w = torch.zeros(4*nx[0] + 2*nx[1], self.dim)
        m = 0
        for i in range(nx[0]):  # bottom wall
            self.x_w[m,0] = bounds[0,0] + (i + 0.5)*self.hx[0]
            self.x_w[m,1] = bounds[1,0]
            self.dir_w[m,0] = 0
            self.dir_w[m,1] = -1
            m = m + 1
        for i in range(nx[0]):  # top wall
            self.x_w[m,0] = bounds[0,0] + (i + 0.5)*self.hx[0]
            self.x_w[m,1] = bounds[1,1]
            self.dir_w[m,0] = 0
            self.dir_w[m,1] = 1
            m = m + 1
        for i in range(nx[0]):  # obstacle, bottom side
            self.x_w[m,0] = 1 - d + (i + 0.5)*(2*d/nx[0])
            self.x_w[m,1] = 0.5 - d
            self.dir_w[m,0] = 0
            self.dir_w[m,1] = -1
            m = m + 1
        for i in range(nx[0]):  # obstacle, top side
            self.x_w[m,0] = 1 - d + (i + 0.5)*(2*d/nx[0])
            self.x_w[m,1] = 0.5 + d
            self.dir_w[m,0] = 0
            self.dir_w[m,1] = 1
            m = m + 1
        for i in range(nx[1]):  # obstacle, left side
            self.x_w[m,0] = 1 - d
            self.x_w[m,1] = 0.5 - d + (i + 0.5)*(2*d/nx[1])
            self.dir_w[m,0] = -1
            self.dir_w[m,1] = 0
            m = m + 1
        for i in range(nx[1]):  # obstacle, right side
            self.x_w[m,0] = 1 + d
            self.x_w[m,1] = 0.5 - d + (i + 0.5)*(2*d/nx[1])
            self.dir_w[m,0] = 1
            self.dir_w[m,1] = 0
            m = m + 1
        self.x_oc = torch.cat([self.x_in, self.x_w], 0)
        self.dir_oc = torch.cat([self.dir_in, self.dir_w], 0)
np.random.seed(1234)
torch.manual_seed(1234)

class Net(torch.nn.Module):
    def __init__(self, layers, dtype):
        super(Net, self).__init__()
        self.layers = layers
        self.layers_hid_num = len(layers) - 2
        self.device = device
        self.dtype = dtype
        fc = []
        for i in range(self.layers_hid_num - 1):
            fc.append(torch.nn.Linear(self.layers[i], self.layers[i+1]))
            fc.append(torch.nn.Linear(self.layers[i+1], self.layers[i+1]))
        fc.append(torch.nn.Linear(self.layers[self.layers_hid_num], self.layers[self.layers_hid_num]))
        fc.append(torch.nn.Linear(self.layers[-2], self.layers[-1]))
        self.fc = torch.nn.Sequential(*fc)
        for i in range(self.layers_hid_num - 1):
            self.fc[2*i].weight.data = self.fc[2*i].weight.data.type(dtype)
            self.fc[2*i].bias.data = self.fc[2*i].bias.data.type(dtype)
            self.fc[2*i + 1].weight.data = self.fc[2*i + 1].weight.data.type(dtype)
            self.fc[2*i + 1].bias.data = self.fc[2*i + 1].bias.data.type(dtype)
        self.fc[2*(self.layers_hid_num - 1)].weight.data = self.fc[2*(self.layers_hid_num - 1)].weight.data.type(dtype)
        self.fc[2*(self.layers_hid_num - 1)].bias.data = self.fc[2*(self.layers_hid_num - 1)].bias.data.type(dtype)
        self.fc[-1].weight.data = self.fc[-1].weight.data.type(dtype)
        self.fc[-1].bias.data = self.fc[-1].bias.data.type(dtype)

    def forward(self, x):
        dev = x.device
        h0 = torch.sin(self.fc[0](x))
        temp = torch.eye(x.shape[-1], self.layers[1], dtype=self.dtype, device=dev)
        h0 = torch.sin(self.fc[1](h0)) + x@temp
        tmp = h0.to(x.device)  # per-point features kept for the skip concatenation
        for i in range(1, self.layers_hid_num - 1):
            h = torch.sin(self.fc[2*i](h0))
            h = torch.sin(self.fc[2*i+1](h))
            temp = torch.eye(h0.shape[-1], self.layers[i+1], dtype=self.dtype, device=dev)
            h0 = h + h0@temp
        h0 = h0.mean(0, keepdim=True)     # global pooling over the point dimension
        #h0 = h0.max(0, keepdim=True)[0]  # max pooling, as in the original PointNet
        h0 = torch.ones(x.shape[0], 1, dtype=self.dtype, device=dev)@h0  # broadcast back to (N, k)
        h0 = torch.cat([tmp, h0], 1)      # concatenate local and global features
        temp = torch.eye(x.shape[-1], self.layers[-2], dtype=self.dtype, device=dev)
        h = torch.sin(self.fc[2*(self.layers_hid_num - 1)](h0)) + x@temp
        return self.fc[-1](h)

    def total_para(self):  # count the trainable parameters
        return sum([x.numel() for x in self.parameters()])
def length(X):
    return (X[:,1:2] - bounds[1,0])*(bounds[1,1] - X[:,1:2])

def pred_u(netu, X):
    # hard constraint: u equals the parabolic inlet profile at x = 0
    return netu.forward(X)*(X[:,0:1] - bounds[0,0]) + 4*X[:,1:2]*(1 - X[:,1:2])

def pred_v(netv, X):
    # hard constraint: v vanishes at the inlet and on the top/bottom walls
    return netv.forward(X)*(X[:,0:1] - bounds[0,0])*length(X)

def pred_p(netp, X):
    # hard constraint: p vanishes at the outlet
    return netp.forward(X)*(X[:,0:1] - bounds[0,1])
def loadtype(inset, bdset, dtype):
    inset.X = inset.X.type(dtype)
    inset.yd = inset.yd.type(dtype)
    #-----------------------------
    bdset.x_in = bdset.x_in.type(dtype)
    bdset.rig_in = bdset.rig_in.type(dtype)
    bdset.x_out = bdset.x_out.type(dtype)
    bdset.x_w = bdset.x_w.type(dtype)
    bdset.x_oc = bdset.x_oc.type(dtype)
    bdset.dir_in = bdset.dir_in.type(dtype)
    bdset.dir_out = bdset.dir_out.type(dtype)
    bdset.dir_w = bdset.dir_w.type(dtype)
    bdset.dir_oc = bdset.dir_oc.type(dtype)
def loadcuda(netu, netv, netp, inset, bdset):
    netu = netu.to(device)
    netv = netv.to(device)
    netp = netp.to(device)
    if inset.X.requires_grad == False:
        inset.X.requires_grad = True
    inset.X = inset.X.to(device)
    inset.yd = inset.yd.to(device)
    bdset.rig_in = bdset.rig_in.to(device)
    bdset.x_in = bdset.x_in.to(device)
    if bdset.x_out.requires_grad == False:
        bdset.x_out.requires_grad = True
    bdset.x_out = bdset.x_out.to(device)
    bdset.x_w = bdset.x_w.to(device)
    if bdset.x_oc.requires_grad == False:
        bdset.x_oc.requires_grad = True
    bdset.x_oc = bdset.x_oc.to(device)
    bdset.dir_in = bdset.dir_in.to(device)
    bdset.dir_out = bdset.dir_out.to(device)
    bdset.dir_w = bdset.dir_w.to(device)
    bdset.dir_oc = bdset.dir_oc.to(device)
def loadcpu(netu, netv, netp, inset, bdset):
    netu = netu.to('cpu')
    netv = netv.to('cpu')
    netp = netp.to('cpu')
    if inset.X.requires_grad == False:
        inset.X.requires_grad = True
    inset.X = inset.X.to('cpu')
    inset.yd = inset.yd.to('cpu')
    bdset.rig_in = bdset.rig_in.to('cpu')
    bdset.x_in = bdset.x_in.to('cpu')
    bdset.x_out = bdset.x_out.to('cpu')
    bdset.x_w = bdset.x_w.to('cpu')
    bdset.x_oc = bdset.x_oc.to('cpu')
    bdset.dir_in = bdset.dir_in.to('cpu')
    bdset.dir_out = bdset.dir_out.to('cpu')
    bdset.dir_w = bdset.dir_w.to('cpu')
    bdset.dir_oc = bdset.dir_oc.to('cpu')
def Loss_yp(netu, netv, netp, inset, bdset, penalty_in, penalty_bd):
    inset.u = pred_u(netu, inset.X)
    inset.v = pred_v(netv, inset.X)
    inset.p = pred_p(netp, inset.X)
    u_x, = torch.autograd.grad(inset.u, inset.X, create_graph=True, retain_graph=True,
                               grad_outputs=torch.ones(inset.size, 1).to(device))
    u_xx, = torch.autograd.grad(u_x[:,0:1], inset.X, create_graph=True, retain_graph=True,
                                grad_outputs=torch.ones(inset.size, 1).to(device))
    u_yy, = torch.autograd.grad(u_x[:,1:2], inset.X, create_graph=True, retain_graph=True,
                                grad_outputs=torch.ones(inset.size, 1).to(device))
    u_lap = u_xx[:,0:1] + u_yy[:,1:2]
    v_x, = torch.autograd.grad(inset.v, inset.X, create_graph=True, retain_graph=True,
                               grad_outputs=torch.ones(inset.size, 1).to(device))
    v_xx, = torch.autograd.grad(v_x[:,0:1], inset.X, create_graph=True, retain_graph=True,
                                grad_outputs=torch.ones(inset.size, 1).to(device))
    v_yy, = torch.autograd.grad(v_x[:,1:2], inset.X, create_graph=True, retain_graph=True,
                                grad_outputs=torch.ones(inset.size, 1).to(device))
    v_lap = v_xx[:,0:1] + v_yy[:,1:2]
    p_x, = torch.autograd.grad(inset.p, inset.X, create_graph=True, retain_graph=True,
                               grad_outputs=torch.ones(inset.size, 1).to(device))
    # momentum residuals and the divergence-free constraint
    inset.res_u = (-nu*u_lap + inset.u*u_x[:,0:1] + inset.v*u_x[:,1:2] + p_x[:,0:1])**2
    inset.res_v = (-nu*v_lap + inset.u*v_x[:,0:1] + inset.v*v_x[:,1:2] + p_x[:,1:2])**2
    inset.res_div = (u_x[:,0:1] + v_x[:,1:2])**2
    inset.loss_u = torch.sqrt(inset.res_u.mean())
    inset.loss_v = torch.sqrt(inset.res_v.mean())
    inset.loss_div = torch.sqrt(inset.res_div.mean())
    inset.loss_in = penalty_in[0]*inset.loss_u + penalty_in[1]*inset.loss_v + \
                    penalty_in[2]*inset.loss_div
    # zero normal derivative of u and v at the outlet
    u_out = pred_u(netu, bdset.x_out)
    u_out_x, = torch.autograd.grad(u_out, bdset.x_out, create_graph=True, retain_graph=True,
                                   grad_outputs=torch.ones(bdset.x_out.shape[0], 1).to(device))
    u_out_n = ((u_out_x*bdset.dir_out).sum(1)).reshape(-1, 1)
    bdset.loss1 = torch.sqrt(u_out_n**2).mean()
    v_out = pred_v(netv, bdset.x_out)
    v_out_x, = torch.autograd.grad(v_out, bdset.x_out, create_graph=True, retain_graph=True,
                                   grad_outputs=torch.ones(bdset.x_out.shape[0], 1).to(device))
    v_out_n = ((v_out_x*bdset.dir_out).sum(1)).reshape(-1, 1)
    bdset.loss2 = torch.sqrt(v_out_n**2).mean()
    # no-slip on the walls and the obstacle
    u_w = pred_u(netu, bdset.x_w)
    bdset.loss3 = torch.sqrt(u_w**2).mean()
    v_w = pred_v(netv, bdset.x_w)
    bdset.loss4 = torch.sqrt(v_w**2).mean()
    # zero normal derivative of p on the inlet and the walls
    p_oc = pred_p(netp, bdset.x_oc)
    p_oc_x, = torch.autograd.grad(p_oc, bdset.x_oc, create_graph=True, retain_graph=True,
                                  grad_outputs=torch.ones(bdset.x_oc.shape[0], 1).to(device))
    p_oc_n = ((p_oc_x*bdset.dir_oc).sum(1)).reshape(-1, 1)
    bdset.loss5 = torch.sqrt(p_oc_n**2).mean()
    inset.loss_bd = penalty_bd[0]*bdset.loss1 + penalty_bd[1]*bdset.loss2 + penalty_bd[2]*bdset.loss3 + \
                    penalty_bd[3]*bdset.loss4 + penalty_bd[4]*bdset.loss5
    return inset.loss_in + inset.loss_bd
def train_yp(netu, netv, netp, inset, bdset, penalty_in, penalty_bd, optim, epoch, error_history, optimtype):
    print('Train y&p Neural Network')
    lossin_history, div_history, lossbd_history, lo_history = error_history
    loss = Loss_yp(netu, netv, netp, inset, bdset, penalty_in, penalty_bd)
    print('epoch_yp: %d, Lu:%.3e,Lv:%.3e,div:%.3e, time: %.2f\n'
          %(0, inset.loss_u.item(), inset.loss_v.item(), inset.loss_div.item(), 0.00))
    print('bd1:%.3e,bd2:%.3e,bd3:%.3e,bd4:%.4e,bd5:%.3e\n'
          %(bdset.loss1.item(), bdset.loss2.item(), bdset.loss3.item(), bdset.loss4.item(), bdset.loss5.item()))
    for it in range(epoch):
        st = time.time()
        if optimtype == 'Adam':
            for j in range(100):
                optim.zero_grad()
                loss = Loss_yp(netu, netv, netp, inset, bdset, penalty_in, penalty_bd)
                loss.backward()
                optim.step()
        if optimtype == 'BFGS' or optimtype == 'LBFGS':
            def closure():
                loss = Loss_yp(netu, netv, netp, inset, bdset, penalty_in, penalty_bd)
                optim.zero_grad()
                loss.backward()
                return loss
            optim.step(closure)
        loss = Loss_yp(netu, netv, netp, inset, bdset, penalty_in, penalty_bd)
        ela = time.time() - st
        print('------------------------------')
        print('epoch_yp: %d, Loss:%.3e,Lu:%.3e,Lv:%.3e,div:%.3e, time: %.2f\n'
              %(it + 1, loss.item(), inset.loss_u.item(), inset.loss_v.item(), inset.loss_div.item(), ela))
        print('bd1:%.3e,bd2:%.3e,bd3:%.3e,bd4:%.4e,bd5:%.3e\n'
              %(bdset.loss1.item(), bdset.loss2.item(), bdset.loss3.item(), bdset.loss4.item(), bdset.loss5.item()))
        loss_u = inset.loss_u.item()
        loss_v = inset.loss_v.item()
        loss_div = inset.loss_div.item()
        bd1 = bdset.loss1.item()
        bd2 = bdset.loss2.item()
        bd3 = bdset.loss3.item()
        bd4 = bdset.loss4.item()
        bd5 = bdset.loss5.item()
        lossin_history.append([loss.item(), loss_u, loss_v, loss_div])
        div_history.append(loss_div)
        lossbd_history.append([bd1, bd2, bd3, bd4, bd5])
        lo_history.append(loss.item())
    error_history = [lossin_history, div_history, lossbd_history, lo_history]
    return error_history

n_tr = 64
nx_bd = [100,100]
mode = 'uniform'
#mode = 'qmc'
inset = INSET(n_tr, mode)
bdset = BDSET(nx_bd)
dtype = torch.float64
wid = 25
layer_u = [2, 15, 10, wid, 1]
netu = Net(layer_u, dtype)
layer_v = [2, 15, 10, wid, 1]
netv = Net(layer_v, dtype)
layer_p = [2, 15, 10, wid, 1]
netp = Net(layer_p, dtype)
fname1 = "upoint-NSlay%dvar.pt"%(wid)
fname2 = "vpoint-NSlay%dvar.pt"%(wid)
fname3 = "ppoint-NSlay%dvar.pt"%(wid)#netu = torch.load(fname1)
#netv = torch.load(fname2)
#netp = torch.load(fname3)
loadtype(inset, bdset, dtype)
loadcuda(netu, netv, netp, inset, bdset)
loss_history = [[],[],[],[]]
epoch = 30
start_time = time.time()
penalty_in = [1e0,1e0,1e0]
penalty_bd = [1e0,1e0,1e0,1e0,1e0]
lr = 1e0
max_iter = 1
#optimtype = 'Adam'
optimtype = 'BFGS'
#optimtype = 'LBFGS'
number = 0
for i in range(max_iter):
    if optimtype == 'BFGS':
        optim = bfgs.BFGS(itertools.chain(netu.parameters(), netv.parameters(), netp.parameters()),
                          lr=lr, max_iter=100,
                          tolerance_grad=1e-16, tolerance_change=1e-16,
                          line_search_fn='strong_wolfe')
    if optimtype == 'LBFGS':
        optim = torch.optim.LBFGS(itertools.chain(netu.parameters(), netv.parameters(), netp.parameters()),
                                  lr=lr, max_iter=100, history_size=100,
                                  line_search_fn='strong_wolfe')
    if optimtype == 'Adam':
        optim = torch.optim.Adam(itertools.chain(netu.parameters(), netv.parameters(), netp.parameters()), lr=lr)
    loss_history = \
    train_yp(netu, netv, netp, inset, bdset, penalty_in, penalty_bd, optim, epoch, loss_history, optimtype)
    if (i + 1) % 5 == 0:
        number += 1
        lr *= 0.985
elapsed = time.time() - start_time
print('Finished! train time: %.2f\n' %(elapsed))
loadcpu(netu,netv,netp,inset,bdset)
torch.cuda.empty_cache()
torch.save(netu, fname1)
torch.save(netv, fname2)
torch.save(netp, fname3)
lossin_history, div_history, lossbd_history, lo_history = loss_history
np.save('lay%d-epoch%d-lossin.npy'%(wid,epoch),lossin_history)
np.save('lay%d-epoch%d-lossbd.npy'%(wid,epoch),lossbd_history)
np.save('lay%d-epoch%d-lossdiv.npy'%(wid,epoch),div_history)
np.save('lay%d-epoch%d-losslo.npy'%(wid,epoch),lo_history)
#%%
fig, ax = plt.subplots(1, 2, figsize=(12, 3.5))
ax[0].semilogy(np.array(lossin_history))
ax[0].legend(['loss','loss_u', 'loss_v', 'loss_div'])
ax[0].set_xlabel('iters')
ax[1].plot(np.array(lossbd_history))
ax[1].legend(['bd1','bd2','bd3','bd4','bd5'])
ax[1].set_xlabel('iters')
plt.yscale('log')
fig.tight_layout()
plt.show()
plt.scatter(inset.X.detach().numpy()[:,0], inset.X.detach().numpy()[:,1])
plt.show()
#%%
nx_te = [40,40]
te_bd_set = BDSET(nx_te)
hx = [(bounds[0,1] - bounds[0,0])/nx_te[0],(bounds[1,1] - bounds[1,0])/nx_te[1]]
fig, ax = plt.subplots(1,3, figsize=(13,4))
x1s = np.linspace(bounds[1,0] + 0.5*hx[1],bounds[1,1] - hx[1],nx_te[1])
te_bd_set.x_in = te_bd_set.x_in.type(dtype)
te_bd_set.x_out = te_bd_set.x_out.type(dtype)
u_in = pred_u(netu,te_bd_set.x_in).detach().numpy()
v_in = pred_v(netv, te_bd_set.x_in).detach().numpy()
u_out = pred_u(netu, te_bd_set.x_out).detach().numpy()
v_out = pred_v(netv, te_bd_set.x_out).detach().numpy()
u_o_p = x1s*(1 - x1s)*4/(Ly**2)
ax[1].plot(x1s, u_out); ax[1].plot(x1s, u_in)
ax[1].legend(['outlet','inlet'])
ax[1].set_xlabel('y'); ax[1].set_ylabel('u')
ax[1].set_title('PointNet: u')
ax[2].plot(x1s, v_out); ax[2].plot(x1s, v_in)
ax[2].legend(['outlet','inlet'])
ax[2].set_xlabel('y'); ax[2].set_ylabel('v')
ax[2].set_title('PointNet: v')
ax[0].semilogy(np.array(lo_history))
ax[0].legend(['pde_loss'])
ax[0].set_xlabel('iters')
plt.show()
#%%
n=n_tr
xx0 = np.linspace(bounds[0,0],bounds[0,1],2*n)
xx1 = np.linspace(bounds[1,0],bounds[1,1],n)
xx0, xx1 = np.meshgrid(xx0,xx1)
ind = (xx0 > 1 - d) & (xx0 < 1 + d) & (xx1 > 0.5 - d) & (xx1 < 0.5 + d)
inp_s = torch.from_numpy(np.hstack([xx0.reshape(-1,1), xx1.reshape(-1,1)]))
u_pred = torch.zeros(inp_s.shape[0],1)
v_pred = torch.zeros(inp_s.shape[0],1)
p_pred = torch.zeros(inp_s.shape[0],1)
'''
# point-by-point evaluation: one coordinate per forward pass
for i in range(inp_s.shape[0]):
    u_pred[i,0] = pred_u(netu, inp_s[i:i+1,:])
    v_pred[i,0] = pred_v(netv, inp_s[i:i+1,:])
    p_pred[i,0] = pred_p(netp, inp_s[i:i+1,:])
'''
u_pred = pred_u(netu,inp_s)
v_pred = pred_v(netv,inp_s)
p_pred = pred_p(netp, inp_s)
u = u_pred.detach().numpy()
v = v_pred.detach().numpy()
p = p_pred.detach().numpy()
y1_s = u.reshape(n, 2*n); y1 = u.reshape(n, 2*n)
y2_s = v.reshape(n, 2*n); y2 = v.reshape(n, 2*n)
p = p.reshape(n, 2*n)
y1_s[ind] = np.nan  # for streamplot mask!
y2_s[ind] = np.nan
n_ind = np.invert(ind)
y1 = y1[n_ind].flatten()
y2 = y2[n_ind].flatten()
p = p[n_ind].flatten()
speed_s = np.sqrt(y1_s**2 + y2_s**2)
x0 = xx0[n_ind].flatten()
x1 = xx1[n_ind].flatten()
num_line = 100
fig, ax = plt.subplots(2, 2, figsize=(12, 6))
for i in range(2):
    for j in range(2):
        ax[i,j].axis('equal')
        ax[i,j].set_xlim([0., 2.0])
        ax[i,j].set_ylim([0., 1.])
        ax[i,j].axis('off')
        ax[i,j].add_artist(plt.Rectangle((1 - d, 0.5 - d), 2*d, 2*d, fill=True, color='white'))
ax00 = ax[0,0].tricontourf(x0, x1, y1, num_line, cmap='rainbow')
fig.colorbar(ax00,ax=ax[0,0],fraction = 0.024,pad = 0.04)
#ax[0,0].contour(ax00, linewidths=0.6, colors='black')#, cmap='jet', color=speed_s, linewidth=0.5)
ax01 = ax[0,1].tricontourf(x0, x1, y2, num_line, cmap='rainbow')
fig.colorbar(ax01,ax=ax[0,1],fraction = 0.024,pad = 0.04)
ax[0,0].set_title('PointNet: u')
ax[0,1].set_title('PointNet: v')
ax10 = ax[1,0].tricontourf(x0, x1, (y1**2 + y2**2)**0.5, num_line, cmap='rainbow')
ax[1,0].streamplot(xx0, xx1, y1_s, y2_s, density = 1.5)
fig.colorbar(ax10,ax=ax[1,0],fraction = 0.024,pad = 0.04)
ax[1,0].set_title('PointNet: stream')
ax11 = ax[1,1].tricontourf(x0, x1, p, num_line, cmap='rainbow')
#ax[1,0].contour(ax10, linewidths=0.6, colors='black')
#ax11 = ax[1,1].tricontourf(x0, x1, p, num_line, cmap='rainbow')
ax[1,1].set_title('PointNet: p')
#ax[1,1].set_title('PINN: p')
fig.colorbar(ax11,ax=ax[1,1],fraction = 0.024,pad = 0.04)
#fig.colorbar(ax11,ax=ax[1,1])
plt.show()




Feeding one point at a time:

Of the two experiments above I only include the code for one; the reader can fill in the rest.

Solving the Obstacle Flow with the Dimension-Lifting Idea

For problems with complicated boundaries, one can assign each point a different label $z$ according to where it lies, producing a three-dimensional input, in the hope of improving the results. It turns out the effect is about the same as PointNet: the loss drops quickly, but the numerical results are poor. A minimal sketch of the lifted input precedes the full listing below.
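Sketch of the lifted input; the label values mirror the code below (0 = interior, 1 = inlet, 2 = outlet, 3/4 = bottom/top walls, 5 = obstacle):

import torch

xy = torch.rand(10, 2)           # stand-in for sampled points
z = torch.zeros(xy.shape[0], 1)  # interior points carry label z = 0
X = torch.cat([xy, z], dim=1)    # (N, 3) network input: (x, y, z)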

import torch
import time
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
import torch.nn as nn
import torch.nn.functional as F
import itertools
import bfgs
from scipy.stats import qmc
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
#device = torch.device('cpu')
nu = 1e-2
d = 0.1
def region(x):
    x1 = x[:,0:1]
    x2 = x[:,1:2]
    box = (x1 > 1 - d) & (x1 < 1 + d) & (x2 > 0.5 - d) & (x2 < 0.5 + d)
    s = (x1 > 0) & (x1 < 2) & (x2 > 0) & (x2 < 1) & (np.invert(box))
    return s.flatten()
hr = 1e-3
bounds = np.array([0 - hr, 2 + hr, 0 - hr, 1 + hr]).reshape(2, 2)
Lx = 2.0
Ly = 1.0
def y_in(x):
    x2 = x[:,1:2]
    s = 0*x
    s[:,0:1] = 4*x2*(1 - x2)/(Ly**2)
    return s

def y_d(x):
    return y_in(x)

class Geo():
    def __init__(self, region, bounds):
        self.region = region
        self.bounds = bounds
        self.dim = self.bounds.shape[0]
    def samples(self, N):
        x = np.zeros((N, self.dim))
        m = 0
        while m < N:
            pt = np.random.uniform(0, 1, self.dim).reshape(1, -1)
            pt = pt*(self.bounds[:,1] - self.bounds[:,0]) + self.bounds[:,0]
            if self.region(pt).all():
                x[m,:] = pt.ravel()
                m += 1
        return x
    def quasi_samples(self, N):
        sampler = qmc.Sobol(d=self.dim)
        sample = sampler.random(n=2*N)
        sample = sample*(self.bounds[:,1] - self.bounds[:,0]) + self.bounds[:,0]
        sample = sample[self.region(sample),:][:N,:]
        return sample

Omega = Geo(region, bounds)
class INSET():
    def __init__(self, n_tr, mode):
        if mode == 'uniform':
            x = np.linspace(bounds[0,0], bounds[0,1], 2*n_tr)
            y = np.linspace(bounds[1,0], bounds[1,1], n_tr)
            xx0, xx1 = np.meshgrid(x, y)
            inp_s = np.hstack([xx0.reshape(-1,1), xx1.reshape(-1,1)])
            x = []
            for i in range(inp_s.shape[0]):
                ind = (inp_s[i,0] > 1 - d) & (inp_s[i,0] < 1 + d) & (inp_s[i,1] > 0.5 - d) & (inp_s[i,1] < 0.5 + d)
                if not ind:
                    x.append(inp_s[i,:])
            x = np.array(x)
            self.X = torch.tensor(x)
        else:
            self.X = torch.from_numpy(Omega.quasi_samples(n_tr*n_tr))
        self.size = self.X.shape[0]
        self.area = 2 - d**2
        self.dim = 2
        self.yd = y_d(self.X)
        self.z = torch.zeros(self.size, 1)   # interior points get label z = 0
        self.X = torch.cat([self.X, self.z], 1)
class BDSET():  # boundary points
    def __init__(self, nx):
        self.dim = 2
        self.hx = [(bounds[0,1] - bounds[0,0])/nx[0], (bounds[1,1] - bounds[1,0])/nx[1]]
        self.x_in = torch.zeros(nx[1], self.dim)
        self.dir_in = torch.zeros(nx[1], self.dim)
        for i in range(nx[1]):  # inlet
            self.x_in[i,0] = bounds[0,0]
            self.x_in[i,1] = bounds[1,0] + (i + 0.5)*self.hx[1]
            self.dir_in[i,0] = -1
            self.dir_in[i,1] = 0
        self.rig_in = torch.zeros_like(self.x_in)
        self.rig_in = y_in(self.x_in)
        self.z_in = torch.ones(self.x_in.shape[0], 1)
        #------------------------
        self.x_out = torch.zeros(nx[1], self.dim)
        self.dir_out = torch.zeros(nx[1], self.dim)
        for i in range(nx[1]):  # outlet
            self.x_out[i,0] = bounds[0,1]
            self.x_out[i,1] = bounds[1,0] + (i + 0.5)*self.hx[1]
            self.dir_out[i,0] = 1
            self.dir_out[i,1] = 0
        self.z_out = 2*torch.ones(self.x_out.shape[0], 1)
        #-----------------------
        self.x_w = torch.zeros(4*nx[0] + 2*nx[1], self.dim)
        self.z_w = torch.zeros(self.x_w.shape[0], 1)
        self.dir_w = torch.zeros(4*nx[0] + 2*nx[1], self.dim)
        m = 0
        for i in range(nx[0]):  # bottom wall
            self.x_w[m,0] = bounds[0,0] + (i + 0.5)*self.hx[0]
            self.x_w[m,1] = bounds[1,0]
            self.dir_w[m,0] = 0
            self.dir_w[m,1] = -1
            self.z_w[m,0] = 3
            m = m + 1
        for i in range(nx[0]):  # top wall
            self.x_w[m,0] = bounds[0,0] + (i + 0.5)*self.hx[0]
            self.x_w[m,1] = bounds[1,1]
            self.dir_w[m,0] = 0
            self.dir_w[m,1] = 1
            self.z_w[m,0] = 4
            m = m + 1
        for i in range(nx[0]):  # obstacle, bottom side
            self.x_w[m,0] = 1 - d + (i + 0.5)*(2*d/nx[0])
            self.x_w[m,1] = 0.5 - d
            self.dir_w[m,0] = 0
            self.dir_w[m,1] = -1
            self.z_w[m,0] = 5
            m = m + 1
        for i in range(nx[0]):  # obstacle, top side
            self.x_w[m,0] = 1 - d + (i + 0.5)*(2*d/nx[0])
            self.x_w[m,1] = 0.5 + d
            self.dir_w[m,0] = 0
            self.dir_w[m,1] = 1
            self.z_w[m,0] = 5
            m = m + 1
        for i in range(nx[1]):  # obstacle, left side
            self.x_w[m,0] = 1 - d
            self.x_w[m,1] = 0.5 - d + (i + 0.5)*(2*d/nx[1])
            self.dir_w[m,0] = -1
            self.dir_w[m,1] = 0
            self.z_w[m,0] = 5
            m = m + 1
        for i in range(nx[1]):  # obstacle, right side
            self.x_w[m,0] = 1 + d
            self.x_w[m,1] = 0.5 - d + (i + 0.5)*(2*d/nx[1])
            self.dir_w[m,0] = 1
            self.dir_w[m,1] = 0
            self.z_w[m,0] = 5
            m = m + 1
        self.x_oc = torch.cat([self.x_in, self.x_w], 0)
        self.dir_oc = torch.cat([self.dir_in, self.dir_w], 0)
        #------------------
        self.x_in = torch.cat([self.x_in, self.z_in], 1)
        self.x_out = torch.cat([self.x_out, self.z_out], 1)
        self.x_w = torch.cat([self.x_w, self.z_w], 1)
        self.x_oc = torch.cat([self.x_in, self.x_w], 0)
np.random.seed(1234)
torch.manual_seed(1234)
class Net(torch.nn.Module):
    def __init__(self, layers, dtype):
        super(Net, self).__init__()
        self.layers = layers
        self.layers_hid_num = len(layers) - 2
        self.device = device
        self.dtype = dtype
        fc = []
        for i in range(self.layers_hid_num - 1):
            fc.append(torch.nn.Linear(self.layers[i], self.layers[i+1]))
            fc.append(torch.nn.Linear(self.layers[i+1], self.layers[i+1]))
        fc.append(torch.nn.Linear(self.layers[self.layers_hid_num], self.layers[self.layers_hid_num]))
        fc.append(torch.nn.Linear(self.layers[-2], self.layers[-1]))
        self.fc = torch.nn.Sequential(*fc)
        for i in range(self.layers_hid_num - 1):
            self.fc[2*i].weight.data = self.fc[2*i].weight.data.type(dtype)
            self.fc[2*i].bias.data = self.fc[2*i].bias.data.type(dtype)
            self.fc[2*i + 1].weight.data = self.fc[2*i + 1].weight.data.type(dtype)
            self.fc[2*i + 1].bias.data = self.fc[2*i + 1].bias.data.type(dtype)
        self.fc[2*(self.layers_hid_num - 1)].weight.data = self.fc[2*(self.layers_hid_num - 1)].weight.data.type(dtype)
        self.fc[2*(self.layers_hid_num - 1)].bias.data = self.fc[2*(self.layers_hid_num - 1)].bias.data.type(dtype)
        self.fc[-1].weight.data = self.fc[-1].weight.data.type(dtype)
        self.fc[-1].bias.data = self.fc[-1].bias.data.type(dtype)

    def forward(self, X):
        dev = X.device
        x = X[:,0:2]  # spatial coordinates
        z = X[:,2:3]  # boundary label channel
        h0 = torch.sin(self.fc[0](x))
        temp = torch.eye(x.shape[-1], self.layers[1], dtype=self.dtype, device=dev)
        h0 = torch.sin(self.fc[1](h0)) + x@temp
        tmp = h0.to(x.device)
        for i in range(1, self.layers_hid_num - 1):
            h = torch.sin(self.fc[2*i](h0))
            h = torch.sin(self.fc[2*i+1](h))
            temp = torch.eye(h0.shape[-1], self.layers[i+1], dtype=self.dtype, device=dev)
            h0 = h + h0@temp
        #h0 = torch.cat([h0,x],1)
        #h0 = h0.mean(1,keepdim = True);h0 = torch.cat([tmp,h0],1)
        #h0 = torch.cat([h0,z],1)
        h0 = torch.cat([h0, X[:,0:1]], 1)  # note: the x-coordinate is appended here; the z variant is commented out
        temp = torch.eye(x.shape[-1], self.layers[-2], dtype=self.dtype, device=dev)
        h = torch.sin(self.fc[2*(self.layers_hid_num - 1)](h0)) + x@temp
        return self.fc[-1](h)

    def total_para(self):  # count the trainable parameters
        return sum([x.numel() for x in self.parameters()])
def length(X):
    return (X[:,1:2] - bounds[1,0])*(bounds[1,1] - X[:,1:2])

def pred_u(netu, X):
    # hard constraint: u equals the parabolic inlet profile at x = 0
    return netu.forward(X)*(X[:,0:1] - bounds[0,0]) + 4*X[:,1:2]*(1 - X[:,1:2])

def pred_v(netv, X):
    # hard constraint: v vanishes at the inlet and on the top/bottom walls
    return netv.forward(X)*(X[:,0:1] - bounds[0,0])*length(X)

def pred_p(netp, X):
    # hard constraint: p vanishes at the outlet
    return netp.forward(X)*(X[:,0:1] - bounds[0,1])
def loadtype(inset, bdset, dtype):
    inset.X = inset.X.type(dtype)
    inset.yd = inset.yd.type(dtype)
    #-----------------------------
    bdset.x_in = bdset.x_in.type(dtype)
    bdset.rig_in = bdset.rig_in.type(dtype)
    bdset.x_out = bdset.x_out.type(dtype)
    bdset.x_w = bdset.x_w.type(dtype)
    bdset.x_oc = bdset.x_oc.type(dtype)
    bdset.dir_in = bdset.dir_in.type(dtype)
    bdset.dir_out = bdset.dir_out.type(dtype)
    bdset.dir_w = bdset.dir_w.type(dtype)
    bdset.dir_oc = bdset.dir_oc.type(dtype)
def loadcuda(netu, netv, netp, inset, bdset):
    netu = netu.to(device)
    netv = netv.to(device)
    netp = netp.to(device)
    if inset.X.requires_grad == False:
        inset.X.requires_grad = True
    inset.X = inset.X.to(device)
    inset.yd = inset.yd.to(device)
    bdset.rig_in = bdset.rig_in.to(device)
    bdset.x_in = bdset.x_in.to(device)
    if bdset.x_out.requires_grad == False:
        bdset.x_out.requires_grad = True
    bdset.x_out = bdset.x_out.to(device)
    bdset.x_w = bdset.x_w.to(device)
    if bdset.x_oc.requires_grad == False:
        bdset.x_oc.requires_grad = True
    bdset.x_oc = bdset.x_oc.to(device)
    bdset.dir_in = bdset.dir_in.to(device)
    bdset.dir_out = bdset.dir_out.to(device)
    bdset.dir_w = bdset.dir_w.to(device)
    bdset.dir_oc = bdset.dir_oc.to(device)
def loadcpu(netu, netv, netp, inset, bdset):
    netu = netu.to('cpu')
    netv = netv.to('cpu')
    netp = netp.to('cpu')
    if inset.X.requires_grad == False:
        inset.X.requires_grad = True
    inset.X = inset.X.to('cpu')
    inset.yd = inset.yd.to('cpu')
    bdset.rig_in = bdset.rig_in.to('cpu')
    bdset.x_in = bdset.x_in.to('cpu')
    bdset.x_out = bdset.x_out.to('cpu')
    bdset.x_w = bdset.x_w.to('cpu')
    bdset.x_oc = bdset.x_oc.to('cpu')
    bdset.dir_in = bdset.dir_in.to('cpu')
    bdset.dir_out = bdset.dir_out.to('cpu')
    bdset.dir_w = bdset.dir_w.to('cpu')
    bdset.dir_oc = bdset.dir_oc.to('cpu')
def Loss_yp(netu, netv, netp, inset, bdset, penalty_in, penalty_bd):
    inset.u = pred_u(netu, inset.X)
    inset.v = pred_v(netv, inset.X)
    inset.p = pred_p(netp, inset.X)
    u_x, = torch.autograd.grad(inset.u, inset.X, create_graph=True, retain_graph=True,
                               grad_outputs=torch.ones(inset.size, 1).to(device))
    u_xx, = torch.autograd.grad(u_x[:,0:1], inset.X, create_graph=True, retain_graph=True,
                                grad_outputs=torch.ones(inset.size, 1).to(device))
    u_yy, = torch.autograd.grad(u_x[:,1:2], inset.X, create_graph=True, retain_graph=True,
                                grad_outputs=torch.ones(inset.size, 1).to(device))
    u_lap = u_xx[:,0:1] + u_yy[:,1:2]
    v_x, = torch.autograd.grad(inset.v, inset.X, create_graph=True, retain_graph=True,
                               grad_outputs=torch.ones(inset.size, 1).to(device))
    v_xx, = torch.autograd.grad(v_x[:,0:1], inset.X, create_graph=True, retain_graph=True,
                                grad_outputs=torch.ones(inset.size, 1).to(device))
    v_yy, = torch.autograd.grad(v_x[:,1:2], inset.X, create_graph=True, retain_graph=True,
                                grad_outputs=torch.ones(inset.size, 1).to(device))
    v_lap = v_xx[:,0:1] + v_yy[:,1:2]
    p_x, = torch.autograd.grad(inset.p, inset.X, create_graph=True, retain_graph=True,
                               grad_outputs=torch.ones(inset.size, 1).to(device))
    # momentum residuals and the divergence-free constraint
    inset.res_u = (-nu*u_lap + inset.u*u_x[:,0:1] + inset.v*u_x[:,1:2] + p_x[:,0:1])**2
    inset.res_v = (-nu*v_lap + inset.u*v_x[:,0:1] + inset.v*v_x[:,1:2] + p_x[:,1:2])**2
    inset.res_div = (u_x[:,0:1] + v_x[:,1:2])**2
    inset.loss_u = torch.sqrt(inset.res_u.mean())
    inset.loss_v = torch.sqrt(inset.res_v.mean())
    inset.loss_div = torch.sqrt(inset.res_div.mean())
    inset.loss_in = penalty_in[0]*inset.loss_u + penalty_in[1]*inset.loss_v + \
                    penalty_in[2]*inset.loss_div
    # zero normal derivative of u and v at the outlet; only the spatial
    # components [:,0:2] of the gradient are used, since inputs are now 3-D
    u_out = pred_u(netu, bdset.x_out)
    u_out_x, = torch.autograd.grad(u_out, bdset.x_out, create_graph=True, retain_graph=True,
                                   grad_outputs=torch.ones(bdset.x_out.shape[0], 1).to(device))
    u_out_n = ((u_out_x[:,0:2]*bdset.dir_out).sum(1)).reshape(-1, 1)
    bdset.loss1 = torch.sqrt(u_out_n**2).mean()
    v_out = pred_v(netv, bdset.x_out)
    v_out_x, = torch.autograd.grad(v_out, bdset.x_out, create_graph=True, retain_graph=True,
                                   grad_outputs=torch.ones(bdset.x_out.shape[0], 1).to(device))
    v_out_n = ((v_out_x[:,0:2]*bdset.dir_out).sum(1)).reshape(-1, 1)
    bdset.loss2 = torch.sqrt(v_out_n**2).mean()
    # no-slip on the walls and the obstacle
    u_w = pred_u(netu, bdset.x_w)
    bdset.loss3 = torch.sqrt(u_w**2).mean()
    v_w = pred_v(netv, bdset.x_w)
    bdset.loss4 = torch.sqrt(v_w**2).mean()
    # zero normal derivative of p on the inlet and the walls
    p_oc = pred_p(netp, bdset.x_oc)
    p_oc_x, = torch.autograd.grad(p_oc, bdset.x_oc, create_graph=True, retain_graph=True,
                                  grad_outputs=torch.ones(bdset.x_oc.shape[0], 1).to(device))
    p_oc_n = ((p_oc_x[:,0:2]*bdset.dir_oc).sum(1)).reshape(-1, 1)
    bdset.loss5 = torch.sqrt(p_oc_n**2).mean()
    inset.loss_bd = penalty_bd[0]*bdset.loss1 + penalty_bd[1]*bdset.loss2 + penalty_bd[2]*bdset.loss3 + \
                    penalty_bd[3]*bdset.loss4 + penalty_bd[4]*bdset.loss5
    return inset.loss_in + inset.loss_bd
def train_yp(netu, netv, netp, inset, bdset, penalty_in, penalty_bd, optim, epoch, error_history, optimtype):
    print('Train y&p Neural Network')
    lossin_history, div_history, lossbd_history, lo_history = error_history
    loss = Loss_yp(netu, netv, netp, inset, bdset, penalty_in, penalty_bd)
    print('epoch_yp: %d, Lu:%.3e,Lv:%.3e,div:%.3e, time: %.2f\n'
          %(0, inset.loss_u.item(), inset.loss_v.item(), inset.loss_div.item(), 0.00))
    print('bd1:%.3e,bd2:%.3e,bd3:%.3e,bd4:%.4e,bd5:%.3e\n'
          %(bdset.loss1.item(), bdset.loss2.item(), bdset.loss3.item(), bdset.loss4.item(), bdset.loss5.item()))
    for it in range(epoch):
        st = time.time()
        if optimtype == 'Adam':
            for j in range(100):
                optim.zero_grad()
                loss = Loss_yp(netu, netv, netp, inset, bdset, penalty_in, penalty_bd)
                loss.backward()
                optim.step()
        if optimtype == 'BFGS' or optimtype == 'LBFGS':
            def closure():
                loss = Loss_yp(netu, netv, netp, inset, bdset, penalty_in, penalty_bd)
                optim.zero_grad()
                loss.backward()
                return loss
            optim.step(closure)
        loss = Loss_yp(netu, netv, netp, inset, bdset, penalty_in, penalty_bd)
        ela = time.time() - st
        print('------------------------------')
        print('epoch_yp: %d, Loss:%.3e,Lu:%.3e,Lv:%.3e,div:%.3e, time: %.2f\n'
              %(it + 1, loss.item(), inset.loss_u.item(), inset.loss_v.item(), inset.loss_div.item(), ela))
        print('bd1:%.3e,bd2:%.3e,bd3:%.3e,bd4:%.4e,bd5:%.3e\n'
              %(bdset.loss1.item(), bdset.loss2.item(), bdset.loss3.item(), bdset.loss4.item(), bdset.loss5.item()))
        loss_u = inset.loss_u.item()
        loss_v = inset.loss_v.item()
        loss_div = inset.loss_div.item()
        bd1 = bdset.loss1.item()
        bd2 = bdset.loss2.item()
        bd3 = bdset.loss3.item()
        bd4 = bdset.loss4.item()
        bd5 = bdset.loss5.item()
        lossin_history.append([loss.item(), loss_u, loss_v, loss_div])
        div_history.append(loss_div)
        lossbd_history.append([bd1, bd2, bd3, bd4, bd5])
        lo_history.append(loss.item())
    error_history = [lossin_history, div_history, lossbd_history, lo_history]
    return error_history

n_tr = 64
nx_bd = [100,100]
mode = 'uniform'
#mode = 'qmc'
inset = INSET(n_tr, mode)
bdset = BDSET(nx_bd)
dtype = torch.float64
hid = 5
fid = 20
wid = hid + 1
layer_u = [2, fid, hid, wid, 1]
netu = Net(layer_u, dtype)
layer_v = [2, fid, hid, wid, 1]
netv = Net(layer_v, dtype)
layer_p = [2, fid, hid, wid, 1]
netp = Net(layer_p, dtype)
fname1 = "utest-NSlay%dvar.pt"%(wid)
fname2 = "vtest-NSlay%dvar.pt"%(wid)
fname3 = "ptest-NSlay%dvar.pt"%(wid)#netu = torch.load(fname1)
#netv = torch.load(fname2)
#netp = torch.load(fname3)
loadtype(inset, bdset, dtype)
loadcuda(netu, netv, netp, inset, bdset)
loss_history = [[],[],[],[]]
epoch = 10
start_time = time.time()
penalty_in = [1e0,1e0,1e0]
penalty_bd = [1e0,1e0,1e0,1e0,1e0]
lr = 1e0
max_iter = 1
#optimtype = 'Adam'
optimtype = 'BFGS'
#optimtype = 'LBFGS'
number = 0
for i in range(max_iter):
    if optimtype == 'BFGS':
        optim = bfgs.BFGS(itertools.chain(netu.parameters(), netv.parameters(), netp.parameters()),
                          lr=lr, max_iter=100,
                          tolerance_grad=1e-16, tolerance_change=1e-16,
                          line_search_fn='strong_wolfe')
    if optimtype == 'LBFGS':
        optim = torch.optim.LBFGS(itertools.chain(netu.parameters(), netv.parameters(), netp.parameters()),
                                  lr=lr, max_iter=100, history_size=100,
                                  line_search_fn='strong_wolfe')
    if optimtype == 'Adam':
        optim = torch.optim.Adam(itertools.chain(netu.parameters(), netv.parameters(), netp.parameters()), lr=lr)
    loss_history = \
    train_yp(netu, netv, netp, inset, bdset, penalty_in, penalty_bd, optim, epoch, loss_history, optimtype)
    if (i + 1) % 5 == 0:
        number += 1
        lr *= 0.985
elapsed = time.time() - start_time
print('Finished! train time: %.2f\n' %(elapsed))
loadcpu(netu,netv,netp,inset,bdset)
torch.cuda.empty_cache()
torch.save(netu, fname1)
torch.save(netv, fname2)
torch.save(netp, fname3)
lossin_history, div_history, lossbd_history, lo_history = loss_history
np.save('lay%d-epoch%d-lossin.npy'%(wid,epoch),lossin_history)
np.save('lay%d-epoch%d-lossbd.npy'%(wid,epoch),lossbd_history)
np.save('lay%d-epoch%d-lossdiv.npy'%(wid,epoch),div_history)
np.save('lay%d-epoch%d-losslo.npy'%(wid,epoch),lo_history)
#%%
fig, ax = plt.subplots(1, 2, figsize=(12, 3.5))
ax[0].semilogy(np.array(lossin_history))
ax[0].legend(['loss','loss_u', 'loss_v', 'loss_div'])
ax[0].set_xlabel('iters')
ax[1].plot(np.array(lossbd_history))
ax[1].legend(['bd1','bd2','bd3','bd4','bd5'])
ax[1].set_xlabel('iters')
plt.yscale('log')
fig.tight_layout()
plt.show()
plt.scatter(inset.X.detach().numpy()[:,0], inset.X.detach().numpy()[:,1])
plt.show()
#%%
nx_te = [40,40]
te_bd_set = BDSET(nx_te)
hx = [(bounds[0,1] - bounds[0,0])/nx_te[0],(bounds[1,1] - bounds[1,0])/nx_te[1]]
fig, ax = plt.subplots(1,3, figsize=(13,4))
x1s = np.linspace(bounds[1,0] + 0.5*hx[1],bounds[1,1] - hx[1],nx_te[1])
te_bd_set.x_in = te_bd_set.x_in.type(dtype)
te_bd_set.x_out = te_bd_set.x_out.type(dtype)
u_in = pred_u(netu,te_bd_set.x_in).detach().numpy()
v_in = pred_v(netv, te_bd_set.x_in).detach().numpy()
u_out = pred_u(netu, te_bd_set.x_out).detach().numpy()
v_out = pred_v(netv, te_bd_set.x_out).detach().numpy()
u_o_p = x1s*(1 - x1s)*4/(Ly**2)
ax[1].plot(x1s, u_out); ax[1].plot(x1s, u_in)
ax[1].legend(['outlet','inlet'])
ax[1].set_xlabel('y'); ax[1].set_ylabel('u')
ax[1].set_title('PINN: u')
ax[2].plot(x1s, v_out); ax[2].plot(x1s, v_in)
ax[2].legend(['outlet','inlet'])
ax[2].set_xlabel('y'); ax[2].set_ylabel('v')
ax[2].set_title('PINN: v')
ax[0].semilogy(np.array(lo_history))
ax[0].legend(['pde_loss'])
ax[0].set_xlabel('iters')
plt.show()
#%%
n=n_tr
xx0 = np.linspace(bounds[0,0],bounds[0,1],2*n)
xx1 = np.linspace(bounds[1,0],bounds[1,1],n)
xx0, xx1 = np.meshgrid(xx0,xx1)
ind = (xx0 > 1 - d) & (xx0 < 1 + d) & (xx1 > 0.5 - d) & (xx1 < 0.5 + d)
inp_s = torch.from_numpy(np.hstack([xx0.reshape(-1,1), xx1.reshape(-1,1)]))
z = torch.zeros(inp_s.shape[0],1)
eps = 1e-7
for i in range(inp_s.shape[0]):
    if abs(inp_s[i,0] - bounds[0,0]) < eps:    # x_in (inlet)
        z[i,0] = 1
    elif abs(inp_s[i,0] - bounds[0,1]) < eps:  # x_out (outlet)
        z[i,0] = 2
    elif abs(inp_s[i,1] - bounds[1,0]) < eps:  # lower x_w
        z[i,0] = 3
    elif abs(inp_s[i,1] - bounds[1,1]) < eps:  # upper x_w
        z[i,0] = 4
    elif abs(inp_s[i,0] - (1 - d)) < eps and (inp_s[i,1] >= 0.5 - d and inp_s[i,1] <= 0.5 + d):
        z[i,0] = 5
    elif abs(inp_s[i,0] - (1 + d)) < eps and (inp_s[i,1] >= 0.5 - d and inp_s[i,1] <= 0.5 + d):
        z[i,0] = 5
    elif abs(inp_s[i,1] - (0.5 - d)) < eps and (inp_s[i,0] >= 1 - d and inp_s[i,0] <= 1 + d):
        z[i,0] = 5
    elif abs(inp_s[i,1] - (0.5 + d)) < eps and (inp_s[i,0] >= 1 - d and inp_s[i,0] <= 1 + d):
        z[i,0] = 5
inp_s = torch.cat([inp_s,z],1)
u_pred = torch.zeros(inp_s.shape[0],1)
v_pred = torch.zeros(inp_s.shape[0],1)
p_pred = torch.zeros(inp_s.shape[0], 1)
u_pred = pred_u(netu, inp_s)
v_pred = pred_v(netv,inp_s)
p_pred = pred_p(netp, inp_s)
u = u_pred.detach().numpy()
v = v_pred.detach().numpy()
p = p_pred.detach().numpy()
y1_s = u.reshape(n, 2*n); y1 = u.reshape(n, 2*n)
y2_s = v.reshape(n, 2*n); y2 = v.reshape(n, 2*n)
p = p.reshape(n, 2*n)
y1_s[ind] = np.nan  # for streamplot mask!
y2_s[ind] = np.nan
n_ind = np.invert(ind)
y1 = y1[n_ind].flatten()
y2 = y2[n_ind].flatten()
p = p[n_ind].flatten()
speed_s = np.sqrt(y1_s**2 + y2_s**2)
x0 = xx0[n_ind].flatten()
x1 = xx1[n_ind].flatten()
num_line = 100
fig, ax = plt.subplots(2, 2, figsize=(12, 6))
for i in range(2):
    for j in range(2):
        ax[i,j].axis('equal')
        ax[i,j].set_xlim([0., 2.0])
        ax[i,j].set_ylim([0., 1.])
        ax[i,j].axis('off')
        ax[i,j].add_artist(plt.Rectangle((1 - d, 0.5 - d), 2*d, 2*d, fill=True, color='white'))
ax00 = ax[0,0].tricontourf(x0, x1, y1, num_line, cmap='rainbow')
fig.colorbar(ax00,ax=ax[0,0],fraction = 0.024,pad = 0.04)
#ax[0,0].contour(ax00, linewidths=0.6, colors='black')#, cmap='jet', color=speed_s, linewidth=0.5)
ax01 = ax[0,1].tricontourf(x0, x1, y2, num_line, cmap='rainbow')
fig.colorbar(ax01,ax=ax[0,1],fraction = 0.024,pad = 0.04)
ax[0,0].set_title('PINN: u')
ax[0,1].set_title('PINN: v')
ax10 = ax[1,0].tricontourf(x0, x1, (y1**2 + y2**2)**0.5, num_line, cmap='rainbow')
ax[1,0].streamplot(xx0, xx1, y1_s, y2_s, density = 1.5)
fig.colorbar(ax10,ax=ax[1,0],fraction = 0.024,pad = 0.04)
ax[1,0].set_title('PINN: stream')
ax11 = ax[1,1].tricontourf(x0, x1, p, num_line, cmap='rainbow')
#ax[1,0].contour(ax10, linewidths=0.6, colors='black')
#ax11 = ax[1,1].tricontourf(x0, x1, p, num_line, cmap='rainbow')
ax[1,1].set_title('PINN: p')
#ax[1,1].set_title('PINN: p')
fig.colorbar(ax11,ax=ax[1,1],fraction = 0.024,pad = 0.04)
#fig.colorbar(ax11,ax=ax[1,1])
plt.show()
