Physics-informed neural networks for inverse problems in nano-optics and metamaterials
Paper information
Title: Physics-informed neural networks for inverse problems in nano-optics and metamaterials
Authors: Yuyao Chen, Lu Lu, George Em Karniadakis, and Luca Dal Negro
Journal/Conference: Computational Physics
Year: 2019
Paper link:
Code:
Content
Motivation:
- Under strong multiple-scattering conditions, the inverse problem for physics-based differential models of light scattering in complex multi-particle geometries becomes inherently ill-posed, and traditional methods cannot meet the prediction needs
- Physics-informed neural networks (PINNs) are a general framework developed in recent years for solving forward and inverse problems of partial differential equations
- A PINN needs only a single training dataset to obtain the desired solution, relieving the burden of the large training datasets required by alternative approaches
Problem definition:
Inverse problem:
$$f\left(\mathbf{x};\ \frac{\partial \hat{u}}{\partial x_{1}},\ldots,\frac{\partial \hat{u}}{\partial x_{d}};\ \frac{\partial^{2} \hat{u}}{\partial x_{1}\partial x_{1}},\ldots,\frac{\partial^{2} \hat{u}}{\partial x_{1}\partial x_{d}};\ \ldots;\ \lambda\right)=0,\quad \mathbf{x}\in\Omega$$
where $\lambda$ is unknown. The loss is defined below: $\mathcal{L}_{f}$ is the PDE residual loss, $\mathcal{L}_{i}$ is the loss at the initial (data) points, and $\mathcal{L}_{b}$ is the loss at the boundary points.
$$\mathcal{L}(\boldsymbol{\theta},\lambda)=w_{f}\,\mathcal{L}_{f}\left(\boldsymbol{\theta},\lambda;\mathcal{T}_{f}\right)+w_{i}\,\mathcal{L}_{i}\left(\boldsymbol{\theta},\lambda;\mathcal{T}_{i}\right)+w_{b}\,\mathcal{L}_{b}\left(\boldsymbol{\theta},\lambda;\mathcal{T}_{b}\right)$$
where
$$\begin{aligned}
\mathcal{L}_{f}\left(\boldsymbol{\theta},\lambda;\mathcal{T}_{f}\right) &= \frac{1}{\left|\mathcal{T}_{f}\right|}\sum_{\mathbf{x}\in\mathcal{T}_{f}}\left\| f\left(\mathbf{x};\ \frac{\partial \hat{u}}{\partial x_{1}},\ldots,\frac{\partial \hat{u}}{\partial x_{d}};\ \frac{\partial^{2} \hat{u}}{\partial x_{1}\partial x_{1}},\ldots,\frac{\partial^{2} \hat{u}}{\partial x_{1}\partial x_{d}};\ \ldots;\ \lambda\right)\right\|_{2}^{2} \\
\mathcal{L}_{i}\left(\boldsymbol{\theta},\lambda;\mathcal{T}_{i}\right) &= \frac{1}{\left|\mathcal{T}_{i}\right|}\sum_{\mathbf{x}\in\mathcal{T}_{i}}\left\|\hat{u}(\mathbf{x})-u(\mathbf{x})\right\|_{2}^{2} \\
\mathcal{L}_{b}\left(\boldsymbol{\theta},\lambda;\mathcal{T}_{b}\right) &= \frac{1}{\left|\mathcal{T}_{b}\right|}\sum_{\mathbf{x}\in\mathcal{T}_{b}}\left\|\mathcal{B}(\hat{u},\mathbf{x})\right\|_{2}^{2}
\end{aligned}$$
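The structure of these loss terms can be illustrated with a deliberately tiny 1D analogue. This is only a sketch, not the paper's implementation: the toy ODE $du/dx=\lambda u$, the single-parameter exponential surrogate $\hat{u}(x)=e^{ax}$ (so its derivative is analytic and no autodiff library is needed), and the coarse grid search standing in for Adam/L-BFGS are all assumptions made so the example is self-contained.

```python
import numpy as np

# Toy inverse problem: du/dx = lam * u on [0, 1], data generated with lam_true = 2.
# The surrogate u_hat(x) = exp(a * x) has one parameter a, playing the role of theta.

def pinn_loss(a, lam, x_f, x_i, u_i, w_f=1.0, w_i=1.0):
    u_hat = np.exp(a * x_f)
    du_hat = a * u_hat                            # d/dx exp(a x) = a exp(a x)
    L_f = np.mean((du_hat - lam * u_hat) ** 2)    # PDE residual loss on collocation points
    L_i = np.mean((np.exp(a * x_i) - u_i) ** 2)   # data-fit loss at observation points
    return w_f * L_f + w_i * L_i

x_f = np.linspace(0.0, 1.0, 50)      # collocation points T_f for the residual
x_i = np.array([0.0, 0.5, 1.0])      # observation points T_i
u_i = np.exp(2.0 * x_i)              # synthetic data with lam_true = 2

# Joint minimization over (a, lam) by coarse grid search (a stand-in for a real optimizer)
grid = np.linspace(0.0, 4.0, 201)
best = min((pinn_loss(a, lam, x_f, x_i, u_i), a, lam)
           for a in grid for lam in grid)
print(best[1], best[2])  # both recovered values are ~2.0
```

The key point the sketch mirrors is the joint minimization: the surrogate parameters ($a$, i.e. $\boldsymbol{\theta}$) and the unknown PDE coefficient ($\lambda$) are fitted simultaneously against the same composite loss.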
Following the PINN approach, the network is constructed as follows: build a surrogate model $\hat{u}$ for the solution of the differential equation, evaluate the loss on $\hat{u}$, and finally minimize the loss to obtain the parameters $\boldsymbol{\theta}$ and $\lambda$.
Specific application to the problem of a homogeneous, finite-size metamaterial:
$$\nabla^{2}E_{z}(x,y)+\varepsilon_{r}(x,y)\,k_{0}^{2}E_{z}=0$$
where $E_{z}$ is the z-component of the electric field, $\varepsilon_{r}(x,y)$ is the spatially dependent relative permittivity, and $k_{0}=2\pi/\lambda_{0}$ is the free-space wavenumber.
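In the inverse setting, the Helmholtz residual $\nabla^{2}E_{z}+\varepsilon_{r}k_{0}^{2}E_{z}$ is the quantity the PINN drives to zero while treating $\varepsilon_{r}$ as an unknown. A minimal numerical check of this residual is sketched below; the 1D plane-wave slice, the finite-difference Laplacian, and the chosen values of $\lambda_{0}$ and $\varepsilon_{r}$ are all assumptions made for illustration (a PINN would instead differentiate a network surrogate via autodiff).

```python
import numpy as np

lam0 = 1.0                      # assumed free-space wavelength (arbitrary units)
k0 = 2 * np.pi / lam0
eps_true = 2.25                 # assumed glass-like relative permittivity

n = 201
x = np.linspace(0.0, 2.0, n)
h = x[1] - x[0]
# 1D slice: E_z(x) = cos(sqrt(eps_r) * k0 * x) solves E'' + eps_r * k0^2 * E = 0
Ez = np.cos(np.sqrt(eps_true) * k0 * x)

def residual(eps_r):
    # Central second difference approximates the Laplacian on interior points
    lap = (Ez[2:] - 2 * Ez[1:-1] + Ez[:-2]) / h**2
    return np.mean((lap + eps_r * k0**2 * Ez[1:-1]) ** 2)

# The residual is (near) zero only at the true permittivity
print(residual(eps_true) < residual(1.0))  # True
```

Sweeping `residual(eps_r)` over candidate values and taking the minimizer is the finite-difference analogue of how the PINN's residual loss identifies the unknown $\varepsilon_{r}$ from field data.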
Innovations:
Conclusions