1. Importance Sampling

Importance sampling is a variance reduction technique that exploits the fact that the
Monte Carlo estimator

    F_N = (1/N) Σ_{i=1..N} f(X_i) / p(X_i)

converges more quickly if the samples are taken from a distribution p(x) that is similar
to the function f(x) in the integrand. The basic idea is that by concentrating work where
the value of the integrand is relatively high, an accurate estimate is computed more
efficiently.

So long as the random variables are sampled from a probability distribution that is
similar in shape to the integrand, variance is reduced.

In practice, importance sampling is one of the most frequently used variance reduction
techniques in rendering, since it is easy to apply and is very effective when good sampling
distributions are used. It is one of the variance reduction techniques of choice in pbrt,
and therefore a variety of techniques for sampling from distributions defined by BSDFs, light
sources, and functions related to participating media will be derived in this chapter.

(Importance sampling reduces variance: when the sampling distribution p(x) is similar in shape to f(x), the variance of the estimator drops.)
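To make the idea concrete, here is a minimal sketch (not from pbrt; all function names are invented for this example) that estimates ∫₀^π sin x dx = 2 both with uniform samples and with samples drawn from p(x) = sin(x)/2 by inverting its CDF. In the importance-sampled case f(X)/p(X) = 2 for every sample, so the estimator's variance is zero.

```cpp
#include <cassert>
#include <cmath>
#include <random>

const double Pi = 3.14159265358979323846;

// Uniform sampling: each sample contributes f(X)/p(X) with p(x) = 1/pi.
double EstimateUniform(int n, std::mt19937 &rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double x = u(rng) * Pi;       // X ~ Uniform[0, pi]
        sum += std::sin(x) * Pi;      // f(X) / p(X)
    }
    return sum / n;
}

// Importance sampling: draw X from p(x) = sin(x)/2 by inverting its
// CDF (1 - cos x)/2, i.e. x = acos(1 - 2u). Every sample contributes
// f(X)/p(X) = 2, so this estimator has zero variance.
double EstimateImportance(int n, std::mt19937 &rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double x = std::acos(1.0 - 2.0 * u(rng));
        double pdf = std::sin(x) / 2.0;
        if (pdf > 0)                  // guard the measure-zero case x = 0
            sum += std::sin(x) / pdf; // always exactly 2
    }
    return sum / n;
}
```

With a modest n, EstimateImportance returns exactly 2, while EstimateUniform needs far more samples to get close; this is the sense in which matching p(x) to f(x) reduces variance.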

2. Multiple importance sampling

Monte Carlo provides tools to estimate integrals of the form
∫ f (x) dx. However, we are frequently faced with integrals that are the product of two or more functions:
∫ f (x)g(x) dx. If we have an importance sampling strategy for f (x) and a strategy for
g(x), which should we use? (Assume that we are not able to combine the two sampling
strategies to compute a PDF that is proportional to the product f (x)g(x) that can itself
be sampled easily.) As shown in the discussion of importance sampling, a bad choice of
sampling distribution can be much worse than just using a uniform distribution.

For example, consider the problem of evaluating direct lighting integrals of the form

    L_o(p, ω_o) = ∫_{S²} f_r(p, ω_o, ω_i) L_d(p, ω_i) |cos θ_i| dω_i

If we were to perform importance sampling to estimate this integral according to distributions
based on either L_d or f_r, one of these two will often perform poorly.

Unfortunately, the obvious solution of taking some samples from each distribution and
averaging the two estimators is hardly any better. Because the variance is additive in this
case, this approach doesn’t help—once variance has crept into an estimator, we can’t
eliminate it by adding it to another estimator even if it itself has low variance.

(For an integral of the form ∫ f(x)g(x) dx, if we apply the estimator from 《PBRT_V2 总结记录 <77> Monte Carlo Integration》, the pdf should match the product f(x)g(x); using a pdf derived from f(x) alone or from g(x) alone gives an inaccurate, high-variance result, and even drawing samples from the pdfs of f(x) and g(x) separately and averaging the two estimators does not approximate ∫ f(x)g(x) dx any better.)

Multiple importance sampling (MIS) addresses exactly these kinds of problems, with
a simple and easy-to-implement technique. The basic idea is that, when estimating an
integral, we should draw samples from multiple sampling distributions, chosen in the
hope that at least one of them will match the shape of the integrand reasonably well, even
if we don’t know which one this will be. MIS provides a method to weight the samples
from each technique that can eliminate large variance spikes due to mismatches between
the integrand’s value and the sampling density. Specialized sampling routines that only
account for unusual special cases are even encouraged, as they reduce variance when
those cases occur, with relatively little cost in general.

If two sampling distributions pf and pg are used to estimate the value of ∫ f(x)g(x) dx,
the new Monte Carlo estimator given by MIS is

    (1/nf) Σ_{i=1..nf} f(X_i) g(X_i) wf(X_i) / pf(X_i)  +  (1/ng) Σ_{j=1..ng} f(Y_j) g(Y_j) wg(Y_j) / pg(Y_j)

where nf is the number of samples taken from the pf distribution, ng is the
number of samples taken from pg, and wf and wg are special weighting functions chosen
such that the expected value of this estimator is the value of the integral of f(x)g(x).

The weighting functions take into account all of the different ways that a sample X_i or
Y_j could have been generated, rather than just the particular one that was actually used.
A good choice for this weighting function is the balance heuristic:

    ws(x) = ns ps(x) / Σ_i ni pi(x)

The balance heuristic is a provably good way to weight samples to reduce variance.

(MIS thus solves the problem above of estimating integrals of the form ∫ f(x)g(x) dx.)

Here we provide an implementation of the balance heuristic for the specific case of two
distributions pf and pg. We will not need a more general multidistribution case in pbrt.

inline float BalanceHeuristic(int nf, float fPdf, int ng, float gPdf) {
    return (nf * fPdf) / (nf * fPdf + ng * gPdf);
}
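As a usage sketch (illustrative, not pbrt code): the MIS estimator with balance-heuristic weights, applied to ∫₀¹ f(x)g(x) dx with f(x) = x² and g(x) = (1 − x)², whose exact value is 1/30. One strategy samples pf(x) = 3x², the other pg(x) = 3(1 − x)²; both pdfs are inverted analytically. Everything except BalanceHeuristic is invented for this example.

```cpp
#include <cassert>
#include <cmath>
#include <random>

inline float BalanceHeuristic(int nf, float fPdf, int ng, float gPdf) {
    return (nf * fPdf) / (nf * fPdf + ng * gPdf);
}

// Integrand factors and the pdf proportional to each on [0, 1].
double f(double x)  { return x * x; }
double g(double x)  { return (1 - x) * (1 - x); }
double pf(double x) { return 3 * x * x; }
double pg(double x) { return 3 * (1 - x) * (1 - x); }

// MIS estimate of the product integral: nf samples from pf plus
// ng samples from pg, each weighted by the balance heuristic.
double EstimateMIS(int nf, int ng, std::mt19937 &rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < nf; ++i) {
        double x = std::cbrt(u(rng));         // invert pf's CDF, x^3
        double w = BalanceHeuristic(nf, pf(x), ng, pg(x));
        if (pf(x) > 0) sum += w * f(x) * g(x) / (nf * pf(x));
    }
    for (int j = 0; j < ng; ++j) {
        double y = 1.0 - std::cbrt(u(rng));   // invert pg's CDF by symmetry
        double w = BalanceHeuristic(ng, pg(y), nf, pf(y));
        if (pg(y) > 0) sum += w * f(y) * g(y) / (ng * pg(y));
    }
    return sum;  // expected value: 1/30 ≈ 0.0333
}
```

Neither pf nor pg alone matches the product well near the opposite endpoint of [0, 1], but the weighted combination stays low-variance everywhere.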

In practice, the power heuristic often reduces variance even further. For an exponent β,
the power heuristic is

    ws(x) = (ns ps(x))^β / Σ_i (ni pi(x))^β

Veach determined empirically that β = 2 is a good value. We have β = 2 hard-coded into
the implementation here.

inline float PowerHeuristic(int nf, float fPdf, int ng, float gPdf) {
    float f = nf * fPdf, g = ng * gPdf;
    return (f * f) / (f * f + g * g);
}
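A quick numeric check (illustrative; the pdf values are invented) shows how the two heuristics differ when one strategy's pdf dominates at a sample: the power heuristic pushes even more weight toward the better-matched strategy, which is what damps variance spikes from the mismatched one.

```cpp
#include <cassert>
#include <cmath>

inline float BalanceHeuristic(int nf, float fPdf, int ng, float gPdf) {
    return (nf * fPdf) / (nf * fPdf + ng * gPdf);
}

inline float PowerHeuristic(int nf, float fPdf, int ng, float gPdf) {
    float f = nf * fPdf, g = ng * gPdf;
    return (f * f) / (f * f + g * g);
}

// One sample from each strategy (nf = ng = 1) at a point where
// pf = 0.9 dominates pg = 0.1. The balance heuristic weights the
// pf sample by 0.9; the power heuristic (beta = 2) weights it by
// 0.81 / 0.82, shifting even more weight to the dominant strategy.
float wBalance = BalanceHeuristic(1, 0.9f, 1, 0.1f);
float wPower   = PowerHeuristic(1, 0.9f, 1, 0.1f);
```

Note that for either heuristic the weights of all strategies at a given sample sum to 1, which is what keeps the combined estimator unbiased.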
