http://www.codinglabs.net/article_physically_based_rendering.aspx

Radiance — represented with L
Irradiance — incoming light, represented with I

The pursuit of realism is pushing rendering technology towards a detailed simulation of how light works and interacts with objects. Physically based rendering is a catch-all term for any technique that tries to achieve photorealism via a physical simulation of light.

Currently the best model we have to simulate light is captured by an equation known as the rendering equation. The rendering equation describes how a "unit" of light is obtained given all the incoming light that interacts with a specific point of a given scene. We will see the details and introduce the correct terminology in a moment. It is important to note that we will not try to solve the full rendering equation; instead we will use the following simplified version:

To understand this equation we first need to understand how light works, and then we will need to agree on some common terms. To give you a rough idea of what the formula means: in simple terms, we could say that it describes the colour of a pixel given all the incoming "coloured light" and a function that tells us how to mix it.

Physics terms
If we want to properly understand the rendering equation we need to capture the meaning of some physical quantities; the most important of these is called radiance (represented with L in the formula).

Radiance is a tricky thing to understand, as it is a combination of other physical quantities; therefore, before formally defining it, we will introduce a few other quantities.

Radiant flux: the radiant flux is the measure of the total amount of energy emitted by a light source, expressed in Watts. We will represent the flux with the Greek letter Φ.

Any light source emits energy, and the amount of emitted energy is a function of the wavelength.

Figure 1: Daylight spectral distribution

In figure 1 we can see the spectral distribution for daylight; the radiant flux is the area under the curve (to be exact, that area is the luminous flux, as the graph limits the wavelength to the human visible spectrum). For our purposes we will simplify the radiant flux to an RGB colour, even if this means losing a lot of information.

Solid angle: it's a way to measure how large an object appears to an observer looking from a point. To do this we project the silhouette of the object onto the surface of a unit sphere centred at the point we are observing from. The area of the shape we obtain is the solid angle. In Figure 2 you can see the solid angle ω as the projection of the light blue polygon onto the unit sphere.


Figure 2: Solid angle
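To make the definition concrete, here is a small numeric sketch (my own example, not from the article): the solid angle subtended by a cone of half-angle θ has the closed form 2π(1 − cos θ), so the full sphere measures 4π steradians and a hemisphere 2π.

```python
import math

def cap_solid_angle(theta):
    """Solid angle (in steradians) of a cone of half-angle theta,
    i.e. the area its spherical cap covers on the unit sphere."""
    return 2 * math.pi * (1 - math.cos(theta))

print(cap_solid_angle(math.pi))      # full sphere: 4*pi ≈ 12.566
print(cap_solid_angle(math.pi / 2))  # hemisphere:  2*pi ≈ 6.283
```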

Radiant intensity: the amount of flux per solid angle. If you have a light source that emits in all directions, how much of that light (flux) is actually going towards a specific direction? Intensity is the way to answer that: it's the amount of flux going in one direction, passing through a defined solid angle. The formula that describes it is I = dΦ/dω, where Φ is the radiant flux and ω is the solid angle.


Figure 3: Light intensity
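As a small illustration of I = dΦ/dω (the 100 W source and the 30° cone are made-up numbers for the example): an isotropic emitter spreads its flux evenly over the full sphere, so its intensity is simply Φ divided by 4π steradians.

```python
import math

def isotropic_intensity(flux_watts):
    """Radiant intensity I = dPhi/domega of a source that spreads its
    flux uniformly over the full sphere (4*pi steradians)."""
    return flux_watts / (4 * math.pi)

# A hypothetical 100 W isotropic emitter:
I = isotropic_intensity(100.0)  # ≈ 7.96 W/sr

# Flux captured by a cone of half-angle 30 degrees aimed at it:
omega = 2 * math.pi * (1 - math.cos(math.radians(30)))
captured = I * omega            # ≈ 6.7 W
```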

Radiance: finally, we get to radiance. The radiance formula is:

L = d²Φ / (dω · dA · cos θ)

where Φ is the radiant flux, A is the area hit by the light, ω is the solid angle the light is passing through, and cos θ is a scaling factor that "fades" the light with the angle.


Figure 4: Radiance components

We like this formula because it contains all the physical components we are interested in, and we can use it to describe a single "ray" of light. In fact we can use radiance to describe the amount of flux passing through an infinitely small solid angle and hitting an infinitely small area, and that describes the behaviour of a light ray. So when we talk about radiance we talk about some amount of light going in some direction towards some area.

When we shade a point we are interested in all the light coming into that point, that is, the sum of all the radiance that hits a hemisphere centred on the point itself; the name for this quantity is irradiance. Irradiance and radiance are our main physical quantities, and we will work with both of them to achieve our physically based rendering.
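A quick numeric sketch of this hemisphere summation (my own example): a Monte Carlo estimate of the integral of cos θ over the hemisphere, in spherical coordinates with the sin θ area term, converges to π — a value that Lambert's BRDF will later cancel by dividing by π.

```python
import math, random

def hemisphere_cosine_integral(n=200_000):
    """Monte Carlo estimate of the hemisphere integral of cos(theta) dω,
    with dω = sin(theta) dtheta dphi, theta in [0, pi/2], phi in [0, 2*pi].
    The analytic answer is pi."""
    random.seed(1)
    total = 0.0
    for _ in range(n):
        theta = random.uniform(0, math.pi / 2)
        phi = random.uniform(0, 2 * math.pi)  # integrand ignores phi; kept for clarity
        total += math.cos(theta) * math.sin(theta)
    domain_area = (math.pi / 2) * (2 * math.pi)
    return total / n * domain_area

print(hemisphere_cosine_integral())  # ≈ 3.14159...
```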

The rendering equation
We can now go back to the rendering equation and try to fully understand it.

We now understand that L is radiance, and it's a function of some point in the world and some direction plus the solid angle (we will always use infinitely small solid angles from now on, so think of it simply as a direction vector). The equation describes the outgoing radiance from a point, Lo(p,ωo), which is all we need to colour a pixel on screen.

To calculate it we need the normal of the surface our pixel lies on (n), and the irradiance of the scene, which is given by Li(p,ωi) ∀ωi, i.e. the radiance arriving from every direction ωi. To obtain the irradiance we sum up all the incoming radiance, hence the integral sign in the equation. Note that the domain of the integral Ω is a hemisphere centred at the point we are calculating, oriented so that the top of the hemisphere is found by moving away from the point along the normal direction.

The dot product n⋅ωi is there to take the angle of incidence of the light ray into account. If the ray is perpendicular to the surface, its energy will be concentrated on the lit area, while if the angle is shallow it will be spread across a bigger area, eventually spreading out too much to actually be visible.

Now we can see that the equation simply represents the outgoing radiance given the incoming radiance, where every incoming ray is weighted by the cosine of the angle between it and the surface normal. The bit we still need to introduce is fr(p,ωi,ωo), which is the BRDF (bidirectional reflectance distribution function).
This function takes as input a position and the incoming and outgoing rays, and outputs a weight for how much the incoming ray contributes to the final outgoing radiance. For a perfectly specular reflection, like a mirror, the BRDF function is 0 for every incoming ray except the one that has the same angle as the outgoing ray, in which case the function returns 1 (angles measured between the rays and the surface normal). It is important to notice that a physically based BRDF has to respect the law of conservation of energy, that is ∀ωo, ∫Ω fr(p,ωi,ωo)(n⋅ωi) dωi ≤ 1, which means that the total reflected light must not exceed the amount of incoming light.
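To see the conservation condition in action, here is a small check (my own sketch) using the constant BRDF c/π that the Lambert section below introduces: the hemisphere integral of cos θ is exactly π, so the reflected fraction equals the albedo c, and the bound holds whenever c ≤ 1.

```python
import math

def lambert_reflected_fraction(albedo):
    """Total fraction of incoming light reflected by Lambert's BRDF c/pi:
    the hemisphere integral of (c/pi) * cos(theta) dω = (c/pi) * pi = c."""
    hemisphere_cosine_integral = math.pi  # analytic value of the cos-weighted hemisphere integral
    return (albedo / math.pi) * hemisphere_cosine_integral

# An albedo of 0.8 reflects 80% of the incoming energy, so the bound holds:
print(lambert_reflected_fraction(0.8) <= 1.0)  # True
```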

Translating to code
So, now that we have all this useful knowledge, how do we apply it to actually write something that renders to the screen? We have two main problems here.

  1. First of all, how can we represent all these radiance functions in the scene?
  2. And secondly, how do we solve the integral fast enough to be able to use this in a real-time engine?

The answer to the first question is simple: environment maps. For our purposes we will use environment maps (cubemaps, although spherical maps would be better suited) to encode the incoming radiance from a specific direction towards a given point.

If we imagine that every pixel of the cubemap is a small emitter whose flux is its RGB colour, we can approximate L(p,ω), with p being the exact centre of the cubemap, with a texture read from the cubemap itself, so L(p,ω) ≈ texCUBE(cubemap, ω).
Obviously it would consume far too much memory to have a cubemap for every point in the scene(!), therefore we trade off some quality by creating a certain number of cubemaps in the scene and letting every point pick the closest one. To reduce the error we can correct the sampling vector with the world position of the cubemap to be more accurate. This gives us a way to evaluate radiance, which is:

where ωp is the sampling vector corrected by the point position and the cubemap position in the world.
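The article does not spell out the correction, so the following is only one common way to implement it (a "parallax correction" against an axis-aligned proxy box; the box and all names here are illustrative, not from the original): intersect the sample ray with the cubemap's proxy volume, then re-aim the sample vector from the cubemap centre to the hit point.

```python
import math

def parallax_corrected_dir(p, d, cube_center, box_min, box_max):
    """Re-aim sample direction d, shot from shaded point p, so that it is
    expressed from the cubemap centre: intersect the ray p + t*d with the
    proxy AABB (slab test, keeping the exit distance) and point at the hit."""
    t_exit = math.inf
    for i in range(3):
        if d[i] != 0:
            t1 = (box_min[i] - p[i]) / d[i]
            t2 = (box_max[i] - p[i]) / d[i]
            t_exit = min(t_exit, max(t1, t2))
    hit = [p[i] + t_exit * d[i] for i in range(3)]
    out = [hit[i] - cube_center[i] for i in range(3)]
    n = math.sqrt(sum(c * c for c in out))
    return [c / n for c in out]

# A point at the cubemap centre needs no correction:
print(parallax_corrected_dir((0, 0, 0), (0, 0, 1), (0, 0, 0),
                             (-1, -1, -1), (1, 1, 1)))  # [0.0, 0.0, 1.0]
```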

The answer to our second problem, how to solve the integral, is a bit more tricky, because in some cases we will not be able to solve it quickly enough. But if the BRDF happens to depend only on the incoming direction, or even better, on nothing at all (if it is constant), then we can do some nice optimizations. So let us see how this works if we plug in Lambert's BRDF, which is a constant factor (all the incoming radiance contributes to the outgoing ray after being scaled by a constant).

Lambert
Lambert's BRDF sets fr(p,ωi,ωo) = c/π, where c is the surface colour. If we plug this into the rendering equation we get:

Now the integral depends on ωi and nothing else, which means we can precalculate it (solving it with a Monte Carlo integration, for example) and store the result into another cubemap. The value will be stored in the ωo direction, which means that knowing our output direction we can sample the cubemap and obtain the reflected light in that very direction. This reduces the whole rendering equation to a single sample from a pre-calculated cubemap, specifically:

where ωop is the outgoing direction corrected by the point position and the cubemap position in the world.
So, now we have all the elements, and we can finally write a shader. I’ll show that in a moment, but for now, let’s see the results.

Quite good for a single-texture-read shader, huh? Please note how the whole lighting changes with the change of the environment (the cubemap rendered here is not the convolved one, which looks way blurrier as shown below).


Figure 5: Left the radiance map, right the irradiance map (Lambert rendering equation)

Now let's present the shader's code. Please note that for simplicity I'm not using Monte Carlo integration but have simply discretized the integral. Given infinite samples it wouldn't make any difference, but in a real case it will introduce more banding than Monte Carlo. In my tests it was good enough given that I've dropped the resolution of the cubemap to 32x32 per face, but it's worth bearing this in mind if you want to experiment with it.

The first shader we need is the one that generates the blurry envmap (often referred to as the convolved envmap, since it is the result of the convolution of the radiance envmap with the kernel function (n⋅ωi)).

Since in the shader we will integrate in spherical coordinates we will change the formula to reflect that.

You may have noticed that there is an extra sin(θi) in the formula; that is due to the fact that the integration is made of small uniform steps. When we are using the solid angle this is fine, as the solid angles are evenly distributed over the integration area, but when we change to spherical coordinates we get more samples where θ is zero and fewer where it approaches π/2. If you create a sphere in your favourite modelling tool and check its wireframe you'll see what I mean. The sin(θi) factor is there to compensate for the distribution, as dωi = sin(θ) dθ dϕ.
The double integral is solved by applying a Monte Carlo estimator on each one; this leads to the following discrete equation that we can finally transform into shader code:

...
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR
{
    // Build the normal for the current cubemap face (face 4, +Z, is the default)
    float3 normal = normalize( float3(input.InterpolatedPosition.xy, 1) );
    if (cubeFace == 2)
        normal = normalize( float3(input.InterpolatedPosition.x,  1, -input.InterpolatedPosition.y) );
    else if (cubeFace == 3)
        normal = normalize( float3(input.InterpolatedPosition.x, -1,  input.InterpolatedPosition.y) );
    else if (cubeFace == 0)
        normal = normalize( float3(  1, input.InterpolatedPosition.y, -input.InterpolatedPosition.x) );
    else if (cubeFace == 1)
        normal = normalize( float3( -1, input.InterpolatedPosition.y,  input.InterpolatedPosition.x) );
    else if (cubeFace == 5)
        normal = normalize( float3( -input.InterpolatedPosition.x, input.InterpolatedPosition.y, -1) );

    // Tangent basis around the normal
    float3 up = float3(0, 1, 0);
    float3 right = normalize( cross(up, normal) );
    up = cross(normal, right);

    // Discretized hemisphere integration in spherical coordinates
    float3 sampledColour = float3(0, 0, 0);
    float index = 0;
    for (float phi = 0; phi < 6.283; phi += 0.025)
    {
        for (float theta = 0; theta < 1.57; theta += 0.1)
        {
            float3 temp = cos(phi) * right + sin(phi) * up;
            float3 sampleVector = cos(theta) * normal + sin(theta) * temp;
            sampledColour += texCUBE( diffuseCubemap_Sampler, sampleVector ).rgb * cos(theta) * sin(theta);
            index++;
        }
    }
    return float4( PI * sampledColour / index, 1 );
}
...
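As a sanity check on the discretization above, the shader's double loop can be mirrored on the CPU (my own sketch; env_radiance stands in for the texCUBE read). For a constant environment of radiance 1 the convolved value comes out close to 1, confirming that the π factor together with the cos θ sin θ weights normalizes the integral correctly.

```python
import math

def convolve_point(env_radiance, phi_step=0.025, theta_step=0.1):
    """CPU mirror of the shader's discretized convolution for one output
    direction; env_radiance(theta, phi) stands in for the cubemap read,
    with theta measured from the output normal."""
    total, count = 0.0, 0
    phi = 0.0
    while phi < 6.283:
        theta = 0.0
        while theta < 1.57:
            total += env_radiance(theta, phi) * math.cos(theta) * math.sin(theta)
            count += 1
            theta += theta_step
        phi += phi_step
    return math.pi * total / count

# Sanity check: a constant environment of radiance 1 convolves to ~1,
# i.e. the cos-weighted hemisphere average preserves overall energy.
print(convolve_point(lambda t, p: 1.0))  # ≈ 0.98 with this step size
```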

I've omitted the vertex shader and the variable definitions; the shader source I've used is HLSL. Running this for every face of the convolved cubemap, using the normal cubemap as input, gives us the irradiance map. We can now use the irradiance map as an input for the next shader, the model shader.

...
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR
{
    // A single texture read: the convolved cubemap already contains the integral
    float3 irradiance = texCUBE(irradianceCubemap_Sampler, input.SampleDir).rgb;
    float3 diffuse = materialColour * irradiance;
    return float4( diffuse, 1 );
}
...

Very short and super fast to evaluate.
This concludes the first part of the article on physically based rendering. I'm planning to write a second part on how to implement a more interesting BRDF, like Cook-Torrance's.
