Radiance is the final quantity computed by the rendering process. So far, we have been using the reflectance equation to compute it:

Lo(p, v) = ∫Ω f(l, v) ⊗ Li(p, l) cos θi dl,
where Lo(p, v) is the outgoing radiance from the surface location p in the view direction v; Ω is the hemisphere of directions above p; f(l,v) is the BRDF evaluated for v and the current incoming direction l; Li(p, l) is the incoming radiance into p from l; ⊗ is the piecewise vector multiplication operator (used because both f(l, v) and Li(p, l) vary with wavelength, so are represented as RGB vectors); and θi is the angle between l and the surface normal n. The integration is over all possible l in Ω.

The reflectance equation is a restricted special case of the full rendering equation, presented by Kajiya in 1986. Different forms have been used for the rendering equation; we will use the following one:

Lo(p, v) = Le(p, v) + ∫Ω f(l, v) ⊗ Lo(r(p, l), −l) cos θi dl,
where the new elements are Le(p, v), the emitted radiance from the surface location p in direction v (the subscript e stands for "emitted"), and the following replacement:

Li(p, l) = Lo(r(p, l), −l).
This replacement means that the incoming radiance into location p from direction l is equal to the outgoing radiance from some other point in the opposite direction −l. In this case, the "other point" is defined by the ray casting function r(p, l). This function returns the location of the first surface point hit by a ray cast from p in direction l.

The meaning of the rendering equation is straightforward. To shade a surface location p, we need to know the outgoing radiance Lo leaving p in the view direction v. This is equal to the emitted radiance Le plus the reflected radiance. Emission from light sources has been studied in previous chapters, as has reflectance. Even the ray casting operator is not as unfamiliar as it may seem: the Z-buffer computes it for rays cast from the eye into the scene.
The equation is somewhat recursive: the incoming radiance L(p, l) must come from the outgoing radiance L(r(p, l), −l) of some other point. In other words, if light arrives at point p along direction l, it must have left the surface of another object as outgoing radiance, as in the figure below: the radiance entering p comes from the radiance leaving some point Q, and the radiance leaving Q in turn comes from radiance leaving yet another point, and so on, recursively.

The only new term is Lo(r(p, l), −l), which makes explicit the fact that the incoming radiance into one point must be outgoing from another point. Unfortunately, this is a recursive term: it is computed by yet another summation over outgoing radiance from locations r(r(p, l), l′), which in turn needs the outgoing radiance from locations r(r(r(p, l), l′), l′′), ad infinitum. (It is amazing that the real world can compute all this in real time.)

We know this intuitively: lights illuminate a scene, and the photons bounce around; at each collision they are absorbed, reflected, and refracted in a variety of ways. The rendering equation is significant in that it sums up all possible paths in a simple-looking equation.
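To make the recursion concrete, here is a toy sketch (not from the book): two facing Lambertian patches that each see only the other. Patch A emits radiance and each patch reflects a fraction of what it receives; the names and reflectance values are invented for illustration. Truncating the recursion at increasing depth converges to the closed-form sum of the infinite bounce series.

```python
# Toy illustration of the recursive rendering equation: patch A emits le and
# reflects rho_a of what arrives from B; patch B only reflects rho_b.

def outgoing_a(le, rho_a, rho_b, depth):
    """Lo at A = Le + rho_a * (radiance arriving from B)."""
    if depth == 0:
        return le                      # truncate the infinite recursion
    return le + rho_a * outgoing_b(le, rho_a, rho_b, depth - 1)

def outgoing_b(le, rho_a, rho_b, depth):
    """Lo at B = rho_b * (radiance arriving from A); B does not emit."""
    if depth == 0:
        return 0.0
    return rho_b * outgoing_a(le, rho_a, rho_b, depth - 1)

# Deeper truncation converges to the closed-form geometric series
# Lo_A = Le / (1 - rho_a * rho_b).
le, rho_a, rho_b = 1.0, 0.5, 0.5
exact = le / (1.0 - rho_a * rho_b)
approx = outgoing_a(le, rho_a, rho_b, depth=20)
```

A path tracer does essentially this, except the "arriving radiance" is itself an integral estimated by sampling directions, and the recursion is cut by a depth limit or Russian roulette.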

In real-time rendering, using just a local lighting model is the default. That is, only the surface data at the visible point is needed to compute the lighting. This is a strength of the GPU pipeline: primitives can be generated, processed, and then discarded. Transparency, reflections, and shadows are examples of global illumination algorithms, in that they use information from objects other than the one being illuminated. These effects contribute greatly to increasing the realism of a rendered image, and also provide cues that help the viewer understand spatial relationships.

One way to think about the problem of illumination is by the paths the photons take. In the local lighting model, photons travel from the light to a surface (ignoring intervening objects), then to the eye. Shadowing techniques take into account these intervening objects' direct effects. With environment mapping, illumination travels from light sources to distant objects, then to local shiny objects, which mirror-reflect this light to the eye. Irradiance maps simulate photons that again first travel to distant objects, but then the light from all these objects is weighted and summed to compute the effect on the diffuse surface, which in turn is seen by the eye.

9.2 ambient occlusion
When a light source covers a large solid angle, it casts a soft shadow. Ambient light, which illuminates evenly from all directions (see section 8.3), casts the softest shadows. Since ambient light lacks any directional variation, shadows become especially important: in their absence, objects appear flat (see the left side of figure 9.36).


figure 9.36 a diffuse object lit evenly from all directions. on the left, the object is rendered without any shadowing or interreflections. no details are visible, other than the outline. in the center, the object has been rendered with ambient occlusion. the image on the right was generated with a full global illumination simulation.
The shadowing of ambient light is referred to as ambient occlusion. Unlike other types of shadowing, ambient occlusion does not depend on light direction, so it can be precomputed for static objects. This option will be discussed in more detail in section 9.10.1. Here we will focus on techniques for computing ambient occlusion dynamically, which is useful for animated scenes or deforming objects.

9.2.1 ambient occlusion theory
For simplicity, we will focus on Lambertian surfaces. The outgoing radiance Lo from such surfaces is proportional to the surface irradiance E. Irradiance is the cosine-weighted integral of incoming radiance, and in the general case it depends on the surface position p and the surface normal n. Ambient light is defined as constant incoming radiance, Li(l) = LA, for all incoming directions l. This results in the following equation for computing irradiance:

E(p, n) = LA ∫Ω cos θi dω = π LA,    (9.12)
where the integral is performed over the hemisphere Ω of possible incoming directions. It can be seen that equation 9.12 yields irradiance values unaffected by surface position and orientation, leading to a flat appearance.
Equation 9.12 does not take visibility into account. Some directions will be blocked by other objects in the scene, or by other parts of the same object (see figure 9.37, e.g., point p2). These directions will have some other incoming radiance, not LA. Assuming (for simplicity)

figure 9.37. an object under ambient illumination. three points (p0, p1, and p2) are shown. on the left, blocked directions are shown as black rays ending in intersection points (black circles). unblocked directions are shown as arrows, colored according to the cosine factor, so arrows closer to the surface normal are lighter. on the right, each blue arrow shows the average unoccluded direction, or bent normal.
that blocked directions have zero incoming radiance results in the following equation (first proposed by Cook and Torrance):

E(p, n) = LA ∫Ω v(p, l) cos θi dω,
where v(p, l) is a visibility function that equals 0 if a ray cast from p in the direction l is blocked, and 1 if it is not. The ambient occlusion value kA is then defined as:

kA(p) = (1/π) ∫Ω v(p, l) cos θi dω.
Possible values for kA range from 0 (a fully occluded surface point, possible only in degenerate cases) to 1 (a fully open surface point with no occlusion). Once kA is defined, the equation for ambient irradiance in the presence of occlusion is simply:

E(p, n) = kA(p) π LA.
9.2.2 shading with ambient occlusion
shading with ambient occlusion is best understood in the context of the full shading equation, which includes the effects of both direct and indirect (ambient) light.
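As a sketch of how kA from the previous section could be estimated numerically (an illustration, not the book's method), the snippet below Monte Carlo integrates the visibility function with cosine-weighted hemisphere samples. The wall-shaped occluder is a made-up example whose exact answer is kA = 0.5 by symmetry.

```python
import math, random

def cosine_sample_hemisphere(rng):
    """Cosine-weighted direction on the hemisphere around n = (0, 0, 1)."""
    u1, u2 = rng.random(), rng.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))

def ambient_occlusion(visible, n_samples=100000, seed=1):
    """Monte Carlo estimate of kA = (1/pi) * integral of v(l) cos(theta) dl.
    With cosine-weighted samples this reduces to the average visibility."""
    rng = random.Random(seed)
    hits = sum(visible(cosine_sample_hemisphere(rng)) for _ in range(n_samples))
    return hits / n_samples

# Hypothetical occluder: an infinite wall blocking every direction with x > 0,
# so exactly half the cosine-weighted hemisphere is blocked and kA = 0.5.
k_a = ambient_occlusion(lambda l: l[0] <= 0.0)
```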

9.3 reflections
Environment mapping techniques for providing reflections of objects at a distance were covered in sections 8.4 and 8.5, with reflected rays computed using equation 7.30 on page 230. The limitation of such techniques is that they assume the reflected objects are located far from the reflector, so that the same texture can be used by all reflection rays. Generating planar reflections of nearby objects will be presented in this section, along with methods for rendering frosted glass and handling curved reflections.

9.3.1 planar reflections

Planar reflection, by which we mean reflection off a flat surface such as
a mirror, is a special case of reflection off arbitrary surfaces. As often
occurs with special cases, planar reflections are easier to implement and
can execute more rapidly than general reflections.

An ideal reflector follows the law of reflection, which states that the
angle of incidence is equal to the angle of reflection. That is, the angle
between the incident ray and the normal is equal to the angle between the
reflected ray and the normal. This is depicted in Figure 9.41, which illustrates
a simple object that is reflected in a plane. The figure also shows
an “image” of the reflected object. Due to the law of reflection, the reflected
image of the object is simply the object itself, physically reflected
through the plane. That is, instead of following the reflected ray, we could
follow the incident ray through the reflector and hit the same point, but

Figure 9.41. Reflection in a plane, showing angle of incidence and reflection, the reflected
geometry, and the reflector.
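In code, the law of reflection reduces to the familiar vector identity r = d − 2(d·n)n. A minimal sketch, assuming unit-length vectors:

```python
import math

def reflect(d, n):
    """Mirror incident direction d about unit normal n: r = d - 2(d.n)n."""
    dn = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2.0 * dn * b for a, b in zip(d, n))

# Angle of incidence equals angle of reflection:
inv = 1.0 / math.sqrt(2.0)
d = (inv, 0.0, -inv)          # incoming ray, 45 degrees to the normal
n = (0.0, 0.0, 1.0)
r = reflect(d, n)
cos_in = -sum(a * b for a, b in zip(d, n))   # cos(angle of incidence)
cos_out = sum(a * b for a, b in zip(r, n))   # cos(angle of reflection)
```

For planar reflections of whole objects, the same idea is applied to geometry: the scene is transformed by a mirror matrix about the reflector's plane, which is exactly the "physically reflected through the plane" image described above.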

9.4 Transmittance
As discussed in Section 5.7, a transparent surface can be treated as a blend color or a filter color. When blending, the transmitter’s color is mixed with the incoming color from the objects seen through the transmitter. The over operator uses the α value as an opacity to blend these two colors. The transmitter color is multiplied by α, the incoming color by 1 −α, and the two summed. So, for example, a higher opacity means more of the transmitter’s color and less of the incoming color affects the pixel. While this gives a visual sense of transparency to a surface [554], it has little physical basis.
Multiplying the incoming color by a transmitter’s filter color is more
in keeping with how the physical world works. Say a blue-tinted filter is
attached to a camera lens. The filter absorbs or reflects light in such a
way that its spectrum resolves to a blue color. The exact spectrum is
usually unimportant, so using the RGB equivalent color works fairly well
in practice. For a thin object like a filter, stained glass, windows, etc., we
simply ignore the thickness and assign a filter color.
For objects that vary in thickness, the amount of light absorption can
be computed using the Beer-Lambert Law:

T = e^(−α c d),

where α is the absorption coefficient of the material, c is the concentration of the absorber, and d is the distance the light travels through the material.
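As a numerical sketch of the Beer-Lambert law T = exp(−α·c·d), with per-channel absorption values invented for a hypothetical green-tinted glass:

```python
import math

def beer_lambert(alpha, c, d):
    """Fraction of light transmitted after traveling distance d through a
    medium with absorption coefficient alpha and absorber concentration c."""
    return math.exp(-alpha * c * d)

# Hypothetical green glass: red and blue are absorbed more strongly than
# green, so thicker glass looks more saturated, not just darker.
sigma_rgb = (0.8, 0.1, 0.7)       # assumed per-channel absorption per unit distance
thin  = tuple(beer_lambert(s, 1.0, 0.5) for s in sigma_rgb)
thick = tuple(beer_lambert(s, 1.0, 4.0) for s in sigma_rgb)
```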

9.5 Refractions
For simple transmittance, we assume that the incoming light comes from
directly beyond the transmitter. This is a reasonable assumption when the
front and back surfaces of the transmitter are parallel and the thickness
is not great, e.g., for a pane of glass. For other transparent media, the
index of refraction plays an important part. Snell’s Law, which describes
how light changes direction when a transmitter’s surface is encountered, is
described in Section 7.5.3.
Bec [78] presents an efficient method of computing the refraction vector.
For readability (because n is traditionally used for the index of refraction
in Snell’s equation), define N as the surface normal and L as the direction
to the light:

t = (w − k)N − nL,
where n = n1/n2 is the relative index of refraction, and

w = n(L · N),
k = √(1 + (w − n)(w + n)).
The resulting refraction vector t is returned normalized.
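Bec's formulation translates directly into code. The sketch below (helper names are my own) returns None on total internal reflection, and can be checked against Snell's law, n1 sin θi = n2 sin θt:

```python
import math

def refract_bec(L, N, n):
    """Bec's refraction: L is the unit direction back toward the light,
    N the unit surface normal, n = n1/n2 the relative index of refraction.
    Returns the normalized transmitted direction t, or None on total
    internal reflection."""
    w = n * sum(a * b for a, b in zip(L, N))      # w = n (L . N)
    k2 = 1.0 + (w - n) * (w + n)
    if k2 < 0.0:
        return None                               # total internal reflection
    k = math.sqrt(k2)
    t = tuple((w - k) * b - n * a for a, b in zip(L, N))
    length = math.sqrt(sum(a * a for a in t))
    return tuple(a / length for a in t)

# Air-to-glass (n = 1.0/1.5) at 45 degrees; Snell's law predicts
# sin(theta_t) = sin(45 deg) / 1.5.
inv = 1.0 / math.sqrt(2.0)
L = (inv, 0.0, inv)                  # direction back toward the light
N = (0.0, 0.0, 1.0)
t = refract_bec(L, N, 1.0 / 1.5)
sin_t = math.sqrt(t[0] * t[0] + t[1] * t[1])     # sine of refracted angle
```

Reversing the ratio (glass to air, n = 1.5) at the same 45-degree angle exceeds the critical angle, so the function reports total internal reflection.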
This evaluation can nonetheless be expensive. Oliveira [962] notes that
because the contribution of refraction drops off near the horizon, an approximation
for incoming angles near the normal direction is

where c is somewhere around 1.0 for simulating water. Note that the
resulting vector t needs to be normalized when using this formula.
The index of refraction varies with wavelength. That is, a transparent
medium will bend different colors of light at different angles. This
phenomenon is called dispersion, and explains why prisms work and why
rainbows occur. Dispersion can cause a problem in lenses, called chromatic
aberration. In photography, this phenomenon is called purple fringing, and
can be particularly noticeable along high contrast edges in daylight. In
computer graphics we normally ignore this effect, as it is usually an artifact
to be avoided. Additional computation is needed to properly simulate
the effect, as each light ray entering a transparent surface generates a set
of light rays that must then be tracked. As such, normally a single refracted
ray is used. In practical terms, water has an index of refraction of
approximately 1.33, glass typically around 1.5, and air essentially 1.0.
Some techniques for simulating refraction are somewhat comparable to
those of reflection. However, for refraction through a planar surface, it is
not as straightforward as just moving the viewpoint. Diefenbach [252] discusses
this problem in depth, noting that a homogeneous transform matrix

figure 9.49. refraction and reflection by a glass ball of a cubic environment map, with the map itself used as a skybox background. (image courtesy of NVIDIA Corporation.)

is needed to properly warp an image generated from a refracted viewpoint. In a similar vein, Vlachos [1306] presents the shears necessary to render the refraction effect of a fish tank. Section 9.3.1 gave some techniques where the scene behind a refractor
was used as a limited-angle environment map. A more general way to give
an impression of refraction is to generate a cubic environment map from
the refracting object’s position. The refracting object is then rendered,
accessing this EM by using the refraction direction computed for the frontfacing
surfaces. An example is shown in Figure 9.49. These techniques give
the impression of refraction, but usually bear little resemblance to physical
reality. The refraction ray gets redirected when it enters the transparent
solid, but the ray never gets bent the second time, when it is supposed to
leave this object; this backface never comes into play. This flaw sometimes
does not matter, because the eye is forgiving about what the right appearance
should be.
Oliveira and Brauwers [965] improve upon this simple approximation
by taking into account refraction by the backfaces. In their scheme, the
backfaces are rendered and the depths and normals stored. The frontfaces
are then rendered and rays are refracted from these faces. The idea is to
find where on the stored backface data these refracted rays fall. Once the backface texel is found where the ray exits, the backface’s data at that point
properly refracts the ray, which is then used to access the environment map.
The hard part is to find this backface pixel. The procedure they use to trace
the rays is in the spirit of relief mapping (Section 6.7.4). The backface z-depths
are treated like a heightfield, and each ray walks through this buffer
until an intersection is found. Depth peeling can be used for multiple
refractions. The main drawback is that total internal reflection cannot be
handled. Using Heckbert’s regular expression notation [519], described at
the beginning of this chapter, the paths simulated are then L(D|S)SSE:
The eye sees a refractive surface, a backface then also refracts as the ray
leaves, and some surface in an environment map is then seen through the
transmitter.
Davis and Wyman [229] take this relief mapping approach a step farther,
storing both back and frontfaces as separate heightfield textures. Nearby
objects behind the transparent object can be converted into color and depth
maps so that the refracted rays treat these as local objects. An example
is shown in Figure 9.50. In addition, rays can have multiple bounces, and
total internal reflection can be handled. This gives a refractive light path of
L(D|S)S+SE. A limitation of all of these image-space refraction schemes
is that if a part of the model is rendered offscreen, the clipped data cannot
refract (since it does not exist).
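Since Heckbert's notation is literally a regular expression language over bounce types (L = light, D = diffuse, S = specular, E = eye), the path grammars above can be played with directly, e.g. using Python's re module (the example path strings are made up):

```python
import re

# L(D|S)SSE: any surface, then two specular (refracting) interfaces, then the eye.
two_bounce = re.compile(r"L(D|S)SSE")
# L(D|S)S+SE: one or more internal specular bounces before exiting.
multi = re.compile(r"L(D|S)S\+?SE".replace(r"\+?", "+"))

paths = ["LDSSE", "LSSSE", "LDSE", "LDDSSE"]
matched = [p for p in paths if two_bounce.fullmatch(p)]   # LDSSE and LSSSE
```

Here fullmatch enforces that the whole path, from light to eye, fits the grammar; LDSE (only one specular event) and LDDSSE (an extra diffuse bounce) are correctly rejected.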
Simpler forms of refraction ray tracing can be used directly for basic geometric objects. For example, Vlachos and Mitchell [1303] generate the refraction ray and use ray tracing in a pixel shader program to find which wall of the water’s container is hit. See Figure 9.51.

9.7.1 Subsurface Scattering Theory
Figure 9.54 shows light being scattered through an object. Scattering
causes incoming light to take many different paths through the object.
Since it is impractical to simulate each photon separately (even for offline
rendering), the problem must be solved probabilistically, by integrating
over possible paths, or by approximating such an integral. Besides scattering, light traveling through the material also undergoes absorption. The
absorption obeys an exponential decay law with respect to the total travel
distance through the material (see Section 9.4). Scattering behaves similarly.
The probability of the light not being scattered obeys an exponential
decay law with distance.
The absorption decay constants are often spectrally variant (have different
values for R, G, and B). In contrast, the scattering probability constants
usually do not have a strong dependence on wavelength. That said, in certain
cases, the discontinuities causing the scattering are on the order of a
light wavelength or smaller. In these circumstances, the scattering probability
does have a significant dependence on wavelength. Scattering from
individual air molecules is an example. Blue light is scattered more than
red light, which causes the blue color of the daytime sky. A similar effect
causes the blue colors often found in bird feathers.
One important factor that distinguishes the various light paths shown
in Figure 9.54 is the number of scattering events. For some paths, the light
leaves the material after being scattered once; for others, the light is scattered
twice, three times, or more. Scattering paths are commonly grouped
into single scattering and multiple scattering paths. Different rendering
techniques are often used for each group.

9.7.2 wrap lighting
For many solid materials, the distances between scattering events are short enough that single scattering can be approximated via a BRDF. Also, for some materials, single scattering is a relatively weak part of the total scattering effect, and multiple scattering predominates; skin is a notable example. For these reasons, many subsurface scattering rendering techniques focus on simulating multiple scattering.

Perhaps the simplest of these is wrap lighting, discussed on page 294 as an approximation of area light sources. When used to approximate subsurface scattering, it can be useful to add a color shift, which accounts for the partial absorption of light traveling through the material. For example, when rendering skin, a red color shift could be used.
https://blog.csdn.net/pianpiansq/article/details/74453602
When used in this way, wrap lighting attempts to model the effect of multiple scattering on the shading of curved surfaces. The "leakage" of light from adjacent points into the currently shaded point softens the transition from light to dark where the surface curves away from the light source. Kolchin points out that this effect depends on surface curvature, and he derives a physically based version. Although the derived expression is somewhat expensive to evaluate, the ideas behind it are useful.
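A common wrap-lighting formulation (one of several in use; the wrap amount and tint below are illustrative, not the book's values) pushes the diffuse terminator past 90 degrees, and can tint only the wrapped portion to fake the reddish scattered light of skin:

```python
def wrap_diffuse(n_dot_l, wrap):
    """Wrap lighting: wrap = 0 gives standard Lambertian clamping;
    wrap = 1 lights the entire sphere."""
    return max(0.0, (n_dot_l + wrap) / (1.0 + wrap))

def skin_diffuse(n_dot_l, wrap=0.5, shift=(1.0, 0.6, 0.5)):
    """Hypothetical red shift: direct light stays white; only the extra
    'wrapped' light (the scattered part) picks up the reddish tint."""
    plain = max(0.0, n_dot_l)
    wrapped = wrap_diffuse(n_dot_l, wrap)
    return tuple(plain + s * (wrapped - plain) for s in shift)
```

At the terminator (n·l = 0) plain Lambertian shading is black, while the wrapped term is still lit and red-tinted, which is exactly the softened, color-shifted transition described above.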

9.7.3 normal blurring
Stam points out that multiple scattering can be modeled as a diffusion process. Jensen et al. further develop this idea to derive an analytical BSSRDF model. The diffusion process has a spatial blurring effect on the outgoing radiance.

This blurring affects only diffuse reflectance. Specular reflectance occurs at the material surface and is unaffected by subsurface scattering. Since normal maps often encode small-scale variation, a useful trick for subsurface scattering is to apply normal maps only to the specular reflectance. The smooth, unperturbed normal is used for the diffuse reflectance. Since there is no added cost, it is often worthwhile to apply this technique when using other subsurface scattering methods.

For many materials, multiple scattering occurs over a relatively small distance. Skin is an important example, where most scattering takes place over a distance of a few millimeters. For such materials, the trick of not perturbing the diffuse shading normal may be sufficient by itself. Ma et al. extend this method, based on measured data. They measured reflected light from scattering objects and found that while the specular reflectance is based on the geometric surface normals, subsurface scattering makes diffuse reflectance behave as if it uses blurred surface normals. Furthermore, the amount of blurring can vary over the visible spectrum. They propose a real-time shading technique using independently acquired normal maps for the specular reflectance and for the R, G, and B channels of the diffuse reflectance. Since these diffuse normal maps typically resemble blurred versions of the specular map, it is straightforward to modify this technique to use a single normal map, while adjusting the mipmap level. This adjustment should be performed similarly to the adjustment of environment map mipmap levels discussed on page 310.

9.7.4 texture space diffusion
Blurring the diffuse normals accounts for some visual effects of multiple scattering, but not for others, such as softened shadow edges. Borshukov and Lewis popularized the concept of texture space diffusion. They formalize the idea of multiple scattering as a blurring process. First, the surface irradiance (diffuse lighting) is rendered into a texture. This is done by using texture coordinates as positions for rasterization (the real positions are interpolated separately for use in shading). This texture is blurred, and then used for diffuse shading when rendering.
https://www.cnblogs.com/psklf/p/9526690.html
https://www.cnblogs.com/zhanlang96/p/4941531.html
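As a crude stand-in for the diffusion blur, the sketch below box-blurs a tiny irradiance "texture" (a list of lists); a real implementation would apply a wider, Gaussian-like kernel on the GPU, but the effect on a hard shadow edge is the same in spirit:

```python
def blur_1d(row, radius):
    """Box-blur one row, clamping the window at the borders."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def texture_space_diffuse(irradiance, radius=1):
    """Separable box blur of the irradiance texture, applied before the
    texture is used for diffuse shading."""
    rows = [blur_1d(r, radius) for r in irradiance]
    cols = [blur_1d(list(c), radius) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

# A hard shadow edge in texture space...
tex = [[0.0, 0.0, 1.0, 1.0] for _ in range(4)]
soft = texture_space_diffuse(tex)
# ...becomes a gradual transition, mimicking light diffusing across the edge.
```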

9.7.5 depth-map techniques
The techniques discussed so far model scattering over only relatively small distances. Other techniques are needed for materials exhibiting large-scale scattering. Many of these focus on large-scale single scattering, which is easier to model than large-scale multiple scattering.

The ideal simulation for large-scale single scattering can be seen on the left side of figure 9.56. The light paths change direction on entering and exiting the object, due to refraction. The effects of all the paths need to be summed to shade a single surface point. Absorption also needs to be taken into account; the amount of absorption along a path depends on its length inside the material. Computing all these refracted rays for a single shaded point is expensive even for offline renderers, so the refraction on entering the material is usually ignored, and only the change in direction on exiting the material is taken into account. This approximation is shown in the center of figure 9.56. Since the rays cast are always in the direction of the light, Hery points out that light space depth maps (typically used for shadowing) can be used instead of ray casting. Multiple points (shown in yellow) on the refracted view ray are sampled, and a lookup into the light space depth map, or shadow map, is performed for each one. The result can be projected to get the position of the red intersection point. The sum of the distances from the red to the yellow and from the yellow to the blue points is used to determine the absorption. For media that scatter light anisotropically, the scattering angle also affects the amount of scattered light.

figure 9.56. on the left, the ideal situation, in which light refracts when entering the object, then all scattering contributions that would properly refract upon leaving the object are computed. the middle shows a computationally simpler situation, in which the rays refract only on exit. the right shows a much simpler, and therefore faster, approximation, where only a single ray is considered.
Performing depth map lookups is faster than ray casting, but the multiple samples required make Hery's method too slow for most real-time rendering applications. Green [447] proposes a faster approximation, shown on the right side of figure 9.56. Instead of multiple samples along the refracted ray, a single depth map lookup is performed at the shaded point. Although this method is somewhat nonphysical, its results can be convincing. One problem is that details on the back side of the object can show through, since every change in object thickness directly affects the shaded color. Despite this, Green's approximation is effective enough to be used by Pixar for films such as Ratatouille [460]; Pixar refers to this technique as Gummi Lights. Another problem (shared with Hery's implementation, but not Pixar's) is that the depth map should not contain multiple objects, or highly nonconvex objects, because it is assumed that the entire path between the shaded (blue) point and the red intersection point lies within the object.
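Green's single-lookup idea can be sketched in a few lines: the thickness obtained by differencing front and back depths drives an exponential falloff of the scattered light (the extinction value sigma_t below is an arbitrary illustration, not a measured constant):

```python
import math

def green_translucency(depth_front, depth_back, sigma_t):
    """Green's single-lookup approximation: the object's thickness along
    the light direction drives an exponential falloff of scattered light."""
    thickness = depth_back - depth_front
    return math.exp(-sigma_t * thickness)

# Thin parts of the object transmit far more scattered light than thick
# parts, which is also why backface detail can show through.
thin_part  = green_translucency(1.0, 1.2, sigma_t=2.0)
thick_part = green_translucency(1.0, 3.0, sigma_t=2.0)
```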

Modeling large-scale multiple scattering in real time is quite difficult, since each surface point can be influenced by light incoming from any other surface point. Dachsbacher and Stamminger [218] propose an extension of the light space depth map method, called translucent shadow maps, for modeling multiple scattering. Additional information, such as irradiance and surface normal, is stored in light space textures. Several samples are taken from these textures (as well as from the depth map) and combined to form an estimate of the scattered radiance. A modification of this technique was used in NVIDIA's system [246, 247, 248]. Mertens et al. [859] propose a similar method, but using a texture in screen space, rather than light space.

9.7.6 other methods
Several techniques assume that the scattering object is rigid and precalculate the proportion of light scattered among different parts of the object [499, 500, 757]. These are similar in principle to precomputed radiance transfer techniques (discussed in section 9.11). Precomputed radiance transfer can be used to model small- or large-scale multiple scattering on rigid objects, under low-frequency distant lighting. Isidoro [593] discusses several practical issues relating to the use of precomputed radiance transfer for subsurface scattering, and also details how to combine it with other subsurface scattering methods, such as texture space diffusion.

Modeling large-scale multiple scattering is even more difficult in the case of deforming objects. Mertens et al. [860] present a technique based on a hierarchical surface representation. Scattering factors are dynamically computed between hierarchical elements, based on distance. A GPU implementation is not given. In contrast, Hoberock [553] proposes a GPU-based method derived from Bunnell's [146] hierarchical disk-based surface model, previously used for dynamic ambient occlusion.

9.8 full global illumination
So far, this chapter has presented a piecemeal approach to solving the rendering equation: individual parts or special cases of the rendering equation were solved with specialized algorithms. In this section, we will present algorithms that are designed to solve most or all of the rendering equation. We will refer to these as full global illumination algorithms.

In the general case, full global illumination algorithms are too computationally expensive for real-time applications. Why discuss them in a book about real-time rendering? One reason is that in static or partially static scenes, full global illumination algorithms can be run as a preprocess, storing the results for later use during rendering. This is a very popular approach in games, and will be discussed in detail in sections 9.9, 9.10, and 9.11.

The second reason is that under certain restricted circumstances, full global illumination algorithms can be run at rendering time to produce particular visual effects. This is a growing trend as graphics hardware becomes more powerful and flexible.

Radiosity and ray tracing were the first two algorithms introduced for global illumination in computer graphics, and both are still in use today. We will also present some other techniques, including some intended for real-time implementation on the GPU.

9.8.1 Radiosity
The importance of indirect lighting to the appearance of a scene was discussed in chapter 8. Multiple bounces of light among surfaces cause a subtle interplay of light and shadow that is key to a realistic appearance. Interreflections also cause color bleeding, where the color of an object appears on adjacent objects. For example, walls will have a reddish tint where they are adjacent to a red carpet. See figure 9.57.

figure 9.57. color bleeding. the light shines on the beds and carpets, which in turn bounce light not only to the eye but to other surfaces in the room, which pick up their color.

Radiosity was the first computer graphics technique developed to simulate bounced light between diffuse surfaces. Whole books have been written on this algorithm, but the basic idea is relatively simple. Light bounces around an environment; you turn a light on and the illumination quickly reaches equilibrium. In this stable state, each surface can be considered a light source in its own right. When light hits a surface, it can be absorbed, diffusely reflected, or reflected in some other fashion (specularly, anisotropically, etc.).
Anisotropy, loosely speaking, means that a property differs depending on the direction in which it is measured. For example, a good metallic conductor conducts electricity equally well no matter how it is connected, so its conductivity is isotropic. Other materials, such as certain resistive components, conduct well in one direction but poorly or not at all in another; their conductivity is anisotropic. Whether a material is isotropic or anisotropic is an intrinsic property, closely tied to its internal atomic arrangement and molecular interactions.
https://www.zhihu.com/question/20583248/answer/15551035

Basic radiosity algorithms first make the simplifying assumption that all indirect light comes from diffuse surfaces. This assumption fails for places with polished marble floors or large mirrors on the walls, but for most architectural settings it is a reasonable approximation. The BRDF of a diffuse surface is a simple, uniform hemisphere, so the surface's radiance in any direction is proportional purely to the irradiance multiplied by the reflectance of the surface. The outgoing radiance is then

L = (r E) / π,
where E is the irradiance and r is the reflectance of the surface. Note that, though the hemisphere covers 2π steradians, the integration of the cosine term for surface irradiance brings this divisor down to π.
To begin the process, each surface is represented by a number of patches
(e.g., polygons, or texels on a texture). The patches do not have to match
one-for-one with the underlying polygons of the rendered surface. There
can be fewer patches, as for a mildly curving spline surface, or more patches
can be generated during processing, in order to capture features such as
shadow edges.
To create a radiosity solution, the basic idea is to create a matrix of
form factors among all the patches in a scene. Given some point or area on
the surface (such as at a vertex or in the patch itself), imagine a hemisphere
above it. Similar to environment mapping, the entire scene can be projected
onto this hemisphere. The form factor is a purely geometric value denoting
the proportion of how much light travels directly from one patch to another.
A significant part of the radiosity algorithm is accurately determining the
form factors between the receiving patch and each other patch in the scene.
The area, distance, and orientations of both patches affect this value.
The basic form of a differential form factor, fij, between a surface point
with differential area dai and another surface point with differential area daj, is

fij = (cos θi cos θj / (π d²)) hij,
where θi and θj are the angles between the ray connecting the two points
and the two surface normals. If the receiving patch faces away from the
viewed patch, or vice versa, the form factor is 0, since no light can travel
from one to the other. Furthermore, hij is a visibility factor, which is either
0 (not visible) or 1 (visible). This value describes whether the two points
can “see” each other, and d is the distance between the two points. See
Figure 9.58.


Figure 9.58. The form factor between two surface points.
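The differential form factor can be written directly from the equation above; the cosine and distance inputs in the sketch below are hypothetical values chosen to exercise the obvious sanity checks:

```python
import math

def diff_form_factor(cos_i, cos_j, d, visible=True):
    """Differential form factor between two surface points:
    f_ij = (cos(theta_i) * cos(theta_j)) / (pi * d * d) * h_ij,
    where h_ij is 1 if the points can see each other and 0 otherwise.
    Points facing away from each other exchange no light."""
    if not visible or cos_i <= 0.0 or cos_j <= 0.0:
        return 0.0
    return (cos_i * cos_j) / (math.pi * d * d)

# Two small patches facing each other head-on, one unit apart:
f = diff_form_factor(1.0, 1.0, 1.0)          # 1/pi
# Tilt one patch away, or block visibility, and the factor drops to zero:
f_back = diff_form_factor(-0.5, 1.0, 1.0)
```

Note the inverse-square falloff: doubling the distance d quarters the form factor, matching the geometric intuition that a patch subtends a quarter of the solid angle.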
