VR image rendering and processing: how to maximize AR and VR performance with advanced stereo rendering

With Unity 2017.2, we released support for Stereo Instancing for XR devices running on DX11, meaning that developers will have access to even more performance optimizations for HTC Vive, Oculus Rift, and the brand new Windows Mixed Reality immersive headsets. We thought we would take this opportunity to tell you more about this exciting rendering advancement and how you can take advantage of it.


Brief history

One of the unique, and obvious, aspects of XR rendering is the necessity to generate two views, one per eye. We need these two views in order to generate the stereoscopic 3D effect for the viewer. But before we dive deeper into how we could render two viewpoints, let’s take a look into the classic single viewpoint case.


In a traditional rendering environment, we render our scene from a single view. We take our objects and transform them into a space that’s appropriate for rendering. We do this by applying a series of transformations to our objects, where we take them from a locally defined space, into a space that we can draw on our screen.


The classic transformation pipeline starts with objects in their own, local/object space. We then transform the objects with our model or world matrix, in order to bring the objects to world space. World space is a common space for the initial, relative placement of objects. Next, we transform our objects from world space to view space, with our view matrix. Now our objects are arranged relative to our viewpoint. Once in view space, we can project them onto our 2D screen with our projection matrix, putting the objects into clip space. The perspective divide follows, resulting in NDC (normalized device coordinate) space, and finally, the viewport transform is applied, resulting in screen space. Once we are in screen space, we can generate fragments for our render target. For the purposes of our discussion, we will just be rendering to a single render target.

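To make the pipeline concrete, here is a minimal vertex-shader sketch of the first half of that chain. The matrix names follow Unity's built-in conventions (unity_ObjectToWorld and friends) purely for familiarity; treat the exact names as an assumption, and note that the perspective divide and viewport transform happen on the GPU after the vertex shader.

```hlsl
// Minimal sketch of the object -> world -> view -> clip transform.
// Matrix names mirror Unity's built-ins but are only illustrative here;
// any engine supplies equivalents.
float4x4 unity_ObjectToWorld;   // model/world matrix
float4x4 unity_MatrixV;         // view matrix
float4x4 unity_MatrixP;         // projection matrix

float4 TransformToClip(float3 positionOS)
{
    float4 positionWS = mul(unity_ObjectToWorld, float4(positionOS, 1.0)); // object -> world
    float4 positionVS = mul(unity_MatrixV, positionWS);                    // world -> view
    float4 positionCS = mul(unity_MatrixP, positionVS);                    // view -> clip
    // The GPU then performs the perspective divide (clip -> NDC)
    // and the viewport transform (NDC -> screen space) after the vertex shader.
    return positionCS;
}
```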

This series of transformations is sometimes referred to as the “graphics transformation pipeline”, and is a classic technique in rendering.


Additional reference resources:


  • The Direct3D Transformation Pipeline

  • OpenGL Transformation

Besides current XR rendering, there were scenarios where we wanted to present simultaneous viewpoints. Maybe we had split-screen rendering for local multiplayer. We might have had a separate mini-viewpoint that we would use for an in-game map or security camera feed. These alternative views might share scene data with each other, but they often share little else besides the final destination render target.


At a minimum, each view often owns distinctly unique view and projection matrices. In order to composite the final render target, we also need to manipulate other properties of the graphics transformation pipeline. In the ‘early’ days when we had only one render target, we could use viewports to dictate sub-rects on the screen to render into. As GPUs and their corresponding APIs evolved, we were able to render into separate render targets and manually composite them later.


Enter the XRagon

Modern XR devices introduced the requirement of driving two views in order to provide the stereoscopic 3D effect that creates depth for the device wearer. Each view represents an eye. While the two eyes are viewing the same scene from a similar vantage point, each view does possess a unique set of view and projection matrices.


Before proceeding, a quick aside into defining some terminology. These aren’t necessarily industry standard terms, as rendering engineers tend to have varied terms and definitions across different engines and use cases. Treat these terms as a local convenience.


Scene graph – A scene graph is a term used to describe a data structure that organizes the information needed in order to render our scene and is consumed by the renderer. The scene graph can refer to either the scene in its entirety, or the portion visible to the view, which we will call the culled scene graph.


Render loop/pipeline – The render loop refers to the logical architecture of how we compose the rendered frame. A high level example of a render loop could be this:


Culling -> Shadows -> Opaque -> Transparent -> Post Processing -> Present


We go through these stages every frame in order to generate an image to present to the display. We also use the term render pipeline at Unity, as it relates to some upcoming rendering features we are exposing (e.g. the Scriptable Render Pipeline). Render pipeline can be confused with other terms such as the graphics pipeline, which refers to the GPU pipeline that processes draw commands.


OK, with those definitions, we can get back to VR rendering.


Multi-Camera

In order to render the view for each eye, the simplest method is to run the render loop twice. Each eye will configure and run through its own iteration of the render loop. At the end, we will have two images that we can submit to the display device. The underlying implementation uses two Unity cameras, one for each eye, and they run through the process of generating the stereo images. This was the initial method of XR support in Unity, and is still provided by 3rd party headset plugins.


While this method certainly works, Multi-Camera relies on brute force, and is the least efficient as far as the CPU and GPU are concerned. The CPU has to iterate twice through the render loop completely, and the GPU is likely not able to take advantage of any caching of objects drawn twice across the eyes.



Multi-Pass

Multi-Pass was Unity’s initial attempt to optimize the XR render loop. The core idea was to extract portions of the render loop that were view-independent. This means that any work that is not explicitly reliant on the XR eye viewpoints doesn’t need to be done per eye.


The most obvious candidate for this optimization would be shadow rendering. Shadows are not explicitly reliant on the camera viewer location. Unity actually implements shadows in two steps: generate cascaded shadow maps and then map the shadows into screen space. For multi-pass, we can generate one set of cascaded shadow maps, and then generate two screen space shadow maps, as the screen space shadow maps are dependent on the viewer location. Because of how our shadow generation is architected, the screen space shadow maps benefit from locality as the shadow map generation loop is relatively tightly coupled. This can be compared to the remaining render workload, which requires a full iteration over the render loop before returning to a similar stage (e.g. the eye specific opaque passes are separated by the remaining render loop stages).


The other step that can be shared between the two eyes might not be obvious at first: we can perform a single cull between the two eyes. With our initial implementation, we used frustum culling to generate two lists of objects, one per eye. However, we could create a unified culling frustum shared between our two eyes (see this post by Cass Everitt). This will mean that each eye will render a little bit extra than they would with a single eye culling frustum, but we considered the benefits of a single cull to outweigh the cost of some extra vertex shaders, clipping, and rasterization.


Multi-Pass offered us some nice savings over Multi-Camera, but there was still more to do. Which brought us to…


Single-Pass

Single-Pass Stereo Rendering means that we will make a single traversal of the entire render loop, instead of traversing it twice, or traversing certain portions of it twice.


In order to perform both draws, we need to make sure that we have all the constant data bound, along with an index.

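As a rough sketch of what that bound constant data can look like on the shader side, here is a per-eye constant buffer layout modeled on Unity's stereo built-ins (unity_StereoMatrixVP, unity_StereoEyeIndex, and so on). Treat the exact buffer layout and names as assumptions for illustration:

```hlsl
// Both eyes' view-dependent constants are bound up front as two-element arrays,
// plus a single index that says which eye the current draw belongs to.
cbuffer UnityStereoGlobals
{
    float4x4 unity_StereoMatrixV[2];    // per-eye view matrices
    float4x4 unity_StereoMatrixP[2];    // per-eye projection matrices
    float4x4 unity_StereoMatrixVP[2];   // per-eye view * projection
    // ...plus any other per-eye data, such as the eye's world-space position
};

cbuffer UnityStereoEyeIndex
{
    int unity_StereoEyeIndex;           // 0 = left eye, 1 = right eye
};
```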

What about the draws themselves? How can we perform each draw? In Multi-Pass, the two eyes each have their own render target, but we can’t do that for Single-Pass because the cost of toggling render targets for consecutive draw calls would be prohibitive. A similar option would be to use render target arrays, but we would need to export the slice index out of the geometry shader on most platforms, which can also be expensive on the GPU, and invasive for existing shaders.


The solution we settled upon was to use a Double-Wide render target, and switch the viewport between draw calls, allowing each eye to render into half of the Double-Wide render target. While switching viewports does incur a cost, it’s less than switching render targets, and less invasive than using the geometry shader (though Double-Wide presents its own set of challenges, particularly with post-processing). There is also the related option of using viewport arrays, but they have the same issue as render target arrays, in that the index can only be exported from a geometry shader. There is yet another technique that uses dynamic clipping, which we won’t explore here.

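To illustrate one of those Double-Wide wrinkles: with both eyes packed side by side in a single target, any screen-space texture read (post-processing being the usual suspect) has to be remapped into the current eye's half. A hedged sketch, modeled loosely on Unity's scale-and-offset approach; the helper name below is illustrative:

```hlsl
// For a Double-Wide target, each eye occupies half the texture in X.
// unity_StereoScaleOffset packs a per-eye scale (xy) and offset (zw);
// for Double-Wide that is roughly (0.5, 1, 0, 0) for the left eye
// and (0.5, 1, 0.5, 0) for the right eye.
float4 unity_StereoScaleOffset[2];
int    unity_StereoEyeIndex;

float2 TransformStereoScreenSpaceUV(float2 uv)
{
    float4 scaleOffset = unity_StereoScaleOffset[unity_StereoEyeIndex];
    return uv * scaleOffset.xy + scaleOffset.zw;   // remap into this eye's half
}
```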

Now that we have a solution to kick off two consecutive draws in order to render both eyes, we need to configure our supporting infrastructure. In Multi-Pass, because it was similar to monoscopic rendering, we could use our existing view and projection matrix infrastructure. We simply had to replace the view and projection matrix with the matrices sourced from the current eye. However, with single-pass, we don’t want to toggle constant buffer bindings unnecessarily. So instead, we bind both eyes’ view and projection matrices and index into them with unity_StereoEyeIndex, which we can flip between the draws. This allows our shader infrastructure to choose which set of view and projection matrices to render with, inside the shader pass.

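Inside the vertex shader, choosing the correct matrices then becomes a simple array lookup. A minimal sketch, again treating the exact constant names as assumptions based on Unity's stereo built-ins:

```hlsl
float4x4 unity_ObjectToWorld;        // per-object model matrix
float4x4 unity_StereoMatrixVP[2];    // per-eye view * projection, bound once for both draws
int      unity_StereoEyeIndex;       // flipped between the left-eye and right-eye draw

float4 VertStereo(float3 positionOS : POSITION) : SV_POSITION
{
    float4 positionWS = mul(unity_ObjectToWorld, float4(positionOS, 1.0));
    // Pick this draw's view-projection matrix with the eye index.
    return mul(unity_StereoMatrixVP[unity_StereoEyeIndex], positionWS);
}
```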

One extra detail: In order to minimize our viewport and unity_StereoEyeIndex state changes, we can modify our eye draw pattern. Instead of drawing left, right, left, right, and so on, we can instead use the left, right, right, left, left, etc. cadence. This allows us to halve the number of state updates compared to the alternating cadence.


This isn’t exactly twice as fast as Multi-Pass. This is because we were already optimized for culling and shadows, along with the fact that we are still dispatching a draw per eye and switching viewports, which does incur some CPU and GPU cost.


There is more information in the Unity Manual page for Single-Pass Stereo Rendering.


Stereo Instancing (Single-Pass Instanced)

Previously, we mentioned the possibility of using a render target array. Render target arrays are a natural solution for stereo rendering. The eye textures share format and size, qualifying them to be used in a render target array. But using the geometry shader in order to export the array slice is a large drawback. What we really want is the ability to export the render target array index from the vertex shader, allowing for simpler integration and better performance.


The ability to export render target array index from the vertex shader does actually exist on some GPUs and APIs, and is becoming more prevalent. On DX11, this functionality is exposed as a feature option, VPAndRTArrayIndexFromAnyShaderFeedingRasterizer.


Now that we can dictate which slice of our render target array we will render to, how can we select the slice? We leverage the existing infrastructure from Single-Pass Double-Wide. We can use unity_StereoEyeIndex to populate the SV_RenderTargetArrayIndex semantic in the shader. On the API side, we no longer need to toggle the viewport, as the same viewport can be used for both slices of the render target array. And we already have our matrices configured to be indexed from the vertex shader.

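A sketch of what that looks like in a vertex shader: SV_RenderTargetArrayIndex is the actual DX11 semantic, while the surrounding struct and constant names are illustrative:

```hlsl
// Select the render target array slice directly from the vertex shader.
// Requires VPAndRTArrayIndexFromAnyShaderFeedingRasterizer support on DX11.
float4x4 unity_ObjectToWorld;
float4x4 unity_StereoMatrixVP[2];
int      unity_StereoEyeIndex;

struct VertexOutput
{
    float4 positionCS : SV_POSITION;
    uint   slice      : SV_RenderTargetArrayIndex;  // which eye texture to rasterize into
};

VertexOutput VertStereoArray(float3 positionOS : POSITION)
{
    VertexOutput o;
    float4 positionWS = mul(unity_ObjectToWorld, float4(positionOS, 1.0));
    o.positionCS = mul(unity_StereoMatrixVP[unity_StereoEyeIndex], positionWS);
    o.slice = unity_StereoEyeIndex;   // the eye index doubles as the array slice
    return o;
}
```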

Though we could continue to use the existing technique of issuing two draws and toggling the value of unity_StereoEyeIndex in the constant buffer before each draw, there is a more efficient technique. We can use GPU Instancing in order to issue a single draw call and allow the GPU to multiplex our draws across both eyes. We can double the existing instance count of a draw (if there is no existing instance usage, we just set the instance count to 2). Then in the vertex shader, we can decode the instance ID in order to determine which eye we are rendering to.

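A sketch of that decode step, modeled on the approach used in Unity's instancing shader includes (the exact macros and names vary by version, so treat this as illustrative):

```hlsl
// The CPU doubles the instance count of each draw; the shader then splits the
// hardware instance ID back into (eye index, original instance index).
void SetupStereoInstanceID(uint hardwareInstanceID,
                           out uint eyeIndex,
                           out uint instanceIndex)
{
    eyeIndex      = hardwareInstanceID & 0x01;  // even instances -> left eye, odd -> right eye
    instanceIndex = hardwareInstanceID >> 1;    // recover the caller's original instance index
}
```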

The biggest impact of using this technique is we literally halve the number of draw calls we generate on the API side, saving a chunk of CPU time. Additionally, the GPU itself is able to more efficiently process the draws, even though the same amount of work is being generated, since it doesn’t have to process two individual draw calls. We also minimize state updates by not having to change the viewport between draws, like we do in traditional Single-Pass.


Please note: This will only be available to users running their desktop VR experiences on Windows 10 or HoloLens.


Single-Pass Multi-View

Multi-View is an extension available on certain OpenGL/OpenGL ES implementations where the driver itself handles the multiplexing of individual draw calls across both eyes. Instead of explicitly instancing the draw call and decoding the instance into an eye index in the shader, the driver is responsible for duplicating the draws and generating the array index (via gl_ViewID) in the shader.


There is one underlying implementation detail that differs from stereo instancing: instead of the vertex shader explicitly selecting the render target array slice which will be rasterized to, the driver itself determines the render target. gl_ViewID is used to compute view dependent state, but not to select the render target. In usage, it doesn’t matter much to the developer, but is an interesting detail.


Because of how we use the Multi-View extension, we are able to use the same infrastructure that we built for Single-Pass Instancing. Developers are able to use the same scaffolding to support both Single-Pass techniques.


High level performance overview

At Unite Austin 2017, the XR Graphics team presented on some of the XR Graphics infrastructure, and had a quick discussion on the performance impact of the various stereo rendering modes (you can watch the talk here). A proper performance analysis could belong in its own blog, but we can quickly go over this chart.


As you can see, Single-Pass and Single-Pass Instancing represent a significant CPU advantage over Multi-Pass. However, the delta between Single-Pass and Single-Pass Instancing is relatively small. The reasoning is that the bulk of the CPU overhead is already saved by switching to Single-Pass. Single-Pass Instancing does reduce the number of draw calls, but that cost is quite low compared to processing the scene graph. And when you consider most modern graphics drivers are multi-threaded, issuing draw calls can be quite fast on the dispatching CPU thread.


Translated from: https://blogs.unity3d.com/2017/11/21/how-to-maximize-ar-and-vr-performance-with-advanced-stereo-rendering/
