A few words before the main text

This article is a translation of Google's Graphics Architecture document:

https://source.android.com/devices/graphics/architecture.html

The document helped me a great deal, so I decided to translate it as a way of studying it more carefully.

There are probably places where the translation fails to convey the original meaning, and a few spots I did not know how to translate (these were marked in yellow in my draft).

If you are interested in Android graphics, I suggest reading the English original directly, so you won't be misled by my translation.

Graphics Architecture

What every developer should know about Surface, SurfaceHolder, EGLSurface, SurfaceView, GLSurfaceView, SurfaceTexture, TextureView, and SurfaceFlinger

This document describes the essential elements of Android’s “system-level” graphics architecture, and how it is used by the application framework and multimedia system. The focus is on how buffers of graphical data move through the system. If you’ve ever wondered why SurfaceView and TextureView behave the way they do, or how Surface and EGLSurface interact, you’ve come to the right place.

Some familiarity with Android devices and application development is assumed. You don’t need detailed knowledge of the app framework, and very few API calls will be mentioned, but the material herein doesn’t overlap much with other public documentation. The goal here is to provide a sense for the significant events involved in rendering a frame for output, so that you can make informed choices when designing an application. To achieve this, we work from the bottom up, describing how the UI classes work rather than how they can be used.

Early sections contain background material used in later sections, so it’s a good idea to read straight through rather than skipping to a section that sounds interesting. We start with an explanation of Android’s graphics buffers, describe the composition and display mechanism, and then proceed to the higher-level mechanisms that supply the compositor with data.

This document is chiefly concerned with the system as it exists in Android 4.4 (“KitKat”). Earlier versions of the system worked differently, and future versions will likely be different as well. Version-specific features are called out in a few places.

At various points I will refer to source code from the AOSP sources or from Grafika. Grafika is a Google open source project for testing; it can be found at https://github.com/google/grafika. It’s more “quick hack” than solid example code, but it will suffice.

BufferQueue & Gralloc

To understand how Android’s graphics system works, we have to start behind the scenes. At the heart of everything graphical in Android is a class called BufferQueue. Its role is simple enough: connect something that generates buffers of graphical data (the “producer”) to something that accepts the data for display or further processing (the “consumer”). The producer and consumer can live in different processes. Nearly everything that moves buffers of graphical data through the system relies on BufferQueue.

The basic usage is straightforward. The producer requests a free buffer (dequeueBuffer()), specifying a set of characteristics including width, height, pixel format, and usage flags. The producer populates the buffer and returns it to the queue (queueBuffer()). Some time later, the consumer acquires the buffer (acquireBuffer()) and makes use of the buffer contents. When the consumer is done, it returns the buffer to the queue (releaseBuffer()).
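
To make the four calls concrete, here is a toy Java model of the hand-off. It is purely illustrative: the real BufferQueue is native C++ code (and usually crosses process boundaries over Binder), and the class and field names below are invented for the sketch.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Hypothetical model of the BufferQueue hand-off; not a real Android API.
    class BufferQueueSketch {
        // A buffer travels by handle; its contents are never copied.
        static class BufferHandle { final int id; BufferHandle(int id) { this.id = id; } }

        private final BlockingQueue<BufferHandle> free = new ArrayBlockingQueue<>(3);
        private final BlockingQueue<BufferHandle> queued = new ArrayBlockingQueue<>(3);

        BufferQueueSketch() { for (int i = 0; i < 3; i++) free.add(new BufferHandle(i)); }

        // Producer side.
        BufferHandle dequeueBuffer() throws InterruptedException { return free.take(); }
        void queueBuffer(BufferHandle b) { queued.add(b); }

        // Consumer side.
        BufferHandle acquireBuffer() throws InterruptedException { return queued.take(); }
        void releaseBuffer(BufferHandle b) { free.add(b); }
    }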

Most recent Android devices support the “sync framework”. This allows the system to do some nifty things when combined with hardware components that can manipulate graphics data asynchronously. For example, a producer can submit a series of OpenGL ES drawing commands and then enqueue the output buffer before rendering completes. The buffer is accompanied by a fence that signals when the contents are ready. A second fence accompanies the buffer when it is returned to the free list, so that the consumer can release the buffer while the contents are still in use. This approach improves latency and throughput as the buffers move through the system.

The BufferQueue is responsible for allocating buffers as it needs them. Buffers are retained unless the characteristics change; for example, if the producer starts requesting buffers with a different size, the old buffers will be freed and new buffers will be allocated on demand.

Buffer contents are never copied by BufferQueue. Moving that much data around would be very inefficient. Instead, buffers are always passed by handle.

Gralloc HAL

The actual buffer allocations are performed through a memory allocator called “gralloc”, which is implemented through a vendor-specific HAL interface (see hardware/libhardware/include/hardware/gralloc.h). The alloc() function takes the arguments you’d expect – width, height, pixel format – as well as a set of usage flags. Those flags merit closer attention.

The gralloc allocator is not just another way to allocate memory on the native heap. In some situations, the allocated memory may not be cache-coherent, or could be totally inaccessible from user space. The nature of the allocation is determined by the usage flags, which include attributes like:
how often the memory will be accessed from software (CPU)
how often the memory will be accessed from hardware (GPU)
whether the memory will be used as an OpenGL ES (“GLES”) texture
whether the memory will be used by a video encoder

For example, if your format specifies RGBA 8888 pixels, and you indicate the buffer will be accessed from software – meaning your application will touch pixels directly – then the allocator needs to create a buffer with 4 bytes per pixel in R-G-B-A order. If instead you say the buffer will only be accessed from hardware and as a GLES texture, the allocator can do anything the GLES driver wants – BGRA ordering, non-linear “swizzled” layouts, alternative color formats, etc. Allowing the hardware to use its preferred format can improve performance.

Some values cannot be combined on certain platforms. For example, the “video encoder” flag may require YUV pixels, so adding “software access” and specifying RGBA 8888 would fail.

The handle returned by the gralloc allocator can be passed between processes through Binder.

SurfaceFlinger and Hardware Composer

Having buffers of graphical data is wonderful, but life is even better when you get to see them on your device’s screen. That’s where SurfaceFlinger and the Hardware Composer HAL come in.

SurfaceFlinger’s role is to accept buffers of data from multiple sources, composite them, and send them to the display. Once upon a time this was done with software blitting to a hardware framebuffer (e.g. /dev/graphics/fb0), but those days are long gone.

When an app comes to the foreground, the WindowManager service asks SurfaceFlinger for a drawing surface. SurfaceFlinger creates a “layer” - the primary component of which is a BufferQueue - for which SurfaceFlinger acts as the consumer. A Binder object for the producer side is passed through the WindowManager to the app, which can then start sending frames directly to SurfaceFlinger.

Note: The WindowManager uses the term “window” instead of “layer” for this and uses “layer” to mean something else. We’re going to use the SurfaceFlinger terminology. It can be argued that SurfaceFlinger should really be called LayerFlinger.

For most apps, there will be three layers on screen at any time: the “status bar” at the top of the screen, the “navigation bar” at the bottom or side, and the application’s UI. Some apps will have more or less, e.g. the default home app has a separate layer for the wallpaper, while a full-screen game might hide the status bar. Each layer can be updated independently. The status and navigation bars are rendered by a system process, while the app layers are rendered by the app, with no coordination between the two.

Device displays refresh at a certain rate, typically 60 frames per second on phones and tablets. If the display contents are updated mid-refresh, “tearing” will be visible; so it’s important to update the contents only between cycles. The system receives a signal from the display when it’s safe to update the contents. For historical reasons we’ll call this the VSYNC signal.
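
From the app side, the closest public analogue to this signal is Choreographer, which delivers a callback aligned with the display refresh. A minimal sketch follows (the callback and re-posting pattern are the real android.view.Choreographer API; the class name is mine):

    import android.view.Choreographer;

    class VsyncPacer implements Choreographer.FrameCallback {
        void start() { Choreographer.getInstance().postFrameCallback(this); }

        @Override
        public void doFrame(long frameTimeNanos) {
            // frameTimeNanos is the VSYNC timestamp; draw the next frame here,
            // then re-post to stay on the refresh cadence.
            Choreographer.getInstance().postFrameCallback(this);
        }
    }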

The refresh rate may vary over time, e.g. some mobile devices will range from 58 to 62fps depending on current conditions. For an HDMI-attached television, this could theoretically dip to 24 or 48Hz to match a video. Because we can update the screen only once per refresh cycle, submitting buffers for display at 200fps would be a waste of effort as most of the frames would never be seen. Instead of taking action whenever an app submits a buffer, SurfaceFlinger wakes up when the display is ready for something new.

When the VSYNC signal arrives, SurfaceFlinger walks through its list of layers looking for new buffers. If it finds a new one, it acquires it; if not, it continues to use the previously-acquired buffer. SurfaceFlinger always wants to have something to display, so it will hang on to one buffer. If no buffers have ever been submitted on a layer, the layer is ignored.

Once SurfaceFlinger has collected all of the buffers for visible layers, it asks the Hardware Composer how composition should be performed.

Hardware Composer

The Hardware Composer HAL (“HWC”) was first introduced in Android 3.0 (“Honeycomb”) and has evolved steadily over the years. Its primary purpose is to determine the most efficient way to composite buffers with the available hardware. As a HAL, its implementation is device-specific and usually implemented by the display hardware OEM.

The value of this approach is easy to recognize when you consider “overlay planes.” The purpose of overlay planes is to composite multiple buffers together, but in the display hardware rather than the GPU. For example, suppose you have a typical Android phone in portrait orientation, with the status bar on top and navigation bar at the bottom, and app content everywhere else. The contents for each layer are in separate buffers. You could handle composition by rendering the app content into a scratch buffer, then rendering the status bar over it, then rendering the navigation bar on top of that, and finally passing the scratch buffer to the display hardware. Or, you could pass all three buffers to the display hardware, and tell it to read data from different buffers for different parts of the screen. The latter approach can be significantly more efficient.

As you might expect, the capabilities of different display processors vary significantly. The number of overlays, whether layers can be rotated or blended, and restrictions on positioning and overlap can be difficult to express through an API. So, the HWC works like this:
SurfaceFlinger provides the HWC with a full list of layers, and asks, “how do you want to handle this?”
The HWC responds by marking each layer as “overlay” or “GLES composition.”
SurfaceFlinger takes care of any GLES composition, passing the output buffer to HWC, and lets HWC handle the rest.
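
A hypothetical Java model of this three-step exchange may help; the real interface is the native hwcomposer HAL (its prepare()/set() entry points), so the class names, types, and the naive "first N layers win an overlay" policy below are all invented for illustration:

    import java.util.List;

    // Hypothetical model of the SurfaceFlinger <-> HWC negotiation;
    // the real interface is the native hwcomposer HAL, not a Java API.
    class HwcSketch {
        enum CompositionType { OVERLAY, GLES }
        static class Layer { CompositionType type; }

        // Steps 1 and 2: SurfaceFlinger hands over the full layer list and the
        // HWC marks each layer according to what the hardware can handle.
        void prepare(List<Layer> layers, int availableOverlayPlanes) {
            int used = 0;
            for (Layer layer : layers) {
                boolean fitsOverlay = used < availableOverlayPlanes;
                layer.type = fitsOverlay ? CompositionType.OVERLAY : CompositionType.GLES;
                if (fitsOverlay) used++;
            }
        }
        // Step 3: SurfaceFlinger composites the GLES-marked layers into a scratch
        // buffer (the framebuffer target) and hands everything to the HWC.
    }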

Since the decision-making code can be custom tailored by the hardware vendor, it’s possible to get the best performance out of every device.

Overlay planes may be less efficient than GL composition when nothing on the screen is changing. This is particularly true when the overlay contents have transparent pixels, and overlapping layers are being blended together. In such cases, the HWC can choose to request GLES composition for some or all layers and retain the composited buffer. If SurfaceFlinger comes back again asking to composite the same set of buffers, the HWC can just continue to show the previously-composited scratch buffer. This can improve the battery life of an idle device.

Devices shipping with Android 4.4 (“KitKat”) typically support four overlay planes. Attempting to composite more layers than there are overlays will cause the system to use GLES composition for some of them; so the number of layers used by an application can have a measurable impact on power consumption and performance.

You can see exactly what SurfaceFlinger is up to with the command adb shell dumpsys SurfaceFlinger. The output is verbose. The part most relevant to our current discussion is the HWC summary that appears near the bottom of the output:

    type      |          source crop              |           frame           name
    HWC       | [    0.0,    0.0,  320.0,  240.0] | [   48,  411, 1032, 1149] SurfaceView
    HWC       | [    0.0,   75.0, 1080.0, 1776.0] | [    0,   75, 1080, 1776] com.android.grafika/com.android.grafika.PlayMovieSurfaceActivity
    HWC       | [    0.0,    0.0, 1080.0,   75.0] | [    0,    0, 1080,   75] StatusBar
    HWC       | [    0.0,    0.0, 1080.0,  144.0] | [    0, 1776, 1080, 1920] NavigationBar
    FB TARGET | [    0.0,    0.0, 1080.0, 1920.0] | [    0,    0, 1080, 1920] HWC_FRAMEBUFFER_TARGET

This tells you what layers are on screen, whether they’re being handled with overlays (“HWC”) or OpenGL ES composition (“GLES”), and gives you a bunch of other facts you probably won’t care about (“handle” and “hints” and “flags” and other stuff that we’ve trimmed out of the snippet above). The “source crop” and “frame” values will be examined more closely later on.

The FB_TARGET layer is where GLES composition output goes. Since all layers shown above are using overlays, FB_TARGET isn’t being used for this frame. The layer’s name is indicative of its original role: On a device with /dev/graphics/fb0 and no overlays, all composition would be done with GLES, and the output would be written to the framebuffer. On recent devices there generally is no simple framebuffer, so the FB_TARGET layer is a scratch buffer.

Note: This is why screen grabbers written for old versions of Android no longer work: They’re trying to read from the Framebuffer, but there is no such thing.

The overlay planes have another important role: they’re the only way to display DRM content. DRM-protected buffers cannot be accessed by SurfaceFlinger or the GLES driver, which means that your video will disappear if HWC switches to GLES composition.

The Need for Triple-Buffering

To avoid tearing on the display, the system needs to be double-buffered: the front buffer is displayed while the back buffer is being prepared. At VSYNC, if the back buffer is ready, you quickly switch them. This works reasonably well in a system where you’re drawing directly into the framebuffer, but there’s a hitch in the flow when a composition step is added. Because of the way SurfaceFlinger is triggered, our double-buffered pipeline will have a bubble.

Suppose frame N is being displayed, and frame N+1 has been acquired by SurfaceFlinger for display on the next VSYNC. (Assume frame N is composited with an overlay, so we can’t alter the buffer contents until the display is done with it.) When VSYNC arrives, HWC flips the buffers. While the app is starting to render frame N+2 into the buffer that used to hold frame N, SurfaceFlinger is scanning the layer list, looking for updates. SurfaceFlinger won’t find any new buffers, so it prepares to show frame N+1 again after the next VSYNC. A little while later, the app finishes rendering frame N+2 and queues it for SurfaceFlinger, but it’s too late. This has effectively cut our maximum frame rate in half.

We can fix this with triple-buffering. Just before VSYNC, frame N is being displayed, frame N+1 has been composited (or scheduled for an overlay) and is ready to be displayed, and frame N+2 is queued up and ready to be acquired by SurfaceFlinger. When the screen flips, the buffers rotate through the stages with no bubble. The app has just less than a full VSYNC period (16.7ms at 60fps) to do its rendering and queue the buffer. And SurfaceFlinger / HWC has a full VSYNC period to figure out the composition before the next flip. The downside is that it takes at least two VSYNC periods for anything that the app does to appear on the screen. As the latency increases, the device feels less responsive to touch input.

Figure 1. SurfaceFlinger + BufferQueue

The diagram above depicts the flow of SurfaceFlinger and BufferQueue. During frame:
red buffer fills up, then slides into BufferQueue
after red buffer leaves app, blue buffer slides in, replacing it
green buffer and systemUI* shadow-slide into HWC (showing that SurfaceFlinger still has the buffers, but now HWC has prepared them for display via overlay on the next VSYNC).

The blue buffer is referenced by both the display and the BufferQueue. The app is not allowed to render to it until the associated sync fence signals.

On VSYNC, all of these happen at once:
red buffer leaps into SurfaceFlinger, replacing green buffer
green buffer leaps into Display, replacing blue buffer, and a dotted-line green twin appears in the BufferQueue
the blue buffer’s fence is signaled, and the blue buffer in App empties**
display rect changes from <blue + SystemUI*> to <green + SystemUI*>

Surface and SurfaceHolder

The Surface class has been part of the public API since 1.0. Its description simply says, “Handle onto a raw buffer that is being managed by the screen compositor.” The statement was accurate when initially written but falls well short of the mark on a modern system.

The Surface represents the producer side of a buffer queue that is often (but not always!) consumed by SurfaceFlinger. When you render onto a Surface, the result ends up in a buffer that gets shipped to the consumer. A Surface is not simply a raw chunk of memory you can scribble on.

The BufferQueue for a display Surface is typically configured for triple-buffering; but buffers are allocated on demand. So if the producer generates buffers slowly enough – maybe it’s animating at 30fps on a 60fps display – there might only be two allocated buffers in the queue. This helps minimize memory consumption. You can see a summary of the buffers associated with every layer in the dumpsys SurfaceFlinger output.

Canvas Rendering

Once upon a time, all rendering was done in software, and you can still do this today. The low-level implementation is provided by the Skia graphics library. If you want to draw a rectangle, you make a library call, and it sets bytes in a buffer appropriately. To ensure that a buffer isn’t updated by two clients at once, or written to while being displayed, you have to lock the buffer to access it. lockCanvas() locks the buffer and returns a Canvas to use for drawing, and unlockCanvasAndPost() unlocks the buffer and sends it to the compositor.
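
In code, the lock/draw/unlock cycle looks roughly like this (lockCanvas() and unlockCanvasAndPost() are the real SurfaceHolder methods; the wrapper class and the blue fill are just for the example):

    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.view.SurfaceHolder;

    class SoftwareRenderer {
        void drawFrame(SurfaceHolder holder) {
            Canvas canvas = holder.lockCanvas(); // locks the buffer for CPU access
            if (canvas == null) return;          // surface not ready or already gone
            try {
                canvas.drawColor(Color.BLUE);    // software rendering via Skia
            } finally {
                holder.unlockCanvasAndPost(canvas); // sends the buffer to the compositor
            }
        }
    }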

As time went on, and devices with general-purpose 3D engines appeared, Android reoriented itself around OpenGL ES. However, it was important to keep the old API working, for apps as well as app framework code, so an effort was made to hardware-accelerate the Canvas API. As you can see from the charts on the Hardware Acceleration page, this was a bit of a bumpy ride. Note in particular that while the Canvas provided to a View’s onDraw() method may be hardware-accelerated, the Canvas obtained when an app locks a Surface directly with lockCanvas() never is.

When you lock a Surface for Canvas access, the “CPU renderer” connects to the producer side of the BufferQueue and does not disconnect until the Surface is destroyed. Most other producers (like GLES) can be disconnected and reconnected to a Surface, but the Canvas-based “CPU renderer” cannot. This means you can’t draw on a surface with GLES or send it frames from a video decoder if you’ve ever locked it for a Canvas.

The first time the producer requests a buffer from a BufferQueue, it is allocated and initialized to zeroes. Initialization is necessary to avoid inadvertently sharing data between processes. When you re-use a buffer, however, the previous contents will still be present. If you repeatedly call lockCanvas() and unlockCanvasAndPost() without drawing anything, you’ll cycle between previously-rendered frames.

The Surface lock/unlock code keeps a reference to the previously-rendered buffer. If you specify a dirty region when locking the Surface, it will copy the non-dirty pixels from the previous buffer. There’s a fair chance the buffer will be handled by SurfaceFlinger or HWC; but since we need to only read from it, there’s no need to wait for exclusive access.

The main non-Canvas way for an application to draw directly on a Surface is through OpenGL ES. That’s described in the EGLSurface and OpenGL ES section.

SurfaceHolder

Some things that work with Surfaces want a SurfaceHolder, notably SurfaceView. The original idea was that Surface represented the raw compositor-managed buffer, while SurfaceHolder was managed by the app and kept track of higher-level information like the dimensions and format. The Java-language definition mirrors the underlying native implementation. It’s arguably no longer useful to split it this way, but it has long been part of the public API.

Generally speaking, anything having to do with a View will involve a SurfaceHolder. Some other APIs, such as MediaCodec, will operate on the Surface itself. You can easily get the Surface from the SurfaceHolder, so hang on to the latter when you have it.

APIs to get and set Surface parameters, such as the size and format, are implemented through SurfaceHolder.
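
For example (setFormat() and getSurface() are real SurfaceHolder methods; the class name and the RGBA_8888 choice are just for the sketch):

    import android.graphics.PixelFormat;
    import android.view.Surface;
    import android.view.SurfaceHolder;
    import android.view.SurfaceView;

    class HolderConfig {
        Surface configure(SurfaceView view) {
            SurfaceHolder holder = view.getHolder();
            holder.setFormat(PixelFormat.RGBA_8888); // pixel format of the buffers
            return holder.getSurface();              // the producer-side endpoint
        }
    }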

EGLSurface and OpenGL ES

OpenGL ES defines an API for rendering graphics. It does not define a windowing system. To allow GLES to work on a variety of platforms, it is designed to be combined with a library that knows how to create and access windows through the operating system. The library used for Android is called EGL. If you want to draw textured polygons, you use GLES calls; if you want to put your rendering on the screen, you use EGL calls.

Before you can do anything with GLES, you need to create a GL context. In EGL, this means creating an EGLContext and an EGLSurface. GLES operations apply to the current context, which is accessed through thread-local storage rather than passed around as an argument. This means you have to be careful about which thread your rendering code executes on, and which context is current on that thread.

The EGLSurface can be an off-screen buffer allocated by EGL (called a “pbuffer”) or a window allocated by the operating system. EGL window surfaces are created with the eglCreateWindowSurface() call. It takes a “window object” as an argument, which on Android can be a SurfaceView, a SurfaceTexture, a SurfaceHolder, or a Surface – all of which have a BufferQueue underneath. When you make this call, EGL creates a new EGLSurface object, and connects it to the producer interface of the window object’s BufferQueue. From that point onward, rendering to that EGLSurface results in a buffer being dequeued, rendered into, and queued for use by the consumer. (The term “window” is indicative of the expected use, but bear in mind the output might not be destined to appear on the display.)
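
A condensed sketch of that sequence using the android.opengl.EGL14 bindings (available since API 17) follows; error checking is omitted and the config attributes are just one reasonable choice:

    import android.opengl.EGL14;
    import android.opengl.EGLConfig;
    import android.opengl.EGLContext;
    import android.opengl.EGLDisplay;
    import android.opengl.EGLSurface;
    import android.view.Surface;

    class EglSetup {
        EGLDisplay display;
        EGLContext context;
        EGLSurface eglSurface;

        void create(Surface surface) {
            display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
            int[] version = new int[2];
            EGL14.eglInitialize(display, version, 0, version, 1);

            int[] attribs = {
                    EGL14.EGL_RED_SIZE, 8, EGL14.EGL_GREEN_SIZE, 8, EGL14.EGL_BLUE_SIZE, 8,
                    EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT, EGL14.EGL_NONE };
            EGLConfig[] configs = new EGLConfig[1];
            int[] numConfigs = new int[1];
            EGL14.eglChooseConfig(display, attribs, 0, configs, 0, 1, numConfigs, 0);

            int[] ctxAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
            context = EGL14.eglCreateContext(display, configs[0], EGL14.EGL_NO_CONTEXT,
                    ctxAttribs, 0);

            // Connects the EGLSurface as the producer of the Surface's BufferQueue.
            eglSurface = EGL14.eglCreateWindowSurface(display, configs[0], surface,
                    new int[] { EGL14.EGL_NONE }, 0);

            // Binds the context and surface to the calling thread.
            EGL14.eglMakeCurrent(display, eglSurface, eglSurface, context);
        }

        void present() {
            // Queues the rendered buffer for the consumer; see eglSwapBuffers() below.
            EGL14.eglSwapBuffers(display, eglSurface);
        }
    }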

EGL does not provide lock/unlock calls. Instead, you issue drawing commands and then call eglSwapBuffers() to submit the current frame. The method name comes from the traditional swap of front and back buffers, but the actual implementation may be very different.

Only one EGLSurface can be associated with a Surface at a time – you can have only one producer connected to a BufferQueue – but if you destroy the EGLSurface it will disconnect from the BufferQueue and allow something else to connect.

A given thread can switch between multiple EGLSurfaces by changing what’s “current.” An EGLSurface must be current on only one thread at a time.

The most common mistake when thinking about EGLSurface is assuming that it is just another aspect of Surface (like SurfaceHolder). It’s a related but independent concept. You can draw on an EGLSurface that isn’t backed by a Surface, and you can use a Surface without EGL. EGLSurface just gives GLES a place to draw.

ANativeWindow

The public Surface class is implemented in the Java programming language. The equivalent in C/C++ is the ANativeWindow class, semi-exposed by the Android NDK. You can get the ANativeWindow from a Surface with the ANativeWindow_fromSurface() call. Just like its Java-language cousin, you can lock it, render in software, and unlock-and-post.

To create an EGL window surface from native code, you pass an instance of EGLNativeWindowType to eglCreateWindowSurface(). EGLNativeWindowType is just a synonym for ANativeWindow, so you can freely cast one to the other.

The fact that the basic “native window” type just wraps the producer side of a BufferQueue should not come as a surprise.

SurfaceView and GLSurfaceView

Now that we’ve explored the lower-level components, it’s time to see how they fit into the higher-level components that apps are built from.

The Android app framework UI is based on a hierarchy of objects that start with View. Most of the details don’t matter for this discussion, but it’s helpful to understand that UI elements go through a complicated measurement and layout process that fits them into a rectangular area. All visible View objects are rendered to a SurfaceFlinger-created Surface that was set up by the WindowManager when the app was brought to the foreground. The layout and rendering is performed on the app’s UI thread.

Regardless of how many Layouts and Views you have, everything gets rendered into a single buffer. This is true whether or not the Views are hardware-accelerated.

A SurfaceView takes the same sorts of parameters as other views, so you can give it a position and size, and fit other elements around it. When it comes time to render, however, the contents are completely transparent. The View part of a SurfaceView is just a see-through placeholder.

When the SurfaceView’s View component is about to become visible, the framework asks the WindowManager to ask SurfaceFlinger to create a new Surface. (This doesn’t happen synchronously, which is why you should provide a callback that notifies you when the Surface creation finishes.) By default, the new Surface is placed behind the app UI Surface, but the default “Z-ordering” can be overridden to put the Surface on top.
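
Since Surface creation is asynchronous, the usual pattern is a SurfaceHolder.Callback (a real framework interface; the class name and method bodies below are illustrative):

    import android.view.SurfaceHolder;
    import android.view.SurfaceView;

    class VideoSurfaceClient implements SurfaceHolder.Callback {
        VideoSurfaceClient(SurfaceView view) {
            view.setZOrderMediaOverlay(false); // keep the default Z-order, behind the app UI
            view.getHolder().addCallback(this);
        }

        @Override public void surfaceCreated(SurfaceHolder holder) {
            // The Surface now exists; it is safe to start rendering to it.
        }
        @Override public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) { }
        @Override public void surfaceDestroyed(SurfaceHolder holder) {
            // Stop rendering before returning; the Surface is going away.
        }
    }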

Whatever you render onto this Surface will be composited by SurfaceFlinger, not by the app. This is the real power of SurfaceView: the Surface you get can be rendered by a separate thread or a separate process, isolated from any rendering performed by the app UI, and the buffers go directly to SurfaceFlinger. You can’t totally ignore the UI thread – you still have to coordinate with the Activity lifecycle, and you may need to adjust something if the size or position of the View changes – but you have a whole Surface all to yourself, and blending with the app UI and other layers is handled by the Hardware Composer.

It’s worth taking a moment to note that this new Surface is the producer side of a BufferQueue whose consumer is a SurfaceFlinger layer. You can update the Surface with any mechanism that can feed a BufferQueue. You can: use the Surface-supplied Canvas functions, attach an EGLSurface and draw on it with GLES, and configure a MediaCodec video decoder to write to it.

Composition and the Hardware Scaler

Now that we have a bit more context, it’s useful to go back and look at a couple of fields from dumpsys SurfaceFlinger that we skipped over earlier on. Back in the Hardware Composer discussion, we looked at some output like this:

    type      |          source crop              |           frame           name
    HWC       | [    0.0,    0.0,  320.0,  240.0] | [   48,  411, 1032, 1149] SurfaceView
    HWC       | [    0.0,   75.0, 1080.0, 1776.0] | [    0,   75, 1080, 1776] com.android.grafika/com.android.grafika.PlayMovieSurfaceActivity
    HWC       | [    0.0,    0.0, 1080.0,   75.0] | [    0,    0, 1080,   75] StatusBar
    HWC       | [    0.0,    0.0, 1080.0,  144.0] | [    0, 1776, 1080, 1920] NavigationBar
    FB TARGET | [    0.0,    0.0, 1080.0, 1920.0] | [    0,    0, 1080, 1920] HWC_FRAMEBUFFER_TARGET

This was taken while playing a movie in Grafika’s “Play video (SurfaceView)” activity, on a Nexus 5 in portrait orientation. Note that the list is ordered from back to front: the SurfaceView’s Surface is in the back, the app UI layer sits on top of that, followed by the status and navigation bars that are above everything else. The video is QVGA (320x240).

The “source crop” indicates the portion of the Surface’s buffer that SurfaceFlinger is going to display. The app UI was given a Surface equal to the full size of the display (1080x1920), but there’s no point rendering and compositing pixels that will be obscured by the status and navigation bars, so the source is cropped to a rectangle that starts 75 pixels from the top, and ends 144 pixels from the bottom. The status and navigation bars have smaller Surfaces, and the source crop describes a rectangle that begins at the top left (0,0) and spans their content.

The “frame” is the rectangle where the pixels end up on the display. For the app UI layer, the frame matches the source crop, because we’re copying (or overlaying) a portion of a display-sized layer to the same location in another display-sized layer. For the status and navigation bars, the size of the frame rectangle is the same, but the position is adjusted so that the navigation bar appears at the bottom of the screen.

Now consider the layer labeled “SurfaceView”, which holds our video content. The source crop matches the video size, which SurfaceFlinger knows because the MediaCodec decoder (the buffer producer) is dequeuing buffers that size. The frame rectangle has a completely different size – 984x738.

SurfaceFlinger handles size differences by scaling the buffer contents to fill the frame rectangle, upscaling or downscaling as needed. This particular size was chosen because it has the same aspect ratio as the video (4:3), and is as wide as possible given the constraints of the View layout (which includes some padding at the edges of the screen for aesthetic reasons).

If you started playing a different video on the same Surface, the underlying BufferQueue would reallocate buffers to the new size automatically, and SurfaceFlinger would adjust the source crop. If the aspect ratio of the new video is different, the app would need to force a re-layout of the View to match it, which causes the WindowManager to tell SurfaceFlinger to update the frame rectangle.

If you’re rendering on the Surface through some other means, perhaps GLES, you can set the Surface size using the SurfaceHolder#setFixedSize() call. You could, for example, configure a game to always render at 1280x720, which would significantly reduce the number of pixels that must be touched to fill the screen on a 2560x1440 tablet or 4K television. The display processor handles the scaling. If you don’t want to letter- or pillar-box your game, you could adjust the game’s aspect ratio by setting the size so that the narrow dimension is 720 pixels, but the long dimension is set to maintain the aspect ratio of the physical display (e.g. 1152x720 to match a 2560x1600 display). You can see an example of this approach in Grafika’s “Hardware scaler exerciser” activity.
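
The call itself is a one-liner (SurfaceHolder#setFixedSize() is the real API; the 1280x720 target is the example from the text):

    import android.view.SurfaceView;

    class ScalerConfig {
        void useHardwareScaler(SurfaceView surfaceView) {
            // Buffers stay 1280x720 regardless of the panel resolution; the
            // display processor scales them to the frame rectangle.
            surfaceView.getHolder().setFixedSize(1280, 720);
        }
    }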

GLSurfaceView

The GLSurfaceView class provides some helper classes that help manage EGL contexts, inter-thread communication, and interaction with the Activity lifecycle. That’s it. You do not need to use a GLSurfaceView to use GLES.

For example, GLSurfaceView creates a thread for rendering and configures an EGL context there. The state is cleaned up automatically when the activity pauses. Most apps won’t need to know anything about EGL to use GLES with GLSurfaceView.

In most cases, GLSurfaceView is very helpful and can make working with GLES easier. In some situations, it can get in the way. Use it if it helps, don’t if it doesn’t.
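
A minimal setup looks like this (setEGLContextClientVersion() and setRenderer() are the real GLSurfaceView API; the renderer itself is assumed to exist):

    import android.content.Context;
    import android.opengl.GLSurfaceView;

    class GameView extends GLSurfaceView {
        GameView(Context context, GLSurfaceView.Renderer renderer) {
            super(context);
            setEGLContextClientVersion(2); // request a GLES 2 context
            setRenderer(renderer);         // callbacks run on a dedicated render thread
        }
        // The hosting Activity must forward onPause()/onResume() to this view so
        // the render thread and EGL state are released and recreated correctly.
    }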

SurfaceTexture

The SurfaceTexture class is a relative newcomer, added in Android 3.0 (“Honeycomb”). Just as SurfaceView is the combination of a Surface and a View, SurfaceTexture is the combination of a Surface and a GLES texture. Sort of.

When you create a SurfaceTexture, you are creating a BufferQueue for which your app is the consumer. When a new buffer is queued by the producer, your app is notified via callback (onFrameAvailable()). Your app calls updateTexImage(), which releases the previously-held buffer, acquires the new buffer from the queue, and makes some EGL calls to make the buffer available to GLES as an “external” texture.
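
Sketched in code (the SurfaceTexture and GLES calls below are real APIs; the class structure and scheduling are left up to the app):

    import android.graphics.SurfaceTexture;
    import android.opengl.GLES11Ext;
    import android.opengl.GLES20;

    class TextureConsumer implements SurfaceTexture.OnFrameAvailableListener {
        private SurfaceTexture surfaceTexture;
        private final float[] transform = new float[16];

        // Call on the thread that owns the EGL context.
        void create() {
            int[] tex = new int[1];
            GLES20.glGenTextures(1, tex, 0);
            GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
            surfaceTexture = new SurfaceTexture(tex[0]);
            surfaceTexture.setOnFrameAvailableListener(this);
        }

        @Override
        public void onFrameAvailable(SurfaceTexture st) {
            // The producer queued a buffer; arrange for drawFrame() to run on
            // the EGL thread.
        }

        void drawFrame() {
            surfaceTexture.updateTexImage();              // acquire the newest buffer
            surfaceTexture.getTransformMatrix(transform); // per-buffer transform (see below)
            long timestampNs = surfaceTexture.getTimestamp(); // per-buffer timestamp
            // ... sample the external texture in a GLES shader, applying 'transform' ...
        }
    }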

External textures (GL_TEXTURE_EXTERNAL_OES) are not quite the same as textures created by GLES (GL_TEXTURE_2D). You have to configure your renderer a bit differently, and there are things you can’t do with them. But the key point is this: You can render textured polygons directly from the data received by your BufferQueue.

You may be wondering how we can guarantee the format of the data in the buffer is something GLES can recognize – gralloc supports a wide variety of formats. When SurfaceTexture created the BufferQueue, it set the consumer’s usage flags to GRALLOC_USAGE_HW_TEXTURE, ensuring that any buffer created by gralloc would be usable by GLES.

Because SurfaceTexture interacts with an EGL context, you have to be careful to call its methods from the correct thread. This is spelled out in the class documentation.

If you look deeper into the class documentation, you will see a couple of odd calls. One retrieves a timestamp, the other a transformation matrix, the value of each having been set by the previous call to updateTexImage(). It turns out that BufferQueue passes more than just a buffer handle to the consumer. Each buffer is accompanied by a timestamp and transformation parameters.

The transformation is provided for efficiency. In some cases, the source data might be in the “wrong” orientation for the consumer; but instead of rotating the data before sending it, we can send the data in its current orientation with a transform that corrects it. The transformation matrix can be merged with other transformations at the point the data is used, minimizing overhead.

The timestamp is useful for certain buffer sources. For example, suppose you connect the producer interface to the output of the camera (with setPreviewTexture()). If you want to create a video, you need to set the presentation time stamp for each frame; but you want to base that on the time when the frame was captured, not the time when the buffer was received by your app. The timestamp provided with the buffer is set by the camera code, resulting in a more consistent series of timestamps.

SurfaceTexture and Surface

If you look closely at the API you’ll see the only way for an application to create a plain Surface is through a constructor that takes a SurfaceTexture as the sole argument. (Prior to API 11, there was no public constructor for Surface at all.) This might seem a bit backward if you view SurfaceTexture as a combination of a Surface and a texture.

Under the hood, SurfaceTexture is called GLConsumer, which more accurately reflects its role as the owner and consumer of a BufferQueue. When you create a Surface from a SurfaceTexture, what you’re doing is creating an object that represents the producer side of the SurfaceTexture’s BufferQueue.
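
In code it is a single constructor call (the Surface(SurfaceTexture) constructor is the real API; the wrapper class is just for the example):

    import android.graphics.SurfaceTexture;
    import android.view.Surface;

    class ProducerFactory {
        // The returned Surface is the producer side of st's BufferQueue;
        // the SurfaceTexture itself remains the consumer.
        static Surface producerFor(SurfaceTexture st) {
            return new Surface(st);
        }
    }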

Case Study: Grafika’s “Continuous Capture” Activity

The camera can provide a stream of frames suitable for recording as a movie. If you want to display it on screen, you create a SurfaceView, pass the Surface to setPreviewDisplay(), and let the producer (camera) and consumer (SurfaceFlinger) do all the work. If you want to record the video, you create a Surface with MediaCodec’s createInputSurface(), pass that to the camera, and again you sit back and relax. If you want to show the video and record it at the same time, you have to get more involved.

The “Continuous capture” activity displays video from the camera as it’s being recorded. In this case, encoded video is written to a circular buffer in memory that can be saved to disk at any time. It’s straightforward to implement so long as you keep track of where everything is.

There are three BufferQueues involved. The app uses a SurfaceTexture to receive frames from Camera, converting them to an external GLES texture. The app declares a SurfaceView, which we use to display the frames, and we configure a MediaCodec encoder with an input Surface to create the video. So one BufferQueue is created by the app, one by SurfaceFlinger, and one by mediaserver.
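
A rough sketch of the wiring (Camera#setPreviewTexture() and MediaCodec#createInputSurface() are the real calls; the encoder parameters are arbitrary example values, and the GLES rendering in between is omitted):

    import android.graphics.SurfaceTexture;
    import android.hardware.Camera;
    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import android.view.Surface;

    class CaptureWiring {
        void wireUp(SurfaceTexture cameraTexture) throws Exception {
            // BufferQueue #1: the app (via SurfaceTexture) consumes camera frames.
            Camera camera = Camera.open();
            camera.setPreviewTexture(cameraTexture);

            // BufferQueue #3: mediaserver consumes the encoder's input Surface.
            MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
            format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                    MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
            format.setInteger(MediaFormat.KEY_BIT_RATE, 4000000);
            format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
            format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
            MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
            encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
            Surface encoderInput = encoder.createInputSurface();
            encoder.start();
            camera.startPreview();

            // BufferQueue #2 is the SurfaceView's Surface (consumed by
            // SurfaceFlinger); the app renders the camera texture into it and
            // into encoderInput with GLES.
        }
    }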

In the diagram above, the arrows show the propagation of the data from the camera. BufferQueues are in color (purple producer, cyan consumer). Note “Camera” actually lives in the mediaserver process.

Encoded H.264 video goes to a circular buffer in RAM in the app process, and is written to an MP4 file on disk using the MediaMuxer class when the “capture” button is hit.

All three of the BufferQueues are handled with a single EGL context in the app, and the GLES operations are performed on the UI thread. Doing the SurfaceView rendering on the UI thread is generally discouraged, but since we’re doing simple operations that are handled asynchronously by the GLES driver we should be fine. (If the video encoder locks up and we block trying to dequeue a buffer, the app will become unresponsive. But at that point, we’re probably failing anyway.) The handling of the encoded data – managing the circular buffer and writing it to disk – is performed on a separate thread.

The bulk of the configuration happens in the SurfaceView’s surfaceCreated() callback. The EGLContext is created, and EGLSurfaces are created for the display and for the video encoder. When a new frame arrives, we tell SurfaceTexture to acquire it and make it available as a GLES texture, then render it with GLES commands on each EGLSurface (forwarding the transform and timestamp from SurfaceTexture). The encoder thread pulls the encoded output from MediaCodec and stashes it in memory.

TextureView

The TextureView class was introduced in Android 4.0 (“Ice Cream Sandwich”). It’s the most complex of the View objects discussed here, combining a View with a SurfaceTexture.

Recall that the SurfaceTexture is a “GL consumer”, consuming buffers of graphics data and making them available as textures. TextureView wraps a SurfaceTexture, taking over the responsibility of responding to the callbacks and acquiring new buffers. The arrival of new buffers causes TextureView to issue a View invalidate request. When asked to draw, the TextureView uses the contents of the most recently received buffer as its data source, rendering wherever and however the View state indicates it should.
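
The callback surface of that arrangement is TextureView.SurfaceTextureListener (a real framework interface; the class name and method bodies are illustrative):

    import android.graphics.SurfaceTexture;
    import android.view.TextureView;

    class PreviewListener implements TextureView.SurfaceTextureListener {
        @Override
        public void onSurfaceTextureAvailable(SurfaceTexture st, int width, int height) {
            // The View's SurfaceTexture exists; connect a producer to it here
            // (camera preview, an EGL window surface, a MediaPlayer, ...).
        }
        @Override
        public void onSurfaceTextureSizeChanged(SurfaceTexture st, int width, int height) { }
        @Override
        public boolean onSurfaceTextureDestroyed(SurfaceTexture st) {
            return true; // true means TextureView releases the SurfaceTexture itself
        }
        @Override
        public void onSurfaceTextureUpdated(SurfaceTexture st) {
            // A new frame was consumed; the View has already been invalidated.
        }
    }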

You can render on a TextureView with GLES just as you would SurfaceView. Just pass the SurfaceTexture to the EGL window creation call. However, doing so exposes a potential problem.

In most of what we’ve looked at, the BufferQueues have passed buffers between different processes. When rendering to a TextureView with GLES, both producer and consumer are in the same process, and they might even be handled on a single thread. Suppose we submit several buffers in quick succession from the UI thread. The EGL buffer swap call will need to dequeue a buffer from the BufferQueue, and it will stall until one is available. There won’t be any available until the consumer acquires one for rendering, but that also happens on the UI thread… so we’re stuck.

The solution is to have BufferQueue ensure there is always a buffer available to be dequeued, so the buffer swap never stalls. One way to guarantee this is to have BufferQueue discard the contents of the previously-queued buffer when a new buffer is queued, and to place restrictions on minimum buffer counts and maximum acquired buffer counts. (If your queue has three buffers, and all three buffers are acquired by the consumer, then there’s nothing to dequeue and the buffer swap call must hang or fail. So we need to prevent the consumer from acquiring more than two buffers at once.) Dropping buffers is usually undesirable, so it’s only enabled in specific situations, such as when the producer and consumer are in the same process.

SurfaceView or TextureView?

SurfaceView and TextureView fill similar roles, but have very different implementations. To decide which is best requires an understanding of the trade-offs.

Because TextureView is a proper citizen of the View hierarchy, it behaves like any other View, and can overlap or be overlapped by other elements. You can perform arbitrary transformations and retrieve the contents as a bitmap with simple API calls.

The main strike against TextureView is the performance of the composition step. With SurfaceView, the content is written to a separate layer that SurfaceFlinger composites, ideally with an overlay. With TextureView, the View composition is always performed with GLES, and updates to its contents may cause other View elements to redraw as well (e.g. if they’re positioned on top of the TextureView). After the View rendering completes, the app UI layer must then be composited with other layers by SurfaceFlinger, so you’re effectively compositing every visible pixel twice. For a full-screen video player, or any other application that is effectively just UI elements layered on top of video, SurfaceView offers much better performance.

As noted earlier, DRM-protected video can be presented only on an overlay plane. Video players that support protected content must be implemented with SurfaceView.
