2006.4.18 official blog post: A look at Media Center's rendering engine and how it works

Original blog post:

**************************************************************************************************

A Look at Media Center's Rendering Engine

In Part One, we examined the high-level architecture of the Windows Media Center Presentation Layer, including the relationship between its User Experience Framework and Rendering Engine.  In this installment, we’ll take a more detailed look at the Rendering Engine and its component parts.

The Rendering Engine is an internal component of Media Center.  It is designed to be used exclusively via the Windows Media Center Presentation Layer’s Messaging System and requires a sophisticated client such as the User Experience Framework to drive it.  It is written in C++ and places extreme emphasis on simplicity and performance.

To understand how the Rendering Engine works, we need to start by examining its foundation and work upward.

Underpinnings

The lowest layers of the Rendering Engine aren’t directly concerned with media processing tasks at all.

The Rendering Engine’s Core Services form a foundation runtime for all rendering features.

  • The Scheduling layer is a simple work-queue system for handling incoming requests to the Rendering Engine.  It also contains custom logic for integrating time-critical periodic work that bypasses normal queue processing.
  • The Memory Management layer provides heap support optimized to the Rendering Engine’s threading and allocation patterns.
  • The Message Transport layer implements an endpoint of the Messaging System.  Pluggable transport implementations allow for flexibility in how the User Experience Framework and Rendering Engine are connected and deployed.
  • The Object Management layer defines an object-oriented model for presenting Rendering Engine functionality to Messaging System clients.  It provides identity and lifetime management services and includes facilities for resource partitioning and graceful cleanup of per-application resources.
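
To make the Scheduling layer a little more concrete, here is a minimal sketch of a work queue with a bypass path for time-critical periodic work, roughly in the spirit described above.  The names (WorkQueue, Post, RegisterPeriodic, RunOnce) are illustrative only and are not Media Center's actual interfaces.

    // Illustrative sketch only: a simple work queue with a bypass hook for
    // time-critical periodic work.  None of these names come from Media Center.
    #include <deque>
    #include <functional>
    #include <mutex>
    #include <vector>

    class WorkQueue
    {
    public:
        using WorkItem = std::function<void()>;

        // Normal requests are queued and drained in order.
        void Post(WorkItem item)
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_pending.push_back(std::move(item));
        }

        // Time-critical periodic work (e.g. per-frame ticks) registers a
        // callback that bypasses normal queue processing.
        void RegisterPeriodic(WorkItem callback)
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_periodic.push_back(std::move(callback));
        }

        // One pass of the scheduler: run periodic work first, then drain
        // whatever requests have accumulated.
        void RunOnce()
        {
            std::deque<WorkItem> batch;
            std::vector<WorkItem> periodic;
            {
                std::lock_guard<std::mutex> lock(m_mutex);
                batch.swap(m_pending);
                periodic = m_periodic;
            }
            for (auto& p : periodic) p();
            for (auto& w : batch)    w();
        }

    private:
        std::mutex            m_mutex;
        std::deque<WorkItem>  m_pending;
        std::vector<WorkItem> m_periodic;
    };

Keeping periodic work out of the normal request queue is what lets per-frame processing stay on schedule even when the queue is busy.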

Now that we’ve seen the foundation, we can start to look at layers where rendering actually happens.

Output Options

Pluggable Output Drivers allow for audio-visual rendering on a variety of technologies and hardware platforms.

Output Drivers do the “heavy lifting” to implement rendering functionality by providing peer implementations for many of the key objects in the Presentation Model.  Current drivers include DirectX, Win32 and XBOX 360 implementations for graphics and sound.
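
As a rough illustration of what “peer implementations” might look like, the sketch below models an output driver as a factory for device-specific peers behind a common interface.  All of these types are hypothetical; the real driver interfaces are internal to Media Center.

    // Illustrative sketch only: pluggable output drivers supplying peers for
    // abstract Presentation Model objects.  Hypothetical names throughout.
    #include <memory>

    struct IVisualPeer  { virtual ~IVisualPeer()  = default; virtual void Render() = 0; };
    struct ISurfacePeer { virtual ~ISurfacePeer() = default; };

    // Each output driver creates concrete peers and manages the frame cycle.
    struct IOutputDriver
    {
        virtual ~IOutputDriver() = default;
        virtual std::unique_ptr<IVisualPeer>  CreateVisualPeer()  = 0;
        virtual std::unique_ptr<ISurfacePeer> CreateSurfacePeer() = 0;
        virtual void BeginFrame() = 0;
        virtual void EndFrame()   = 0;
    };

    // A Direct3D-backed driver supplies peers that talk to the graphics device;
    // a GDI or console driver would implement the same interface differently.
    class D3DVisualPeer  : public IVisualPeer  { public: void Render() override { /* issue device draw calls */ } };
    class D3DSurfacePeer : public ISurfacePeer { };

    class D3DOutputDriver : public IOutputDriver
    {
    public:
        std::unique_ptr<IVisualPeer>  CreateVisualPeer()  override { return std::make_unique<D3DVisualPeer>(); }
        std::unique_ptr<ISurfacePeer> CreateSurfacePeer() override { return std::make_unique<D3DSurfacePeer>(); }
        void BeginFrame() override { /* set render targets, clear */ }
        void EndFrame()   override { /* present the back buffer   */ }
    };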

Various driver implementations have existed in Media Center over the years.  In addition to the obvious physical platform support (set-top boxes, PCs, game consoles), output drivers have been used to provide flexibility in the development process.  For example, Windows XP Media Center Edition 2004 (a.k.a. Harmony) was based on DirectX 7.  For Windows XP Media Center Edition 2005 (a.k.a. Symphony), we moved to DirectX 9 and significantly reworked some of our graphics algorithms.  To keep this work from disrupting other parts of the project, we supported both the D3D7 and D3D9 graphics output drivers side-by-side for most of the project.

At startup time, the User Experience Framework communicates directly with Output Drivers in order to initialize and configure the Rendering Engine.  Once things are up and running, however, the conversation moves up a layer in the stack.

Painting a Picture

The Rendering Engine’s abstract Presentation Model defines building blocks that can be combined to create an audio-visual scene.

Once the Rendering Engine has been initialized, the User Experience Framework describes UI scenes using objects from the abstract Presentation Model.

  • A Graphics Device exposes properties, capabilities and rendering configuration of a graphics technology (e.g. GDI, D3D…)
  • A Render Operation implements an individual unit of work to be performed during a rendering pass.  It can also track possible cleanup or error handling that may be required later in the pass.
  • A Visual defines a unique coordinate space in the rendering hierarchy.  Visuals are organized as a tree and expose UI-relevant states like transforms and constant alpha.  These states are translated into rendering operations as needed during a rendering pass.  Visuals may also contain rendering operations for drawing content as directed by the User Experience Framework.
  • A Clip Gradient is a hybrid primitive that performs color channel modulation according to a specified ramp, with optional clipping.  It modifies the output of other render operations.  A variety of visual effects are possible.  The most visible example is the “edge fade” effect used when scrolling lists and galleries in Media Center.
  • A Surface is a drawable piece of pixel-mapped visual data (like an image or video frame).
  • A Surface Pool is physical storage for one or more surfaces.  On technologies where texture allocation is expensive enough to cause glitches, a Surface Pool may be sub-allocated to hold multiple Surfaces.  For video playback, a Surface Pool may hold multiple frames of video data.
  • A Sound Device exposes properties, capabilities and rendering configuration of an audio technology (e.g. Win32, DSound, XAudio…)
  • A Sound Buffer is physical storage for audio data.
  • A Sound is a logical instance of playback from a Sound Buffer.
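
Taken together, these objects suggest a scene structure along the lines of the toy Visual tree sketched below.  The object names come from the list above, but the fields and methods are guesses at a minimal shape, not the engine's real API.

    // Illustrative sketch only: a Visual tree that carries transform/alpha
    // state and owns render operations executed during a rendering pass.
    #include <memory>
    #include <vector>

    struct Matrix { float m[6] = {1, 0, 0, 1, 0, 0}; };  // 2-D affine transform

    // An individual unit of work performed during a rendering pass.
    struct RenderOperation
    {
        virtual ~RenderOperation() = default;
        virtual void Execute() = 0;
    };

    // A Visual defines a coordinate space, exposes UI-relevant state such as
    // a transform and constant alpha, and may own render operations.
    class Visual
    {
    public:
        Matrix transform;
        float  alpha = 1.0f;

        Visual* AddChild(std::unique_ptr<Visual> child)
        {
            m_children.push_back(std::move(child));
            return m_children.back().get();
        }

        void AddOperation(std::unique_ptr<RenderOperation> op)
        {
            m_operations.push_back(std::move(op));
        }

        // Walk the tree depth-first, executing each visual's operations.  A
        // real engine would translate transform/alpha into device state here.
        void RenderPass()
        {
            for (auto& op : m_operations)  op->Execute();
            for (auto& child : m_children) child->RenderPass();
        }

    private:
        std::vector<std::unique_ptr<Visual>>          m_children;
        std::vector<std::unique_ptr<RenderOperation>> m_operations;
    };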

With these features, the User Experience Framework can compose a static scene and send it coarse updates.  This is enough to produce UI that looks like Media Center, but not yet enough to build UI that feels like Media Center.  For that, we need animation.

A Measure of Independence

An important goal of separating the Rendering Engine from the User Experience Framework is to allow for loosely-coupled timing between them.  From a rendering perspective, a continuous stream of new frames needs to get to the screen without involving the User Experience Framework very often.

In addition to creating or modifying states via the Presentation Model, the User Experience Framework can direct the Animation System to modify presentation states on a frame-by-frame basis according to a timeline.  Any numeric property in the Presentation Model (single or composite) can be animated.  Many effects are orchestrated from the User Experience Framework by creating a scene and adding animation to it.

  • The Value Table is a set of individual values being computed for animation purposes.
  • A Sequence is a keyframe-based timeline for modifying an individual value in the Value Table.
  • An Interpolation is a function that can be applied to produce intermediate values between two keyframes in a sequence.  Examples include Linear, Sine, Square, Bezier, etc.
  • A Property Connector collects one or more values from the Value Table and combines them to update an object property.  It also supports sampling from the target property to initialize keyframes before a sequence is played.
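
The sketch below shows how a keyframe-based Sequence with a pluggable Interpolation might feed a property connector.  As before, the names and signatures are illustrative, not Media Center's.

    // Illustrative sketch only: keyframe interpolation feeding a property.
    #include <cmath>
    #include <functional>
    #include <vector>

    // An interpolation maps normalized progress (0..1) to a blend weight.
    using Interpolation = std::function<float(float)>;

    inline float Linear(float t) { return t; }
    inline float Sine(float t)   { return std::sin(t * 1.5707963f); }  // ease-out

    struct Keyframe { float time; float value; };

    // A Sequence is a keyframe-based timeline producing one animated value.
    // Assumes at least two keyframes, sorted by time.
    class Sequence
    {
    public:
        Sequence(std::vector<Keyframe> keys, Interpolation interp)
            : m_keys(std::move(keys)), m_interp(std::move(interp)) {}

        float Evaluate(float time) const
        {
            if (time <= m_keys.front().time) return m_keys.front().value;
            if (time >= m_keys.back().time)  return m_keys.back().value;
            for (size_t i = 1; i < m_keys.size(); ++i)
            {
                if (time <= m_keys[i].time)
                {
                    const Keyframe& a = m_keys[i - 1];
                    const Keyframe& b = m_keys[i];
                    float t = (time - a.time) / (b.time - a.time);
                    return a.value + (b.value - a.value) * m_interp(t);
                }
            }
            return m_keys.back().value;
        }

    private:
        std::vector<Keyframe> m_keys;
        Interpolation         m_interp;
    };

    // A property connector applies the animated value to a target property
    // (e.g. a Visual's alpha) once per frame.
    inline void ConnectAlpha(float& targetAlpha, const Sequence& seq, float sceneTime)
    {
        targetAlpha = seq.Evaluate(sceneTime);
    }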

The Animation System also has a special callback registered with the scheduling layer for monitoring the passage of scene time.  This allows various output driver implementations to synchronize animation updates with other media processing.  For example, the DirectX driver for both PC and XBOX may prepare frames in advance based on upcoming presentation timestamps from a video stream.
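
As a minimal sketch of that idea, the callback below derives scene time from an upcoming presentation timestamp, so animation evaluation can stay in step with the media pipeline.  The types are hypothetical, and the 100-nanosecond timestamp unit is an assumption borrowed from common Windows media conventions.

    // Illustrative sketch only: a per-frame clock driven by presentation
    // timestamps rather than wall-clock time.  Hypothetical types.
    #include <cstdint>

    struct FrameInfo
    {
        double  sceneTime;        // seconds of scene time for this frame
        int64_t presentTime100ns; // target presentation timestamp
    };

    class AnimationClock
    {
    public:
        // Called by the scheduling layer once per frame, outside the normal
        // work queue, so animation updates track media presentation.
        FrameInfo OnFrame(int64_t presentTime100ns)
        {
            if (m_start100ns == 0) m_start100ns = presentTime100ns;
            FrameInfo info;
            info.presentTime100ns = presentTime100ns;
            info.sceneTime = (presentTime100ns - m_start100ns) / 1.0e7;
            return info;
        }

    private:
        int64_t m_start100ns = 0;
    };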

Renderer Wrap-Up

It is easy to see why the Rendering Engine requires a sophisticated client to drive it.  The design includes few graphical primitives and follows a strict philosophy of keeping complexity out of the rendering path.  Complex scenes are achieved by composing many simple elements together.  Orchestration of interactive UI scenes and transitions is a task left to the User Experience Framework, the topic of our next installment.

Francis

*********************************************************************************************************
