I’ve been experimenting a great deal lately with OpenGL and QuickTime, trying to see how the two technologies work together. It’s been a bit challenging, but fortunately Apple provides two really great resources. First, sample code: I’ve been able to learn a lot just from the samples they provide with the developer tools as well as online. Second, the cocoa-dev and quicktime-api mailing lists are great. Lots of brilliant people there are willing to share their knowledge. It’s very helpful, the prohibition on discussing the iPhone SDK notwithstanding.

Getting two technologies to work together can be a challenge, especially when the bridges between them are not clearly laid out in documentation. As I pointed out, Apple provides some excellent sample code to help you along, but there is no hand-holding approach to any of it. Having come from the Windows world, I actually appreciate that; there it sometimes seems that all you get is hand-holding, where Microsoft doesn’t trust you, the developer, to figure things out and really own what you’re doing. But I digress (it wouldn’t be a CIMGF post if I didn’t dig at MS a bit).

Like peanut butter and chocolate, what you get when you put together QuickTime and OpenGL is something greater than either of them left on their own (ok, this is subjective. Not everyone likes peanut butter and chocolate together, but again, I digress).

If you read the Core Video Programming Guide from Apple, you’ll see the reasons they give for using Core Video:

CoreVideo is necessary only if you want to manipulate individual video frames. For example, the following types of video processing would require CoreVideo:

  • Color correction or other filtering, such as provided by Core Image filters
  • Physical transforms of the video images (such as warping, or mapping on to a surface)
  • Adding video to an OpenGL scene
  • Adding additional information to frames, such as a visible timecode
  • Compositing multiple video streams

If all you need to do is display a movie, you should simply use either a QTMovieView or, if you want to stick with the Core Animation route, a QTMovieLayer. They both function similarly; however, the view provides a lot of features that you won’t have to implement in the UI yourself, such as a scrubber and play/pause buttons. Plus, the view is very fast and efficient. I’m in the process of exploring performance differences between the two, but I will save my comments about that for another post.
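As a quick illustration of that simpler route, here is a minimal sketch of both options (not from the demo project; movieView, hostView, and the file path are placeholders):

#import <QTKit/QTKit.h>
#import <QuartzCore/QuartzCore.h>

// Load a movie; the path here is just a placeholder.
QTMovie *movie = [QTMovie movieWithFile:@"/path/to/movie.mov" error:NULL];

// Option 1: a QTMovieView gives you the scrubber and play/pause controls for free.
[movieView setMovie:movie];                            // movieView: an outlet to a QTMovieView

// Option 2: a QTMovieLayer if you are already working in a Core Animation layer tree.
QTMovieLayer *movieLayer = [QTMovieLayer layerWithMovie:movie];
movieLayer.frame = NSRectToCGRect([hostView bounds]);  // hostView: a layer-backed NSView
[[hostView layer] addSublayer:movieLayer];

[movie play];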

For our example code we are most interested in the third point above–adding video to an OpenGL scene. It seems that new QuickTime developers often want to know how to manipulate movie images before displaying them. Often this leads them to try adding sub-views to the movie view, which can become a big mess. Because we are using OpenGL, doing other drawing in the scene is very fast. I won’t kid you. OpenGL is a pain. I don’t know anybody who loves it, but everybody respects it because of its raw speed.

Point number five above–compositing multiple video streams–is also interesting. While I won’t be covering it in this post, I will say that it makes a world of difference performance-wise if you composite the movies into an OpenGL scene. If you’ve ever tried to run multiple videos simultaneously in two different views or layers, playback can get pretty herky-jerky. You can see why it becomes necessary to use OpenGL instead.


The OpenGL QuickTime Two Step

OK, it will actually take more than two steps; however, when you are working with Core Animation layers, things get a whole lot easier than rendering a movie in an NSOpenGLView. Here is what you get for free, as the kids say:

  • You don’t have to set up the OpenGL context. It is already available for you to send your OpenGL calls to.
  • The viewport for display is already configured
  • You don’t need to set up a display link callback

What took over 400 lines of code when rendering a QuickTime movie with no filters to an NSOpenGLView now takes only around 150 lines. Any time you can reduce code to something simpler, it makes life easier. It also makes it much easier to grok, in my opinion.
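To give a sense of how little scaffolding remains, here is a hedged sketch of how the custom CAOpenGLLayer subclass might be hosted (MovieLayer and hostView are my own placeholder names, not from the demo project). Setting asynchronous to YES is what lets Core Animation poll canDrawInCGLContext for us:

#import <QuartzCore/QuartzCore.h>

// MovieLayer stands in for the CAOpenGLLayer subclass described in this post.
MovieLayer *movieLayer = [MovieLayer layer];
movieLayer.frame = NSRectToCGRect([hostView bounds]); // hostView: some layer-backed NSView
movieLayer.asynchronous = YES;                        // Core Animation now drives the draw loop

[hostView setWantsLayer:YES];
[[hostView layer] addSublayer:movieLayer];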

There really are two primary steps you take when using a CAOpenGLLayer. First, you check to see if you should draw. Then, depending on the answer, drawInCGLContext either gets called or doesn’t. Really, that’s it. Determining whether or not you should draw depends on what you are trying to do. In our case, we only want to draw if all of the following are true:

  • The movie is actually playing back
  • The visual context for the movie has been initialized
  • The visual context has a new image ready to be rendered

If all of these are true, then our call to canDrawInCGLContext returns YES. Here is the code I use to check these constraints in canDrawInCGLContext:

- (BOOL)canDrawInCGLContext:(CGLContextObj)glContext pixelFormat:(CGLPixelFormatObj)pixelFormat forLayerTime:(CFTimeInterval)timeInterval displayTime:(const CVTimeStamp *)timeStamp
{
    // There is no point in trying to draw anything if our
    // movie is not playing.
    if( [movie rate] <= 0.0 )
        return NO;

    if( !qtVisualContext )
    {
        // If our visual context for our QTMovie has not been set up
        // we initialize it now
        [self setupVisualContext:glContext withPixelFormat:pixelFormat];
    }

    // Check to see if a new frame (image) is ready to be drawn at
    // the time specified.
    if( QTVisualContextIsNewImageAvailable(qtVisualContext, timeStamp) )
    {
        // Release the previous frame
        CVOpenGLTextureRelease(currentFrame);

        // Copy the current frame into our image buffer
        QTVisualContextCopyImageForTime(qtVisualContext,
                                        NULL,
                                        timeStamp,
                                        &currentFrame);

        // Returns the texture coordinates for the part of the
        // image that should be displayed
        CVOpenGLTextureGetCleanTexCoords(currentFrame,
                                         lowerLeft,
                                         lowerRight,
                                         upperRight,
                                         upperLeft);
        return YES;
    }

    return NO;
}

The call to set up the visual context is where we associate the QuickTime movie itself with a QTVisualContextRef, which is what OpenGL needs in order to draw the current frame. We then use this object to load image data into a CVImageBufferRef that can be used for rendering with OpenGL. Here is the code to set up the visual context:

- (void)setupVisualContext:(CGLContextObj)glContext withPixelFormat:(CGLPixelFormatObj)pixelFormat
{
    OSStatus        error;
    NSDictionary    *attributes = nil;

    attributes = [NSDictionary dictionaryWithObjectsAndKeys:
                     [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithFloat:[self frame].size.width],
                         kQTVisualContextTargetDimensions_WidthKey,
                         [NSNumber numberWithFloat:[self frame].size.height],
                         kQTVisualContextTargetDimensions_HeightKey, nil],
                     kQTVisualContextTargetDimensionsKey,
                     [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithFloat:[self frame].size.width],
                         kCVPixelBufferWidthKey,
                         [NSNumber numberWithFloat:[self frame].size.height],
                         kCVPixelBufferHeightKey, nil],
                     kQTVisualContextPixelBufferAttributesKey,
                     nil];

    // Create our QuickTime visual context
    error = QTOpenGLTextureContextCreate(NULL,
                                         glContext,
                                         pixelFormat,
                                         (CFDictionaryRef)attributes,
                                         &qtVisualContext);

    // Associate it with our movie.
    SetMovieVisualContext([movie quickTimeMovie], qtVisualContext);
}

Next we check to see if there is an image ready using:

if( QTVisualContextIsNewImageAvailable(qtVisualContext, timeStamp) )

And then we copy the image to our CVImageBufferRef with:

// Copy the current frame into our image buffer
QTVisualContextCopyImageForTime(qtVisualContext,
                                NULL,
                                timeStamp,
                                &currentFrame);

Now it’s all a matter of rendering the frame for the current time stamp.

But Wait! What TimeStamp?

If you asked this question, then you are a very astute reader. In order to obtain the next image, we simply passed the CVTimeStamp parameter, timeStamp, to our call to QTVisualContextCopyImageForTime. But how do we even have a timestamp? Isn’t that something we need to get from a display link? If you’re wondering what a display link is at this point, take a look at the Core Video Programming Guide, which states:

To simplify synchronization of video with a display’s refresh rate, Core Video provides a special timer called a display link. The display link runs as a separate high priority thread, which is not affected by interactions within your application process. In the past, synchronizing your video frames with the display’s refresh rate was often a problem, especially if you also had audio. You could only make simple guesses for when to output a frame (by using a timer, for example), which didn’t take into account possible latency from user interactions, CPU loading, window compositing and so on. The Core Video display link can make intelligent estimates for when a frame needs to be output, based on display type and latencies.

I will provide a more complete answer to this question in the future, as I am still studying it myself. For now I will say that a display link callback is unnecessary in this context because the CAOpenGLLayer provides this for us. The timeStamp parameter is all we need in order to get the current frame, assuming the movie is playing back.
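For contrast, this is roughly the display link machinery you would otherwise have to build yourself when rendering into an NSOpenGLView (a hedged sketch; MyOpenGLView and renderFrameForTime: are hypothetical names, not part of the demo code):

#import <CoreVideo/CoreVideo.h>

// The callback Core Video invokes on its high-priority thread for each frame.
static CVReturn MyDisplayLinkCallback(CVDisplayLinkRef displayLink,
                                      const CVTimeStamp *inNow,
                                      const CVTimeStamp *inOutputTime,
                                      CVOptionFlags flagsIn,
                                      CVOptionFlags *flagsOut,
                                      void *displayLinkContext)
{
    // inOutputTime plays the role of the timeStamp parameter that
    // CAOpenGLLayer hands to canDrawInCGLContext for free.
    MyOpenGLView *view = (MyOpenGLView *)displayLinkContext;
    [view renderFrameForTime:inOutputTime];   // hypothetical drawing method
    return kCVReturnSuccess;
}

// Typically set up once, for example in the view's -prepareOpenGL.
CVDisplayLinkRef displayLink;
CVDisplayLinkCreateWithActiveCGDisplays(&displayLink);
CVDisplayLinkSetOutputCallback(displayLink, &MyDisplayLinkCallback, self);
CVDisplayLinkStart(displayLink);

With a CAOpenGLLayer, none of this is needed; the layer simply hands you the timestamp.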

Drawing The Frame

There is a special group of people who really get OpenGL. I salute all of you to whom this applies. You are amazing. I, however, only write as much of it as necessary, and you’ll see that most of the code I have here is simply copied and pasted from Apple’s sample code. I am starting to understand it more and more, but it still makes my brain hurt. Here is my drawing code for when a frame is ready to be rendered:

- (void)drawInCGLContext:(CGLContextObj)glContext pixelFormat:(CGLPixelFormatObj)pixelFormat forLayerTime:(CFTimeInterval)interval displayTime:(const CVTimeStamp *)timeStamp
{
    NSRect  bounds = NSRectFromCGRect([self bounds]);
    GLfloat minX, minY, maxX, maxY;

    minX = NSMinX(bounds);
    minY = NSMinY(bounds);
    maxX = NSMaxX(bounds);
    maxY = NSMaxY(bounds);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(minX, maxX, minY, maxY, -1.0, 1.0);

    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT);

    CGRect imageRect = [self frame];

    // Enable target for the current frame
    glEnable(CVOpenGLTextureGetTarget(currentFrame));

    // Bind to the current frame.
    // This tells OpenGL which texture we want
    // to draw so that when we make our glTexCoord and
    // glVertex calls, our current frame gets drawn
    // to the context.
    glBindTexture(CVOpenGLTextureGetTarget(currentFrame),
                  CVOpenGLTextureGetName(currentFrame));

    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glColor4f(1.0, 1.0, 1.0, 1.0);

    glBegin(GL_QUADS);

    // Draw the quad
    glTexCoord2f(upperLeft[0], upperLeft[1]);
    glVertex2f  (imageRect.origin.x, imageRect.origin.y + imageRect.size.height);
    glTexCoord2f(upperRight[0], upperRight[1]);
    glVertex2f  (imageRect.origin.x + imageRect.size.width, imageRect.origin.y + imageRect.size.height);
    glTexCoord2f(lowerRight[0], lowerRight[1]);
    glVertex2f  (imageRect.origin.x + imageRect.size.width, imageRect.origin.y);
    glTexCoord2f(lowerLeft[0], lowerLeft[1]);
    glVertex2f  (imageRect.origin.x, imageRect.origin.y);

    glEnd();

    // This CAOpenGLLayer is responsible for flushing
    // the OpenGL context, so we call super
    [super drawInCGLContext:glContext pixelFormat:pixelFormat forLayerTime:interval displayTime:timeStamp];

    // Task the context
    QTVisualContextTask(qtVisualContext);
}

If you’re not familiar with OpenGL, it helps to know that it’s all about the current state. What does this mean? Simply put, the call you are making right now applies to whatever state the context is in. These two calls are the most important for our purposes:

glEnable(CVOpenGLTextureGetTarget(currentFrame));
glBindTexture(CVOpenGLTextureGetTarget(currentFrame), CVOpenGLTextureGetName(currentFrame));

With these two calls we have told OpenGL which texture to use. Now, every subsequent call applies to this texture until the state is changed to something else. So now, when we set a color or draw a quad, it applies to the texture that has been set here.
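As a tiny illustration of that state-machine behavior (an assumed example, not from the demo), the same parameter call lands on whichever texture happens to be bound at the time:

// textureA and textureB are ordinary GLuint texture names created elsewhere.
glBindTexture(GL_TEXTURE_2D, textureA);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);  // applies to textureA

glBindTexture(GL_TEXTURE_2D, textureB);                            // state change
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // now applies to textureB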

Conclusion

I love both of these technologies, QuickTime and OpenGL. They are so powerful. It’s harnessing the power that’s the trick. I’ve got some other ideas for some related posts that I plan to cover in the weeks to come, but this was a real breakthrough for me. With the help of John Clayton, Jean-Daniel Dupas, and David Duncan on the cocoa-dev list, I was able to get the sample code put together for this post. Feel free to ask questions in the comments. I will do my best to answer, but I’m still pretty new to these technologies. Write some code yourself and have fun. This is really exciting stuff. Until next time.

About The Demo Code

John Clayton had some issues getting the code to work on his Mac Pro. I ran it successfully on my MacBook Pro and the family iMac without any problems. We’re not sure what the cause is, so if you do run into trouble, let me know; maybe we can figure it out. Meanwhile, we’re investigating it as we have time.

Update: John Clayton figured it out. Apparently the visual context needs to be reset because the pixel format is not correct on the first run. We now simply reset the visual context in the call to -copyCGLContextForPixelFormat and everything seems happy. The demo code has been updated to reflect the change.
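A hedged sketch of what that fix might look like (my reconstruction, not the demo’s exact code): release the stale visual context when the layer asks for a new CGL context, and let canDrawInCGLContext rebuild it against the correct pixel format.

- (CGLContextObj)copyCGLContextForPixelFormat:(CGLPixelFormatObj)pixelFormat
{
    if( qtVisualContext )
    {
        // Throw away the context built against the old (incorrect) pixel format.
        QTVisualContextRelease(qtVisualContext);
        qtVisualContext = NULL;
    }
    // setupVisualContext:withPixelFormat: will run again on the next
    // canDrawInCGLContext pass and recreate the visual context.
    return [super copyCGLContextForPixelFormat:pixelFormat];
}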


Quicktime CAOpenGLLayer Demo
