With the release of ARKit and the iPhone X, paired with Unity, developers have an easy-to-use set of tools for creating beautiful, expressive characters. This opened the door to exploring the magic of real-time puppeteering for the upcoming animated short “Windup”, directed by Yibing Jiang.

Unity Labs and the team behind “Windup” came together to see how far we could push Unity’s ability to capture facial animation in real time on a cinematic character. We also enlisted Roja Huchez of Beast House FX to model and rig the blend shapes that bring the character’s expressions to life.

What the team created is Facial AR Remote, a low-overhead way to capture a performance from a connected device directly into the Unity editor. We found the Remote’s workflow useful not just for animation authoring, but also for character and blend shape modeling and rigging, giving you a streamlined way to build your own Animoji- or Memoji-style interactions in Unity. Developers can iterate on the model in the editor without needing to build to the device, removing time-consuming steps from the process.

Why build the Facial AR Remote

We saw an opportunity to build new animation tools for film projects, opening up a future of real-time animation in Unity. There was also a “cool factor” in using AR tools for authoring, and an opportunity to continue pushing Unity’s real-time rendering. As soon as we had the basics working, with data coming from the phone into the editor, our team and everyone around our desks could not stop having fun puppeteering our character. We saw huge potential in this kind of technology: what started as an experiment quickly proved itself both fun and useful, and the project rapidly expanded into the current Facial AR Remote and its feature set.

The team set out to expand the project with Unity’s goal of democratizing development in mind. We wanted the tools and workflows around AR blend shape animation to be easier to use and more accessible than traditional methods of motion capture. The Facial AR Remote let us build out tooling for iterating on blend shapes within the editor, without needing to create a new build just to check mesh changes on the phone. This means a user can capture an actor’s face and record it in Unity, then use that capture as a fixed point while iterating on the character model or retargeting the animation to another character, without having to redo capture sessions with the actor. We found this workflow very useful for dialing in expressions on our character and refining the individual blend shapes.

How the Facial AR Remote works

The Remote is made up of a client phone app and a stream reader acting as the server in the Unity editor. The client is a lightweight app that makes use of the latest additions to ARKit and sends that data over the network to the Network Stream Source on the Stream Reader GameObject. Using a simple TCP/IP socket and a fixed-size byte stream, we send every frame of blendshape, camera, and head pose data from the device to the editor. The editor then decodes the stream and updates the rigged character in real time. To smooth out jitter due to network latency, the stream reader keeps a tunable buffer of historic frames for when the editor inevitably lags behind the phone. We found this to be a crucial feature for preserving a smooth look on the preview character while staying as close as possible to the actor’s current pose. In poor network conditions, the preview will sometimes drop frames to catch up, but all data is still recorded with the original timestamps from the device.

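As a rough illustration of the transport, here is a minimal C# sketch of how the editor side might decode fixed-size frames from the TCP socket into a small playback buffer. The frame layout (one timestamp float followed by 52 blend shape coefficients) and all type and member names are assumptions for illustration, not the shipped protocol.

```csharp
using System;
using System.Collections.Generic;
using System.Net.Sockets;

// A frame of face data as it might look on the wire: a device timestamp
// followed by one float per ARKit blend shape coefficient.
public class FaceFrame
{
    public const int BlendShapeCount = 52; // ARKit exposes ~52 coefficients
    public float Timestamp;
    public float[] Coefficients = new float[BlendShapeCount];
}

public class StreamDecoder
{
    // 1 timestamp float + 52 coefficient floats per fixed-size frame.
    const int FrameSize = (1 + FaceFrame.BlendShapeCount) * sizeof(float);
    const int MaxBufferedFrames = 16; // the real reader makes this tunable

    readonly Queue<FaceFrame> m_Buffer = new Queue<FaceFrame>();

    // Drain every complete frame currently waiting on the socket.
    public void ReadAvailableFrames(NetworkStream stream)
    {
        var bytes = new byte[FrameSize];
        while (stream.DataAvailable)
        {
            // TCP is a byte stream, so keep reading until a full frame arrives.
            int read = 0;
            while (read < FrameSize)
                read += stream.Read(bytes, read, FrameSize - read);

            var frame = new FaceFrame { Timestamp = BitConverter.ToSingle(bytes, 0) };
            for (int i = 0; i < FaceFrame.BlendShapeCount; i++)
                frame.Coefficients[i] = BitConverter.ToSingle(bytes, (i + 1) * sizeof(float));

            m_Buffer.Enqueue(frame);
            if (m_Buffer.Count > MaxBufferedFrames)
                m_Buffer.Dequeue(); // drop oldest frames when the editor falls behind
        }
    }

    // The editor consumes one frame per update tick, so the buffer absorbs
    // network jitter at the cost of a few frames of latency.
    public bool TryDequeue(out FaceFrame frame)
    {
        frame = m_Buffer.Count > 0 ? m_Buffer.Dequeue() : null;
        return frame != null;
    }
}
```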

On the editor side, we use the stream data to drive the character for preview as well as to bake animation clips. Since we save the raw stream from the phone to disk, we can keep playing this data back on a character as we refine the blend shapes, and because the saved data is just a raw stream, we can even retarget the motion to different characters. Once you have captured a stream you’re happy with, you can bake it to an animation clip on a character. That clip can then be used like any other animation in Unity, driving a character through Mecanim, Timeline, or any of the other ways animation is used.

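Baking comes down to turning each recorded weight sequence into an animation curve on the blendShape.&lt;name&gt; property of the character’s Skinned Mesh Renderer. Below is a minimal sketch, assuming the recorded weights have already been resampled to one value per frame; the helper and its parameters are hypothetical, while AnimationClip.SetCurve and the blendShape property path are standard Unity API.

```csharp
using UnityEngine;

public static class ClipBaker
{
    // meshPath: transform path from the animated root to the skinned mesh;
    // weightsPerFrame[frame][shape] holds the recorded 0-100 weights.
    public static AnimationClip Bake(string meshPath, string[] shapeNames,
                                     float[][] weightsPerFrame, float fps)
    {
        var clip = new AnimationClip { frameRate = fps };
        for (int shape = 0; shape < shapeNames.Length; shape++)
        {
            var keys = new Keyframe[weightsPerFrame.Length];
            for (int frame = 0; frame < weightsPerFrame.Length; frame++)
                keys[frame] = new Keyframe(frame / fps, weightsPerFrame[frame][shape]);

            // Unity animates blend shapes through the "blendShape.<name>"
            // property on the SkinnedMeshRenderer.
            clip.SetCurve(meshPath, typeof(SkinnedMeshRenderer),
                          "blendShape." + shapeNames[shape], new AnimationCurve(keys));
        }
        return clip;
    }
}
```

In an editor script you would more likely write the curves through AnimationUtility.SetEditorCurve and save the clip as an asset, but the idea is the same.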

The Windup animation demo

With the Windup rendering tech demo previously completed, the team was able to use those high-quality assets to start our animation exploration. Since we got a baseline up and running rather quickly, we had a lot of time to iterate on the blend shapes using the tools we were developing. Jitter, smoothing, and shape tuning quickly became the major areas of focus for the project. We reduced the jitter by working out the connection between frame rate and lag in frame processing, and by removing camera movement from playback. Removing the ability to move the camera kept users focused on capturing the blend shapes, and let us mount the phone in a stand.

Understanding the blend shapes and getting the most out of the blend shape anchors in ARKit required the most iteration. It is difficult to grasp the minutiae of the different shapes from the documentation alone, and so much of the final expression comes from the stylization of the character and how the shapes combine in expected ways. We found that shapes like the eye/cheek squints and mouth stretch improved when we limited each blend shape’s influence to a specific area of the face. For example, the cheek squint should have little to no effect on the lower eyelid, and the lower eyelid in the squint should have little to no effect on the cheek. It also did not help that we initially missed that mouthClosed is a corrective pose, meant to bring the lips together while the jawOpen shape is at 100%.

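To make that corrective relationship concrete, here is a small sketch of applying a frame of coefficients to a Skinned Mesh Renderer by name. The component and the name-matching scheme are illustrative assumptions; GetBlendShapeIndex and SetBlendShapeWeight are the real Unity calls.

```csharp
using UnityEngine;

// Illustrative component (not part of the package) that pushes one frame of
// ARKit coefficients onto a character by blend shape name.
public class BlendShapeApplier : MonoBehaviour
{
    public SkinnedMeshRenderer target;

    // names[i] matches the mesh's blend shape names; coefficients arrive
    // from ARKit in the 0-1 range, while Unity weights run 0-100.
    public void Apply(string[] names, float[] coefficients)
    {
        var mesh = target.sharedMesh;
        for (int i = 0; i < names.Length; i++)
        {
            int index = mesh.GetBlendShapeIndex(names[i]);
            if (index >= 0)
                target.SetBlendShapeWeight(index, coefficients[i] * 100f);
        }
        // Because mouthClosed is corrective, jawOpen at 100% plus mouthClosed
        // at 100% yields closed lips over a dropped jaw; sculpting mouthClosed
        // as a standalone pose will look wrong in combination.
    }
}
```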

Using information from the Skinned Mesh Renderer to look at the values that made up an expression on any given frame, then under- or over-driving those values, really helped us dial in the blend shapes. We could quickly over- or underdrive the current blend shapes and determine whether any needed to be modified, and by how much. This helped with one of the hardest things to do: getting the character to hit a key pose, like the way we wanted the little girl to smile. Being able to see which shapes make up a given pose really helped here; in this case, the right and left mouth stretch worked with the smile shape to give the final result. We found it helps to think of the shapes the phone provides as little building blocks, not as face poses a human could make in isolation.

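The over- and underdriving itself needs nothing more than a scale factor applied before the weight reaches the mesh; a hypothetical helper along these lines is enough:

```csharp
using UnityEngine;

// Hypothetical helper for the over/underdrive experiment: scale a recorded
// weight before it reaches the mesh to judge whether the sculpted shape
// itself needs adjusting, and by roughly how much.
public static class BlendShapeTuning
{
    public static float Drive(float recordedWeight, float multiplier)
    {
        // multiplier > 1 exaggerates the sculpted shape, < 1 suppresses it;
        // clamp to Unity's 0-100 blend shape weight range.
        return Mathf.Clamp(recordedWeight * multiplier, 0f, 100f);
    }
}
```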

At the very end of art production on the demo, we wanted to try an experiment to improve some of the animation on the character. Armed with our collective understanding of the ARKit blend shapes, we tried modifying the character’s base neutral pose. Given the stylization of the little girl, we suspected her base pose had the eyes too wide and a little too much smile in the face. This left too small a delta between wide-open eyes and the base pose, and too large a delta between the base pose and closed eyes. The effect of the squint blend shapes also needed to be better accounted for: as it turns out, the squint sits at roughly 60-70% whenever the people we tested on closed their eyes. The change to the neutral pose paid off and, along with all the other work, makes for the expressive and dynamic character you see in the demo.

The future

Combining Facial AR Remote with the rest of the tools in Unity, there is no limit to the amazing animations you can create! Soon anyone will be able to puppeteer digital characters, be it kids acting out and recording their favorite characters to share with friends and family, game streamers adding extra life to their avatars, or professionals and hobbyists finding new avenues for making animated content for broadcast. Get started by downloading Unity 2018 and checking out the setup instructions on Facial AR Remote’s GitHub. The team and the rest of Unity look forward to the artistic and creative uses our users will find for Facial AR Remote.

Translated from: https://blogs.unity3d.com/2018/08/13/facial-ar-remote-animating-with-ar/
