Spectating VR

A fun game experience is something that players want to show off, record, and share. With VR, seeing what the player sees on a single, rectangular screen doesn’t always convey the entire feeling. This means that spectators can often find the default ‘seeing through a player’s POV’ experience underwhelming. What I wanted to do was set up a simple starter system for how a spectator camera should work and to add a little more fun for those not in the VR experience themselves.  Fortunately, there have been a few shipped examples that successfully designed a good spectator view. The goal of this project was to come up with a spectator system that builds on those designs, is compact and portable, and can easily be integrated into your own projects.


Source Code

You can download the associated project here. Requires Unity version 2017.2 or later.


Creating a Basic Spectator Camera

The first thing I need to do is to create a second camera specifically for the Spectator. I create a second camera and place it facing my first, original camera. Then, in the Camera Settings, I need to set the Target Eye to None (Main Display).

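The same Target Eye setting can also be applied from script. Here's a minimal sketch, assuming the component lives on the spectator camera's GameObject (the class name is hypothetical):

```csharp
using UnityEngine;

// Hypothetical setup component: applies the "Target Eye: None (Main Display)"
// setting from code instead of the Inspector.
public class SpectatorCameraSetup : MonoBehaviour
{
    void Awake()
    {
        var cam = GetComponent<Camera>();
        // None (Main Display): this camera ignores the HMD and renders
        // to the regular game view instead.
        cam.stereoTargetEye = StereoTargetEyeMask.None;
    }
}
```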

Run the project in the editor, and Unity's game view is already rendered independently of what the VR headset displays. It's that easy! But don't worry, there's more fun we can have here.


Making a Player

If I point that spectator camera back at myself, and hit play, I can’t see anything! I need to create an avatar to represent me in the world. I managed to create a nice little head and hands model using Unity’s built-in shapes, and can now link them up as a head and hands. I want these to move with my tracked devices in the real world. To link these up, we have a new component in 2017.2: the Tracked Pose Driver. Drop it onto a GameObject, set whether you want to use the HMD or a controller, and voila, that GameObject will be updated and can be used as an in-game proxy for any tracked part of your VR hardware. This makes it trivial to build a quick player VR rig.

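A rough sketch of wiring the head and hands up with Tracked Pose Drivers at runtime might look like this (the rig layout and field names are assumptions, not the sample's exact code):

```csharp
using UnityEngine;
using UnityEngine.SpatialTracking;

// Hypothetical rig component: attaches a Tracked Pose Driver (2017.2+)
// to a head object and two hand objects.
public class SimpleVRRig : MonoBehaviour
{
    public GameObject head;
    public GameObject leftHand;
    public GameObject rightHand;

    void Awake()
    {
        Track(head, TrackedPoseDriver.TrackedPose.Center);
        Track(leftHand, TrackedPoseDriver.TrackedPose.LeftPose);
        Track(rightHand, TrackedPoseDriver.TrackedPose.RightPose);
    }

    static void Track(GameObject go, TrackedPoseDriver.TrackedPose pose)
    {
        var driver = go.AddComponent<TrackedPoseDriver>();
        driver.SetPoseSource(TrackedPoseDriver.DeviceType.GenericXRDevice, pose);
    }
}
```

In practice you would usually just add the component in the Inspector, as described above; this only shows the equivalent script calls.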


Adding Camera Angles

My narcissistic itch satisfied, now I want to get a few more in-game angles. All I need is a few world locations, and a small script, called the Spectator Controller, to iterate over those locations. The core of this script keeps track of the transform that the camera is currently attached to. In our sample, we are tracking m_CurrentTransform. I want to be able to switch cameras both as a VR player, and as a spectator, so I’ve linked that up to both the touchpad/stick clicks on the VR controllers, and the spacebar on the keyboard. The second responsibility of this Spectator Controller is to enable and disable the color and viewfinders of the currently active camera. I’ll opt to create a CameraAttachPoint MonoBehaviour in order to handle the elements that are specific to my high tech camera and viewfinder.

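The core cycling logic can be sketched as follows. Only m_CurrentTransform comes from the sample; the other names and the keyboard-only input binding are simplifying assumptions (the real version also listens for the controller touchpad/stick click):

```csharp
using UnityEngine;

// Hypothetical sketch of the Spectator Controller's camera cycling.
public class SpectatorControllerSketch : MonoBehaviour
{
    public Transform[] attachPoints;  // world locations to cycle through
    Transform m_CurrentTransform;     // where the camera is currently attached
    int m_Index;

    void Update()
    {
        // Spacebar advances to the next attach point.
        if (Input.GetKeyDown(KeyCode.Space))
        {
            m_Index = (m_Index + 1) % attachPoints.Length;
            m_CurrentTransform = attachPoints[m_Index];
        }

        // Follow the current attach point.
        if (m_CurrentTransform != null)
        {
            transform.position = m_CurrentTransform.position;
            transform.rotation = m_CurrentTransform.rotation;
        }
    }
}
```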


In-Game Spectator Camera Preview

Next up, I want to be able to see what the Spectator sees, while still in VR. I won’t know if I’m striking a good pose until I can see for myself, in real time. For this, I need a render target, and an extra camera. If I render my spectator’s camera to a render target, I can then redirect the output to both a texture in the world, and a camera directed towards the Main Display. This part just needs a few more assets, conveniently located in the Assets/RenderTarget folder. I also need a third camera. We now have 3 cameras: the VR camera, the spectator camera, and the spectator display, which takes the spectator camera’s render target and displays it to the user. I’ll opt to use a Canvas UI object here so that I could then add additional UI not visible to the VR player nor any spectator render targets.

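The render-target hookup can be sketched like this, assuming a quad on the in-game camera model for the viewfinder and a RawImage on the spectator display's canvas (all field names and the texture size are assumptions):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical sketch: the spectator camera renders into a RenderTexture,
// which feeds both the in-world viewfinder and the on-screen UI.
public class SpectatorPreviewSetup : MonoBehaviour
{
    public Camera spectatorCamera;       // Target Eye set to None
    public Renderer viewfinderQuad;      // quad on the in-game camera model
    public RawImage spectatorDisplayUI;  // canvas element shown on the main display

    void Start()
    {
        var rt = new RenderTexture(1280, 720, 24);
        spectatorCamera.targetTexture = rt;
        viewfinderQuad.material.mainTexture = rt;
        spectatorDisplayUI.texture = rt;
    }
}
```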

Interacting with the Cameras

That’s fun, but now that I can see myself dance, I don’t just want to iterate over preset angles, I want to be able to set my own. I want to be able to grab that camera and really show myself off. For that, I need to build a small component called the Grabber. It’s a simple system: when I press the trigger, I check for any physics objects in a small radius that are on a specific layer. While the trigger is held, I continue to update the position and rotation of any found objects to match that of the grabbing hand. Simple, but it gets the job done.

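A minimal sketch of that Grabber, assuming a "Grabbable" physics layer and a generic trigger axis (both names are assumptions, as is the single-object grab):

```csharp
using UnityEngine;

// Hypothetical Grabber sketch: on trigger press, find a nearby physics
// object on the grabbable layer; while held, drag it along with the hand.
public class GrabberSketch : MonoBehaviour
{
    public float grabRadius = 0.1f;
    public LayerMask grabbableLayer;
    Transform m_Held;

    void Update()
    {
        bool triggerHeld = Input.GetAxis("Trigger") > 0.5f; // assumed axis name

        if (triggerHeld && m_Held == null)
        {
            // Check a small radius around the hand for grabbable objects.
            Collider[] hits = Physics.OverlapSphere(transform.position, grabRadius, grabbableLayer);
            if (hits.Length > 0)
                m_Held = hits[0].transform;
        }
        else if (!triggerHeld)
        {
            m_Held = null; // release on trigger up
        }

        if (m_Held != null)
        {
            // Match the grabbing hand's pose while held.
            m_Held.position = transform.position;
            m_Held.rotation = transform.rotation;
        }
    }
}
```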

An important note about moving the camera: getting the camera tossed around like a small ragdoll can be disorienting to our spectators. If you don’t have your inner ear helping you out, it can be hard to understand jittery movement. For that purpose, all camera movements (Grabber and Spectator Controller behaviours) contain settings for smoothing. These smoothing values, which go from 0 (no smoothing) to 1 (stays at the original position indefinitely), will use linear interpolation between the original and desired camera location and orientation to smooth out any sudden movements. I’ve found 0.1 is generally enough, but it’s a personal preference and can depend on context, so adjust as needed.

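The smoothing described above can be sketched with a per-frame lerp, where 0 snaps immediately and 1 never moves. Note this simple per-frame form is frame-rate dependent; it illustrates the idea rather than the sample's exact code:

```csharp
using UnityEngine;

// Hypothetical smoothing sketch: each frame, move a fraction
// (1 - smoothing) of the way toward the target pose.
public class SmoothFollowSketch : MonoBehaviour
{
    public Transform target;
    [Range(0f, 1f)] public float smoothing = 0.1f; // 0 = no smoothing, 1 = never move

    void LateUpdate()
    {
        float t = 1f - smoothing;
        transform.position = Vector3.Lerp(transform.position, target.position, t);
        transform.rotation = Quaternion.Slerp(transform.rotation, target.rotation, t);
    }
}
```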

Next Steps & Considerations

I’ve now got everything bundled up nicely. I’ve got a series of toggleable spectator cameras that can be grabbed, posed with, and presented within the VR world itself. I still need a way to make sure the users know what they can manipulate, without interfering with the spectator scenery. Since I’ve got separate cameras for the spectator and the player, it’s trivial to use the cameras layer mask to create a player-only layer and place instructions there.

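The culling-mask trick can be sketched like this, assuming a layer named "PlayerOnly" has been created in the project (the layer name and field names are assumptions):

```csharp
using UnityEngine;

// Hypothetical sketch: instructions on a player-only layer are drawn by
// the VR camera but culled from the spectator camera.
public class LayerMaskSetup : MonoBehaviour
{
    public Camera vrCamera;
    public Camera spectatorCamera;

    void Start()
    {
        int playerOnly = 1 << LayerMask.NameToLayer("PlayerOnly");
        vrCamera.cullingMask |= playerOnly;         // player sees the instructions
        spectatorCamera.cullingMask &= ~playerOnly; // spectator does not
    }
}
```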

It’s important to note that all these cameras get expensive. We draw the whole world twice and then re-render the spectator’s view a third time. Disabling both spectator cameras when not in use would be a useful addition. To do that, turn off both the Spectator Camera and Spectator View cameras and the system will fall back into the original ‘render from the player’s POV’ way of spectating.

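Disabling the two extra cameras is a one-liner each; a sketch, with field names as assumptions:

```csharp
using UnityEngine;

// Hypothetical toggle: turning off both spectator cameras skips the
// second world render and the display pass, falling back to mirroring
// the player's POV on the main display.
public class SpectatorToggleSketch : MonoBehaviour
{
    public Camera spectatorCamera;  // renders the world to the render target
    public Camera spectatorDisplay; // renders the target to the main display

    public void SetSpectatorEnabled(bool on)
    {
        spectatorCamera.enabled = on;
        spectatorDisplay.enabled = on;
    }
}
```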

And this is where I leave it up to you. There is a grabbable, movable spectator camera, with its own in-game viewfinder and a separate UI layer for both player and spectator. Take it apart, swap out the assets, change the camera switching behaviour and UI, and turn this project into your own. I’ve tried to keep it light and easy to dissect, with environment and visual assets easy to exclude, and there is a minimal amount of custom scripts. This would be an excellent place to start looking into Cinemachine to pick the right angles to maintain a good view of the action. A crafty developer could even add more to the spectator UI and inputs and design a new asymmetric style of gameplay where the spectator can be a real participant.


What would you like to see in a good VR spectator system?


Translated from: https://blogs.unity3d.com/2017/12/12/spectating-vr/
