Mobile inside-out VR tracking, now readily available on your phone with Unity

VR is all about immersion, and the ability to track the user’s position in space is a key element of it. However, to date this has only been available in desktop and console VR, even though modern smartphones already incorporate the essential technology to make it possible in mobile VR too. This blog explains how to achieve inside-out tracking in mobile VR using only Unity and AR SDKs with today’s handsets.

Note that this particular method of implementing inside-out tracking is not an officially supported Unity feature, nor is it on our immediate roadmap. We learned that Roberto from ARM was doing something cool with some of our integrated platforms and wanted to share it with you.

If you have ever tried a room-scale VR console or desktop game, then you will understand how badly I wished to implement mobile inside-out VR tracking. The problem was that there were no SDKs or phones to try it on. At the beginning of the year, I saw the first opportunity at CES, when the news about the new ASUS release supporting AR functionality and Daydream became public. This turned out to be an accidental leak, because the ASUS device was not actually available until June. Only then could I create my first inside-out mobile VR tracking project in Unity for Daydream, using the early AR SDK for Android. When I got it working, it was amazing to walk in the real world and see my camera in VR move around virtual objects accordingly. It felt so natural; it is something you need to experience yourself.

The second chance I had to implement inside-out tracking came when Google released the ARCore SDK. On the same day, Unity released a version supporting it. I was so excited I couldn’t wait! So, that weekend I built my second inside-out mobile VR tracking project in Unity, this time for the Samsung Gear VR, using the Google ARCore SDK on a Samsung Galaxy S8. This mobile device has an Arm Mali-G71 MP20 GPU capable of delivering high image quality in VR with 8x MSAA while running consistently at 60 FPS.

This blog is intended to share my experience in developing inside-out mobile VR tracking apps and to make this technique available to Unity developers. The Unity integration with the ARCore SDK is not yet prepared to do inside-out mobile VR tracking out of the box (nor was it intended to), so I hope this blog will save you some time and pain.

I hope that when you implement your own Unity mobile VR project with inside-out tracking you will experience the same satisfaction I did. I will explain step by step how to do it with the ARCore SDK.

Mobile inside-out VR tracking using the Google ARCore SDK in Unity

I won’t point out all the steps you need to follow to get Unity working. I assume you have Unity 2017.2.0b9 or later and have the entire environment prepared to build Android apps. Additionally, you’ll need a Samsung Galaxy S8: so far, inside-out VR tracking based on Google ARCore can only be tried on this phone and on the Google Pixel and Pixel XL.

The first step is to download the Unity package of the Google ARCore SDK for Unity (arcore-unity-sdk-preview.unitypackage) and import it into your project. A simple project will be enough; just a sphere, a cylinder and a cube on a plane.

You will also need to download the Google ARCore service. It is an APK file (arcore-preview.apk), and you need to install it on your device.

At this point you should have a folder in your project called “GoogleARCore” containing a session configuration asset, an example, the prefabs, and the SDK.

Figure 1. The Google ARCore SDK folders after importing into Unity.

We can now start integrating ARCore in our sample. Drag and drop the ARCore Device prefab, which you will find in the Prefabs folder, into the scene hierarchy. This prefab includes a First Person Camera. My initial thought was to keep this camera, which automatically converts to the VR camera when you tick the “Virtual Reality Supported” box in Player Settings. I later understood that this is a bad decision, because this is the camera used for AR; that is, the camera used to render the phone camera input together with the virtual objects we add to the “real world scene”. I have identified three big inconveniences so far:

  • You need to manually comment out the line that calls _SetupVideoOverlay() in the SessionComponent script, because if you instead untick the “Enable AR Background” option in the session settings asset (see Fig. 3), the camera pose tracking doesn’t work at all. See the sketch after this list.

  • You can’t apply any scale factor you may need to map the real world to your virtual world, and you can’t always use a 1:1 mapping.

  • After selecting the Single-pass Stereo Rendering option, I got the left eye rendered correctly but not-so-good rendering in the right eye. Single-pass Stereo Rendering is something we need to use to reduce the load on the CPU and accommodate the additional load that ARCore tracking brings.

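Regarding the first point: the edit is just commenting out a single call inside the SDK’s SessionComponent script. The snippet below is only a sketch; the surrounding code is omitted, and only the _SetupVideoOverlay() call name comes from the script itself.

// In the SDK's SessionComponent.cs (Google ARCore SDK preview); sketch only,
// surrounding code omitted. Commenting out the overlay setup skips rendering
// the phone-camera background while "Enable AR Background" stays ticked,
// so pose tracking keeps working:

// _SetupVideoOverlay();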

So, we will use our own camera. As we are working on a VR project, place the camera as a child of a game object (GO), so we can change the camera coordinates according to the tracking pose data from the ARCore subsystem. It is important to note here that the ARCore subsystem provides both the camera position and orientation, but I decided to use only the camera position and let the VR subsystem work as expected. The head orientation tracking the VR subsystem provides is in sync with the timewarp process, and we don’t want to disrupt this sync.

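To make this split of responsibilities concrete, here is a minimal sketch of that rig; the class and field names are mine, not from any SDK:

using UnityEngine;

// Illustrative rig: the VR subsystem rotates the camera itself for head
// tracking, while we only ever translate its parent from ARCore position
// deltas (see the Update() method later in this post).
public class CameraRigSetup : MonoBehaviour
{
    public Camera vrCamera;  // the scene's VR camera

    void Awake()
    {
        // Parent the VR camera under this GO, keeping its local pose, so the
        // VR subsystem's orientation tracking (and timewarp) is untouched.
        vrCamera.transform.SetParent(transform, false);
    }
}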

The next step is to configure the ARCore session to use exclusively what we need for tracking. Click on the ARCore Device GO and you will see in the Inspector the scripts attached to it, as in the picture below:

Figure 2. The ARCore Device game object and the scripts attached to it.

Double-click on Default SessionConfig to open the configuration options and untick the “Plane Finding” and “Point Cloud” options; we don’t need them, and they add a substantial load on the CPU. We need to leave “Enable AR Background” (passthrough mode) ticked, otherwise the AR Session component won’t work and we won’t get any camera pose tracking.

Figure 3. The session settings as we need them.

The next step is to add our own ARCore controller. Create a new GO, ARCoreController, and attach to it the HelloARController.cs script, which we borrow from the GoogleARCore/HelloARExample/Scripts folder. I renamed it to ARTrackingController and removed some items we don’t need. My ARCoreController looks like the picture below. I have also attached to it a script that calculates the FPS.

Figure 4. The ARCoreController GO.

The Update function of the ARTrackingController script looks like this:

public void Update()
{
    _QuitOnConnectionErrors();

    if (Frame.TrackingState != FrameTrackingState.Tracking)
    {
        trackingStarted = false;  // if tracking lost or not initialized
        m_camPoseText.text = "Lost tracking, wait ...";
        const int LOST_TRACKING_SLEEP_TIMEOUT = 15;
        Screen.sleepTimeout = LOST_TRACKING_SLEEP_TIMEOUT;
        return;
    }
    else
    {
        m_camPoseText.text = "";
    }

    Screen.sleepTimeout = SleepTimeout.NeverSleep;

    Vector3 currentARPosition = Frame.Pose.position;
    if (!trackingStarted)
    {
        trackingStarted = true;
        m_prevARPosePosition = Frame.Pose.position;
    }

    // Remember the previous position so we can apply deltas.
    Vector3 deltaPosition = currentARPosition - m_prevARPosePosition;
    m_prevARPosePosition = currentARPosition;

    if (m_CameraParent != null)
    {
        Vector3 scaledTranslation = new Vector3(m_XZScaleFactor * deltaPosition.x,
                                                m_YScaleFactor * deltaPosition.y,
                                                m_XZScaleFactor * deltaPosition.z);
        m_CameraParent.transform.Translate(scaledTranslation);

        if (m_showPoseData)
        {
            m_camPoseText.text = "Pose = " + currentARPosition + "\n" +
                GetComponent<FPSARCoreScript>().FPSstring + "\n" +
                m_CameraParent.transform.position;
        }
    }
}

I removed everything but the connection error checks and the check for the right tracking state. I replaced the original class members with the ones below:

public Text m_camPoseText;
public GameObject m_CameraParent;
public float m_XZScaleFactor = 10;
public float m_YScaleFactor = 2;
public bool m_showPoseData = true;

private bool trackingStarted = false;
private Vector3 m_prevARPosePosition;

You then need to populate the public members in the Inspector. m_camPoseText is used to show on-screen debugging and error data when tracking is lost, together with the phone camera position obtained from the Frame and the virtual camera position after the scale factors are applied.

As I mentioned before, you will rarely be able to map your real environment one-to-one to the virtual scene, and this is the reason I have introduced a couple of scaling factors: one for movement on the XZ plane and one for movement along the Y axis (up-down).

The scale factor depends on the virtual size (vSize) we want to walk through and the actual space we can use in the real world. If the average step length is 0.762 m and we know we have room in the real world for only nSteps, then a first approximation to the XZ scale factor is:

scaleFactorXZ = vSize / (nSteps x 0.762 m)

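For example, to cover a 20 m virtual room with space for only 4 real steps, scaleFactorXZ = 20 / (4 x 0.762) ≈ 6.6. A small helper makes this explicit (the function name is mine, purely for illustration):

// Hypothetical helper: first approximation of the XZ scale factor from the
// virtual distance to cover and the number of real-world steps available.
float ComputeXZScaleFactor(float vSize, int nSteps)
{
    const float averageStepLength = 0.762f;  // average step length in meters
    return vSize / (nSteps * averageStepLength);
}

// e.g. ComputeXZScaleFactor(20f, 4) returns approximately 6.56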

I kept the _QuitOnConnectionErrors() class method and only changed the message output to use the Text component m_camPoseText.

private void _QuitOnConnectionErrors()
{
    // Do not update if ARCore is not tracking.
    if (Session.ConnectionState == SessionConnectionState.DeviceNotSupported)
    {
        m_camPoseText.text = "This device does not support ARCore.";
        Application.Quit();
    }
    else if (Session.ConnectionState == SessionConnectionState.UserRejectedNeededPermission)
    {
        m_camPoseText.text = "Camera permission is needed to run this application.";
        Application.Quit();
    }
    else if (Session.ConnectionState == SessionConnectionState.ConnectToServiceFailed)
    {
        m_camPoseText.text = "ARCore encountered a problem connecting. Please start the app again.";
        Application.Quit();
    }
}

After all this is working, your hierarchy (besides your geometry) should look like the picture below:

Figure 5. The needed ARCore game objects as listed in the hierarchy.

In my project the camera collides with some chess pieces in a chess room (an old demo I use every time I need to show something quickly), so I have added a CharacterController component to it.

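If you want collisions to actually stop the camera, the translation step in Update() can go through the CharacterController instead of transform.Translate. This is only a sketch of that variant, under the assumption that m_CameraParent carries the CharacterController; it is not the exact code of my demo:

// Replace m_CameraParent.transform.Translate(scaledTranslation) with a
// CharacterController move. Move() takes a world-space vector and respects
// colliders; Translate() defaults to local space and ignores them.
CharacterController controller = m_CameraParent.GetComponent<CharacterController>();
if (controller != null)
{
    controller.Move(m_CameraParent.transform.TransformDirection(scaledTranslation));
}
else
{
    m_CameraParent.transform.Translate(scaledTranslation);
}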

At this point we are almost ready. We just need to set up the Player Settings. Besides the standard settings we commonly use for Android, Google recommends:

  • Other Settings -> Multithreaded Rendering: Off

  • Other Settings -> Minimum API Level: Android 7.0 or higher

  • Other Settings -> Target API Level: Android 7.0 or 7.1

  • XR Settings -> ARCore Supported: On

Below you can see a capture of my XR Settings. It is important to set the Single-pass option to reduce the number of draw calls we issue to the driver (they are almost halved).

Figure 6. The XR Settings.

If you build your project following the steps described above, you should get mobile VR inside-out tracking working. For my project, the picture below shows the rendering result. The first line of text shows the phone camera position in the world as supplied by Frame.Pose. The second line shows the FPS, and the third line shows the position of the VR camera in the virtual world.

Although the scene is not very complex in terms of geometry, the chess pieces are rendered with reflections based on local cubemaps, and there are collisions between the camera and the chess pieces, and between the chess pieces and the chess room. I am using 8x MSAA to achieve high image quality. Additionally, the ARCore tracking subsystem is running, and with all this the Samsung Galaxy S8 CPU and Arm Mali-G71 MP20 GPU render the scene at a steady 60 FPS.

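As an aside, the 8x MSAA level used here lives in Edit > Project Settings > Quality, but it can also be requested from script. A minimal sketch (the class name is mine):

using UnityEngine;

public class MSAAConfig : MonoBehaviour
{
    void Awake()
    {
        // Request 8x MSAA; equivalent to choosing "8x Multi Sampling" as the
        // Anti Aliasing level in the Quality settings.
        QualitySettings.antiAliasing = 8;
    }
}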

Figure 7. A screenshot from a Samsung Galaxy S8 running VR in developer mode with inside-out tracking.

Conclusions

At this point, I hope you have been able to follow this blog, build your own mobile VR Unity project with inside-out tracking and, above all, experience walking around a virtual object while doing the same in the real world. You will hopefully agree with me that it feels very natural and adds even more sense of immersion to the VR experience.

Just a few words about the quality of the tracking. I haven’t performed rigorous measurements, and these are only my first impressions after some tests and the feedback of colleagues who have tried my apps. I have tried both implementations indoors and outdoors, and they worked pretty stably in both scenarios. Loop closing was also very good, with no noticeable difference when coming back to the initial spot. When using Google ARCore I was able to go out of the room and the tracking still worked correctly. Nevertheless, formal tests need to be performed to determine the tracking error and stability.

Up to now we have been bound to a chair, moving the virtual camera by means of some interface and able to control only the camera orientation with our head. Now, however, we are in total control of the camera, in the same way we control our eyes and body: we are able to move the virtual camera by replicating our movements in the real world. The consequences of this new “6DoF power” are really important. Soon, we should be able to play new types of games on our mobile phones that up to now were only possible in the console and desktop space. Other potential applications of mobile inside-out VR tracking, in training and education, will soon be possible as well, with just a mobile phone and a VR headset.

As always, I really appreciate your feedback on these blogs, and please share any comments on your own inside-out mobile VR experience.

About the Author

After a decade working in nuclear physics, Roberto discovered his real passion for 3D graphics in 1995 and has been working in leading companies ever since. In 2012 Roberto joined Arm, where he has been working closely with the ecosystem on developing optimized rendering techniques for mobile devices. He also regularly publishes graphics-related blogs and delivers talks and workshops at different game-related events.

Originally published at: https://blogs.unity3d.com/2017/10/18/mobile-inside-out-vr-tracking-now-readily-available-on-your-phone-with-unity/
