ovrHmdInfo is the struct describing the VR headset (HMD): display resolution, refresh rate, suggested (default) eye buffer resolution, and horizontal/vertical field of view.

On a display with a resolution of 2560x1440, each pixel at the center of each eye covers roughly 0.06 degrees of visual arc. Wrapping a full 360 degrees would therefore take about 6000 pixels (360 / 0.06 = 6000), and a ~90 degree FOV needs about a quarter of that. Eye images at 1536x1536 thus give a good 1:1 mapping at the center, but the off-center pixels would need mip-maps. To avoid the need for mip-maps, and for significantly better rendering performance, a conservative default of 1024x1024 is returned.
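
As a quick sanity check of these numbers, here is a minimal sketch in plain C (not SDK code) that reproduces the arithmetic:

#include <stdio.h>

int main( void )
{
    const float degreesPerPixel = 0.06f;                    // visual arc per center pixel at 2560x1440
    const float pixelsFor360 = 360.0f / degreesPerPixel;    // ~6000 pixels for a full 360 degree wrap
    const float pixelsFor90Fov = pixelsFor360 / 4.0f;       // ~1500, close to the 1536x1536 1:1 eye buffer
    printf( "360 degrees: %.0f pixels, 90 degree FOV: %.0f pixels\n", pixelsFor360, pixelsFor90Fov );
    return 0;
}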

The suggested FOV is a product of the lens distortion and the screen size, and there is no single correct answer; it is a tradeoff between resolution and coverage. Too small an FOV leaves unrendered pixels visible at the edges, while too large an FOV wastes resolution or fill rate. It is unreasonable to increase the FOV until the corners are completely covered, but most of the outside edges should be completely covered.

An application may choose to render a larger FOV when angular acceleration is high, to reduce the black pulled in at the edges by the time warp.

//-----------------------------------------------------------------
// HMD information.
//-----------------------------------------------------------------

typedef struct
{
    // Resolution of the display in pixels.
    int     DisplayPixelsWide;
    int     DisplayPixelsHigh;

    // Refresh rate of the display in cycles per second.
    // Currently 60Hz.
    float   DisplayRefreshRate;

    // With a display resolution of 2560x1440, the pixels at the center
    // of each eye cover about 0.06 degrees of visual arc. To wrap a
    // full 360 degrees, about 6000 pixels would be needed and about one
    // quarter of that would be needed for ~90 degrees FOV. As such, eye
    // images with a resolution of 1536x1536 result in a good 1:1 mapping
    // in the center, but they need mip-maps for off center pixels. To
    // avoid the need for mip-maps and for significantly improved rendering
    // performance this currently returns a conservative 1024x1024.
    int     SuggestedEyeResolutionWidth;
    int     SuggestedEyeResolutionHeight;

    // This is a product of the lens distortion and the screen size,
    // but there is no truly correct answer.
    //
    // There is a tradeoff in resolution and coverage.
    // Too small of an FOV will leave unrendered pixels visible, but too
    // large wastes resolution or fill rate.  It is unreasonable to
    // increase it until the corners are completely covered, but we do
    // want most of the outside edges completely covered.
    //
    // Applications might choose to render a larger FOV when angular
    // acceleration is high to reduce black pull in at the edges by
    // the time warp.
    //
    // Currently symmetric 90.0 degrees.
    float   SuggestedEyeFovDegreesX;    // Horizontal Field of View in degrees
    float   SuggestedEyeFovDegreesY;    // Vertical Field of View in degrees
} ovrHmdInfo;
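
For reference, a small sketch of querying this struct. vrapi_GetHmdInfo() is used exactly as in the development example later in this article; the PrintHmdInfo helper name is made up here, and on Android the printf would typically be replaced with logcat output:

#include <stdio.h>
// VrApi.h assumed included for ovrJava, ovrHmdInfo, vrapi_GetHmdInfo.

// Sketch: query the HMD info and log the suggested configuration.
static void PrintHmdInfo( const ovrJava * java )
{
    const ovrHmdInfo hmdInfo = vrapi_GetHmdInfo( java );
    printf( "display %dx%d @ %.0f Hz, eye buffer %dx%d, FOV %.1f x %.1f degrees\n",
            hmdInfo.DisplayPixelsWide, hmdInfo.DisplayPixelsHigh,
            hmdInfo.DisplayRefreshRate,
            hmdInfo.SuggestedEyeResolutionWidth, hmdInfo.SuggestedEyeResolutionHeight,
            hmdInfo.SuggestedEyeFovDegreesX, hmdInfo.SuggestedEyeFovDegreesY );
}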

AllowPowerSave: when true, the app is warned and allowed to continue running at 30 fps when throttling occurs. When false, a level 2 error message is displayed, which requires the user to undock.

ResetWindowFullscreen: when an application with multiple activities moves backwards on the activity stack, the activity window it returns to is no longer flagged as fullscreen.

As a result, Android will also render the decor view, which wastes a significant amount of bandwidth. Setting this flag resets the fullscreen flag on the window. Unfortunately, this triggers Android life cycle events that disturb several NativeActivity codebases such as Stratum and UE4, so the flag should only be set for select applications with multiple activities. Use the adb shell command "dumpsys SurfaceFlinger" to verify that there is only one HWC next to the FB_TARGET.

//-----------------------------------------------------------------
// VR mode
//-----------------------------------------------------------------

typedef struct
{
    // If true, warn and allow the app to continue at 30fps when
    // throttling occurs.
    // If false, display the level 2 error message which requires
    // the user to undock.
    bool    AllowPowerSave;

    // When an application with multiple activities moves backwards on
    // the activity stack, the activity window it returns to is no longer
    // flagged as fullscreen. As a result, Android will also render
    // the decor view, which wastes a significant amount of bandwidth.
    // By setting this flag, the fullscreen flag is reset on the window.
    // Unfortunately, this causes Android life cycle events that mess up
    // several NativeActivity codebases like Stratum and UE4, so this
    // flag should only be set for select applications with multiple
    // activities. Use "adb shell dumpsys SurfaceFlinger" to verify
    // that there is only one HWC next to the FB_TARGET.
    bool    ResetWindowFullscreen;

    // The Java VM is needed for the time warp thread to create a Java environment.
    // A Java environment is needed to access various system services. The thread
    // that enters VR mode is responsible for attaching and detaching the Java
    // environment. The Java Activity object is needed to get the windowManager,
    // packageName, systemService, etc.
    ovrJava Java;
} ovrModeParms;
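
For illustration, a sketch of filling in these fields before entering VR mode. vrapi_DefaultModeParms() and vrapi_EnterVrMode() appear in the development example later in this article; 'java' is assumed to be an initialized ovrJava:

// Sketch: start from the defaults, then adjust the two flags discussed above.
ovrModeParms modeParms = vrapi_DefaultModeParms( &java );
modeParms.AllowPowerSave = true;            // warn and drop to 30fps instead of forcing an undock
modeParms.ResetWindowFullscreen = false;    // only set for multi-activity apps (see the dumpsys check above)
ovrMobile * ovr = vrapi_EnterVrMode( &modeParms );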

Some three- and four-component math types are also defined: a 3D vector, a quaternion, a row-major 4x4 matrix, and a pose.

// 3D vector.
typedef struct ovrVector3f_
{
    float x, y, z;
} ovrVector3f;

// Quaternion.
typedef struct ovrQuatf_
{
    float x, y, z, w;
} ovrQuatf;

// Row-major 4x4 matrix.
typedef struct ovrMatrix4f_
{
    float M[4][4];
} ovrMatrix4f;

// Position and orientation together.
typedef struct ovrPosef_
{
    ovrQuatf    Orientation;
    ovrVector3f Position;
} ovrPosef;
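
To make the conventions concrete, here is a standard quaternion-to-matrix conversion over these types. The MatrixFromQuat name is made up for illustration; VrApi_Helpers.h ships its own helpers for this kind of math:

// Sketch: build a row-major 4x4 rotation matrix from a unit quaternion,
// using the textbook quaternion-to-matrix formula.
static ovrMatrix4f MatrixFromQuat( const ovrQuatf q )
{
    ovrMatrix4f m;
    m.M[0][0] = 1.0f - 2.0f * ( q.y * q.y + q.z * q.z );
    m.M[0][1] = 2.0f * ( q.x * q.y - q.w * q.z );
    m.M[0][2] = 2.0f * ( q.x * q.z + q.w * q.y );
    m.M[0][3] = 0.0f;
    m.M[1][0] = 2.0f * ( q.x * q.y + q.w * q.z );
    m.M[1][1] = 1.0f - 2.0f * ( q.x * q.x + q.z * q.z );
    m.M[1][2] = 2.0f * ( q.y * q.z - q.w * q.x );
    m.M[1][3] = 0.0f;
    m.M[2][0] = 2.0f * ( q.x * q.z - q.w * q.y );
    m.M[2][1] = 2.0f * ( q.y * q.z + q.w * q.x );
    m.M[2][2] = 1.0f - 2.0f * ( q.x * q.x + q.y * q.y );
    m.M[2][3] = 0.0f;
    m.M[3][0] = m.M[3][1] = m.M[3][2] = 0.0f;
    m.M[3][3] = 1.0f;
    return m;
}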

ovrRigidBodyPosef describes the rate of change of a rigid body pose: the pose together with its first and second derivatives, i.e. velocity and acceleration.

// Full rigid body pose with first and second derivatives.
typedef struct ovrRigidBodyPosef_
{
    ovrPosef    Pose;
    ovrVector3f AngularVelocity;
    ovrVector3f LinearVelocity;
    ovrVector3f AngularAcceleration;
    ovrVector3f LinearAcceleration;
    double      TimeInSeconds;          // Absolute time of this pose.
    double      PredictionInSeconds;    // Seconds this pose was predicted ahead.
} ovrRigidBodyPosef;
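
As an illustration of why the derivatives are carried along, here is a minimal sketch that advances the position by its first derivative. The ExtrapolatePosition helper is hypothetical, not an SDK function:

// Sketch: first-order extrapolation of the position by 'dt' seconds,
// using the linear velocity stored in the rigid body pose.
static ovrVector3f ExtrapolatePosition( const ovrRigidBodyPosef * p, const float dt )
{
    ovrVector3f out;
    out.x = p->Pose.Position.x + p->LinearVelocity.x * dt;
    out.y = p->Pose.Position.y + p->LinearVelocity.y * dt;
    out.z = p->Pose.Position.z + p->LinearVelocity.z * dt;
    return out;
}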

ovrTrackingStatus is a set of bit flags describing the current sensor tracking status.

// Bit flags describing the current status of sensor tracking.
typedef enum
{
    VRAPI_TRACKING_STATUS_ORIENTATION_TRACKED   = 0x0001,   // Orientation is currently tracked.
    VRAPI_TRACKING_STATUS_POSITION_TRACKED      = 0x0002,   // Position is currently tracked.
    VRAPI_TRACKING_STATUS_HMD_CONNECTED         = 0x0080    // HMD is available & connected.
} ovrTrackingStatus;

ovrTracking holds the headset tracking state at a given absolute time.

// Tracking state at a given absolute time.
typedef struct ovrTracking_
{
    // Sensor status described by ovrTrackingStatus flags.
    unsigned int        Status;
    // Predicted head configuration at the requested absolute time.
    // The pose describes the head orientation and center eye position.
    ovrRigidBodyPosef   HeadPose;
} ovrTracking;
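
Because Status is a bit field, individual flags are tested with a bitwise AND. A short sketch:

#include <stdbool.h>
// VrApi.h assumed included for ovrTracking and the status flags.

// Sketch: check which tracking capabilities are currently available.
static bool IsOrientationTracked( const ovrTracking * tracking )
{
    return ( tracking->Status & VRAPI_TRACKING_STATUS_ORIENTATION_TRACKED ) != 0;
}

static bool IsPositionTracked( const ovrTracking * tracking )
{
    return ( tracking->Status & VRAPI_TRACKING_STATUS_POSITION_TRACKED ) != 0;
}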

ovrTextureType enumerates the texture types.

typedef enum
{
    VRAPI_TEXTURE_TYPE_2D,              // 2D textures.
    VRAPI_TEXTURE_TYPE_2D_EXTERNAL,     // External 2D texture.
    VRAPI_TEXTURE_TYPE_2D_ARRAY,        // Texture array.
    VRAPI_TEXTURE_TYPE_CUBE,            // Cube maps.
    VRAPI_TEXTURE_TYPE_MAX
} ovrTextureType;

ovrTextureFormat enumerates the texture formats.

typedef enum
{
    VRAPI_TEXTURE_FORMAT_NONE,
    VRAPI_TEXTURE_FORMAT_565,
    VRAPI_TEXTURE_FORMAT_5551,
    VRAPI_TEXTURE_FORMAT_4444,
    VRAPI_TEXTURE_FORMAT_8888,
    VRAPI_TEXTURE_FORMAT_8888_sRGB,
    VRAPI_TEXTURE_FORMAT_RGBA16F,
    VRAPI_TEXTURE_FORMAT_DEPTH_16,
    VRAPI_TEXTURE_FORMAT_DEPTH_24,
    VRAPI_TEXTURE_FORMAT_DEPTH_24_STENCIL_8,
} ovrTextureFormat;
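
The sRGB variant ties into the gamma-correct rendering discussion in the frame options below. A hedged sketch of a format choice an app might make (the helper is illustrative, not SDK code):

#include <stdbool.h>
// VrApi.h assumed included for ovrTextureFormat.

// Sketch: pick a color format depending on whether the app renders
// gamma correct (sRGB) eye buffers.
static ovrTextureFormat ChooseColorFormat( const bool gammaCorrect )
{
    return gammaCorrect ? VRAPI_TEXTURE_FORMAT_8888_sRGB : VRAPI_TEXTURE_FORMAT_8888;
}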

Frame submission is controlled by a set of option flags:

//-----------------------------------------------------------------
// Frame Submission
//-----------------------------------------------------------------

typedef enum
{
    // To get gamma correct sRGB filtering of the eye textures, the textures must be
    // allocated with GL_SRGB8_ALPHA8 format and the window surface must be allocated
    // with these attributes:
    // EGL_GL_COLORSPACE_KHR, EGL_GL_COLORSPACE_SRGB_KHR
    //
    // While we can reallocate textures easily enough, we can't change the window
    // colorspace without relaunching the entire application, so if you want to
    // be able to toggle between gamma correct and incorrect, you must allocate
    // the framebuffer as sRGB, then inhibit that processing when using normal
    // textures.
    VRAPI_FRAME_OPTION_INHIBIT_SRGB_FRAMEBUFFER                 = 1,
    // Correct for chromatic aberration.
    VRAPI_FRAME_OPTION_INHIBIT_CHROMATIC_ABERRATION_CORRECTION  = 2,
    // Enable / disable the sliced warp.
    VRAPI_FRAME_OPTION_USE_SLICED_WARP                          = 4,
    // Flush the warp swap pipeline so the images show up immediately.
    // This is expensive and should only be used when an immediate transition
    // is needed like displaying black when resetting the HMD orientation.
    VRAPI_FRAME_OPTION_FLUSH                                    = 8,
    // This is the final frame. Do not accept any more frames after this.
    VRAPI_FRAME_OPTION_FINAL                                    = 16,
    // The overlay plane is a HUD, and should ignore head tracking.
    // This is generally poor practice for VR.
    VRAPI_FRAME_OPTION_FIXED_OVERLAY                            = 32,   // FIXME: use ovrFrameLayer::FixedToView
    // The third image plane is blended separately over only a small, central
    // section of each eye for performance reasons, so it is enabled with
    // a flag instead of a shared ovrFrameProgram.
    VRAPI_FRAME_OPTION_SHOW_CURSOR                              = 64,   // FIXME: use ovrFrameLayerTexture::TextureRect
    // Draw the axis lines after warp to show the skew with the pre-warp lines.
    VRAPI_FRAME_OPTION_DRAW_CALIBRATION_LINES                   = 128   // FIXME: use local preference
} ovrFrameOption;
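
Since the values are powers of two, several options can be combined with a bitwise OR. Where the combined value is passed depends on the SDK version (typically a flags field on the frame parameters), so this sketch only shows the flag arithmetic itself:

// Sketch: combine frame options; testing a bit works the same way in reverse.
const int frameOptions = VRAPI_FRAME_OPTION_INHIBIT_SRGB_FRAMEBUFFER
                       | VRAPI_FRAME_OPTION_USE_SLICED_WARP;
const bool slicedWarpEnabled = ( frameOptions & VRAPI_FRAME_OPTION_USE_SLICED_WARP ) != 0;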

A VR context is also defined. This context allows multiple activities running in the same address space to share the VrApi; when multiple video-related subsystems are involved, each activity needs to keep its own context.

Beyond that, a number of system commands and error-handling facilities are defined.

VrApi_LocalPrefs.h defines the local preference variables.

Local preferences store platform-wide settings, tying them to the device rather than to an application or user.

These values are usually stored in /sdcard/.oculusprefs, although they may eventually be moved to another database.

A preference can be set with the command: "echo dev_enableCapture 1 > /sdcard/.oculusprefs"

Several preferences can also be set at once: "echo dev_enableCapture 1 dev_powerLevelState 1 > /sdcard/.oculusprefs"

VrApi_Helpers.h contains helper functions for matrix math, view transforms, and parameter initialization.

VrApi_Android.h lists the Android-specific API entry points.

VrApi.h lists the core API entry points and documents the life cycle and operating modes described below.

VrApi

Multiple Android activities that live in the same address space can share the VrApi. However, only one activity can be in VR mode at a time. The following describes when an application enters VR mode.

The Android VR life cycle:

An Android activity can only enter VR mode while the activity is in the resumed state:

1.  VrActivity::onCreate() <---------+
2.  VrActivity::onStart() <-------+  |
3.  VrActivity::onResume() <---+  |  |
4.  vrapi_EnterVrMode()        |  |  |
5.  vrapi_LeaveVrMode()        |  |  |
6.  VrActivity::onPause() -----+  |  |
7.  VrActivity::onStop() ---------+  |
8.  VrActivity::onDestroy() ---------+

So two transition points (vrapi_EnterVrMode / vrapi_LeaveVrMode) are added on top of the regular Android life cycle.

An Android activity can also only enter VR mode while there is a valid Android surface. Concretely:

1.  VrActivity::surfaceCreated() <----+
2.  VrActivity::surfaceChanged()      |
3.  vrapi_EnterVrMode()               |
4.  vrapi_LeaveVrMode()               |
5.  VrActivity::surfaceDestroyed() ---+

Note, however, that the surface life cycle is not tightly coupled to the activity life cycle; the two sequences are allowed to interleave.

Typically, surfaceCreated is called after onResume, and surfaceDestroyed is called between onPause and onDestroy. At times, though, surfaceDestroyed may be called after onDestroy, or even before onPause. (One might wish the ordering were strictly specified, but it is not.)
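
Because the two life cycles interleave, a native app typically tracks both states and enters VR mode only when all preconditions hold. The sketch below is illustrative, not SDK code; the AppState struct and TryEnterOrLeaveVrMode helper are hypothetical names:

#include <stdbool.h>
#include <android/native_window.h>
// VrApi.h assumed included for ovrJava, ovrModeParms, ovrMobile, etc.

typedef struct
{
    bool            Resumed;        // toggled by onResume / onPause
    ANativeWindow * NativeWindow;   // set by surfaceCreated / cleared by surfaceDestroyed
    ovrMobile *     Ovr;            // non-NULL while in VR mode
} AppState;

// Sketch: enter VR mode only when the activity is resumed AND a surface
// exists; leave it as soon as either condition no longer holds.
static void TryEnterOrLeaveVrMode( AppState * app, const ovrJava * java )
{
    if ( app->Resumed && app->NativeWindow != NULL && app->Ovr == NULL )
    {
        const ovrModeParms modeParms = vrapi_DefaultModeParms( java );
        app->Ovr = vrapi_EnterVrMode( &modeParms );
    }
    else if ( ( !app->Resumed || app->NativeWindow == NULL ) && app->Ovr != NULL )
    {
        vrapi_LeaveVrMode( app->Ovr );
        app->Ovr = NULL;
    }
}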

Development example:

1. Set up the Java parameters:

ovrJava java;
java.Vm = javaVm;
java.Env = jniEnv;
java.ActivityObject = activityObject;

2. Initialize the API:

const ovrInitParms initParms = vrapi_DefaultInitParms( &java );
vrapi_Initialize( &initParms );

3. Create an EGLContext, then use the suggested FOV and resolution from 'ovrHmdInfo' to set up a projection matrix and eye texture swap chains:

const ovrHmdInfo hmdInfo = vrapi_GetHmdInfo( &java );

// Setup a projection matrix based on the 'ovrHmdInfo'.
const ovrMatrix4f eyeProjectionMatrix = ovrMatrix4f_CreateProjectionFov( hmdInfo.SuggestedEyeFovDegreesX,
                                            hmdInfo.SuggestedEyeFovDegreesY,
                                            0.0f, 0.0f, VRAPI_ZNEAR, 0.0f );

// Allocate a texture swap chain for each eye.
ovrTextureSwapChain * colorTextureSwapChain[VRAPI_FRAME_LAYER_EYE_MAX];
for ( int eye = 0; eye < VRAPI_FRAME_LAYER_EYE_MAX; eye++ )
{
    colorTextureSwapChain[eye] = vrapi_CreateTextureSwapChain( VRAPI_TEXTURE_TYPE_2D,
                                            VRAPI_TEXTURE_FORMAT_8888,
                                            hmdInfo.SuggestedEyeResolutionWidth,
                                            hmdInfo.SuggestedEyeResolutionHeight,
                                            1, true );
}

4. Enter the Android Activity/Surface life cycle loop:

// Android Activity/Surface life cycle loop.
for ( ; ; )
{
    // Acquire ANativeWindow from Android Surface and create EGLSurface.
    // Make the EGLContext current on the surface.

    // Enter VR mode once the activity is in the resumed state with a
    // valid EGLSurface and current EGLContext.
    const ovrModeParms modeParms = vrapi_DefaultModeParms( &java );
    ovrMobile * ovr = vrapi_EnterVrMode( &modeParms );

    // Frame loop, possibly running on another thread.
    for ( long long frameIndex = 1; ; frameIndex++ )
    {
        // Get the HMD pose, predicted for the middle of the time period during which
        // the new eye images will be displayed. The number of frames predicted ahead
        // depends on the pipeline depth of the engine and the synthesis rate.
        // The better the prediction, the less black will be pulled in at the edges.
        const double predictedDisplayTime = vrapi_GetPredictedDisplayTime( ovr, frameIndex );
        const ovrTracking baseTracking = vrapi_GetPredictedTracking( ovr, predictedDisplayTime );

        // Apply the head-on-a-stick model if there is no positional tracking.
        const ovrHeadModelParms headModelParms = vrapi_DefaultHeadModelParms();
        const ovrTracking tracking = vrapi_ApplyHeadModel( &headModelParms, &baseTracking );

        // Advance the simulation based on the predicted display time.

        // Render eye images and setup ovrFrameParms using 'ovrTracking'.
        ovrFrameParms frameParms = vrapi_DefaultFrameParms( &java, VRAPI_FRAME_INIT_DEFAULT, NULL );
        frameParms.FrameIndex = frameIndex;

        const ovrMatrix4f centerEyeViewMatrix = vrapi_GetCenterEyeViewMatrix( &headModelParms, &tracking, NULL );

        // Render the eye images.
        for ( int eye = 0; eye < VRAPI_FRAME_LAYER_EYE_MAX; eye++ )
        {
            const ovrMatrix4f eyeViewMatrix = vrapi_GetEyeViewMatrix( &headModelParms, &centerEyeViewMatrix, eye );
            const int colorTextureSwapChainIndex = frameIndex % vrapi_GetTextureSwapChainLength( colorTextureSwapChain[eye] );
            const unsigned int textureId = vrapi_GetTextureSwapChainHandle( colorTextureSwapChain[eye], colorTextureSwapChainIndex );

            // Render to 'textureId' using the 'eyeViewMatrix' and 'eyeProjectionMatrix'.

            frameParms.Layers[VRAPI_FRAME_LAYER_TYPE_WORLD].Textures[eye].ColorTextureSwapChain = colorTextureSwapChain[eye];
            frameParms.Layers[VRAPI_FRAME_LAYER_TYPE_WORLD].Textures[eye].TextureSwapChainIndex = colorTextureSwapChainIndex;
            frameParms.Layers[VRAPI_FRAME_LAYER_TYPE_WORLD].Textures[eye].TexCoordsFromTanAngles = ovrMatrix4f_TanAngleMatrixFromProjection( &eyeProjectionMatrix );
            frameParms.Layers[VRAPI_FRAME_LAYER_TYPE_WORLD].Textures[eye].HeadPose = tracking.HeadPose;
        }

        // Hand over the eye images to the time warp.
        vrapi_SubmitFrame( ovr, &frameParms );
    }

    // Leave VR mode when the activity is paused, the Android Surface is
    // destroyed, or when switching to another activity.
    vrapi_LeaveVrMode( ovr );
}

5. After leaving VR mode, destroy the swap chains and shut down the API:

// Destroy the texture swap chains.
for ( int eye = 0; eye < VRAPI_FRAME_LAYER_EYE_MAX; eye++ )
{
    vrapi_DestroyTextureSwapChain( colorTextureSwapChain[eye] );
}

// Shut down the API.
vrapi_Shutdown();

Integration
===========

The API is designed to work with an Android Activity using a plain Android SurfaceView,
where the Activity life cycle and the Surface life cycle are managed completely in native
code by sending the life cycle events (onResume, onPause, surfaceChanged etc.) to native code.

The API does not work with an Android Activity using a GLSurfaceView. The GLSurfaceView
class manages the window surface and EGLSurface, and the implementation of GLSurfaceView
may unbind the EGLSurface before onPause() gets called. As such, there is no way to
leave VR mode before the EGLSurface disappears. Another problem with GLSurfaceView is
that it creates the EGLContext using eglChooseConfig(). The Android EGL code pushes in
multisample flags in eglChooseConfig() if the user has selected the "force 4x MSAA" option
in settings. Using a multisampled front buffer is completely wasted for time warp
rendering.

Alternatively, an Android NativeActivity can be used to avoid manually handling all
the life cycle events. However, it is important to select the EGLConfig manually,
without using eglChooseConfig(), to make sure the front buffer is not multisampled.

The vrapi_GetHmdInfo() function can be called at any time from any thread. This allows
an application to set up its renderer, possibly running on a separate thread, before
entering VR mode.

On Android, an application cannot just allocate a new window/frontbuffer and render to it.
Android allocates and manages the window/frontbuffer and (after the fact) notifies the
application of the state of affairs through life cycle events (surfaceCreated / surfaceChanged
/ surfaceDestroyed). The application (or 3rd party engine) typically handles these events.
Since the VrApi cannot just allocate a new window/frontbuffer, and the VrApi does not
handle the life cycle events, the VrApi somehow has to hijack the Android surface from
the application. The easiest way to do this is by having the application first set up an
OpenGL ES context that is current on the Android window surface. vrapi_EnterVrMode() is
then called from the thread with this OpenGL ES context, which allows vrapi_EnterVrMode()
to swap out the Android window surface and take ownership of the actual frontbuffer that
is used for rendering.

Sensor input only becomes available after entering VR mode. In part this is because the
VrApi supports hybrid apps. The app starts out in non-stereo mode, and only switches to
VR mode when the phone is docked into the headset. While not in VR mode, a non-stereo app
should not be burdened with a SCHED_FIFO device manager thread for sensor input and possibly
expensive sensor/vision processing. In other words, there is no sensor input until the
phone is docked and the app is in VR mode.

Before getting sensor input, the application also needs to know when the images that are
going to be synthesized will be displayed, because the sensor input needs to be predicted
ahead for that time. As it turns out, it is not trivial to get an accurate predicted
display time. Therefore the calculation of this predicted display time is part of the VrApi.
An accurate predicted display time can only really be calculated once the rendering loop
is up and running and submitting frames regularly. In other words, before getting sensor
input, the application needs an accurate predicted display time, which in turn requires
the renderer to be up and running. As such, it makes sense that sensor input is not
available until vrapi_EnterVrMode() has been called. However, once the application is
in VR mode, it can call vrapi_GetPredictedDisplayTime() and vrapi_GetPredictedTracking()
at any time from any thread.

vrapi_SubmitFrame() must be called from the thread with the OpenGL ES context that was
used for rendering. The reason for this is that the VrApi allows for one frame of overlap
which is essential on tiled mobile GPUs. Because there is one frame of overlap, the eye images
have typically not completed rendering by the time they are submitted to vrapi_SubmitFrame().
vrapi_SubmitFrame() therefore adds a sync object to the current context which allows the
background time warp thread to check when the eye images have completed.
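
The sync object mechanism can be illustrated with standard EGL fence syncs (EGL_KHR_fence_sync). This is a hedged sketch of the general technique, not the SDK's actual internal code; on Android these entry points are typically obtained via eglGetProcAddress():

#define EGL_EGLEXT_PROTOTYPES
#include <EGL/egl.h>
#include <EGL/eglext.h>

// Sketch: insert a fence after the eye-image rendering commands. Another
// thread can later wait on it to know the GPU has finished those commands.
EGLDisplay display = eglGetCurrentDisplay();
EGLSyncKHR sync = eglCreateSyncKHR( display, EGL_SYNC_FENCE_KHR, NULL );

// ... later, on the consumer (time warp) thread:
eglClientWaitSyncKHR( display, sync, EGL_SYNC_FLUSH_COMMANDS_BIT_KHR, EGL_FOREVER_KHR );
eglDestroySyncKHR( display, sync );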

Note that vrapi_EnterVrMode() and vrapi_SubmitFrame() can be called from different threads.
vrapi_EnterVrMode() needs to be called from a thread with an OpenGL ES context that is current
on the Android window surface. This does not need to be the same context that is also used
for rendering. vrapi_SubmitFrame() needs to be called from the thread with the OpenGL ES
context that was used to render the eye images. If this is a different context than the context
used to enter VR mode, then for stereoscopic rendering this context never needs to be current
on the Android window surface.
