Augmented reality (AR) equipment
Notes from the book: Virtual Reality and Augmented Reality: Myths and Realities [M]. John Wiley & Sons, 2018 (chapter: "The Democratization of VR-AR").
It is possible to divide the restitution devices used in AR into different
categories based on their physical support:
– carried in front of the user’s eyes: depending on the complexity of
the device, we will either use the term “Augmented Reality Headset” or
“Augmented Reality Glasses”. We differentiate between “optical see-through”
systems, which allow direct natural vision, and “video see-through” systems,
where the environment is perceived indirectly, through a camera;
– hand-held devices: these are smartphones and tablets. The massive
distribution of these devices on the market has allowed the general public to
discover and explore AR;
– fixed or mobile devices: video-projectors that project an image directly
onto objects in the real environment, thereby bringing about a natural
superimposition. This modality (called Spatial Augmented Reality or SAR)
has been widely developed within industrial applications such as repairing or
maintaining equipment where the information from technical manuals makes
it possible to see the texts, assembly diagrams or even videos in-situ, without
the user having any equipment themselves;
– contact-lens-based systems have been developed recently but are yet
to see definite progress. At the moment, these systems display only highly
simplified symbols and images. The main technological challenges are improving
resolution and energy consumption, while also taking human factors into
account: acceptance and ease of use.
In the following sections, we will focus primarily on the first category of
equipment: headsets and glasses.
1 Google Glass
Announced in 2012... Google announced, at the end of 2014, that it was stopping sales of the device.
2 Google Tango
3 HoloLens
Developed by Microsoft, HoloLens is an AR headset that was unveiled in
2015. It is equipped with a waveguide that has a diagonal field of vision that
extends up to approximately 35° (30° × 17.5°) with a very high resolution for
each eye (1280 × 720) and an accommodation distance of 2 m. The HoloLens is
loaded with sensors: 4 cameras used for location and positioning, a
time-of-flight depth sensor with low energy consumption, a frontal camera
with large field of vision (120° × 120°) and an inertial unit. The power of the
HoloLens resides primarily in its computing architecture: not only thanks to
its CPU and its GPU but, above all, in the part that Microsoft misleadingly
calls its HPU (Holographic Processing Unit), which has nothing to do with
holograms and which is more commonly called a VPU (Vision Processing
Unit). This processor allows us to obtain pose computations that perform 200
times better than a classic software implementation, while also consuming
very little energy (only 10 watts). This architecture results in very good
autonomy and a pose estimate that is much more robust and rapid compared
with state-of-the-art methods.
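As an aside (an illustration of the general principle, not Microsoft's implementation), the time-of-flight depth sensor mentioned above works by measuring the round-trip time of emitted light: depth = c · t / 2.

```python
# Time-of-flight principle: a pulse of light travels to the surface and
# back, so the one-way distance is half the round-trip distance.
C = 299_792_458.0  # speed of light in m/s

def depth_from_round_trip(t_seconds):
    """Depth of a surface from the measured round-trip time of light."""
    return C * t_seconds / 2

# Light returning after ~13.3 ns corresponds to a surface about 2 m away,
# which shows why such sensors must resolve time at the nanosecond scale.
print(round(depth_from_round_trip(13.34e-9), 2))  # ≈ 2.0 m
```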
4 Magic Leap
5 Other AR glasses
Apart from these three companies, which can mobilize considerably large
investments in the development of AR glasses, there are dozens of other players
that offer devices based on optical see-through systems. In order to compare
them easily, we propose the following set of criteria:
– The optical system: in most cases, optical see-through systems are based
on optical waveguides that propagate the image emitted by a micro-screen
towards the user’s eye. These optical waveguides make it possible to position
the physical screen onto the sides or the upper edge of the glasses, while
offering a transparent display that gives the user a direct perception of the
real environment. There are several waveguide technologies that can be used:
diffractive, reflective, polarized or holographic [SAR 13]. Another solution
consists of using a semi-transparent mirror (generally curved) that directly
or indirectly reflects a screen positioned in the glasses. In the latter case,
the difficulty is in finding an accommodation distance that is large enough
and gives the user the impression that this screen is located several meters
away from them. Finally, CastAR proposes an original solution: this is based
on a projection system integrated into the glasses, which emit an image that
reflects off a specific surface positioned in the real environment. This surface is
made up of microscopic retroreflectors (similar to the reflective surfaces on
high-visibility vests) that reflect the projected images back towards the
source alone (the glasses). Thus, one system does not interfere with another,
which opens up the possibility for a multi-user solution. Furthermore, this
device is equipped with a pico-projector for each eye that is synchronized
with an active stereoscopic vision system integrated into the glasses, thereby
offering a stereoscopic visual experience of the virtual content.
– The positioning of the display: the display can be located either in the
user’s peripheral vision, as with Google Glass (thereby making it possible to
display data that is not registered to the real environment and is disengaged
from the field of vision), or it can be located in the central portion of the
user’s visual field (making it possible to display information that is perfectly
positioned in the real environment). It must be noted that some devices, such
as the Optinvent ORA glasses, can switch from one configuration to the other.
– Monocular versus binocular: we differentiate between systems that offer
an optical see-through system for only one eye or those that offer binocular
vision, with an optical see-through system for each eye. Monocular systems are
simpler to configure, but can provoke a phenomenon called “binocular rivalry”
(discomfort related to the fact that information is visible only to one eye).
Binocular systems offer a greater ease of use, as long as the system is perfectly
configured. This remains a complicated process and is dependent on the user.
– Visual field: the relatively small visual field of AR glasses remains,
and will remain for some years to come, a strong limitation. The values are
generally angular and given by the diagonal of the screen. This value is only
indicative, as it depends on the distance between the virtual screen and the
user’s eye. At present, the best optical see-through systems offer a visual field
of 40°, or even up to 60° for some prototypes that are not yet on the market.
However, given the wavelength of visible light and the waveguide principle, it
will be difficult to exceed 60° of the visual field [MAI 14].
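As a side note (not from the book), the diagonal figure can be related to the horizontal and vertical extents with basic trigonometry, which lets us check the HoloLens values quoted earlier (30° × 17.5° for an approximately 35° diagonal):

```python
import math

def diagonal_fov(h_deg, v_deg):
    """Diagonal field of view of a flat virtual screen, in degrees.

    For a screen at distance d, width = 2*d*tan(h/2) and
    height = 2*d*tan(v/2); the diagonal subtends
    2*atan(sqrt(width**2 + height**2) / (2*d)), and d cancels out.
    """
    half_w = math.tan(math.radians(h_deg) / 2)
    half_h = math.tan(math.radians(v_deg) / 2)
    return 2 * math.degrees(math.atan(math.hypot(half_w, half_h)))

# HoloLens figures quoted above: 30° x 17.5° per eye
print(round(diagonal_fov(30, 17.5), 1))  # ≈ 34.3°, i.e. the "approximately 35°" diagonal
```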
– Light intensity: in AR glasses, the lower the brightness of the pixels, the
more transparent they are and vice versa. It is thus impossible to display black,
or even dark objects in these glasses, and it is always preferable to display
virtual content with light colors in order to improve their perception by the
user. Moreover, whenever there is greater ambient brightness, the display from
the waveguides becomes harder to see and thus less perceptible. Optical
see-through systems must therefore offer sufficiently bright displays in order
to be usable in all conditions. Hence, many AR glasses have the option of a
solar filter: this improves the display quality in conditions of high brightness.
The GaN micro-screen technology developed by CEA-LETI can reach a
brightness of one million candelas per m², and this will, in the future, enable the
sale of AR glasses that adapt to ambient light in the real environment.
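The additive behavior described above can be sketched in a few lines (an illustrative first-order model, not drawn from the book): an optical see-through display can only add light to the real scene, never subtract it.

```python
# To first order, the eye receives the real scene plus the emitted pixel
# light. The display can only add photons, which is why black cannot be
# rendered: a black pixel is simply transparent.

def perceived(real_rgb, virtual_rgb):
    """Additive combination of real and displayed light, clamped to [0, 1]."""
    return tuple(min(1.0, r + v) for r, v in zip(real_rgb, virtual_rgb))

bright_wall = (0.8, 0.8, 0.8)
black_pixel = (0.0, 0.0, 0.0)   # fully transparent: the wall shows through unchanged
white_pixel = (0.9, 0.9, 0.9)   # light colors remain visible even on a bright background

print(perceived(bright_wall, black_pixel))  # (0.8, 0.8, 0.8): "black" is invisible
print(perceived(bright_wall, white_pixel))  # (1.0, 1.0, 1.0)
```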
– Sensors: as mentioned earlier, AR glasses must be precisely located in the
real environment. In order to do this, they are generally equipped with a GPS,
an inertial unit (made up of accelerometers, a gyroscope and a magnetometer)
and vision sensors, with one or more color cameras or even a depth sensor.
The robustness and precision of the superimposing of the virtual content
onto the real environment depends on these sensors. The various difficulties
inherent to this problem of integrating real and virtual worlds are discussed
in detail in section 3.2 and the proposed solutions to these problems are
discussed in section 4.1. Let us note that, from a purely material point of view,
accelerometers may be disturbed by the earth’s gravitational field, resulting in
noisy signals. Similarly, magnetometers may be disturbed by the surrounding
magnetic field and report an incorrect orientation. The majority
of RGB cameras are equipped with a rolling shutter that generates a deformed
image when the sensor or the scene is in motion (the image acquisition is
done line by line). As a result, it is preferable to use global shutter cameras
that capture the environment instantaneously without any deformation of the
image. While a fish-eye lens makes it possible to capture a much larger area in
the real environment, and thus detect several points of interest that would help
improve registration, the vision algorithms used must be heavily modified.
Thus, the best available solutions at present, such as the Microsoft HoloLens
or Google Tango, use multiple vision sensors.
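To illustrate the sensor-fusion idea behind these inertial units (a generic textbook sketch, not the algorithm of any particular headset), a complementary filter blends the smooth-but-drifting gyroscope with the noisy-but-absolute tilt given by the accelerometer's view of gravity:

```python
import math

def complementary_filter(pitch, gyro_rate, accel_y, accel_z, dt, alpha=0.98):
    """One update step estimating pitch (radians).

    The gyroscope is integrated (smooth but drifts over time); the
    accelerometer gives an absolute but noisy tilt from the gravity
    vector. Weighting the gyro heavily suppresses both defects.
    """
    gyro_pitch = pitch + gyro_rate * dt            # short-term: integrate angular rate
    accel_pitch = math.atan2(accel_y, accel_z)     # long-term: tilt from gravity
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# A stationary device whose gyro reads a small constant bias, while the
# accelerometer correctly sees gravity along z.
pitch = 0.0
for _ in range(1000):
    pitch = complementary_filter(pitch, gyro_rate=0.01, accel_y=0.0, accel_z=9.81, dt=0.01)
print(round(pitch, 3))  # stays near 0; pure gyro integration would have drifted to 0.1 rad
```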
– Integrated computing capacity: not all AR devices are equipped with
integrated computing capacity. Some devices only offer a display feature and
must be connected to an external terminal to process information such as pose
estimation or feedback on virtual elements. Other devices integrate all the
electronic components required for the various computations (e.g. memory,
chipsets) either remotely (a box worn on the belt) or directly into the glasses.
However, the complexity of the computations, the increase in the number of
sensors and the need to compute the pose of the device in a few milliseconds
make it necessary to use dedicated processors. Thus, the latest devices integrate
not only a CPU but also a Graphic Processing Unit (GPU) and, more recently,
a Vision Processing Unit (VPU) or Digital Signal Processor (DSP). Apart
from their performance in carrying out signal processing and optimization
computations, the advantage of using these processors is that they have low
energy consumption, which greatly extends the battery autonomy of mobile
devices.
– Ergonomics: the ergonomics of an AR device is a key factor in its success.
The device may potentially be carried for several hours either for professional
use or by a general user, and thus, comfort and impeccable ease-of-use are
essential. The difficulty lies in designing a device that is worn on the head,
and must therefore be light, even though it comes loaded with micro-screens,
optical see-through systems, sensors, computing capacity as well as batteries.
It is, hence, essential to obtain a balanced distribution of weight and good
stability with respect to the user’s eyes. Moreover, the heat emitted by the
device must be controlled, which poses a considerable challenge given that
the number of active components keeps increasing and since the direct contact
with the user’s skin makes even a slight increase in temperature perceptible.
– Interaction: the use of AR is not limited to tasks requiring observations –
interaction interfaces must also be integrated. Several devices have a tactile
surface on the sides of the glasses, while others have started integrating
systems to track the user’s hands in 3D space and systems for gesture
recognition, which can be coupled with gaze tracking systems or vocal
recognition systems. Nonetheless, just as techniques had to be adapted to
smartphones and other tactile tablets, the ergonomics of the human–machine
interfaces specific to AR glasses must be completely rethought and adapted
to the capacity for interaction and restitution of each device. Finally, in many
cases where these devices are used, for reasons of safety, the user must have an
undistorted perception of the real environment, involving optical systems that
are as see-through as possible and which do not deform the visual perception
of the real environment.
This constitutes a real challenge both in scientific terms (designing
these modalities of interaction) and in technological terms (manufacturing
the devices while respecting the constraints on robustness, compactness,
consumption and costs).
– Mobility: AR generally makes sense if the user can move around freely in
their environment. Even though some AR glasses are connected to an external
terminal using one or more cables in the development stage, in the model
that is sold on the market, all the computation processors and batteries are
usually integrated into the glasses to make them usable as independent, mobile
devices.
Given the growing number of optical see-through AR headsets or glasses,
and the rapid technological obsolescence associated with continued
technological progress, it would be futile to describe all the different systems
available on the market. However, Table 2.1 provides a non-exhaustive
overview of several AR glasses and headsets available in early 2017.
Thanks to the quality of its very low-latency registration, made possible by its
multiple sensors and large processing capacity, the Microsoft HoloLens was the most
advanced solution to offer AR services in early 2017. However, as can be seen
in Table 2.1, there are several solutions that could represent an alternative to
the HoloLens by outdoing it in some aspects. Certain glasses with many
sensors, such as Meta 2, the Atheer Air Glasses or the ODG R-7, could offer a
performance similar to that of the HoloLens and, in certain cases, offer a form
factor that is more apt for general use. As the extent of the visual field is a key
characteristic, the waveguide technology developed by Lumus performs well
thanks to its relatively large visual field and high brightness compared
with its competitors. Lumus defines itself not as a manufacturer of
AR glasses, but rather as a developer of integrated optical systems. Optinvent,
which offers a waveguide system that is less efficient but substantially less
expensive, is also an expert in developing optical systems that are more
compact but offer many advantages, such as a variable accommodation
distance, as well as good tolerance of poor alignment between the user’s gaze
axis and the optical axis of the glasses (this tolerance zone is called the eye
box). Several products, generally ruggedized, are also dedicated to professional
use, such as the Epson Moverio Pro BT-2000 and especially the Daqri headset.
This headset is a helmet that is equipped with a large number of sensors,
allowing it to respond to the many needs of the industry. Finally, we have the
castAR solution, which stands apart thanks to the technology used
(projective) and the target market (the video game market).
Table 2.1. Description of optical see-through AR systems

| Brand | Microsoft | Magic Leap | Epson | Epson | Epson |
|---|---|---|---|---|---|
| Model | HoloLens | ? | Moverio BT-200 | Moverio BT-300 | Moverio Pro BT-2000 |
| Optical system | Waveguide | ? | Waveguide | Waveguide | Waveguide |
| Display location | Visual field | Visual field | Visual field | Visual field | Visual field |
| Mono- or binocular vision | Binocular | Binocular | Binocular | Binocular | Binocular |
| Visual field | ~35° | ? | ~23° | ~23° | ~23° |
| Resolution per eye | 1280 × 720 | ? | 960 × 540 | 1280 × 720 | 960 × 540 |
| Light intensity | ? | ? | ? | Si-OLED high intensity | ? |
| Sensors | 9-axis IMU, 4 cameras, 1 fish-eye camera, 3D sensor | ? | 9-axis IMU, GPS, VGA camera | GPS, 9-axis IMU, 5 MP camera | GPS, 9-axis IMU, 2 × 5 MP stereo cameras |
| Computing capacity | CPU, GPU, VPU (HPU) | ? | ARM Cortex-A9 dual-core 1.2 GHz | Intel Atom x5 (Cherry Trail) 1.44 GHz | ARM Cortex-A9 dual-core 1.2 GHz |
| Ergonomics | 579 g, gesture interaction | ? | 88 g, remote touchpad | 69 g, remote touchpad | 290 g, remote touchpad |
| Mobility | All integrated | ? | Remote box | Remote box | Remote box |
| OS | Windows | ? | Android 4.0.4 | Android 5.1 | Android 4.0.4 |
| Version | Dev kit and commercial | ? | Commercial | Commercial | Commercial |
| Price | €3,299 | ? | €699 | €799 | €3,120 |

| Brand | Meta | Vuzix | Atheer |
|---|---|---|---|
| Model | Meta 2 | Blade 3000 | Air Glasses |
| Optical system | Mirrors (focus at 0.5 m) | Waveguide | Waveguide |
| Display location | Visual field | Visual field | Visual field |
| Mono- or binocular vision | Binocular | Binocular | Binocular |
| Visual field | ~90° (diagonal) | ? | ~50° |
| Resolution per eye | 1280 × 1440 | ? | 1280 × 720 |
| Light intensity | ? | ? | ? |
| Sensors | 720p camera, 6-axis IMU, sensor array | GPS, 9-axis IMU, 1080p camera | GPS, 9-axis IMU, 2 × 720p stereo cameras, depth camera |
| Computing capacity | None | ? | NVidia Tegra K1 (quad-core CPU and Kepler GPU) |
| Ergonomics | 420 g, gesture interaction | ? | Gesture interaction |
| Mobility | Cable | ? | Remote box |
| OS | NA | Android 6.0 | Android |
| Version | Dev kit | Commercial (2017) | Commercial (2017) |
| Price | $949 | ? | $3,950 |

| Brand | ODG | Lumus | Optinvent | Laster | Daqri | Technical Illusions |
|---|---|---|---|---|---|---|
| Model | R-7 | DK-50 | ORA 2 | Lumina | Smart Helmet | CastAR |
| Optical system | Waveguide | Waveguide | Waveguide | Mirrors | Waveguide | Stereo projection with active glasses |
| Display location | Visual field | Visual field | Visual field or remote | Visual field | Visual field | Visual field |
| Mono- or binocular vision | Binocular | Binocular | Monocular | Binocular | Binocular | Binocular |
| Visual field | ~30° (proto at ~50°) | ~40° (proto at ~60°) | ~24° | ~25° (proto at ~50°) | ~80° ? | ~90° |
| Resolution per eye | 1280 × 720 (proto at 1920 × 1280) | 1280 × 720 | 640 × 480 | 800 × 600 | ? | 1280 × 720 |
| Light intensity | ? | 3,500 cd/m² | > 3,000 cd/m² | 220 cd/m² | ? | ? |
| Sensors | 9-axis IMU, altimeter, 1080p camera | 9-axis IMU, 4 MP stereo camera | GPS, 9-axis IMU, 5 MP camera | GPS, 9-axis IMU, 720p camera | 9-axis IMU, 5 cameras (360°), depth, temperature and pressure sensors | IMU, head tracking |
| Computing capacity | Qualcomm Snapdragon 805 quad-core 2.7 GHz | Qualcomm Snapdragon | Dual-core CPU with GPU | MTK 6595 octa-core | Intel Core | None |
| Ergonomics | 125 g, touchpad on the side of the glasses | Smartphone interface for interaction | 90 g, touchpad on the side of the glasses | 165 g, touchpad on the side | 1,000 g | 100 g, joypad |
| Mobility | All integrated | All integrated | All integrated | All integrated | All integrated | Cable |
| OS | Android 4.4 | Android | Android 4.4.2 | Android 4.4 | Android | NA |
| Version | Commercial | Dev kit | Commercial | Prototype | Prototype | Commercial (2017) |
| Price | $2,750 | $3,000 | €699 | ? | $5,000–$15,000 | ~$400 |