3 problems you'll face while designing automation, and how to solve them

I have researched automation for some time now.

First, as part of my master's thesis in cognitive psychology, where I researched how to design user interfaces that would make it easier for operators to monitor autonomous ships. Afterwards, I co-founded a company that is currently researching and developing a new, autonomous product.

Throughout this journey, I have developed a somewhat ambivalent relationship with automation.

On one hand, automation is a truly magnificent thing. It gives us possibilities that seemed like far-fetched dreams mere years and decades ago. It allows us to spend more time doing tasks we find meaningful. And it rarely fails.

However, it does fail eventually. And when it does, it can be quite dangerous. Onnasch (2014) called this “the Lumberjack effect”: the higher the tree, the farther it falls. In other words, when automation does fail, it fails spectacularly.

Therefore, while we aim to maximize the advantages of automation, we must design it with the utmost care to avoid potential disasters.

Following are the three major problems that I keep coming across while designing automation.

Problem 1: Our brains were not built for monitoring automation

Photo by Siavash Ghanbari on Unsplash

Human beings are great at so many things. Walking, talking, tweeting, selfie-taking. However, monitoring is not one of those things.

In his classic experiment, Mackworth (1951) had participants monitor a clock. The clock would skip a second at random intervals. The participants were tasked with making a note every time a second was skipped.

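To make the task concrete, here is a minimal sketch of such a vigilance stream in Python. The parameters (a two-hour session, roughly a 1% chance of a skip each second) are illustrative assumptions, not the settings of Mackworth's actual apparatus.

```python
import random

def clock_stream(total_seconds=7200, skip_probability=0.01, seed=42):
    """Simulate a clock that occasionally skips a second.

    Returns the displayed time at every tick, plus the indices of the
    ticks where a skip occurred -- the events the watcher must flag.
    """
    rng = random.Random(seed)
    displayed, skips = [0], []
    for tick in range(1, total_seconds + 1):
        step = 2 if rng.random() < skip_probability else 1
        displayed.append(displayed[-1] + step)
        if step == 2:
            skips.append(tick)
    return displayed, skips

displayed, skips = clock_stream()
print(f"{len(skips)} skips hidden in {len(displayed) - 1} ticks of watching")
```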

The participants started great, but about half an hour into the experiment, performance dropped significantly.

Essentially, as humans, we are only able to do slow, boring observations for short periods of time. After about 30 minutes, we stop paying attention.

Also, what I just described is a situation where the participants know that errors are about to happen. Yet they still only managed to pay attention for about 30 minutes. In other words, this is a biological limitation; our cognition, our brain, is simply not built for more than that.

So, why is this a problem? Well, imagine those same humans monitoring their self-driving car, which they do not expect to fail. But then it does.

The higher the tree, the farther it falls. When automation fails, it does so catastrophically.

However, our relationship is affected by more than our biology. It is also massively influenced by psychology. In particular, by something we refer to as automation bias.

Problem 2: We put too much faith in automation

We tend to trust automation. A lot.

Imagine that you are driving home and want to take the highway. Your satnav, however, tells you that crossing the bridge would be quicker. You are pretty likely to trust your satnav. And why wouldn’t you? After all, the satnav seems quite advanced. It probably makes calculations based on tons of data.

The satnav could be correct. Or it could be incorrect. Either way, we are very likely to believe whatever an automated system tells us. And we are unlikely to notice if it gives us bad advice. In psychology, we call this automation bias.

Automation bias is the tendency to be overly reliant and/or complacent when interacting with automation (see Wiener & Curry, 1980; Parasuraman & Riley, 1997). Since automation seems to be working perfectly, we do not feel like we need to monitor it closely.

And most of the time, automation does work perfectly. However, every so often, an error will occur. And when it does, we likely will not realise.

This puts a lot of pressure on us, as designers. If we design something badly, it could be causing problems for a long time before anyone notices. And by the time they do, it could be costly to fix it.

Imagine that you design a running app that tracks how far people run. Suddenly, you realise that the GPS was a bit off, and it has actually been overestimating running distance by 10%.

What are the chances your users manually checked the distances, to see if they truly ran 5k? Pretty slim.

What are the chances that your users will be angry when they eventually realise that their best 5k was actually 4.5k? Pretty high.

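The arithmetic behind that disappointment is simple, and worth spelling out. A quick sketch, assuming a constant 10% overestimate:

```python
# A 10% overestimate means the displayed distance is 1.10 times the
# true distance, so the true distance is displayed / 1.10.
displayed_km = 5.0
actual_km = displayed_km / 1.10
print(f"Displayed {displayed_km:.1f} km, actually ran about {actual_km:.2f} km")
# -> about 4.55 km; loosely rounded, the 4.5k from the paragraph above.
```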

This also leads us to the next topic: the myth of automation removing human error.

Problem 3: Automation does not eliminate human error

It is a common myth that automation eliminates human error. However, there are two main reasons why this is wrong.

First, most products tend to interact with humans at some point. And those humans can make errors when providing input.

The second reason, and perhaps the one that is easier to forget, is that automated products are created by humans. And the humans who created the product probably made some errors at some point. We call those human errors.

So what does automation give us, then?

Well, automation does eliminate “concurrent errors” or “operator errors”. That is, a car does not crash because a person confuses the gas and brake pedal. The automated system does what it was programmed to do.

However, it is impossible to predict every scenario the machine will encounter. Therefore, although the automated system is pretty smart, it probably won't have an answer for absolutely every scenario. Especially freak scenarios.

Like a parachutist misjudging the wind and having to land on a highway. A human driver might be able to see what is happening and avoid them. A self-driving car, on the other hand, would struggle.

At a basic level, the car would probably lack the camera angle to notice the parachutist. But even if it did notice, the car would probably lack the fluid intelligence to understand the situation and come up with a solution.

Or, an even more bizarre example: how about an automatic sandwich-maker that fails to stop when it starts to attract aggressive seagulls at sea? I stole that example from a Norwegian advertisement:

Advertisement by REMA 1000

Therefore, human error still exists. Error, in the sense that the automation fails to deal appropriately with a given scenario, however bizarre.

This type of error we would call “latent error”, or a “designer error”.

Are automatic systems with designer errors better than manual systems with operator errors?

It depends.

Is it better to have only a few, but pretty major accidents? In that case, the automatic system is better.

Is it better to have many, smaller accidents? Then the manual system is the better option.

The solution

Well, this sounds rather hopeless. Should we just stop automating things?

No, absolutely not! This is not hopeless, it is simply a design challenge.

Well, how do you fix all this?

Through the magic of psychology and design.

There is no quick-fix for our cognitive limitations. Our brains evolve quite slowly. However, how much we rely on machines psychologically depends on design.

One of the reasons we become so reliant on automation is the fact that we do not really understand how it works.

Often, we give an automated product some input, and it gives us an answer. However, if we do not understand how it arrived at this conclusion, we cannot verify how accurate it is.

It is like a math test in school. Normally, just giving the answer is not enough to get full marks. You need to show your work, explaining how you arrived at your conclusion.

We can address this lack of understanding by designing automation to be transparent (Endsley, 2017). Transparency means that we design a product where the users can see how the system arrived at a conclusion.

This also helps to make it predictable. And once we have good transparency and predictability, we can add an option for a manual bypass. That is, letting the user skip results or alter the calculations so that the automation arrives at the correct result.

The Nike Running app is a good example of successfully implementing these principles. After a run, the user is provided with a map of their run, so they can check that the app tracked them correctly (transparency). The user can also change details such as distance and speed manually (manual bypass).

Available at news.nike.com
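
Here is a minimal sketch of how those two principles could surface in a run-tracking data model. Everything below is hypothetical and illustrative, not Nike's actual implementation; the point is simply that the raw evidence stays visible (transparency) and that user corrections take precedence over computed values (manual bypass).

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Run:
    # Transparency: keep the raw GPS samples so the user can inspect
    # the route map and see how the distance was computed.
    gps_trace: List[Tuple[float, float]]   # (lat, lon) samples
    auto_distance_km: float                # derived from the trace
    # Manual bypass: an optional user correction.
    manual_distance_km: Optional[float] = None

    @property
    def distance_km(self) -> float:
        """The user's correction, when present, beats the computed value."""
        if self.manual_distance_km is not None:
            return self.manual_distance_km
        return self.auto_distance_km

run = Run(gps_trace=[(59.91, 10.75), (59.92, 10.76)], auto_distance_km=5.0)
run.manual_distance_km = 4.55  # the user spots the GPS error and fixes it
print(run.distance_km)         # 4.55
```

Note that the automated value is never destroyed, only overridden, so the user (and the designer) can always compare the two.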

This is where the interaction between psychology and design becomes apparent. By designing true transparency, the user is given the opportunity to verify the result. This provides the confidence to make changes if something is wrong.

This creates a sense of trust between the user and the app. Small errors are easier to forgive if you can easily notice and correct them yourself. This is how good automation design can translate into great user experience.

(If you are interested in learning more about these principles, feel free to read this article where I discuss them at length.)

Conclusion

Automation is a topic as old as time, and it is becoming ever more relevant as AI makes its way into new products. Still, even the world’s biggest tech companies, like Google, Spotify and Facebook, struggle to get it right.

That is also why this is such an exciting part of design. It is unknown territory. The principles for solutions I propose in this article are a good start, but they are not a definitive answer. We can be part of solving this unsolved challenge.

Whoever solves this design challenge successfully will become one of the most important and influential designers of this century.

Are you up for the challenge?

Translated from: https://uxdesign.cc/3-problems-youll-face-while-designing-automation-and-how-to-solve-them-d9ae2d440103
