If we want AI to work for us — not against us — we need collaborative design

by Mariya Yao

The trope “there’s an app for that” is becoming “there’s an AI for that.”

Want to assess the narrative quality of a story? Disney’s got an AI for that.

Got a shortage of doctors but still need to treat patients? IBM Watson prescribes the same treatment plan as human physicians 99% of the time.

Tired of waiting for George R.R. Martin to finish writing Game of Thrones? Rest easy, because a neural network has done the hard work for him.

But is all this rapid-fire progress good for humanity? Elon Musk, our favorite AI alarmist, recently pushed back on Mark Zuckerberg’s positive outlook on AI, dismissing his views as “limited”.

Whether you’re in Camp Zuck of “AI is awesome” or in Camp Musk of “AI will doom us all”, one fact is clear. With AI touching all aspects of our lives, intelligent technology needs deliberate design to reflect and serve human needs and values.

Biased AI has unexpected and severe consequences

Software applications used by U.S. government agencies for crime litigation and prevention algorithmically generate information that influences human decisions about sentencing, bail, and parole. Some of these programs have been found to erroneously attribute a much higher likelihood of committing further criminal activity to black defendants, while attributing much lower risk assessment scores to white defendants.
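
To see what such an audit looks like in practice, here is a minimal sketch of the underlying error-rate comparison, using fabricated records rather than the actual data: a “false positive” is a defendant scored high-risk who did not go on to reoffend.

```python
# Toy audit sketch: compare false positive rates across groups.
# Records are fabricated for illustration; they are NOT the study's data.
from collections import defaultdict

# (group, predicted_high_risk, reoffended)
records = [
    ("A", True,  False), ("A", True,  False), ("A", True,  True), ("A", False, False),
    ("B", True,  True),  ("B", False, False), ("B", False, False), ("B", False, True),
]

false_pos = defaultdict(int)   # scored high-risk but did not reoffend
negatives = defaultdict(int)   # everyone who did not reoffend

for group, high_risk, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if high_risk:
            false_pos[group] += 1

for group in sorted(negatives):
    rate = false_pos[group] / negatives[group]
    print(f"group {group}: false positive rate {rate:.2f}")
# group A: 0.67 vs. group B: 0.00 -- the shape of the gap the audits found
```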

According to a study from Carnegie Mellon University, Google served targeted ads for high-paying jobs (those paying more than $200,000) far more often to men (1,800 times) than to women (a paltry 300).

It is unclear whether the discrepancy is the result of advertisers’ preferences or an inadvertent outcome of the machine learning (ML) algorithms behind the ad recommendation engine. Either way, a professional landscape that already demonstrates preferential treatment for one gender over another is being reinforced at scale by technology.
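
Whatever the cause, the skew itself is straightforward to quantify. A minimal sketch, assuming the two impression counts are directly comparable (the study reports counts, not audience-normalized rates), and borrowing the “four-fifths rule” from employment law purely as a rough yardstick:

```python
# Quantifying the reported skew. Assumption: the 1,800 vs. 300 impression
# counts are comparable across equally sized audiences. The 0.8 threshold
# is the common "four-fifths rule" heuristic, applied here only as a
# rough yardstick, not something the study itself uses.

def exposure_ratio(impressions_a: int, impressions_b: int) -> float:
    """Ratio of the lower group's exposure to the higher group's."""
    low, high = sorted((impressions_a, impressions_b))
    return low / high

ratio = exposure_ratio(1800, 300)
print(f"exposure ratio: {ratio:.2f}")  # 0.17
print("flag for review" if ratio < 0.8 else "within heuristic threshold")
```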

In the field of healthcare, AI systems are at risk of producing unreliable insights even if algorithms were perfectly implemented. Underlying healthcare data is shaped by social inequalities. Poorer communities lack access to digital healthcare, which leaves a gaping hole in the trove of medical information that feeds healthcare AI algorithms. Randomized control trials often exclude groups such as pregnant women, the elderly, or those suffering from other medical complications.

A Princeton University study demonstrated that ML systems inherit human biases found in English language texts. Since language is a reflection of culture and society, our everyday biases get baked into the mathematical models behind natural language processing (NLP) tasks. Failing to carefully review and de-bias such models has real-world consequences. Google’s Perspective API is intended to analyze online conversations and flag “toxic” content, but it unintentionally rates entities associated with non-white groups, such as names and foods, as far more toxic than their white counterparts.
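
The Princeton result rests on measuring word associations in embedding space: bias shows up as a difference in cosine similarity between target words and attribute words. A toy sketch of that idea, using tiny hand-made placeholder vectors instead of trained embeddings such as word2vec or GloVe:

```python
# Toy association test in embedding space. The 3-d vectors below are
# placeholders purely for illustration; a real audit would use
# embeddings trained on large corpora.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, pleasant, unpleasant):
    """Mean similarity to 'pleasant' words minus mean similarity to 'unpleasant' words."""
    return (np.mean([cosine(word_vec, p) for p in pleasant])
            - np.mean([cosine(word_vec, u) for u in unpleasant]))

targets = {
    "flowers": np.array([0.9, 0.1, 0.0]),
    "insects": np.array([0.1, 0.9, 0.0]),
}
pleasant = [np.array([0.8, 0.2, 0.1])]
unpleasant = [np.array([0.2, 0.8, 0.1])]

for name, vec in targets.items():
    score = association(vec, pleasant, unpleasant)
    # A positive score means the target sits closer to the "pleasant" set.
    print(f"{name}: association score {score:+.2f}")
```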

Many gender, economic and racial biases in AI have been documented over the last few years.

With AI also becoming integral in the fields of security, defense and warfare, how do we design systems that don’t backfire?

Mechanisms and manifestos are a start…

It is not enough for AI systems to succeed at their core tasks. They must do so without harming human society. Designing safe and ethical AI is a monumental challenge, but a critical one to tackle now.

In a joint study, Google DeepMind and The Future of Humanity Institute explored the possibility of AI going rogue. They recommended that AI be designed to have a “big red button” that can be activated by a human operator to “prevent an AI agent from continuing a harmful sequence of actions.” In practical terms, this red button will be a trigger or a signal that will “trick” the machine into internally making a decision to stop, without recognizing it as a shutdown signal from an external agent.
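
As a rough illustration, here is a minimal sketch of how such a trigger might be folded into an agent loop; the class and method names are my own, not from the paper:

```python
# Sketch of an interruption trigger in an agent loop. The point from the
# paper: the operator's signal should lead the agent to *decide* to stop
# as part of its ordinary action selection, rather than fight an
# external shutdown. Names below are illustrative, not from the paper.
import threading

class InterruptibleAgent:
    def __init__(self):
        self._interrupt = threading.Event()  # the operator's "big red button"

    def press_red_button(self) -> None:
        """Operator-side trigger; thread-safe, so it can fire at any time."""
        self._interrupt.set()

    def policy(self, observation):
        return "act"  # placeholder for the learned policy

    def step(self, observation):
        # The check is folded into the agent's own decision-making, so
        # stopping looks like an internally chosen action.
        if self._interrupt.is_set():
            return "stop"
        return self.policy(observation)

agent = InterruptibleAgent()
print(agent.step(None))   # "act"
agent.press_red_button()
print(agent.step(None))   # "stop"
```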

Meanwhile, the world’s largest association of technical professionals, the Institute of Electrical and Electronics Engineers (IEEE), published its General Principles for Ethically Aligned Design. It covers all types of artificial intelligence and autonomous systems.

The document sets a general standard for designers to ensure that AI and autonomous systems:

  1. do not infringe human rights
  2. are transparent to a wide range of stakeholders
  3. allow their benefits to be extended and their associated risks to be minimized
  4. have accountability for their design and operation clearly laid out

…but collaborative design is critical for success

Hypothetical fail-safe mechanisms and hopeful manifestos are important. But they are insufficient to address the myriad ways that AI systems can go wrong. Creations adopt the biases of their creators. Homogeneous development teams, insular thinking, and lack of perspective lie at the root of many of the challenges already manifesting in AI today.

Diversity and user-centered design in technology have never been so important. Luckily, as AI education and tooling becomes more accessible, designers and other domain experts are increasingly empowered to contribute to a field that was previously reserved for academics and a niche community of experts.

Three approaches to enhance collaboration in AI

Approach #1: Build user-friendly products to collect better data for AI

Elaine Lee, an AI Designer at eBay, emphasizes that human input and user-generated data are critical for smarter AI. If the products collecting the requisite data to power AI systems do not encourage positive engagement, then the data generated from user interactions tend to be incomplete, incorrect, or compromised. In Lee’s words, “We need to design experiences that incentivize engagement and improve AI.”

Google Design’s Jess Holbrook recommends a 7-step approach to designing human-centered ML systems. He cautions against relying on algorithms to tell you what problems to solve. Instead, he encourages designers to build systems that enable “co-learning and adaptation” between man and machine as technologies evolve. Holbrook also points out that many legitimate problems do not need ML to be successfully solved.

Collaborating with users seems like a common-sense procedure. But few companies go beyond cursory user research and passive behavioral data collection. The next step is to enable a productive, long-term feedback loop in which the users of AI systems actively define the functionality and vision of your technology, while also performing important tasks like flagging and minimizing biases.
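
As one hedged sketch of what that loop could capture, the record below pairs each prediction with structured user feedback, including an explicit bias flag routed to human review; the schema and field names are illustrative, not any particular product’s API:

```python
# Illustrative feedback-loop schema: structured user feedback, with an
# explicit bias flag, attached to each model prediction. All names here
# are made up for the sketch.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FeedbackRecord:
    prediction_id: str
    user_rating: int              # e.g., a 1-5 usefulness score
    bias_flagged: bool = False    # user marked the output as biased or unfair
    comment: str = ""
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

review_queue: list[FeedbackRecord] = []

def submit_feedback(record: FeedbackRecord) -> None:
    # Route bias reports to human review before they influence retraining.
    if record.bias_flagged:
        review_queue.append(record)

submit_feedback(FeedbackRecord("pred-42", user_rating=2, bias_flagged=True,
                               comment="Result seems skewed against my dialect."))
print(len(review_queue))  # 1
```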

Approach #2: Prioritize domain expertise and business value over algorithms

Michael Schrage, research fellow at MIT Sloan, argues that “strategically speaking, a brilliant data-driven algorithm typically matters less than thoughtful UX design. Thoughtful UX designs can better train machine learning systems to become even smarter.”

In order to develop “thoughtful UX”, you need domain expertise and business value. A common pattern in both academia and industry engineering teams is the propensity to optimize for tactical wins over strategic initiatives. While brilliant minds worry about achieving marginal improvements on competition benchmarks, the nitty-gritty issues of productizing and operationalizing AI for real-world use cases are often ignored. Who cares if you can solve a problem with 99% accuracy, if no one needs that problem solved? Or if your tool is so arcane that no one is sure what problem it’s trying to solve in the first place?

In working with Fortune 500 enterprises looking to re-invent their workflows with automation and AI, a complaint I commonly hear about promising AI startups is this: “These guys seem really smart and their product has a lot of bells and whistles. But they don’t understand my business.”

Approach #3: Empower human designers with machine intelligence

Designing AI is yet another challenge where human and machine can combine forces for superior results. Software developer, author and inventor Patrick Hebron demonstrates that machine learning can be used to simplify design tools without limiting creativity or removing control from human designers.

Hebron describes several ways ML can transform how people interact with design tools. These include emergent feature sets, design through exploration, design by description, process organization, and conversational interfaces. He believes these approaches can streamline the design process and enable human designers to focus on the creative and imaginative side of the process instead of the technical aspects (i.e., how to use a particular piece of design software). This way, “designers will lead the tool, not the other way around.”
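
As a toy illustration of “design by description”, the sketch below maps a natural-language request to tool parameters so the designer works in intent rather than software mechanics; a real system would use an NLP model, and this keyword lookup only stands in to show the interaction pattern:

```python
# Toy "design by description" sketch: turn a prose request into a design
# spec. Parameters and vocabulary are made up for illustration; real
# tools would parse the description with a trained language model.
DEFAULTS = {"color": "gray", "size": "medium", "shape": "rectangle"}
VOCAB = {
    "color": {"blue", "red", "green", "gray"},
    "size": {"small", "medium", "large"},
    "shape": {"rectangle", "circle", "pill"},
}

def design_by_description(prompt: str) -> dict:
    spec = dict(DEFAULTS)
    for word in prompt.lower().split():
        for param, values in VOCAB.items():
            if word in values:
                spec[param] = word   # override the default with stated intent
    return spec

print(design_by_description("a large blue pill button"))
# {'color': 'blue', 'size': 'large', 'shape': 'pill'}
```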

The field of “AI Design” is nascent. We are still figuring out which best practices we should preserve and what new ones we need to invent. But many promising AI-driven creative tools already exist. Greater access to tools and education means that experts from all fields and functions can help evolve a field that has traditionally been driven by an elite few. With AI’s exponential impact on all aspects of our lives, this collaboration will be essential to developing technology that works for everyone, every day.

Thanks for reading. You can read more of my writing on AI by following me here and checking out the TOPBOTS blog.

Translated from: https://www.freecodecamp.org/news/if-we-want-ai-to-work-for-us-and-not-against-us-we-need-collaborative-design-a627175e5d60/
