Removing Bias in AI, Part 2: Tackling Gender and Racial Bias

Chatbots that become racist in less than a day, facial recognition technology that fails to recognize users with darker skin tones, ad-serving algorithms that discriminate by gender and race, and an AI hate speech detector that’s racially biased itself. Flawed artificial intelligence systems perpetuate biases, which can be largely attributed to the lack of diversity within the field itself, according to a report published by the AI Now Institute.

In part 1 of this article, we covered first steps to removing bias in AI, including recognizing our own biases, building diverse teams, implementing harm reduction in the design and development process, and using tools to measure and mitigate risks. In part 2 we’ll look specifically at the gender and racial bias that AI often replicates, share insights and practical tips from hands-on research for reducing bias in AI experiences, and explore a real-world project created to end gender bias in AI assistants.

Critical Discussion of AI and gender bias

More than 100 million devices with Amazon’s Alexa assistant built in had been sold by January of 2019. Given Alexa’s scale, UX designer and creative strategist Evie Cheung was curious about the embedded gender and racial biases within the product. So she examined them by facilitating a co-creation workshop.

“The participants listened to Alexa’s voice telling a story and were instructed to draw what Alexa would look like as a human being,” she explains. “They were then asked questions about Alexa’s race, political beliefs, and hobbies.”

The view that emerged was one of Alexa as a subservient white woman who couldn’t think for herself, apologized for everything, and was pushing a libertarian agenda.

“In a vacuum, this is hilarious,” Cheung points out. “But children are now growing up with a device they’re able to order around, due to Alexa’s submissive personality and conversation design. Alexa’s ubiquity means that it has become a socializing force, influencing a child’s mental model on how they perceive female-sounding voices, and establishing a ‘norm’ for how technology is supposed to sound — in this case, female and inferior.”

As designers, Cheung advises, we must be hyper-aware of perpetuating existing societal gender biases and anticipate how products may have detrimental impacts on future generations. To combat these biases, it’s imperative to diversify the teams of designers and technologists (for more on diverse teams, see Part 1), as well as the groups of users that products are tested on.

For more on Cheung’s research, check out her graduate thesis book Alexa, Help Me Be a Better Human: Redesigning Artificial Intelligence for Emotional Connection, based on a year-long investigation of AI as a tool to explore human psychology.

In Evie Cheung’s workshop, thirteen professionals from across seven industries gathered to discuss the future of artificial intelligence.

The first genderless voice for voice AI

Digital voice assistants typically offer two options for the gender of the voice the user interacts with: male or female. Sometimes the default is set differently to suit the culture the user is in. For example, in the U.S. Siri defaults to a female voice, while in the UK Siri has a male voice.

“If you ask folks at Microsoft, Amazon, or Google why so many of our voice assistants are female,” explains David Dylan Thomas, “they’ll tell you that according to their research, people are more comfortable hearing certain kinds of assistance or information from women than from men. On the one hand that seems like a good answer because we all live in the world of user experience, and we always say follow the research, but we also have to ask ourselves if we are okay with what the research is telling us. Is it a good thing that people are preferring to hear certain types of information from women, limiting how people view women? Are we okay with that and do we want to perpetuate it?”

Many of the experts David talked to said you should leave it up to the user to decide whether they want to hear a male or a female voice. Emil Asmussen, however, creative director of VICE Media’s agency Virtue, cautions that a binary choice isn’t an accurate representation of the complexities of gender.

“Some people don’t identify as either male or female, and they may want their voice assistant to mirror that identity,” he explains. “As third gender options are being recognized across the globe, it feels stagnant that technology is still stuck in the past only providing two binary options.

That’s why we created Q, the world’s first genderless voice for voice AI. Created for a future where we are no longer defined by gender.”

“The project is confronting a new digital universe fraught with problems. It’s no accident that Siri, Cortana, and Alexa all have female voices — research shows that users react more positively to them than they would to a male voice. But as designers make that choice, they run the risk of reinforcing gender stereotypes, that female AI assistants should be helpful and caring, while machines like security robots should have a male voice to telegraph authority. With Q, the thinking goes, we can not only make technology more inclusive but also use that technology to spark conversation on social issues.”

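A practical takeaway from this discussion for product teams is to stop hard-coding a binary, locale-based default and instead expose the choice of voice, including a neutral option, to the user. The sketch below is a hypothetical settings model written purely for illustration; it is not any vendor’s actual API, and all names are assumptions.

```python
# Hypothetical sketch: voice settings that treat a gender-neutral voice as a
# first-class option and leave the final choice to the user. Not a real API.
from dataclasses import dataclass
from enum import Enum


class VoiceStyle(Enum):
    FEMALE = "female"
    MALE = "male"
    NEUTRAL = "neutral"  # e.g. a voice like Q, designed to read as neither male nor female


@dataclass
class VoiceSettings:
    locale: str
    style: VoiceStyle = VoiceStyle.NEUTRAL  # neutral until the user says otherwise

    def choose(self, style: VoiceStyle) -> None:
        """Record an explicit user choice instead of inheriting a stereotyped default."""
        self.style = style


settings = VoiceSettings(locale="en-US")
print(settings.style)             # VoiceStyle.NEUTRAL
settings.choose(VoiceStyle.MALE)
print(settings.style)             # VoiceStyle.MALE
```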

Counteract ingrained racial bias and lie to AI

Informed by her conversations with over 30 machine learning engineers, creative technologists, and diversity and inclusion thought leaders, Evie Cheung has found that one of the most salient and urgent AI issues is biased algorithms — particularly around the topic of race.

“We are still living through the consequences of colonialism, in which the western hegemony violently established power over the rest of the world,” Cheung explains. “These racial biases are thoroughly ingrained in society, and have the potential to be exacerbated by algorithms, such as in the criminal justice system. Significant problems include the lack of unbiased historical data, an unbalanced workforce, and limited user testing. These factors result in products like Facebook’s racist soap dispenser and Google’s image recognition algorithm that classified black folks as gorillas.”

Cheung says that we need to acknowledge the glaring truth: history is racist because humans are racist. And thus, algorithms powered by that historical data will also be racist.

“In the creation of AI algorithms, products, and services, designing equally for all groups is not good enough,” Cheung points out. “We need to include diverse voices who aren’t traditionally included in conversations about rising technologies. We also need to make sure that the data sets used are representative of the population that the respective algorithm will be used for.”

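Cheung’s point about representative data sets can be checked mechanically before training. The sketch below is a minimal illustration rather than anything from her research: it compares each group’s share of a training set against its share of the population the product will serve and flags under-representation; the column names, reference shares, and tolerance are assumptions.

```python
# Illustrative sketch (not from Cheung's research): flag demographic groups that
# are under-represented in a training set relative to the population the product
# will serve. Column names, reference shares, and the tolerance are assumptions.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str,
                          population_share: dict, tolerance: float = 0.8) -> pd.DataFrame:
    """List each group's expected vs. observed share and whether it falls short."""
    observed = df[group_col].value_counts(normalize=True)
    rows = []
    for group, expected in population_share.items():
        actual = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "expected_share": expected,
            "observed_share": round(actual, 3),
            "under_represented": actual < tolerance * expected,
        })
    return pd.DataFrame(rows)

# Example with made-up numbers: a face data set skewed toward lighter skin tones.
data = pd.DataFrame({"skin_tone": ["light"] * 80 + ["medium"] * 15 + ["dark"] * 5})
print(representation_report(data, "skin_tone",
                            population_share={"light": 0.5, "medium": 0.3, "dark": 0.2}))
```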

David Dylan Thomas agrees that any bias in AI comes from its creators. “Often these creators will try to de-bias their AI by pointing it at ‘the real world’,” he explains. “They’ll use data sets to train the AI that are based on real-world statistics. This may seem like a logical approach, but what if those data sets represent a racist world? If you were to ask an AI who is most likely to own a home based on current statistics it will tell you ‘a white family’. If you were to ask an AI who is most likely to go to jail based on current statistics it will tell you ‘a black man’. It’s very easy to turn that into recommendations for who should own a home or go to jail — it’s happened before.”

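One way to surface the pattern Thomas describes, before a model’s output is turned into recommendations, is to compare outcome rates across groups. The sketch below is an illustrative assumption with made-up names and numbers: it computes a simple disparate-impact ratio, the positive-outcome rate of the unprivileged group divided by that of the privileged one, where values far below 1.0 signal that a historical bias is being scaled.

```python
# Illustrative sketch (not from the article): compare a model's positive-outcome
# rate across groups. A ratio far below 1.0 (the classic "four-fifths rule" uses
# 0.8 as a threshold) suggests the model is reproducing a historical bias.
import numpy as np

def disparate_impact(predictions: np.ndarray, groups: np.ndarray,
                     privileged: str, unprivileged: str) -> float:
    """Positive-outcome rate of the unprivileged group divided by that of the privileged group."""
    rate_unprivileged = predictions[groups == unprivileged].mean()
    rate_privileged = predictions[groups == privileged].mean()
    return rate_unprivileged / rate_privileged

# Made-up predictions from a model trained on "real-world" historical data
# (1 = approve a home loan). Group labels are illustrative.
preds = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0])
grps = np.array(["white"] * 5 + ["black"] * 5)
print(disparate_impact(preds, grps, privileged="white", unprivileged="black"))  # 0.0
```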

David suggests we need to start looking at the world we want and not the world we have when creating these data sets.

“We have to lie to AI. Give it data sets that favor equity. That overrepresent for the underrepresented. If we don’t, we risk scaling the bias that already exists.”

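In data-preparation terms, “lying to AI” usually means reweighting or resampling so the training data reflects the world you want rather than the biased history you recorded. The sketch below is one minimal interpretation, not code from the article: it uses scikit-learn’s resample to upsample every under-represented group to parity with the largest one, with column names that are assumptions.

```python
# Illustrative sketch, not code from the article: oversample under-represented
# groups so the training distribution reflects the world you want rather than
# the one history recorded. Column names are assumptions.
import pandas as pd
from sklearn.utils import resample

def rebalance_by_group(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Upsample every group to the size of the largest group."""
    target = df[group_col].value_counts().max()
    parts = [
        resample(part, replace=True, n_samples=target, random_state=seed)
        for _, part in df.groupby(group_col)
    ]
    return pd.concat(parts).sample(frac=1, random_state=seed).reset_index(drop=True)

# Example: 90 records for one group and 10 for another become 90 and 90.
df = pd.DataFrame({"group": ["a"] * 90 + ["b"] * 10, "label": [1] * 100})
print(rebalance_by_group(df, "group")["group"].value_counts())
```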

Four steps to reducing bias in AI experiences

Content strategist and co-founder of Rasa Advising, Julie Polk, currently a content lead for AI applications at IBM, has come up with four essential tips you should keep in mind to combat bias in AI:

  • It’s not enough to edit your final results. No matter how many images or phrases or search results you eliminate in one instance, they’ll show up again unless you address the underlying bias that produced them. It’s like whack-a-mole without the weird furry carnival prizes.

  • Require gender-neutral language in your style guide. Institutionalize words and phrases like “Hi everyone,” instead of “Hi guys,” “Chair” instead of “Chairman,” or “first-year” instead of “freshman.” I’ve been doing this work for ten years, and I’m still amazed at how pervasive and deeply embedded the assumption of male-as-neutral is. These seem like small changes, but taken together, they shift the entire context of our cultural conversation. (A sketch of one way to automate this kind of check appears after this list.)

  • Vet your data. Garbage in, garbage out, always and forever. So dig around into how your data was generated before you build on it. If it’s research, who conducted it? Why? Who funded it? Who were the subjects? How were they chosen? What was the sample size? If it’s historical data, who does it include? More importantly, who does it exclude?

  • Don’t get sucked into solutions at the expense of inclusion. The speed and power of AI are seductive; anyone with a laptop, a skill set, and a creative mind can change how we live almost overnight. But nothing — so far, at least — can replace the human ability to understand the nuances of…well, of being human. And the biggest, shiniest solution, no matter how well-intentioned, isn’t a solution at all if it leaves damage in its wake.

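The second tip, requiring gender-neutral language in the style guide, is the easiest to partially automate. The sketch below is hypothetical and not Polk’s tooling; the word list and function names are illustrative. It flags gendered terms in copy and suggests the neutral alternative.

```python
# Hypothetical sketch, not Polk's tooling: flag gendered terms in copy and
# suggest the neutral alternative from the style guide. The word list is a
# small illustrative sample, not a complete guide.
import re

NEUTRAL_ALTERNATIVES = {
    "hi guys": "hi everyone",
    "chairman": "chair",
    "freshman": "first-year",
    "mankind": "humankind",
}

def flag_gendered_terms(text: str) -> list[tuple[str, str]]:
    """Return (found term, suggested replacement) pairs for an editor to review."""
    findings = []
    for term, suggestion in NEUTRAL_ALTERNATIVES.items():
        if re.search(rf"\b{re.escape(term)}\b", text, flags=re.IGNORECASE):
            findings.append((term, suggestion))
    return findings

print(flag_gendered_terms("Hi guys, the chairman will join the freshman orientation."))
# [('hi guys', 'hi everyone'), ('chairman', 'chair'), ('freshman', 'first-year')]
```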

Self-regulate to reduce consumer harm

Removing bias in AI and preventing it from widening the gender and race gap is a monumental challenge, but it’s not impossible. From the Algorithmic Justice League to the first genderless voice for virtual assistants, there are many excellent projects that share the common goal of making AI fairer and less biased. But we need to work together, and if we include AI in a digital product, it’s every stakeholder’s responsibility to ensure it doesn’t discriminate against or harm people. As Evie Cheung says, “We must stay vigilant about the unintended consequences of the design decisions we make in AI-powered products.” Only then will we be able to maximize AI’s true potential to transform our lives.

For more unique insights and authentic points of view on the practice, business and impact of design, visit Adobe XD Ideas.

To learn about Adobe XD, our all-in-one design and prototyping tool:

  • Download Adobe XD

  • Adobe XD Twitter account — also use #adobexd to talk to the team!

  • Adobe XD UserVoice ideas database

  • Adobe XD forum

Originally published at https://xd.adobe.com.

Translated from: https://medium.com/thinking-design/removing-bias-in-ai-part-2-tackling-gender-and-racial-bias-1763457fbea5
