Beyond Bias: Why We Can't Just "Fix" Facial Recognition

In recent months, the troubling issue of facial recognition has hit headlines.

Following the explosion of Black Lives Matter protests around the globe, a highly publicised moratorium on facial recognition by several big tech companies, and even a segment on John Oliver’s Last Week Tonight, people have become alert to the many dangers of this mushrooming technology.

In light of the calls for racial justice sweeping the globe, the issue of bias in facial recognition systems has ignited particular anger. But, in many cases, the problems at the heart of facial recognition run deeper than bias alone.

Digital activists have been pointing out for years that facial recognition technology identifies some faces better than others — namely, white male faces.

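The disparity activists describe is typically documented by comparing error rates across demographic groups. The sketch below is a toy illustration only, using invented outcomes for two hypothetical groups rather than any real system or dataset, to show how such a per-group comparison might be computed.

```python
# Toy audit sketch: compare error rates across demographic groups.
# All data below is invented for illustration; it does not come from
# any real face recognition system or benchmark.
from collections import defaultdict

def error_rate_by_group(results):
    """results: iterable of (group, correct) pairs.
    Returns a dict mapping each group to its error rate."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Invented evaluation outcomes: 2% errors for one group, 20% for another.
results = [("group_a", True)] * 98 + [("group_a", False)] * 2 \
        + [("group_b", True)] * 80 + [("group_b", False)] * 20

rates = error_rate_by_group(results)
print(rates)  # group_b's error rate is ten times group_a's
```

A real audit would of course involve far more care (matched image quality, thresholds, confidence intervals), but the core measurement is this kind of stratified comparison.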
Finally, it seems, people have started to listen. As with all artificial intelligence systems, facial recognition reflects the prejudices of its creators.

Fears about where this might lead aren’t merely speculative. What activists have warned us about is already happening. Perhaps the most striking illustration of this was the viral story of one man, Robert Julian-Borchak Williams, who was wrongly identified by facial recognition and, consequently, arrested by police.

This case is merely the tip of an iceberg. When I speak with Gracie Bradley of Liberty, she points out that, when it comes to automated systems, “you have an issue of scale, which massively magnifies the possibility for injustice to happen”.

“That’s the difference between, for example, one police officer who tries to recognise a suspect and gets it wrong, and a facial recognition algorithm that is able to scan hundreds of people in one day and is getting things wrong.”

In response to the recent outcry, a handful of tech corporations announced they were hitting pause on facial recognition for the next year. In some ways, it’s a promising development: a win for activists who’ve been tirelessly pushing back against facial recognition for years.

But, on the flipside, a year isn’t a very long time — and what is arguably one publicity stunt by a handful of companies doesn’t quite get to the heart of what makes facial recognition so worrying.

This isn’t just cynicism or “whataboutery”. Contrary to popular assumption, it’s not always easy to simply make facial recognition systems “less biased”. Many experts argue that the belief that the systems can easily be fixed with a few tweaks or by inputting “cleaner” data is extremely short-sighted.

“Where would this clean data come from?” asks Bradley. “The real issue is that you’d have to eradicate discrimination in society before.”

Take the 2015 case of a Google image recognition tool that wrongly classified black people as gorillas. Instead of correcting the algorithm, Google stopped the tool from returning the label “gorilla” altogether, illustrating rather starkly that a quick fix for the root problem couldn’t be found.

These technological weaknesses conflict with a troubling human tendency: to defer to machines. People tend to trust what computers are telling them, which makes it all the more important that we’re aware of their many failings.

“There’s a real risk of a kind of codification of existing bias which then takes on a veneer of ‘Oh, but it’s a computer doing it. It’s a technology doing it, so it must be right’”, says Bradley.

Putting facial recognition aside for a moment, there are also challenges posed by its close cousins. Other biometric tracking systems, such as emotion detection and gait recognition, are quickly creeping onto the market. Unsurprisingly, these throw up just as many obstacles.

“There’s a lot of research that shows how emotion is expressed and seen varies massively across cultures,” says Bradley. “We also know that there are a lot of background stereotypes that means some people — young black men, for example — are more likely to be seen as angry or threatening.”

For some, focusing on the matter of bias or prejudice jumps the gun. Even before we delve into the “accuracy” of these systems, there are questions around whether we should be developing such technology at all — never mind deploying it in public spaces without consent.

It shouldn’t be radical to suggest that the potential impact of these systems should be fully understood before they’re rolled out by police and public authorities. Unfortunately, that is not always the case.

“We shouldn’t be saying ‘oh, let’s see how it goes and maybe we’ll be able to fix it at a later date’”, says Bradley. “There’s too much at stake.”

When I speak with Dr Seeta Peña Gangadharan, an associate professor in Media and Communications at the London School of Economics and Political Science, she tells me about the facial recognition trials that were run at Granary Square in London, an open space where crowds regularly gather to sit in the sun or spend time with their families.

“I can’t tell you the number of times I’ve brought my kids to that square,” she says. “Every time we go there, I point out the cameras to them so they know what surveillance infrastructure looks like.”

But for months prior, Dr Gangadharan had no idea the pilot was even running. There was no option to tick a box, sign a form, or consent to being surveilled in any meaningful way.

“That’s, I think, a taste of what’s to come,” she says. “You would never be able to get away with that in an intimate relationship, that kind of notice or consent process that you have with face recognition systems. Why should that be acceptable?”

The fact that facial recognition technology around the world functions as an intensive form of mass surveillance can’t be overlooked: yet somehow this reality is becoming ever-more normalised in public spaces, including at protests, and even in schools.

“It really tips the balance of power in terms of the individual and the state,” says Bradley. “[It makes it] far more difficult, potentially, to take political actions or dissenting actions. Or simply for people to choose what they do or do not disclose to the state, which is a really important part of people’s identity.”

Of course, there are actions that can be taken: many are campaigning for fully-fledged bans on facial recognition, while others urge us to tackle the issue at its roots by demanding greater reflexivity on the part of technologists who create these products.

“I feel like the question has to be answered much earlier than the development of the technology, and currently it is not,” says Dr Gangadharan. For her, it’s about asking ourselves: do we need digital, data-driven technology to solve this particular problem?

Promisingly, there are cases where the answer has been no. Take the recent landmark ruling in the UK that deemed the use of facial recognition by police in South Wales to be a breach of human rights — a massive win for Liberty, which is campaigning for a ban on the technology.

There are also two recent cases in Sweden and France, where the use of facial recognition to monitor kids’ attendance at school was deemed disproportionately invasive under the GDPR.

There are chances, it seems, to stop the onward march of surveillance technology.

“Things are moving quickly. That doesn’t mean that it’s all inevitable,” says Bradley. “When we look at just what’s happened over the last few months… in terms of Black Lives Matter, if we look back at Extinction Rebellion, it’s clear that the course of history isn’t sort of linear and fixed. People can intervene to change what happens.”

The Digital Freedom Fund supports partners in Europe to advance digital rights through strategic litigation. Read more here.

Translated from: https://medium.com/digital-freedom-fund/beyond-bias-why-we-cant-just-fix-facial-recognition-e34b6b99ff40
