Three Big Discussions We Need to Have ASAP about AI and Social Media Disinformation

By Matt Bailey, PEN America

This article was updated on July 24, 2020; see “ISSUE 3” below.

Well. We’re halfway through 2020, and we have one or two things to talk about. Alongside the surge of protests against police violence and racism, we’re contending with the COVID-19 pandemic, an accompanying “infodemic” of disinformation and misinformation, the ever-growing spread of online hate and harassment, conspiracy theories, and a raft of bad ideas about how to fix the internet. Traffic to Facebook’s web version is up something like 25 percent, Nextdoor’s is up triple that, and all this online talk — hopeful, hateful, factual, false — is being moderated, and the systems used to do it are changing faster than ever before.

Just weeks before the worldwide shutdown, Facebook said it was going to start relying more heavily on AI for moderation, but that it wouldn’t make a noticeable difference. On June 1, Facebook apologized for the automated blocking of #blacklivesmatter posts on Instagram, to the consternation of activists and free expression advocates. On June 3, Facebook apologized for blocking #sikh for months, this time apparently due to human error. What these twin incidents tell us together is this: Whether it is in the context of the anti-racism movement or the remembrance of tragedies like the 1984 Sikh Massacre, there are some big discussions we need to have ASAP about how exactly content moderation is impacting free expression domestically and around the world.

Instagram had to apologize for messages inadvertently blocked on the platform.

Most social media companies rely on a mix of artificial intelligence and human beings to keep tabs on what’s being posted on their platforms and remove stuff that violates the law or terms of service. Over the past several years, a debate has raged about the appropriateness of each. Can AI work well enough to understand local dialects and slang or to distinguish satire from disinformation? How do we deal with the inevitable biases of human moderators? What’s the mental health cost on human moderators who have to look at all the worst things on the internet, day after day after day?

The companies have usually downplayed the possibility of relying too heavily on AI for content moderation. Their answer has instead been to rely heavily on human moderators, in some cases with assistance from AI to help prioritize what gets reviewed. The specifics are not very transparent, vary from platform to platform, and have evolved over time. For example, Facebook has relied on large, contracted staff working at call-center-like facilities. Reddit and Nextdoor largely outsource their moderation to channel “mods” from their user communities.

Then came the pandemic. Many platforms had to rewire their moderation systems almost overnight to confront the crisis. Facebook had to suddenly shut down entire human moderation call-center-style operations and start using a whole lot more AI. Across the board, this has meant a whole lot of novel AI. Whether it is in the form of Twitter’s experiments in automatic labeling of problematic content (or nudging users who may be readying to post some), or in the form of Facebook’s increased reliance on AI to directly remove content, the pandemic has seen not only an increased reliance on existing AI capabilities, but also the deployment of new approaches and entirely new features. And those don’t appear to be going away any time soon.

So why should we care? Content moderation systems, whether they primarily depend on humans or AI, have a giant impact on our democracies and free expression: the quality of information, who is allowed to speak, and about what. COVID and the anti-racism movement create an opening to ask some big, urgent questions. The good news is that we don’t all have to become data scientists in the process. Let’s explore three issues:

ISSUE 1: STRUCTURAL BIAS, ALGORITHMIC AND ANALOG

“Algorithmic bias” refers to the well-documented tendency of AI to reproduce human biases based on how it is trained. In short, AI is good at generalizations. For example, one might train an AI model by showing it pictures of daisies. Based on that training, the model, with more or less accuracy, could make a guess about whether any new picture you showed it was also of a daisy. But depending on the specific pictures you used to train it, the AI’s ability to guess correctly might be acutely limited in all kinds of ways. If most or all of the photos you used to train it were of daisies as seen from above, but the photo it was evaluating presented a side view, it would be less likely to guess correctly. It might similarly have trouble with fresh versus wilted flowers, buds, flowers as they appear at night versus during the day, and so on. The larger and more diverse the training set, the more likely it is that these issues would be lessened. But fundamentally, the model would be worse at making accurate guesses about variations or about circumstances it had seen less often.
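
As a toy illustration of that dependence on training data, here is a minimal sketch that trains a classifier on synthetic two-feature “photos” where all the daisies come from one “view,” then tests it on a shifted view it never saw. Everything here, from the features to the view shift, is invented for illustration; real image models are far more complex:

```python
# A toy illustration of training-set skew: a classifier trained on one
# "view" of the data generalizes poorly to views it rarely saw.
# All data here is synthetic; this is a sketch, not a real image model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_flowers(n, view_shift):
    """Fake 2-feature 'photos': daisies (label 1) vs. non-daisies (label 0).
    `view_shift` moves the daisy cluster, standing in for a new camera angle."""
    daisies = rng.normal(loc=[2 + view_shift, 2], scale=0.7, size=(n, 2))
    others = rng.normal(loc=[-2, -2], scale=0.7, size=(n, 2))
    X = np.vstack([daisies, others])
    y = np.array([1] * n + [0] * n)
    return X, y

# Train only on the "top view" (shift = 0).
X_train, y_train = make_flowers(500, view_shift=0.0)
model = LogisticRegression().fit(X_train, y_train)

# Accuracy holds on the familiar view and drops sharply on an unfamiliar one.
for shift, label in [(0.0, "same view as training"), (-5.0, "unseen side view")]:
    X_test, y_test = make_flowers(500, view_shift=shift)
    print(f"{label}: accuracy = {model.score(X_test, y_test):.2f}")
```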

Returning to the world of social media, work by Joy Buolamwini, Timnit Gebru, and Safiya Noble, among others, helps connect the dots. Namely: people of color, transgender and gender non-conforming people, and those whose identities subject them to multiple layers of biases — such as black women — are often underrepresented in the training data, and that underrepresentation becomes modeled and amplified by the AI that is learning from it. As a memorable WIRED headline recently said, “even the best algorithms struggle to recognize black faces equally.”

This disturbing, well-documented, and endemic problem applies whether we are talking about photos, text, or other kinds of information. One study found that AI moderation models flag content posted by African American users 1.5 times as often as others. In a world saturated with systems that use these approaches, ranging from customer support systems to predictive policing software, the results can range from microaggression to wrongful arrest. This means that the very people who already must overcome hurdles of discrimination to have their voices heard online, and who are already disproportionately likely to experience online harassment, are also the most likely to be silenced by biased content moderation mechanisms.
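
A disparity like that 1.5x figure is simple to compute once moderation decisions can be broken down by group; the hard part is getting the data. A minimal sketch, with hypothetical records standing in for a real audit dataset:

```python
# A minimal sketch of a disparity audit: how often does a moderation
# system flag posts from each demographic group? The records and group
# labels below are hypothetical placeholders for a real audit dataset.
from collections import defaultdict

# (group of the post's author, was the post flagged?)
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
    # ... in practice, a large sample drawn from a platform's logs
]

totals = defaultdict(int)
flags = defaultdict(int)
for group, flagged in decisions:
    totals[group] += 1
    flags[group] += flagged  # bools count as 0/1

rates = {g: flags[g] / totals[g] for g in totals}
for g in sorted(rates):
    print(f"{g}: flag rate = {rates[g]:.2f}")

# The kind of ratio the cited study reports: how many times more often
# one group's posts are flagged relative to another's.
print(f"disparity ratio (a/b) = {rates['group_a'] / rates['group_b']:.2f}")
```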

But lest we think human moderation is better, it’s clear that algorithmic bias is really a sibling of the biases we find in human moderation processes. Neither AI nor human moderation exists outside its structurally biased sociopolitical environment; both have repeatedly shown their own startlingly similar flaws. The question is not whether one approach or the other is biased: They both are, and they both pose risks for free expression online. There’s also little point in debating which is “more” or “less” biased.

Each process — whether manual, automated, or some combination of the two — will evince biases based on its context, design, and implementation that must be examined not just by the companies or outside experts, but by all of us. Like any system that helps shape or control free speech and political visibility, we need to be asking about each of these moderation systems: What does it do well, what does it do poorly, whom does it benefit, and whom does it punish? To know the answer to these questions, we need real transparency into how they are working and real, publicly accountable feedback loops and paths to escalate injustices.

ISSUE 2: THE MYTH OF GLOBAL STANDARDS

So, we’ve seen that both AI and human moderation have acute limitations. But what about the sheer scale of what’s being moderated? Can a team of moderators or string of code apply standards globally and equally? In short, no.

Countries outside of the United States and Europe, those with less commonly spoken languages, and those with smaller potential market size are underserved. Marginalized communities within those countries, geometrically more so. The largest platforms have been investing in staffing by region, and in some cases by country, but no platform has achieved the bar of even one local staff member (or contextually trained AI) to oversee content moderation for each supported language or country. The continents of Africa and Asia are particularly underserved.

It seems likely that the pandemic has made this even worse. At first glance, it might seem like AI would make it easier to provide global scale than human moderation. But AI has to be trained for each language and culture. That takes time and money. It also takes data, and large, high-quality datasets are not always available for smaller and emerging markets. What does that add up to? High-quality AI remains largely unavailable for many or most languages and cultures, let alone dialects and communities.

What does that mean during the pandemic? AI-reliant moderation being “turned on” globally means that the further you get from Silicon Valley, the higher the likelihood that the quality of content moderation has gotten worse. While data is scarce on these points, it’s hard to escape the question of how much of the world is currently effectively unmoderated on Facebook, Twitter, and other platforms. That means the potential of unchecked hate speech, overmoderation of certain communities and dissenting voices, and increased vulnerability to disinformation campaigns. Data has not been forthcoming on this from the social media companies, but we know that the Secretary General of the U.N. has cited an unchecked “tsunami of hate” globally, in particular online. The silencing of the free expression of those targeted should alarm us all. As with Issue 1, it’s not necessarily that we simply want a return to the pre-pandemic way of doing things — those were bad, too. But some transparency, access to relevant data for researchers and the public, and a good honest conversation could do a world of good.

ISSUE 3: THE SPEED OF MODERATION VERSUS THE SPEED OF APPEALS

Update (7/24/20): Before diving into this section, a note on piracy. Since the publication of the original version of this blog, artists rights advocates have raised the important point that content moderation discussions often fail to acknowledge the extraordinary amount of copyright infringement that is taking place on platforms like YouTube. For writers, recording artists, and others, the resulting inability to make money from their work, whether in the form of direct sales or royalties, can become the defining problem of their artistic lives and livelihoods.

The unfortunate truth is that the systemic and long-running failure to remove infringing and violative content, and the failure to provide rapid, accountable appeals processes when inevitable errors do occur, coexist and have shared roots. The platforms have chosen not to build rigorous, transparent, and accountable content moderation capacities.

Both of these problems — albeit of unequal scale and disparate impact — are urgent. By focusing here on takedowns of non-violative content and the latency of appeals processes, we’re by no means denying the existence, or sheer scale, of piracy that damages the livelihoods of writers and artists, nor the need for more aggressive protection of those rights by the platforms (an issue robustly addressed by our colleagues at The Authors Guild). Instead the point is this: Takedown and appeals practices need to be held to account as well.

This section has been updated to refocus on its intended subject: the impact of downtime in the context of activism and its potential impact on public expression during urgent political moments.

Finally, we have the question of what happens after content is removed in error. Any way you slice it, content moderation — particularly at the scale that many companies are operating — is very, very hard. That means that some content which shouldn’t have been removed will be, and it means that the ability of users to appeal removals is critical not just for the individual, but for platforms to learn from mistakes and improve the processes over time.

It might seem reasonable to argue that because the pandemic creates increased public interest in reliable health information, we should have more tolerance for “over-moderation” of some kinds of content, in order to limit the extremity and duration of the pandemic. For example, we might be more okay than usual with jokes about “the ’rona’” getting pulled down by accident if it means that more posts telling people to drink bleach get the ax in the process. The question gets complicated pretty quickly, however, when we look a little deeper.
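
In classifier terms, that bargain is a threshold choice: lowering the removal threshold catches more genuinely dangerous posts, but sweeps up more benign ones along with them. A minimal sketch with invented scores (a real system would get them from a trained model):

```python
# A minimal sketch of the over-moderation tradeoff: a single removal
# threshold controls both the harmful posts caught and the harmless
# posts wrongly removed. Scores here are invented for illustration.
harmful_scores = [0.95, 0.90, 0.80, 0.60, 0.55]   # e.g. "drink bleach" posts
harmless_scores = [0.70, 0.45, 0.30, 0.20, 0.10]  # e.g. jokes about "the 'rona"

for threshold in (0.85, 0.65, 0.40):
    caught = sum(s >= threshold for s in harmful_scores)
    collateral = sum(s >= threshold for s in harmless_scores)
    print(f"threshold {threshold:.2f}: "
          f"{caught}/{len(harmful_scores)} harmful removed, "
          f"{collateral}/{len(harmless_scores)} harmless removed")
```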

One of the less discussed impacts of over-moderation is downtime. How long, on average, is a piece of content or account offline before it is restored, and what are the consequences of that outage? Downtime is pure poison for writers, artists, and activists who are trying to build momentum for a cause or contribute to fast-moving cultural moments. Returning to the examples from Instagram at the top of this article, we can see that the restoration of access to #sikh and #blacklivesmatter and public acknowledgement of the outage, while important, did not necessarily make those communities and voices whole; in some contexts, the timeliness of the conversation is everything. The problem here is that, while the non-violative content or conversation may eventually be restored, the audience and the news cycle move on. In the context of dissent, protest, and activism, latency can wound a movement.

What’s important to realize is that while the use of AI moderation has increased — including in directly removing content — appeals generally still rely on human beings, who review such appeals on a case-by-case basis. That means that moderated content that is non-violative can remain offline for hours, days, or weeks before ultimately being restored. This situation has very likely been worsened by the exigencies of the pandemic, with higher rates of mistaken removals and fewer human moderators available to process appeals. It is difficult, based on available data, to understand the degree of the problem, how it has been impacted by the pandemic, and which demographics are most impacted.
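
If platforms published per-item removal and restoration timestamps, the downtime metric itself would be trivial to compute. A minimal sketch over hypothetical records (no platform currently releases this data):

```python
# A minimal sketch of the downtime metric described above: how long
# wrongly removed content stays offline before being restored.
# The timestamps are hypothetical; platforms do not publish this data.
from datetime import datetime
from statistics import median

# (removed_at, restored_at) for appeals that eventually succeeded
appeals = [
    (datetime(2020, 6, 1, 9, 0), datetime(2020, 6, 1, 21, 0)),
    (datetime(2020, 6, 2, 8, 0), datetime(2020, 6, 9, 8, 0)),
    (datetime(2020, 6, 3, 12, 0), datetime(2020, 6, 5, 18, 0)),
]

downtime_hours = [
    (restored - removed).total_seconds() / 3600
    for removed, restored in appeals
]
print(f"median downtime: {median(downtime_hours):.1f} hours")
print(f"worst downtime:  {max(downtime_hours):.1f} hours")
```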

Facebook’s May 2020 transparency report notes that over 800,000 pieces of content were removed from Instagram under the shared Facebook/Instagram Hate Speech policy in the first three months of 2020. Of that, some 62,000 removals were appealed, and almost 13,000 were eventually restored (just under two percent). Facebook does not break down these numbers demographically. Data regarding how long restored content was down is also unavailable. Numbers during the pandemic, including at the tail end of Q1, are also incomplete or unavailable.
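
Putting the report’s rounded figures together:

```python
# Working through the rounded figures from Facebook's May 2020
# transparency report for Instagram hate-speech removals, Q1 2020.
removed = 800_000   # pieces of content removed
appealed = 62_000   # removals that were appealed
restored = 13_000   # removals eventually reversed

print(f"share of removals appealed: {appealed / removed:.1%}")   # ~7.8%
print(f"share of removals restored: {restored / removed:.1%}")   # ~1.6%, "just under two percent"
print(f"appeal success rate:        {restored / appealed:.1%}")  # ~21%
```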

Given the current domestic and global political moment, what are the ramifications of this unexamined overmoderation? In a situation where various factions and organizers are trying to amplify messages, build momentum, and grapple with sensitive topics like race, sex, and religion, over-moderation of those engaging appropriately but on contentious issues becomes more likely. This poses risks for free expression that are particularly alarming in a context where such contentious but vital public debates are now necessarily happening in the virtual sphere; this was already true before the pandemic, and it is now even more so. Again, the question is not whether automated or human processes are better — the question is how specifically these systems are functioning and for whom.

NEVER WASTE A GOOD CRISIS. DEFINITELY DON’T WASTE TWO.

It’s summer, the pandemic drags on, disinformation continues to spread, and authoritarian regimes are becoming ever more brazen. In the midst of all this, writers, artists, and organizers are speaking out, sharing their truths and exercising their rights, including in the historic anti-racism movement in the United States and abroad. But some are facing a headwind. Our moderation systems, and their appeals processes, will need to do better. In pretending to treat all content as equal, we have built deeply biased and politically naive systems that will continue to magnify harms and be exploited for personal and authoritarian gain.

Now is the time, as we rethink the handshake, the cheek kiss, and the social safety net, to rethink moderation. In order to do that, we need facts. How specifically are communities of color being served by these processes? Where specifically are our American technologies being appropriately tuned and staffed around the world to mitigate their harms? How will the platforms address the inevitable logjams of moderation/appeals pipelines that have AI at the front and humans plodding along at the back?

In an emergency, a lot of decisions have to get made without the usual deliberation. But as parts of the world begin to reemerge — and perhaps also face a second wave of COVID-19 — and as we read headlines every day about disinformation campaigns from Russia, China, and your racist uncle with a few bucks and a botnet, there are a few big questions we need to start asking about how content moderation worked before, how it is working now, and what the plan is. November is just around the corner.

Matt Bailey serves as PEN America’s digital freedom program director, focusing on issues ranging from surveillance and disinformation to digital inclusion that affect journalists and writers around the world.

Translated from: https://medium.com/@PENamerica/three-big-discussions-we-need-to-have-asap-about-ai-and-social-media-disinformation-997ea9bbe763
