AI Algorithms and the Federal Trade Commission

On the business blog of the Federal Trade Commission there is a piece published on 8 April 2020 by Andrew Smith, Director of the FTC's Bureau of Consumer Protection, titled "Using Artificial Intelligence and Algorithms." In this article I will summarise the blog post and comment on a few of its statements.

The article starts by arguing that the use of AI and algorithms carries risks of unfairness and discriminatory practices, as well as possible socioeconomic disparities.

He presents examples:

“Health AI offers a prime example of this tension. Research recently published in Science revealed that an algorithm used with good intentions — to target medical interventions to the sickest patients — ended up funneling resources to a healthier, white population, to the detriment of sicker, black patients.”

At the same time, he notes that automated decision-making is not new.

“…we at the FTC have long experience dealing with the challenges presented by the use of data and algorithms to make decisions about consumers.”

Following this, he mentions specific laws, cases, and violations.

  • “The Fair Credit Reporting Act (FCRA), enacted in 1970, and the Equal Credit Opportunity Act (ECOA), enacted in 1974, both address automated decision-making, and financial services companies have been applying these laws to machine-based credit underwriting models for decades.”

The FTC has used its authority to prevent unfair and deceptive practices.

Consumer injury arising from the use of AI and automated decision-making has, in this manner, been addressed before.

  • In 2016, the FTC issued a report titled Big Data: A Tool for Inclusion or Exclusion?, which advised companies using big data analytics and machine learning to reduce the opportunity for bias.

Most recently, they held a hearing in November 2018 to explore AI, algorithms, and predictive analytics.

The FTC’s law enforcement actions, studies, and guidance emphasize that the use of AI tools should be:

  1. Transparent.
  2. Explainable.
  3. Fair.
  4. Empirically sound.
  5. Fostering accountability.

The post then goes through each of these principles in turn.

Being transparent

  • Don’t deceive consumers about how you use automated tools.

AI often operates in the background, and consumers should not be misled about what happens there.

One should not mislead consumers about the nature of the interaction, that is, whether they are talking to an algorithm or to a human being.

“The Ashley Madison complaint alleged that the adultery-oriented dating website deceived consumers by using fake “engager profiles” of attractive mates to induce potential customers to sign up for the dating service. And the Devumi complaint alleged that the company sold fake followers, phony subscribers, and bogus “likes” to companies and individuals that wanted to boost their social media presence.”

A company that misleads consumers in this way "…could face an FTC enforcement action."

  • Be transparent when collecting sensitive data.

A larger data set and a ‘better algorithm’ may not ultimately be better for the consumer. How the data set is acquired also matters.

“Secretly collecting audio or visual data — or any sensitive data — to feed an algorithm could also give rise to an FTC action. Just last year, the FTC alleged that Facebook misled consumers when it told them they could opt in to facial recognition — even though the setting was on by default. As the Facebook case shows, how you get the data may matter a great deal.”
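The opt-in-by-default problem the quote describes can be made concrete in a few lines. This is a minimal sketch in Python with an invented setting name, not Facebook's actual implementation:

```python
# Hypothetical sketch: gating sensitive data collection on explicit
# opt-in consent. The Facebook case suggests such a setting must default
# to OFF; the setting name here is invented for illustration.

DEFAULT_SETTINGS = {"facial_recognition_opt_in": False}  # off unless the user opts in

def may_collect(user_settings, feature="facial_recognition_opt_in"):
    """Only permit collection if the user explicitly opted in."""
    return bool(user_settings.get(feature, DEFAULT_SETTINGS[feature]))

# A user who never touched the setting has not consented.
print(may_collect({}))
```

The key design point is that silence is treated as refusal: a missing setting falls back to the `False` default rather than to collection.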

  • If you make automated decisions based on information from a third-party vendor, you may be required to provide the consumer with an “adverse action” notice.

If a vendor assembles consumer information to automate decision-making, that triggers duties for the user of that information.

This is important, because one "…must provide consumers with certain notices under the FCRA."

“Say you purchase a report or score from a background check company that uses AI tools to generate a score predicting whether a consumer will be a good tenant. The AI model uses a broad range of inputs about consumers, including public record information, criminal records, credit history, and maybe even data about social media usage, shopping history, or publicly-available photos and videos. If you use the report or score as a basis to deny someone an apartment, or charge them higher rent, you must provide that consumer with an adverse action notice. The adverse action notice tells the consumer about their right to see the information reported about them and to correct inaccurate information.”
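The mechanics behind such a notice can be sketched roughly as follows. The field names, weights, and threshold are invented for illustration; a real FCRA adverse action notice has specific content requirements this sketch does not capture:

```python
# Hypothetical sketch: pairing an adverse tenant-screening decision with
# the specific reasons behind it, as an adverse action notice would
# require. Factor names, weights, and threshold are illustrative only.

def adverse_action_notice(applicant, score, threshold, factor_weights):
    """Return None if approved; otherwise a notice listing the top
    negative factors plus the consumer's dispute rights."""
    if score >= threshold:
        return None
    # Report the most damaging factors first (most negative weight).
    negatives = sorted(
        (item for item in factor_weights.items() if item[1] < 0),
        key=lambda item: item[1],
    )
    reasons = [name for name, _ in negatives[:4]]
    return {
        "applicant": applicant,
        "action": "application denied",
        "key_factors": reasons,
        "notice": (
            "You have the right to obtain the report used in this "
            "decision and to dispute inaccurate information."
        ),
    }

notice = adverse_action_notice(
    "applicant-123",
    score=480,
    threshold=600,
    factor_weights={
        "delinquent credit obligations": -120,
        "insufficient number of credit references": -60,
        "length of residence": 15,
    },
)
print(notice["key_factors"])
```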

Explaining the decision to the consumer

  • “If you deny consumers something of value based on algorithmic decision-making, explain why.”

In several industries the reasoning has to be explained, with a requirement to disclose the principal reasons, for example when credit is denied.

“…it’s not good enough simply to say “your score was too low” or “you don’t meet our criteria.” You need to be specific (e.g., “you’ve been delinquent on your credit obligations” or “you have an insufficient number of credit references”). This means that you must know what data is used in your model and how that data is used to arrive at a decision. And you must be able to explain that to the consumer. If you are using AI to make decisions about consumers in any context, consider how you would explain your decision to your customer if asked.”

If you are using risk factors, disclose the key factors that affect the score, ranked by importance.
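Assuming a simple linear scoring model (the weights and factor names below are made up, not a real credit model), such ranked key factors can be derived from each factor's contribution to the applicant's score:

```python
# Hypothetical sketch: ranking the factors behind a score from a simple
# linear model so the most influential ones can be disclosed first.
# Weights, values, and factor names are invented for illustration.

def rank_key_factors(weights, applicant_values, baseline_values):
    """Rank factors by how much each moves this applicant's score
    away from a baseline applicant (largest negative impact first)."""
    contributions = {
        name: weights[name] * (applicant_values[name] - baseline_values[name])
        for name in weights
    }
    return sorted(contributions.items(), key=lambda kv: kv[1])

ranked = rank_key_factors(
    weights={"delinquencies": -40.0, "credit_references": 12.0},
    applicant_values={"delinquencies": 3, "credit_references": 1},
    baseline_values={"delinquencies": 0, "credit_references": 4},
)
# The most score-damaging factor comes first.
print(ranked[0][0])
```

For this to be possible at all, one must know which data the model uses and how it contributes to the decision, which is exactly the point the quote makes.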

  • If you might change the terms of a deal based on automated tools, make sure to tell consumers.

“More than a decade ago, the FTC alleged that subprime credit marketer CompuCredit violated the FTC Act by deceptively failing to disclose that it used a behavioral scoring model to reduce consumers’ credit limits.”

Credit limits were reduced if cardholders visited certain places or showed certain purchasing patterns.

“If you’re going to use an algorithm to change the terms of the deal, tell consumers.”

Ensure that your decisions are fair

  • Don’t discriminate based on protected classes.

There are equal opportunity laws in the United States.

ECOA and Title VII of the Civil Rights Act of 1964 are mentioned as possibly relevant.

“The FTC enforces ECOA, which prohibits credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.”

If a credit card company makes credit decisions based on consumers' ZIP codes, that could result in a "disparate impact" on particular ethnic groups.
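A rough first-pass check for disparate impact compares outcome rates across groups. The sketch below uses the "four-fifths rule" heuristic from US employment-selection guidelines; applying it here is an assumption of mine, not a test the FTC post prescribes:

```python
# Hypothetical sketch: a first-pass disparate impact check comparing a
# model's approval rates across groups, using the "four-fifths rule"
# heuristic. Group labels and decisions are invented for illustration.

def adverse_impact_ratio(decisions):
    """decisions: {group: list of booleans (True = approved)}.
    Returns each group's approval rate divided by the highest rate."""
    rates = {g: sum(d) / len(d) for g, d in decisions.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

ratios = adverse_impact_ratio({
    "group_a": [True, True, True, True, False],    # 80% approved
    "group_b": [True, True, False, False, False],  # 40% approved
})
for group, ratio in ratios.items():
    if ratio < 0.8:  # below four-fifths of the best-treated group
        print(f"{group}: ratio {ratio:.2f} warrants a closer look")
```

A low ratio does not prove illegal discrimination, but it flags exactly the kind of facially neutral model whose outcomes deserve review.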

  • Focus on inputs, but also on outcomes.

It is not only about the data that goes in; it is also about a wider responsibility for the consequences.

“…regardless of the inputs, we review the outcomes. For example, does a model, in fact, discriminate on a prohibited basis? Does a facially neutral model have an illegal disparate impact on protected classes?”

Not only what is processed, but the process.

  • Give consumers access and an opportunity to correct information used to make decisions about them.

“The FCRA regulates data used to make decisions about consumers — such as whether they get a job, get credit, get insurance, or can rent an apartment.”

Consumers should be able to obtain the information and dispute it: provide a copy of the information to the consumer, and allow the consumer to dispute its accuracy.

Ensuring that data and models are robust and empirically sound

  • “If you provide data about consumers to others to make decisions about consumer access to credit, employment, insurance, housing, government benefits, check-cashing or similar transactions, you may be a consumer reporting agency that must comply with the FCRA, including ensuring that the data is accurate and up to date.”

A company producing AI-based consumer reports may not think the FCRA applies to it, but according to this blog post it does: if you compile and sell consumer information used for credit, employment, insurance, housing, or other similar decisions, you could be subject to the FCRA.

In practice this means: "…you have an obligation to implement reasonable procedures to ensure maximum possible accuracy of consumer reports and provide consumers with access to their own information, along with the ability to correct any errors."

“RealPage, Inc., a company that deployed software tools to match housing applicants to criminal records in real time or near real time, learned this the hard way. The company ended up paying a $3 million penalty for violating the FCRA by failing to take reasonable steps to ensure the accuracy of the information they provided to landlords and property managers.”

  • If you provide data about your customers to others for use in automated decision-making, you may have obligations to ensure that the data is accurate, even if you are not a consumer reporting agency.

If you furnish data, you may be required to ensure that the data is accurate.

Whoever does this "…must have in place written policies and procedures to ensure that the data they furnish is accurate and has integrity."

Those who produce and sell such data are thus subject to these requirements.

“… the FTC has brought actions, and obtained big fines, against companies that furnished information to consumer reporting agencies but that failed to maintain the required written policies and procedures to ensure that the information that they report is accurate.”

  • Make sure that your AI models are validated and revalidated to ensure that they work as intended, and do not illegally discriminate.

There are lending laws in the United States that encourage the use of AI tools that are “empirically derived, demonstrably and statistically sound.”

The empirical comparison of sample groups must be based on data from a reasonable preceding period of time, be developed with accepted statistical principles and methodology, and be revalidated and adjusted as necessary to maintain predictive ability.
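One way to sketch this periodic revalidation is to re-measure the model's accuracy on recent labelled outcomes and flag degradation. The threshold and data below are invented; real validation would rest on accepted statistical methodology (AUC, calibration tests, and so on) over production data:

```python
# Hypothetical sketch: periodic revalidation of a scoring model by
# re-measuring its accuracy on recent, labelled outcomes. Threshold
# and data are invented for illustration.

def accuracy(predictions, outcomes):
    """Fraction of predictions that matched the observed outcome."""
    hits = sum(p == o for p, o in zip(predictions, outcomes))
    return hits / len(outcomes)

def needs_revalidation(baseline_acc, recent_acc, max_drop=0.05):
    """Flag the model if accuracy on recent data has degraded by more
    than max_drop from its validated baseline."""
    return (baseline_acc - recent_acc) > max_drop

baseline = accuracy([1, 0, 1, 1, 0, 1, 0, 1], [1, 0, 1, 1, 0, 1, 0, 1])
recent = accuracy([1, 0, 1, 1, 0, 1, 0, 1], [1, 0, 0, 1, 0, 0, 0, 1])
if needs_revalidation(baseline, recent):
    print("predictive ability has degraded; revalidate and adjust the model")
```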

Hold yourself accountable for compliance, ethics, fairness and nondiscrimination

  • Ask questions before you use the algorithm.

The blog post presents a series of questions that can be asked to avoid an outcome of adverse bias; any operator of an algorithm should ask four key questions:

  1. How representative is your data set?
  2. Does your data model account for biases?
  3. How accurate are your predictions based on big data?
  4. Does your reliance on big data raise ethical or fairness concerns?
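The first of these questions can be given a rough quantitative form by comparing group shares in the training sample against a population benchmark. The groups and benchmark figures below are invented for illustration:

```python
# Hypothetical sketch: checking how representative a training sample is
# by comparing group shares against a population benchmark. Groups and
# benchmark shares are invented for illustration.

def representativeness_gaps(sample_counts, population_shares):
    """Return each group's sample share minus its population share.
    Large negative gaps indicate under-representation."""
    total = sum(sample_counts.values())
    return {
        g: sample_counts.get(g, 0) / total - share
        for g, share in population_shares.items()
    }

gaps = representativeness_gaps(
    sample_counts={"group_a": 900, "group_b": 100},
    population_shares={"group_a": 0.7, "group_b": 0.3},
)
for group, gap in gaps.items():
    if gap < -0.05:
        print(f"{group} is under-represented by {-gap:.0%}")
```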
  • Protect your algorithm from unauthorised use.

Software tools can be misused.

“…just last month, the FTC hosted a workshop on voice-cloning technologies. Thanks to machine learning, these technologies enable companies to use a five-second clip of a person’s actual voice to generate a realistic audio of the voice saying anything.”

The technology holds promise for good purposes, such as helping those who have lost the ability to speak. However, it could easily be abused.

  • Consider your accountability mechanism.

How do you hold yourself accountable?

Could independent standards or independent expertise help you take stock of your AI?

“Outside, objective observers who independently tested the algorithm were the ones who discovered the problem [in the case of Health AI]. Such outside tools and services are increasingly available as AI is used more frequently, and companies may want to consider using them.”

This blog post represents an insightful and well-structured outline of a few issues relating to the use and practical implications of artificial intelligence.

This is #500daysofAI and you are reading article 381. I am writing one new article about or related to artificial intelligence every day for 500 days.

Translated from: https://medium.com/digital-diplomacy/ai-algorithms-and-the-federal-trade-commission-4ad8e6317d25
