Explainable vs Interpretable AI: An Intuitive Example

Both explainable and interpretable AI are emerging topics in computer science. However, the difference between the two is not always obvious, even to academics. In this post I aim to provide some intuition using a simple example.


Introduction

As techniques in both Artificial Intelligence (AI) and Machine Learning (ML) have become more complicated and more opaque, there has been a call for algorithms that humans can understand. After all, how can we identify bias or correct mistakes if we don’t understand how these techniques are reaching decisions?


Two main fields have arisen in response to this: Explainable AI and Interpretable AI.¹ Explainable AI models “summarize the reasons […] for [their] behavior […] or produce insights about the causes of their decisions,” whereas Interpretable AI refers to AI systems which “describe the internals of a system in a way which is understandable to humans” (Gilpin et al. 2018).


Ok, so those definitions are all well and good, but what do they mean? How can I classify one technique as explainable or interpretable? This is a question that academics have disagreed on (see Section 2.1.1 in my Master’s Thesis for more details) and isn’t obvious to anyone just reading the definitions. I mean, summarizing reasons for behavior sounds a lot like describing the internals of the system, right?


After a lot of thought and a lot of reading on this issue, I think I’ve found an example of a real-life application that provides some good intuition.


Teacher’s Feedback

Imagine yourself in college. After a long and sleepless reading period, you’ve submitted two final essays: one for American Literature 101 and one for Intro to Classical Studies. When it’s finally time to see your grades, you anxiously log on to see… two B+’s. Over-achiever that you are, you think there must be some mistake and go to both of your professors to ask for feedback.


Your American Literature professor hands you back your paper with a few marks and a short paragraph at the end outlining the strengths and weaknesses of your argument. While he agreed with your choice of The Great Gatsby as The Great American Novel, he felt your argument did not sufficiently address the symbolic use of color present throughout the novel. Based on these two comments, he has assigned you a B+ overall. You’re a little confused — how did he weight all of these things together to get a B+? Where exactly did you lose points? How did he decide which parts of the paper were important? How close were you to an A-?


A little dissatisfied, you go to your Classics professor and ask her for some feedback. Instead of comments, she hands you a detailed rubric. You see you lost a few points for grammar and a few for missing citations. It also notes that you only referenced three primary sources when the project requirements mentioned five or more. With these deductions, you got a 90%, which according to your university’s policy means you got a B+. Again though, you’re a little dissatisfied. Why is each missing citation worth 2 points but each grammar mistake only worth a half point? Why was the number of sources worth 10% of your grade instead of 20%?
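The rubric in this story behaves like a small, fully transparent scoring function. Here is a minimal sketch of that idea in code; the specific deduction values, mistake counts, and letter-grade thresholds below are all invented, chosen only so that the totals reproduce the 90% and B+ from the story.

```python
# A hypothetical rubric as a transparent scoring function. All deduction
# values and grade thresholds are invented for illustration; they are
# picked so the example totals 90%, a B+ under the story's policy.

def rubric_grade(grammar_mistakes, missing_citations, primary_sources,
                 required_sources=5, sources_weight=10.0):
    score = 100.0
    score -= 0.5 * grammar_mistakes       # half a point per grammar mistake
    score -= 2.0 * missing_citations      # two points per missing citation
    # the sources section is worth `sources_weight` points, scaled by the
    # fraction of the required sources that were actually used
    met = min(primary_sources, required_sources) / required_sources
    score -= sources_weight * (1.0 - met)
    return score

def letter(score):
    # hypothetical university policy under which 90% maps to a B+
    if score >= 93:
        return "A"
    if score >= 91:
        return "A-"
    if score >= 87:
        return "B+"
    return "B or below"

# 4 grammar mistakes, 2 missing citations, 3 of the 5 required sources
print(rubric_grade(4, 2, 3))          # 90.0
print(letter(rubric_grade(4, 2, 3)))  # B+
```

The point of the sketch is that every deducted point is traceable: given someone else's counts, you could reproduce their grade exactly — which is precisely what the written-feedback professor's grade does not allow.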


As you may have guessed, each of these approaches is meant to illustrate one of these AI techniques. The first professor, who provided written feedback, is doing something analogous to Explainable AI. You got an explanation of your grade that gave you some details about what went into the decision-making process. You can see your strengths and shortcomings quite clearly. However, you don’t really know how these things mapped to your exact grade. If you were given feedback like this for a classmate’s paper and asked to assign a grade, you wouldn’t really know where to start. You have some intuition for how the decision was made but couldn’t recreate it yourself. Worse yet, the professor could be biased or dishonest in his explanation. Maybe he thought the paper was only really a B paper, but bumped it up because of your class participation and choice of topic. Maybe he just assigned your grade at random and wrote feedback to justify it. You can’t really know for sure what happened.² Explainable AI systems in general have these same advantages and disadvantages — it’s hard to know how a result was arrived at, but you mostly know why (if you trust the explanation).
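In code, this post-hoc style of explanation can be caricatured as probing a black-box scorer and reporting which inputs mattered most. The sketch below is far cruder than real methods like LIME (see footnote 4): it just measures local sensitivities with finite differences, and the "grader" and its weights are entirely made up.

```python
# A toy post-hoc explanation: query a black-box scorer with small
# perturbations and report local sensitivities. Much cruder than LIME;
# the grader and its weights are invented for this sketch.

def black_box_grader(features):
    # opaque scorer: callers may query it but not inspect it
    argument, symbolism, style = features
    return 60 * argument + 30 * symbolism + 10 * style  # score out of 100

def explain(model, x, eps=1e-6):
    """Finite-difference sensitivities of the model around input x."""
    base = model(x)
    weights = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        weights.append((model(bumped) - base) / eps)
    return weights

essay = [1.0, 0.75, 0.5]   # strong argument, weaker symbolism and style
print(black_box_grader(essay))           # 87.5, roughly a B+
print(explain(black_box_grader, essay))  # approx. [60.0, 30.0, 10.0]
```

The explanation tells you which features drove the score near this one essay, but, exactly as in the story, it neither lets you recompute the grade from scratch nor guarantees the black box really works that way (the fidelity problem from footnote 2).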


Ok, so this approach had some strengths and weaknesses. How about your Classics professor? Her approach is more akin to Interpretable AI. You saw from your results exactly how the grade was calculated. If you got someone else’s paper, you could follow this rubric and arrive at the exact same grade as the professor. If you notice an error, you could easily approach the professor and get points back. There are a couple of problems though. Imagine if the rubric had 1000 points on it — it would be too time-consuming for you to scrutinize every one to understand how you got your grade. You also don’t really know where that rubric came from. Did your professor base it on another course or results from previous years? Did she write the rubric subtly so that students who wrote on one topic did worse than students who wrote on another?³ Why were certain things included in the rubric and not others? Why was the required number of sources 5 and not 3 or 7? These explanations are not provided by the rubric. Explanations are precisely what strictly Interpretable AI lacks. It’s very easy to see how the algorithm arrived at its conclusion but not why each step of the decision process was created.


Conclusion

Hopefully with this example in mind, it is easier to draw lines between the two categories. Explainable AI tells you why it made the decision it did, but not how it arrived at that decision.⁴ Interpretable AI tells you how it made the decision, but not why the criteria it used are sensible.⁵ We can of course imagine systems that are both Explainable and Interpretable.⁶ In this case, a professor could provide a rubric along with written feedback and an explanation for why each part of the rubric is important.
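That combined professor can be sketched too: each rubric line carries both a deduction, so the grade stays traceable (interpretable), and a written rationale, so you also learn why the rule exists (explainable). Every rule, point value, and reason below is invented for illustration.

```python
# Hypothetical "both" system: a traceable rubric (interpretable) in which
# every rule also carries a rationale (explainable). All values invented.

RUBRIC = [
    ("missing primary source", 2.0,
     "the assignment tests engagement with original texts"),
    ("grammar mistake", 0.5,
     "clarity matters, though less than the argument itself"),
]

def grade_with_report(counts):
    score = 100.0
    report = []
    for rule, deduction, why in RUBRIC:
        n = counts.get(rule, 0)
        if n:
            lost = n * deduction
            score -= lost
            report.append(f"-{lost:g} pts: {n} x {rule} (because {why})")
    return score, report

score, report = grade_with_report(
    {"missing primary source": 2, "grammar mistake": 4})
print(score)  # 94.0
for line in report:
    print(line)
```

Every deduction is reproducible from the counts, and every rule comes with its stated reason, which is the best of both worlds the conclusion describes.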


Overall, this distinction still remains a bit fuzzy and it’s easy for two people to have different ways of classifying the same technique. That’s why it’s important to establish clear definitions at the beginning of any argument: otherwise you’re liable to spend a lot of time trying to correct misunderstandings.


Thanks for reading! I hope this has been helpful in disambiguating the two topics. For more in depth information on these topics, check out the two papers cited in the section below. You can also check out my Master’s thesis here, which goes a bit more into these topics as well as a number of applications.


Footnotes & Citations

  1. We can just as easily refer to Explainable ML and Interpretable ML. The ideas are the same; I’ve chosen to go with AI in this post to avoid confusion.
  2. In the Explainable/Interpretable AI field this is known as “fidelity.” Basically, an explanation has high fidelity if it is very faithful to how the model actually made its decision. Explanations with low fidelity may have little to nothing to do with how the decision was actually made.
  3. This sort of bias seems like it might be obvious but could actually be hard to detect. For this example, the teacher could provide more reference material for one topic and weight the rubric to heavily penalize students who used fewer citations.
  4. LIME is an example of this.


  5. A simple rules list is an example — the rules and thresholds appear arbitrary and illogical, not really offering an explanation.
  6. A rules list with prototypes as mentioned here might be both explainable and interpretable. The rules list would be traceable and therefore interpretable, whereas the prototypes would serve as explanations for each rule.


Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., and Kagal, L. (2018). Explaining Explanations: An Overview of Interpretability of Machine Learning. In 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), pages 80–89. IEEE.


Rudin, C. (2018). Please Stop Explaining Black Box Models for High Stakes Decisions. arXiv preprint arXiv:1811.10154.


Originally published at: https://medium.com/swlh/explainable-vs-interpretable-ai-an-intuitive-example-6baf8fc6d402
