An introduction to explainable AI, and why we need it

by Patrick Ferris

Neural networks (and all of their subtypes) are increasingly being used to build programs that can predict and classify in a myriad of different settings.

Examples include machine translation using recurrent neural networks, and image classification using a convolutional neural network. Research published by Google DeepMind has sparked interest in reinforcement learning.

All of these approaches have advanced many fields and produced usable models that can improve productivity and efficiency.

However, we don’t really know how they work.

I was fortunate enough to attend the Knowledge Discovery and Data Mining (KDD) conference this year. Of the talks I went to, there were two main areas of research that seem to be on a lot of people’s minds:

  • Firstly, finding a meaningful representation of graph structures to feed into neural networks. Oriol Vinyals from DeepMind gave a talk about their Message Passing Neural Networks.

  • The second area, and the focus of this article, is explainable AI models. As we generate newer and more innovative applications for neural networks, the question of ‘How do they work?’ becomes more and more important.

Why the need for Explainable Models?

Neural Networks are not infallible.

Besides the problems of overfitting and underfitting, which we have developed many tools to counteract (like dropout or increasing the size of the training data), neural networks operate in an opaque way.

We don’t really know why they make the choices they do. As models become more complex, the task of producing an interpretable version of the model becomes more difficult.

Take, for example, the one pixel attack (see here for a great video on the paper). This is carried out by using a sophisticated approach which analyzes the CNNs and applies differential evolution (a member of the evolutionary class of algorithms).

Unlike other optimisation strategies, which require the objective function to be differentiable, this approach uses an iterative evolutionary algorithm to produce progressively better solutions. Specifically, for this one-pixel attack, the only information required was the probabilities of the class labels.
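
To make the mechanics concrete, here is a rough sketch of an untargeted one-pixel attack built on SciPy's differential evolution. The image size, the `predict_proba` black-box and the optimiser settings are illustrative assumptions, not the setup from the original paper, which is more careful about initialisation and stopping criteria.

```python
# A minimal sketch of a one-pixel attack using differential evolution.
# `predict_proba` is a hypothetical black-box that maps an HxWx3 image to
# class probabilities -- the only information the attack needs.
import numpy as np
from scipy.optimize import differential_evolution

H, W = 32, 32  # assumed image size (e.g. CIFAR-10)

def one_pixel_attack(image, true_label, predict_proba):
    # Candidate solution: (x, y, r, g, b) for a single pixel.
    bounds = [(0, W - 1), (0, H - 1), (0, 255), (0, 255), (0, 255)]

    def objective(candidate):
        x, y, r, g, b = candidate
        perturbed = image.copy()
        perturbed[int(y), int(x)] = [r, g, b]
        # Minimise the probability of the true class -- lower is "better".
        return predict_proba(perturbed)[true_label]

    result = differential_evolution(objective, bounds, maxiter=75, popsize=20, seed=0)
    x, y, r, g, b = result.x
    adversarial = image.copy()
    adversarial[int(y), int(x)] = [r, g, b]
    return adversarial
```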

The relative ease of fooling these neural networks is worrying. Beyond this lies a more systemic problem: trusting a neural network.

The best example of this is in the medical domain. Say you are building a neural network (or any black-box model) to help predict heart disease given a patient’s records.

When you train and test your model, you get a good accuracy and a convincing positive predictive value. You bring it to a clinician and they agree it seems to be a powerful model.

But they will be hesitant to use it because you (or the model) cannot answer the simple question: “Why did you predict this person as more likely to develop heart disease?”

This lack of transparency is a problem for the clinician who wants to understand the way the model works to help them improve their service. It is also a problem for the patient who wants a concrete reason for this prediction.

Ethically, is it right to tell a patient that they have a higher probability of a disease if your only reasoning is that “the black-box told me so”? Health care is as much about science as it is about empathy for the patient.

The field of explainable AI has grown in recent years, and this trend looks set to continue.

What follows are some of the interesting and innovative avenues researchers and machine learning experts are exploring in their search for models which not only perform well, but can tell you why they make the choices they do.

Reversed Time Attention Model (RETAIN)

The RETAIN model was developed at Georgia Institute of Technology by Edward Choi et al. It was introduced to help doctors understand why a model was predicting patients to be at risk of heart failure.

The idea is that, given a patient's hospital visit records, which also contain the events of each visit, the model could predict the risk of heart failure.

The researchers split the input into two recurrent neural networks. This let them use the attention mechanism on each to understand what the neural network was focusing on.

Once trained, the model could predict a patient’s risk. But it could also make use of the alpha and beta parameters to output which hospital visits (and which events within a visit) influenced its choice.
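
As a rough illustration of that two-RNN structure, here is a minimal PyTorch sketch of a RETAIN-style model. It is not the authors' implementation; the layer sizes, the multi-hot visit encoding and the name `RetainSketch` are assumptions made for brevity.

```python
# A minimal PyTorch sketch of a RETAIN-style model (illustrative only).
import torch
import torch.nn as nn

class RetainSketch(nn.Module):
    def __init__(self, num_codes, emb_dim=64, hidden=64):
        super().__init__()
        self.embed = nn.Linear(num_codes, emb_dim)        # multi-hot visit -> embedding
        self.rnn_alpha = nn.GRU(emb_dim, hidden, batch_first=True)
        self.rnn_beta = nn.GRU(emb_dim, hidden, batch_first=True)
        self.alpha_fc = nn.Linear(hidden, 1)              # scalar weight per visit
        self.beta_fc = nn.Linear(hidden, emb_dim)         # vector weight per visit
        self.out = nn.Linear(emb_dim, 1)                  # heart-failure risk logit

    def forward(self, visits):
        # visits: (batch, num_visits, num_codes), most recent visit last
        v = self.embed(visits)
        rev = torch.flip(v, dims=[1])                     # run the RNNs in reversed time
        g, _ = self.rnn_alpha(rev)
        h, _ = self.rnn_beta(rev)
        alpha = torch.softmax(self.alpha_fc(g), dim=1)    # visit-level attention
        beta = torch.tanh(self.beta_fc(h))                # event-level attention
        alpha = torch.flip(alpha, dims=[1])
        beta = torch.flip(beta, dims=[1])
        context = (alpha * beta * v).sum(dim=1)           # attention-weighted sum of visits
        risk = torch.sigmoid(self.out(context))
        return risk, alpha, beta                          # alpha/beta explain the prediction
```

Feeding the visit sequence to the attention RNNs in reversed time order is meant to mirror the way the original model emphasises the most recent visits, which is where the "Reversed Time" in the name comes from.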

Local Interpretable Model-Agnostic Explanations (LIME)

Another approach that has become fairly common in use is LIME.

This is a post-hoc model — it provides an explanation of a decision after it has been made. This means it isn’t a pure ‘glass-box’, transparent model (like decision trees) from start to finish.

One of the main strengths of this approach is that it’s model agnostic. It can be applied to any model in order to produce explanations for its predictions.

The key concept underlying this approach is perturbing the inputs and watching how doing so affects the model’s outputs. This lets us build up a picture of which inputs the model is focusing on and using to make its predictions.

For instance, imagine some kind of CNN for image classification. There are four main steps to using the LIME model to produce an explanation (a minimal code sketch follows the list):

  • Start with a normal image and use the black-box model to produce a probability distribution over the classes.
  • Then perturb the input in some way. For images, this could be hiding pixels by coloring them grey. Now run these through the black-box model to see how the probabilities for the class it originally predicted changed.
  • Use an interpretable (usually linear) model like a decision tree on this dataset of perturbations and probabilities to extract the key features which explain the changes. The model is locally weighted — meaning that we care more about the perturbations that are most similar to the original image we were using.
  • Output the features (in our case, pixels) with the greatest weights as our explanation.
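
Below is a hand-rolled sketch of those four steps for an image classifier, using a crude grid of patches instead of proper superpixels. The `predict_proba` black-box, the grid size and the weighting kernel are assumptions for illustration; the widely used LIME library does the same thing with real superpixel segmentation and more careful sampling.

```python
# A minimal, hand-rolled sketch of the four LIME steps for an image classifier.
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(image, predict_proba, target_class, n_samples=500, grid=8):
    H, W, _ = image.shape                                  # assumes an HxWx3 image
    seg_h, seg_w = H // grid, W // grid
    n_segments = grid * grid

    # Step 1: probability for the original, unperturbed image.
    base_prob = predict_proba(image)[target_class]

    # Step 2: perturb by greying out random subsets of segments.
    masks = np.random.randint(0, 2, size=(n_samples, n_segments))
    probs = np.empty(n_samples)
    for i, mask in enumerate(masks):
        perturbed = image.copy()
        for s in np.where(mask == 0)[0]:
            r, c = divmod(s, grid)
            perturbed[r*seg_h:(r+1)*seg_h, c*seg_w:(c+1)*seg_w] = 127  # grey out
        probs[i] = predict_proba(perturbed)[target_class]

    # Step 3: fit a locally weighted linear model on (mask -> probability).
    distances = (masks == 0).mean(axis=1)                  # fraction of the image hidden
    weights = np.exp(-(distances ** 2) / 0.25)             # closer to original = more weight
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, probs, sample_weight=weights)

    # Step 4: segments with the largest coefficients form the explanation.
    top = np.argsort(surrogate.coef_)[::-1][:5]
    return top, surrogate.coef_, base_prob
```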

Layer-wise Relevance Propagation (LRP)

This approach uses the idea of relevance redistribution and conservation.

We start with an input (say, an image) and the probability of its classification. Then we work backwards to redistribute this probability to all of the inputs (in this case, pixels).

The redistribution process is fairly simple from layer to layer.


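The redistribution rule the next paragraph refers to appeared as an image in the original article and is missing here. Assuming it is the standard ε-stabilised LRP rule, it can be written as:

```latex
% epsilon-stabilised LRP redistribution rule (assumed form; the original image is missing)
R_j = \sum_{k} \frac{x_j \, w_{j,k}}{\sum_{j'} x_{j'} \, w_{j',k} + \epsilon} \, R_k
```
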
In the above equation, each term represents the following ideas:

  • x_j — the activation value for neuron j in layer l

  • w_j,k — the weighting of the connection between neuron j in layer l and neuron k in layer l + 1

  • R_j — Relevance scores for each neuron in layer l

  • R_k — Relevance scores for each neuron in layer l+1

The epsilon is just a small value to prevent dividing by zero.

As you can see, we can work our way backwards to determine the relevance of individual inputs. Further, we can sort these in order of relevance. This lets us extract a meaningful subset of inputs as our most useful or powerful in making a prediction.
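
A toy numpy version of that backward pass, for a plain fully-connected network, might look like the following. The `activations` and `weights` arrays are assumed to come from a model that has already been trained; this is a sketch of the ε-rule only, not a full LRP implementation.

```python
# A toy sketch of epsilon-LRP for a small fully-connected network (numpy only).
import numpy as np

def lrp_backward(activations, weights, output_relevance, eps=1e-6):
    """activations[l]: activations of layer l; weights[l]: matrix from layer l to l+1."""
    relevance = output_relevance                       # start from the predicted class score
    for l in reversed(range(len(weights))):
        x, w = activations[l], weights[l]              # x: (n_l,), w: (n_l, n_{l+1})
        z = x[:, None] * w                             # contributions x_j * w_jk
        denom = z.sum(axis=0) + eps                    # sum over j, stabilised by epsilon
        relevance = (z / denom) @ relevance            # redistribute R_k back to layer l
    return relevance                                   # one relevance score per input feature

# Sorting the returned scores, e.g. np.argsort(relevance)[::-1][:k], then gives
# the most relevant input features, as described above.
```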

What next?

The above methods for producing explainable models are by no means exhaustive. They are a sample of some of the approaches researchers have tried using to produce interpretable predictions from black-box models.

Hopefully this post also sheds some light onto why it is such an important area of research. We need to continue researching these methods, and develop new ones, in order for machine learning to benefit as many fields as possible — in a safe and trustworthy fashion.

If you find yourself wanting more papers and areas to read about, try some of the following.

  • DeepMind’s research on Concept Activation Vectors, as well as the slides from Victoria Krakovna’s talk at Neural Information Processing Systems (NIPS) conference.

  • A paper by Dong Huk Park et al. on datasets for measuring explainable models.

  • Finale Doshi-Velez and Been Kim’s paper on the field in general

Artificial intelligence should not become a powerful deity which we follow blindly. But neither should we forget about it and the beneficial insight it can have. Ideally, we will build flexible and interpretable models that can work in collaboration with experts and their domain knowledge to provide a brighter future for everyone.

Originally published at: https://www.freecodecamp.org/news/an-introduction-to-explainable-ai-and-why-we-need-it-a326417dd000/
