The Aha! Moments in 4 Popular Machine Learning Algorithms

Most people fall into one of two camps:

  • I don’t understand these machine learning algorithms.
  • I understand how the algorithms work, but not why they work.

This article seeks to explain not only how these algorithms work, but also to give an intuitive understanding of why they work, to deliver that lightbulb aha! moment.

Decision Trees

Decision Trees divide the feature space using horizontal and vertical lines. For example, consider the very simplistic Decision Tree below, which has one conditional node and two class nodes, indicating a condition and the category into which a training point that satisfies it will fall.

Note that there is a lot of mismatch between the regions marked as each color and the data points within those regions that actually are that color; this disorder is (roughly) the entropy. The decision tree is constructed to minimize the entropy. In this scenario, we can add an additional layer of complexity. If we add another condition, say that x is less than 6 and y is greater than 6, we can designate points in that area as red. This move lowers the entropy.
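
As a rough sketch of how those axis-aligned splits look in code (using scikit-learn; the toy points and the x < 6, y > 6 rule below are invented for illustration, not the figure's data), a shallow decision tree recovers exactly these kinds of thresholds:

```python
# A minimal sketch, assuming scikit-learn is available. The toy points and the
# "x < 6 and y > 6" labelling rule are invented for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))            # random points in a 10 x 10 square
y = ((X[:, 0] < 6) & (X[:, 1] > 6)).astype(int)  # class 1 ('red') if x < 6 and y > 6

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["x", "y"]))
# The printed rules are axis-aligned thresholds close to x <= 6 and y > 6,
# i.e. vertical and horizontal dividing lines in the feature space.
```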

At each step, the Decision Tree algorithm attempts to find a way to build the tree such that the entropy is minimized. Think of entropy more formally as the amount of ‘disorder’ or ‘confusion’ a certain divider (the conditions) leaves behind, and of its opposite, ‘information gain’, as how much a divider adds information and insight to the model. Feature splits that have the highest information gain (and thus the lowest entropy) are placed at the top.
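
To make entropy and information gain concrete, here is a small numpy sketch (the toy labels are invented for illustration); the split with the higher information gain is the one the tree would place nearer the top:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label array: 0 for a pure group, 1 for a 50/50 binary mix."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(labels, split_mask):
    """Parent entropy minus the weighted entropy of the two child groups."""
    left, right = labels[split_mask], labels[~split_mask]
    n = len(labels)
    children = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - children

x = np.array([1, 2, 3, 4, 5, 6, 7, 8])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # toy labels, invented for illustration

print(information_gain(y, x < 4.5))  # clean split: gain 1.0, child entropy 0
print(information_gain(y, x < 2.5))  # messier split: gain of only ~0.31
```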

The conditions may split their one-dimensional features somewhat like this:

Note that condition 1 has clean separation, and therefore low entropy and high information gain. The same cannot be said for condition 3, which is why it is placed near the bottom of the Decision Tree. This construction of the tree ensures that it can remain as lightweight as possible.

You can read more about entropy and its use in Decision Trees as well as neural networks (cross-entropy as a loss function) here.

Random Forest

Random Forest is a bagged (bootstrap aggregated) version of the Decision Tree. The primary idea is that several Decision Trees are each trained on a subset of data. Then, an input is passed through each model, and their outputs are aggregated through a function like a mean to produce a final output. Bagging is a form of ensemble learning.
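
A minimal sketch of this idea, assuming scikit-learn is available (the toy data below is invented for illustration): each tree is trained on a bootstrap sample, and the trees' predictions are averaged.

```python
# A minimal bagging sketch, assuming scikit-learn. BaggingRegressor performs the
# bootstrap-and-aggregate step that Random Forest is built on.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=300)   # noisy toy target

single_tree = DecisionTreeRegressor().fit(X, y)
bagged = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50).fit(X, y)
# bagged.predict(...) averages the 50 trees' outputs; each tree saw a different
# bootstrap subset of the data, so the average is far less jittery than any one tree.
```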

There are many analogies for why Random Forest works well. Here is a common version of one:

You need to decide which restaurant to go to next. When you ask someone for a recommendation, you answer a variety of yes/no questions, which lead them to a decision about which restaurant you should go to.

Would you rather only ask one friend or ask several friends, then find the mode or general consensus?

Unless you only have one friend, most people would answer the second. The insight this analogy provides is that each tree has some sort of ‘diversity of thought’ because they were trained on different data, and hence have different ‘experiences’.

This analogy, clean and simple as it is, never really convinced me. In the real world, the single-friend option has less experience than all the friends combined, but in machine learning, the decision tree and random forest models are trained on the same data, and hence have the same ‘experiences’. The ensemble model is not actually receiving any new information. If I could ask one all-knowing friend for a recommendation, I would see no objection to that.

How can a model trained on the same data that randomly pulls subsets of the data to simulate artificial ‘diversity’ perform better than one trained on the data as a whole?

Take a sine wave with heavy normally distributed noise. This is your single Decision Tree classifier, which is naturally a very high-variance model.

100 ‘approximators’ will be chosen. These approximators randomly select points along the sine wave and generate a sinusoidal fit, much like decision trees being trained on subsets of the data. These fits are then averaged to form a bagged curve. The result? — a much smoother curve.
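
The original figure's exact setup isn't given, so here is a rough sketch of the same experiment under assumed details (numpy and scikit-learn, with decision-tree regressors standing in for the ‘approximators’):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 500)
y = np.sin(x) + rng.normal(scale=0.5, size=x.shape)   # sine wave with heavy noise

grid = x.reshape(-1, 1)
fits = []
for _ in range(100):                                   # 100 'approximators'
    idx = rng.choice(len(x), size=50, replace=True)    # each sees a random subset
    fit = DecisionTreeRegressor().fit(x[idx].reshape(-1, 1), y[idx])
    fits.append(fit.predict(grid))

bagged_curve = np.mean(fits, axis=0)
# Any individual fit is jagged and high-variance; the average of the 100 fits is
# a much smoother curve that tracks sin(x) far more closely.
```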

Bagging works because it reduces the variance of the model and improves its ability to generalize, by artificially making the model more ‘confident’. This is also why bagging does not work as well on already low-variance models like logistic regression.

You can read more about the intuition and more rigorous proof of the success of bagging here.

Support Vector Machines

Support Vector Machines attempt to find a hyperplane that can divide the data best, relying on the concept of ‘support vectors’ to maximize the divide between the two classes.
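
As a tiny sketch (scikit-learn's SVC, with toy clusters invented for illustration), the fitted model exposes the support vectors that pin down the maximum-margin hyperplane:

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable toy clusters, invented for illustration.
X = np.array([[1.0, 1.0], [2.0, 1.0], [1.0, 2.0],
              [6.0, 6.0], [7.0, 6.0], [6.0, 7.0]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear", C=1.0).fit(X, y)
print(clf.support_vectors_)            # the boundary points that fix the margin
print(clf.coef_, clf.intercept_)       # the separating hyperplane w . x + b = 0
```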

Unfortunately, most datasets are not so easily separable, and if they were, SVM would likely not be the best algorithm to handle them. Consider this one-dimensional separation task; there is no good divider, since any single split will lump points from two separate classes onto the same side.

One proposal for a split.

SVM is powerful at solving these kinds of problems by using a so-called ‘kernel trick’, which projects data into new dimensions to make the separation task easier. For instance, let’s create a new dimension, which is simply defined as x² (x is the original dimension):

Now the data is cleanly separable after being projected onto the new dimension, with each data point represented in two dimensions as (x, x²).
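
A minimal sketch of that projection, with assumed toy 1-D data: no single threshold on x separates the classes, but after adding x² as a second feature a linear separator works.

```python
import numpy as np
from sklearn.svm import SVC

x = np.array([-4, -3, -2, -1, 0, 1, 2, 3, 4], dtype=float)
y = (np.abs(x) <= 1).astype(int)       # middle points are one class, outer points the other

X_1d = x.reshape(-1, 1)                # no single threshold on x separates these
X_2d = np.column_stack([x, x ** 2])    # project every point to (x, x^2)

clf = SVC(kernel="linear", C=10.0).fit(X_2d, y)
print(clf.score(X_2d, y))              # 1.0: a straight line in (x, x^2) space splits them
```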

Using a variety of kernels — most popularly, polynomial, sigmoid, and RBF kernels — the kernel trick does the heavy lifting to create a transformed space such that the separation task is simple.
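
In practice you rarely build the extra dimension by hand; passing a kernel to the SVM performs an equivalent projection implicitly. A short sketch on the same kind of toy data (scikit-learn assumed):

```python
import numpy as np
from sklearn.svm import SVC

x = np.array([-4, -3, -2, -1, 0, 1, 2, 3, 4], dtype=float).reshape(-1, 1)
y = (np.abs(x[:, 0]) <= 1).astype(int)

# Polynomial and RBF kernels handle the same separation with no manual features;
# the gamma value here is just a reasonable choice for this toy data.
poly = SVC(kernel="poly", degree=2, C=10.0).fit(x, y)
rbf = SVC(kernel="rbf", gamma=1.0, C=10.0).fit(x, y)
print(poly.score(x, y), rbf.score(x, y))   # both should reach 1.0 on this toy example
```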

Neural Networks

Neural Networks are the pinnacle of machine learning. Their discovery, and the unlimited variations and improvements that can be made upon them, have warranted them a field of their own, deep learning. Admittedly, our understanding of why neural networks succeed is still incomplete (“Neural networks are matrix multiplications that no one understands”), but the easiest way to explain them is through the Universal Approximation Theorem (UAT).

At its core, every supervised algorithm seeks to model some underlying function of the data; usually this is either a regression plane or the feature boundary. Consider a function y = f(x), which can be modelled to arbitrary accuracy with several horizontal steps.

This is essentially what a neural network can do. Perhaps it can be a little more complex and model relationships beyond horizontal steps (like quadratic and linear lines below), but at its core, the neural network is a piecewise function approximator.
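
A small numpy sketch of that piecewise idea (the target function is arbitrary here, chosen only for illustration): approximate a curve with a staircase of horizontal steps and watch the worst-case error shrink as steps are added, which is the spirit of the UAT.

```python
import numpy as np

def step_approximation(f, x, n_steps):
    """Approximate f on a grid with n_steps horizontal segments (a 'staircase')."""
    edges = np.linspace(x.min(), x.max(), n_steps + 1)
    approx = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x < hi)
        approx[mask] = f((lo + hi) / 2)          # one constant 'step' per interval
    approx[-1] = f((edges[-2] + edges[-1]) / 2)  # cover the right endpoint
    return approx

x = np.linspace(0, 2 * np.pi, 1000)
for n_steps in (5, 20, 100):
    err = np.max(np.abs(step_approximation(np.sin, x, n_steps) - np.sin(x)))
    print(n_steps, round(err, 3))   # the worst-case error shrinks as steps are added
```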

Each node is delegated to one part of the piecewise function, and the purpose of the network is to activate the neurons responsible for particular parts of the feature space. For instance, if one were to classify images of men with beards or without beards, several nodes should be delegated specifically to pixel locations where beards often appear. Somewhere in multi-dimensional space, these nodes represent a numerical range.

Note, again, that the question “why do neural networks work” is still unanswered. The UAT doesn’t answer this question, but states that neural networks, under certain human interpretations, can model any function. The field of Explainable/Interpretable AI is emerging to answer these questions with methods like activation maximization and sensitivity analysis.

You can read a more in-depth explanation and view visualizations of the Universal Approximation Theorem here.

All four of these algorithms, and many others, look very simplistic at low dimensionality. A key realization in machine learning is that a lot of the ‘magic’ and ‘intelligence’ we purport to see in AI is really a simple algorithm hidden under the guise of high dimensionality.

Decision trees splitting regions into squares is simple, but decision trees splitting high-dimensional space into hypercubes is less so. SVM performing a kernel trick to improve separability from one to two dimensions is understandable, but SVM doing the same thing on a dataset hundreds of dimensions large is almost magic.

Our admiration of and confusion about machine learning are predicated on our lack of understanding of high-dimensional spaces. Learning how to get around high dimensionality and to understand algorithms in their native space is instrumental to an intuitive understanding.

All images created by author.

Translated from: https://towardsdatascience.com/the-aha-moments-in-4-popular-machine-learning-algorithms-f7e75ef5b317
