Model Representation:
To describe the supervised learning problem slightly more formally, our goal is, given a training set, to learn a function h : X → Y so that h(x) is a “good” predictor for the corresponding value of y. For historical reasons, this function h is called a hypothesis. Seen pictorially, the process is therefore like this:

When the target variable that we’re trying to predict is continuous, such as in our housing example, we call the learning problem a regression problem. When y can take on only a small number of discrete values (such as if, given the living area, we wanted to predict if a dwelling is a house or an apartment, say), we call it a classification problem.
Cost Function:
We can measure the accuracy of our hypothesis function by using a cost function. This takes an average difference (actually a fancier version of an average) of all the results of the hypothesis with inputs from the x's and the actual outputs y's:

$$J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x_i) - y_i \right)^2$$

To break it apart, it is $\frac{1}{2} \bar{x}$, where $\bar{x}$ is the mean of the squares of $h_\theta(x_i) - y_i$, the differences between the predicted values and the actual values.

This function is otherwise called the “Squared error function”, or “Mean squared error”. The mean is halved $\left(\frac{1}{2}\right)$ as a convenience for the computation of gradient descent, as the derivative term of the square function will cancel out the $\frac{1}{2}$ term. The following image summarizes what the cost function does:
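As a minimal sketch of this formula (assuming the linear hypothesis $h_\theta(x) = \theta_0 + \theta_1 x$ used throughout these notes, with hypothetical variable names), the cost function might be implemented as:

```python
def cost(theta0, theta1, xs, ys):
    """Half the mean of the squared differences between predictions and targets:
    J(theta0, theta1) = (1 / (2m)) * sum((h(x_i) - y_i) ** 2)."""
    m = len(xs)
    total = sum((theta0 + theta1 * x - y) ** 2 for x, y in zip(xs, ys))
    return total / (2 * m)
```

A perfect fit gives a cost of 0: `cost(0, 1, [1, 2, 3], [1, 2, 3])` returns `0.0`.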

  • Cost Function Intuition 1
    If we try to think of it in visual terms, our training data set is scattered on the x-y plane. We are trying to make a straight line (defined by $h_\theta(x)$) which passes through these scattered data points.
    Our objective is to get the best possible line. The best possible line will be the one for which the average squared vertical distance of the scattered points from the line is the least. Ideally, the line should pass through all the points of our training data set. In such a case, the value of $J(\theta_0, \theta_1)$ will be 0. The following example shows the ideal situation where we have a cost function of 0.

    When $\theta_1 = 1$, we get a slope of 1 which goes through every single data point in our model. Conversely, when $\theta_1 = 0.5$, we see the vertical distance from our fit to the data points increase.

    This increases our cost function to 0.58. Plotting several other points yields the following graph:

    Thus, as a goal, we should try to minimize the cost function. In this case, $\theta_1 = 1$ is our global minimum.
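The worked example above can be reproduced with a short sketch (assuming the toy data set (1, 1), (2, 2), (3, 3) implied by the example, with $\theta_0$ fixed at 0):

```python
# Toy data set implied by the example: the line with slope 1 passes
# through every point, so the cost there is exactly 0.
data = [(1, 1), (2, 2), (3, 3)]

def J(theta1):
    """Cost as a function of theta1 alone, with theta0 fixed at 0."""
    m = len(data)
    return sum((theta1 * x - y) ** 2 for x, y in data) / (2 * m)

# Scanning a few values of theta1 traces out the bowl-shaped curve:
# J(1.0) = 0 (the global minimum), while J(0.5) is roughly 0.58,
# matching the value quoted above.
costs = {theta1 / 10: J(theta1 / 10) for theta1 in range(0, 21, 5)}
```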
  • Cost Function Intuition 2
    A contour plot is a graph that contains many contour lines. A contour line of a two-variable function has a constant value at all points of the same line. An example of such a graph is the one to the right below.

    Taking any color and going along the ‘circle’, one would expect to get the same value of the cost function. For example, the three green points found on the green line above have the same value for $J(\theta_0, \theta_1)$ and, as a result, they are found along the same line. The circled x displays the value of the cost function for the graph on the left when $\theta_0 = 800$ and $\theta_1 = -0.15$. Taking another $h(x)$ and plotting its contour plot, one gets the following graphs:

    When $\theta_0 = 360$ and $\theta_1 = 0$, the value of $J(\theta_0, \theta_1)$ in the contour plot gets closer to the center, thus reducing the cost function error. Now giving our hypothesis function a slightly positive slope results in a better fit of the data.

    The graph above minimizes the cost function as much as possible and, consequently, the resulting values of $\theta_1$ and $\theta_0$ tend to be around 0.12 and 250 respectively. Plotting those values on the graph to the right seems to put our point in the center of the innermost ‘circle’.
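The contour picture corresponds to evaluating $J$ over a grid of $(\theta_0, \theta_1)$ pairs. A sketch of that idea, using hypothetical inputs whose targets are generated to lie exactly on $y = 250 + 0.12x$, so that the true minimum sits at the values quoted above:

```python
# Hypothetical living-area style inputs; the targets lie exactly on
# y = 250 + 0.12 * x, so J has its minimum at (theta0, theta1) = (250, 0.12).
xs = [500.0, 1000.0, 1500.0, 2000.0]
ys = [250 + 0.12 * x for x in xs]

def J(theta0, theta1):
    """Squared-error cost for the linear hypothesis h(x) = theta0 + theta1 * x."""
    m = len(xs)
    return sum((theta0 + theta1 * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

# Evaluate J over a grid of (theta0, theta1) pairs, as a contour plot does,
# and locate the grid cell at the centre of the innermost 'circle'.
theta0_grid = [10 * i for i in range(51)]            # 0, 10, ..., 500
theta1_grid = [-0.5 + 0.01 * j for j in range(101)]  # -0.50, -0.49, ..., 0.50
best = min(((t0, t1) for t0 in theta0_grid for t1 in theta1_grid),
           key=lambda p: J(*p))
```

The grid minimum lands at $\theta_0 = 250$, $\theta_1 = 0.12$, the centre of the innermost contour.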
