Unsupervised Learning


0:00
In this video, we’ll talk about the second major type of machine learning problem, called Unsupervised Learning.
0:06
In the last video, we talked about Supervised Learning. Recall that the data sets there looked like this, where each example was labeled as either a positive or a negative example, that is, whether it was a benign or a malignant tumor.
0:20
So for each example in Supervised Learning, we were told explicitly what the so-called right answer was, whether it was benign or malignant. In Unsupervised Learning, we’re given data that looks different: data that doesn’t have any labels, or that all has the same label, or really no labels at all.
0:39
So we’re given the data set and we’re not told what to do with it and we’re not told what each data point is. Instead we’re just told, here is a data set. Can you find some structure in the data? Given this data set, an Unsupervised Learning algorithm might decide that the data lives in two different clusters. And so there’s one cluster
0:59
and there’s a different cluster.
1:01
And so an Unsupervised Learning algorithm may break this data into these two separate clusters.
1:06
So this is called a clustering algorithm. And this turns out to be used in many places.
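The lecture doesn’t say which clustering algorithm produced the clusters on the slide, but a minimal sketch of one common choice, k-means, shows what “find some structure in the data” can mean in code. This is purely illustrative; the point data below is made up:

```python
import random
import math

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means: alternate assigning points to the nearest centroid
    and moving each centroid to the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        # Update step: each centroid moves to the mean of its members.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return centroids, labels

# Two obvious blobs: the algorithm separates them without ever seeing a label.
cluster_a = [(0.0, 0.0), (0.1, 0.2), (-0.1, 0.1)]
cluster_b = [(5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
centroids, labels = kmeans(cluster_a + cluster_b, k=2)
```

Note that, unlike the supervised tumor example, nothing here says which cluster is “right”; the algorithm only discovers that the data falls into two groups.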
1:11
One example where clustering is used is in Google News, and if you have not seen this before, you can actually go to this URL, news.google.com, to take a look. What Google News does is, every day, it goes and looks at tens of thousands or hundreds of thousands of news stories on the web and groups them into cohesive news stories.
1:30
For example, let’s look here.
1:33
The URLs here link to different news stories about the BP Oil Well story.
1:41
So, let’s click on one of these URLs. What we get is a web page like this. Here’s a Wall Street Journal article about, you know, the BP Oil Well Spill, “BP Kills Macondo”, Macondo being the name of the well, and if you click on a different URL
2:00
from that group, then you might get a different story. Here’s the CNN story about, again, the BP Oil Spill,
2:07
and if you click on yet a third link, then you might get a different story. Here’s the UK Guardian story about the BP Oil Spill.
2:16
So what Google News has done is look at tens of thousands of news stories and automatically cluster them together, so that news stories that are all about the same topic get displayed together. It turns out that clustering algorithms and Unsupervised Learning algorithms are used in many other problems as well.
2:35
Here’s one on understanding genomics.
2:38
Here’s an example of DNA microarray data. The idea is you take a group of different individuals, and for each of them, you measure how much they do or do not have a certain gene. Technically, you measure how much certain genes are expressed. So these colors, red, green, gray and so on, show the degree to which different individuals do or do not have a specific gene.
3:02
And what you can do is then run a clustering algorithm to group individuals into different categories or into different types of people.
3:10
So this is Unsupervised Learning, because we’re not telling the algorithm in advance that these are type 1 people, those are type 2 people, those are type 3 people and so on. Instead, what we’re saying is: here’s a bunch of data. I don’t know what’s in this data. I don’t know who is of what type. I don’t even know what the different types of people are. But can you automatically find structure in the data, and automatically cluster the individuals into these types that I don’t know in advance? Because we’re not giving the algorithm the right answer for the examples in my data set, this is Unsupervised Learning.
3:44
Unsupervised Learning or clustering is used for a bunch of other applications.
3:48
It’s used to organize large computer clusters.
3:51
I had some friends looking at large data centers, that is large computer clusters and trying to figure out which machines tend to work together and if you can put those machines together, you can make your data center work more efficiently.
4:04
This second application is on social network analysis.
4:07
So given knowledge about which friends you email the most, or given your Facebook friends or your Google+ circles, can we automatically identify which are cohesive groups of friends, that is, groups of people that all know each other?
4:22
Market segmentation.
4:24
Many companies have huge databases of customer information. So, can you look at this customer data set and automatically discover market segments and automatically
4:33
group your customers into different market segments, so that you can automatically and more efficiently sell to or market to your different market segments?
4:44
Again, this is Unsupervised Learning because we have all this customer data, but we don’t know in advance what are the market segments and for the customers in our data set, you know, we don’t know in advance who is in market segment one, who is in market segment two, and so on. But we have to let the algorithm discover all this just from the data.
5:01
Finally, it turns out that Unsupervised Learning is also used, perhaps surprisingly, for astronomical data analysis, and these clustering algorithms give surprisingly interesting and useful theories of how galaxies are formed. All of these are examples of clustering, which is just one type of Unsupervised Learning. Let me tell you about another one. I’m gonna tell you about the cocktail party problem.
5:26
So, you’ve been to cocktail parties before, right? Well, you can imagine there’s a party, a room full of people, all sitting around, all talking at the same time, and there are all these overlapping voices because everyone is talking at the same time, and it’s hard to hear the person in front of you. So maybe at a cocktail party with two people,
5:45
two people talking at the same time, and it’s a somewhat small cocktail party. And we’re going to put two microphones in the room, and because these microphones are at two different distances from the speakers, each microphone records a different combination of the two speakers’ voices.
6:05
Maybe speaker one is a little louder in microphone one, and maybe speaker two is a little bit louder in microphone two, because the two microphones are at different positions relative to the two speakers, but each microphone records an overlapping combination of both speakers’ voices.
6:23
So here’s an actual recording
6:26
of two speakers recorded by a researcher. Let me play for you the first, what the first microphone sounds like. One (uno), two (dos), three (tres), four (cuatro), five (cinco), six (seis), seven (siete), eight (ocho), nine (nueve), ten (y diez).
6:41
All right, maybe not the most interesting cocktail party; it’s just two people counting from one to ten in two languages, but you know. What you just heard was the first microphone recording; here’s the second recording.
6:57
Uno (one), dos (two), tres (three), cuatro (four), cinco (five), seis (six), siete (seven), ocho (eight), nueve (nine) y diez (ten). So what we can do is take these two microphone recordings and give them to an Unsupervised Learning algorithm called the cocktail party algorithm, and tell the algorithm: find structure in this data for me. What the algorithm will do is listen to these audio recordings and say, you know, it sounds like the two audio recordings are being added together, or summed together, to produce the recordings that we heard. Moreover, what the cocktail party algorithm will do is separate out the two audio sources that were added or summed together to form the recordings, and, in fact, here’s the first output of the cocktail party algorithm.
7:39
One, two, three, four, five, six, seven, eight, nine, ten.
7:47
So, it separated out the English voice in one of the recordings.
7:52
And here’s the second of them. Uno, dos, tres, cuatro, cinco, seis, siete, ocho, nueve y diez. Not too bad. To give you
8:03
one more example, here’s another recording of a similar situation. Here’s the first microphone: One, two, three, four, five, six, seven, eight, nine, ten.
8:16
OK, so the poor guy’s gone home from the cocktail party and he’s now sitting in a room by himself talking to his radio.
8:23
Here’s the second microphone recording.
8:28
One, two, three, four, five, six, seven, eight, nine, ten.
8:33
When you give these two microphone recordings to the same algorithm, what it does, is again say, you know, it sounds like there are two audio sources, and moreover,
8:42
the algorithm says, here is the first of the audio sources I found.
8:47
One, two, three, four, five, six, seven, eight, nine, ten.
8:54
So that wasn’t perfect; it got the voice, but it also got a little bit of the music in there. Then here’s the second output of the algorithm.
9:10
Not too bad, in that second output it managed to get rid of the voice entirely. And just, you know, cleaned up the music, got rid of the counting from one to ten.
9:18
So you might look at an Unsupervised Learning algorithm like this and ask how complicated it is to implement, right? It seems like in order to, you know, build this application, to do this audio processing, you’d need to write a ton of code, or maybe link in a bunch of synthesizer Java libraries that process audio; it seems like a really complicated program to do this, separating out audio and so on.
9:42
It turns out the algorithm to do what you just heard can be written in one line of code, shown right here.
9:50
It took researchers a long time to come up with this line of code. I’m not saying this is an easy problem. But it turns out that when you use the right programming environment, many learning algorithms become really short programs.
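The one-liner on the slide is not reproduced in this transcript, but the technique behind the cocktail party demo is independent component analysis. The sketch below is a minimal, pure-Python illustration of the core idea on synthetic signals (made up here, not the course’s audio): whiten two mixtures, then rotate the whitened data to the angle that makes both channels maximally non-Gaussian. Real ICA implementations are considerably more sophisticated.

```python
import math
import random

# Two independent, non-Gaussian "voices": uniform noise and a square wave.
n = 4000
rng = random.Random(1)
s1 = [rng.uniform(-1.0, 1.0) for _ in range(n)]
s2 = [1.0 if (i // 25) % 2 == 0 else -1.0 for i in range(n)]

# Each "microphone" hears a different fixed mixture of the two sources.
x1 = [0.7 * a + 0.3 * b for a, b in zip(s1, s2)]
x2 = [0.4 * a + 0.6 * b for a, b in zip(s1, s2)]

def center(v):
    m = sum(v) / len(v)
    return [t - m for t in v]

x1, x2 = center(x1), center(x2)

# Whiten: diagonalize the 2x2 sample covariance analytically and rescale,
# so the two channels become uncorrelated with unit variance.
c11 = sum(a * a for a in x1) / n
c22 = sum(b * b for b in x2) / n
c12 = sum(a * b for a, b in zip(x1, x2)) / n
tr, det = c11 + c22, c11 * c22 - c12 * c12
l1 = (tr + math.sqrt(tr * tr - 4 * det)) / 2
l2 = (tr - math.sqrt(tr * tr - 4 * det)) / 2
h = math.hypot(c12, l1 - c11)      # c12 != 0 for this mixing matrix
e1 = (c12 / h, (l1 - c11) / h)     # unit eigenvector for l1
e2 = (-e1[1], e1[0])               # orthogonal eigenvector for l2
z1 = [(e1[0] * a + e1[1] * b) / math.sqrt(l1) for a, b in zip(x1, x2)]
z2 = [(e2[0] * a + e2[1] * b) / math.sqrt(l2) for a, b in zip(x1, x2)]

def kurt(v):
    # Excess kurtosis; channels are whitened, so unit variance is assumed.
    return sum(t ** 4 for t in v) / len(v) - 3.0

def rotate(th):
    u1 = [math.cos(th) * a + math.sin(th) * b for a, b in zip(z1, z2)]
    u2 = [math.cos(th) * b - math.sin(th) * a for a, b in zip(z1, z2)]
    return u1, u2

# After whitening, the sources differ from the data only by a rotation
# (up to permutation and sign); pick the angle that makes both channels
# maximally non-Gaussian.
def contrast(th):
    u1, u2 = rotate(th)
    return kurt(u1) ** 2 + kurt(u2) ** 2

best = max((math.pi * k / 360 for k in range(180)), key=contrast)
u1, u2 = rotate(best)

def match(u, s):
    # Absolute correlation, so sign flips don't matter.
    sc = center(s)
    num = abs(sum(a * b for a, b in zip(u, sc)))
    return num / math.sqrt(sum(a * a for a in u) * sum(b * b for b in sc))
```

After the rotation, each recovered channel lines up with one of the original sources, which is exactly the “separate out the two audio sources” behavior described above, here on toy signals instead of audio.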
10:03
So this is also why in this class we’re going to use the Octave programming environment.
10:08
Octave is free, open-source software, and using a tool like Octave or Matlab, many learning algorithms become just a few lines of code to implement. Later in this class, I’ll teach you a little bit about how to use Octave, and you’ll be implementing some of these algorithms in Octave. Or if you have Matlab, you can use that too.
10:27
It turns out in Silicon Valley, for a lot of machine learning algorithms, what we do is first prototype our software in Octave, because Octave makes it incredibly fast to implement these learning algorithms.
10:38
Here, each of these functions, like for example the SVD function, which stands for singular value decomposition, is a linear algebra routine that is just built into Octave.
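For readers following along in Python rather than Octave, the same routine exists in NumPy (assuming NumPy is installed); a quick sketch of what SVD computes:

```python
import numpy as np

# The singular value decomposition factors A into U * diag(s) * Vt,
# with orthonormal columns in U, orthonormal rows in Vt, and
# non-negative singular values s in descending order.
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)
reconstructed = U @ np.diag(s) @ Vt   # recovers A
```

As the lecture says, the point is that a routine like this comes built into the environment, rather than being something you implement yourself.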
10:49
If you were trying to do this in C++ or Java, this would be many, many lines of code linking complex C++ or Java libraries. So, you can implement this stuff in C++ or Java or Python; it’s just much more complicated to do so in those languages.
11:03
What I’ve seen after having taught machine learning for almost a decade now, is that, you learn much faster if you use Octave as your programming environment, and if you use Octave as your learning tool and as your prototyping tool, it’ll let you learn and prototype learning algorithms much more quickly.
11:22
And in fact, what many people in the large Silicon Valley companies do is use an environment like Octave to first prototype the learning algorithm, and only after you’ve gotten it to work do you migrate it to C++ or Java or whatever. It turns out that by doing things this way, you can often get your algorithm to work much faster than if you were starting out in C++.
11:44
So, I know that as an instructor, I get to say “trust me on this one” only a finite number of times, but for those of you who’ve never used these Octave-type programming environments before, I am going to ask you to trust me on this one: your time, your development time, is one of the most valuable resources.
12:04
And having seen lots of people do this, I think you, as a machine learning researcher or machine learning developer, will be much more productive if you learn to start prototyping in Octave rather than in some other language.
12:17
Finally, to wrap up this video, I have one quick review question for you.
12:24
We talked about Unsupervised Learning, which is a learning setting where you give the algorithm a ton of data and just ask it to find structure in the data for us. Of the following four examples, which ones do you think would be Unsupervised Learning problems, as opposed to Supervised Learning problems? For each of the four check boxes on the left, check the ones for which you think an Unsupervised Learning algorithm would be appropriate, and then click the button on the lower right to check your answer. So when the video pauses, please answer the question on the slide.
13:01
So, hopefully, you’ve remembered the spam folder problem. If you have labeled data, you know, with spam and non-spam e-mail, we’d treat this as a Supervised Learning problem.
13:11
The news story example, that’s exactly the Google News example that we saw in this video, we saw how you can use a clustering algorithm to cluster these articles together so that’s Unsupervised Learning.
13:23
The market segmentation example I talked about a little earlier: you can treat that as an Unsupervised Learning problem, because I am just gonna give my algorithm data and ask it to discover market segments automatically.
13:35
And the final example, diabetes, well, that’s actually just like our breast cancer example from the last video. Only instead of, you know, benign or malignant tumors, we instead have diabetes or not, and so we will solve that as a Supervised Learning problem, just like we did for the breast tumor data.
13:58
So, that’s it for Unsupervised Learning, and in the next video, we’ll delve more into specific learning algorithms and start to talk about just how these algorithms work and how you can go about implementing them.
