http://blog.csdn.net/pipisorry/article/details/48443533

Mining Massive Datasets (MMDS) - Jure Leskovec course study notes: MapReduce

{A programming system for easily implementing parallel algorithms on commodity clusters.}

Distributed File Systems (DFS)

Why do we need Map-Reduce in the first place?

Classical Data Mining: look at the disk in addition to the CPU and memory. The data is on disk, so you can only bring a portion of it into memory at a time, process it in batches, and write the results back to disk.

But for datasets as huge as Google's, reading the data piece by piece like this still takes an unacceptably long time.

Machine Learning, Statistics: the obvious idea is to split the data into chunks. With multiple disks and CPUs, you stripe the data across the disks and then read and process it in parallel on multiple CPUs.

This is the fundamental idea of cluster computing.

Cluster computing architecture

Note: a higher-bandwidth switch can carry two to ten gigabits per second between racks. A rack holds 16 to 64 nodes, and you rack up multiple racks to get a data center. This is the standard architecture that has emerged over the last few years.

Cluster computing challenges

Challenge 1: node failures

Note: persistence means that once you store the data, you are guaranteed to be able to read it again.

So we need an infrastructure that can hide these kinds of node failures and let the computation run to completion even if nodes fail.

Challenge 2: network bandwidth bottleneck

Note: a complex computation may need to move a lot of data, and that can slow the computation down. So you need a framework that does not move data around so much while it is doing computation.

Challenge 3: distributed programming is hard

How Map-Reduce (with a DFS) addresses these problems

Map-Reduce addresses all three challenges above.

Redundant storage infrastructure


Note:

1. A distributed file system stores data across a cluster, and stores each piece of data multiple times. This addresses Challenge 1.

2. The data is rarely updated in place; it is most often updated through appends.

Note:

1. A file is split into multiple chunks, which are stored redundantly on different chunk servers (typically 3 replicas each).

2. The machines themselves are called chunk servers in this context.

3. Replicas of a chunk are never placed on the same chunk server.

Chunk servers & compute servers

Bring computation to data: chunk servers also act as compute servers. Whenever a computation has to access some data, it is scheduled on the chunk server that actually holds that data. This way you avoid moving data to where the computation needs to run, which addresses Challenge 2.

The three components of a distributed file system

Note:

1. Keep at least one replica in an entirely different rack if possible: the most common failure scenario is that a single node fails, but it is also possible for the switch on a rack to fail, and when that happens the entire rack becomes inaccessible.

2. When a client, or an algorithm that needs the data, tries to access a file, it goes through the client library. ... Once that is done, the client is connected directly to the chunk servers, where it can access the data without going through the master node. So data access actually happens in a peer-to-peer fashion without involving the master node.


The MapReduce Computational Model

Warm-up task

Task: word count

Note: you can just build a hash table indexed by word. The first time you see a word, you add an entry to the hash table for that word and set its count to 1; every subsequent time you see the word, you increment its count by one. You make a single sweep through the file.
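A minimal single-machine sketch of this hash-table approach, assuming whitespace-separated words and an illustrative file name doc.txt:

```python
from collections import defaultdict

def word_count(path):
    """Single sweep through the file, counting words in a hash table (dict)."""
    counts = defaultdict(int)
    with open(path) as f:
        for line in f:
            for word in line.split():
                counts[word] += 1   # first occurrence starts at 0, then +1 each time
    return counts

# counts = word_count("doc.txt")
```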

Note:

1. Well, you could try to write some kind of complicated code, but I like to use Unix file-system commands for this.

2. Here the command words is a little script that goes through doc.txt and outputs the words in it, one per line. Once you sort that output, all occurrences of the same word come together, and uniq -c then takes each run of occurrences of the same word and counts them. So the output of words doc.txt | sort | uniq -c is a list of (word, count) pairs.

3. The custom Unix command words(doc_name) goes through the file doc_name and outputs one word per line.

The Unix command uniq: checks for and reports or removes repeated adjacent lines in a text file.
Syntax: uniq [-cdu][-f<fields>][-s<chars>][-w<chars>][--help][--version][input file][output file]
Options: -c or --count  prefix each output line with the number of times it occurred. ...

4. This pipeline works in essentially the same way as MapReduce.

MapReduce overview

The two main steps of MapReduce

Note:

1. The script we wrote called words, which outputs one word per line, is what MapReduce calls a Map function. The Map function scans the input file one record at a time.

2. From each record it pulls out something you care about; in this case, words. The things you output are called keys.

3. Sorting groups all occurrences of the same key together.

4. The outline of this computation stays the same for any MapReduce computation; what changes is the Map function and the Reduce function, which you write to fit the problem you are actually solving.

MapReduce: The Map step

Apply the Map function to each input record, and create intermediate key-value pairs:

Note:

1. The intermediate key-value pairs need not have the same key as the input key-value pairs, and there can be multiple of them. The values, although they look the same here (both written v), can be different as well.
2. The Map function can produce multiple intermediate key-value pairs: there can be zero, one, or several intermediate key-value pairs for each input key-value pair.

MapReduce: The Reduce step

Group intermediate key-value pairs by key:

Note: this is done by sorting by key and then grouping the values together. These values can all be different, even though the same symbol v is used for them here.
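A small sketch of this group-by-key step, assuming the intermediate pairs are plain (key, value) tuples held in memory (in the real system this grouping happens across machines during the shuffle):

```python
from itertools import groupby
from operator import itemgetter

def group_by_key(pairs):
    # Sort by key so equal keys become adjacent, then collect the values per key.
    pairs = sorted(pairs, key=itemgetter(0))
    return {k: [v for _, v in grp] for k, grp in groupby(pairs, key=itemgetter(0))}

# group_by_key([("the", 1), ("crew", 1), ("the", 1)])
# -> {"crew": [1], "the": [1, 1]}
```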

MapReduce process: summary

MapReduce example 1: word count

Note:

1. This whole example does not run on a single node. The data is actually distributed across multiple input nodes (here, 4). The Map tasks run on each of these four nodes, so the outputs of those Map tasks are produced on four different nodes.

2. The system then copies the Map outputs onto a single node; once the data has flowed there, it can be sorted by key and the final Reduce step can run. In practice you use multiple Reduce nodes as well: you can tell the system how many Reduce nodes to use, and it makes sure that for any given key, all instances, regardless of which Map node they started from, end up at the same Reduce node. This is done with a hash function: the system hashes each Map output key to determine the single Reduce node to ship that tuple to, which ensures that all tuples with the same key end up at the same Reduce node (a minimal sketch of this key-to-node assignment appears after this list).

3. The final result being spread across nodes is perfectly fine, because you are dealing with a distributed file system, which knows that your file is spread across three nodes of the system. You can still access it as a single file from your client, and the system knows to fetch the data from those three independent nodes.
4. All of this MapReduce magic is implemented using, as far as possible, only sequential scans of disk rather than random accesses: how the Map function is applied to the input file record by record, how the sorting is done, and so on, can all be implemented with only sequential reads. Sequential reads are much more efficient than random disk accesses, and the whole MapReduce system is built around doing only sequential reads of files and never random accesses.
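A minimal sketch of the key-to-reducer assignment described in note 2, using Python's built-in hash as a stand-in for the system's hash function and R as the assumed number of Reduce nodes:

```python
def reduce_node_for(key, R):
    # Every occurrence of the same key hashes to the same Reduce node,
    # no matter which Map node emitted the pair.
    return hash(key) % R
```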

Word count implementation pseudocode
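A hedged Python rendering of the word-count Map and Reduce functions described above (Map emits (word, 1) for every word; Reduce sums the counts for one word):

```python
def map_fn(key, value):
    # key: document name; value: the text of the document
    for word in value.split():
        yield (word, 1)

def reduce_fn(key, values):
    # key: a word; values: an iterator over all counts emitted for that word
    yield (key, sum(values))
```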

MapReduce example 2: total bytes per host

Note: the problem is that for each host (not for each URL) we want to find the total number of bytes; there can be many URLs with the same host name.
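A hedged sketch of Map and Reduce for this task, assuming each input record is a (URL, size-in-bytes) pair; extracting the host with urlparse is only one possible choice:

```python
from urllib.parse import urlparse

def map_fn(url, num_bytes):
    host = urlparse(url).netloc      # key by host, not by individual URL
    yield (host, num_bytes)

def reduce_fn(host, sizes):
    yield (host, sum(sizes))         # total bytes across all URLs of this host
```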


Scheduling and Data Flow
{go under the hood of a Map-Reduce system and understand how it actually works}

MapReduce diagram

Note: walkthrough of the MapReduce process diagram:

In the Map step, you take a big document that is divided into chunks and run a Map process on each chunk. The Map process goes through each record in that chunk and outputs intermediate key-value pairs for each record.
In the group-by step, you group by key: you bring together all the values with the same key.
And in the Reduce step, you apply a reducer to each group of intermediate key-value pairs and create the final output.

How MapReduce actually works on a distributed system

{The previous schematic showed how MapReduce works on a centralized system.}

Note:

1. In a distributed system you have multiple nodes, and Map and Reduce tasks run in parallel on multiple nodes, producing intermediate key-value pairs on each of those nodes. Once the intermediate key-value pairs are produced, the underlying Map-Reduce system applies a partitioning function, which is just a hash function, to each intermediate key; the hash function tells the Map-Reduce system which Reduce node to send that key-value pair to.
2. Once a Reduce task has received its input from all the Map tasks, its first job is to sort that input and group it together by key.

3. Once that is done, the Reduce task applies the Reduce function, which is provided by the programmer, to each such group and creates the final output.

The MapReduce environment

Note:

1. The programmer provides two functions, Map and Reduce, and specifies the input file.

2. Scheduling: figuring out where the Map tasks run, where the Reduce tasks run, and so on.

MapReduce Scheduling and Data Flow

Note:

1. "Close" means that the input data is a file divided into chunks, with replicas of the chunks on different chunk servers. The Map-Reduce system tries to schedule each Map task on a chunk server that holds a copy of the corresponding chunk, so there is no actual data copy associated with the Map step of the Map-Reduce program.

2. There is some overhead to storing data in the DFS: multiple replicas of the data need to be made, and network shuffling is involved in storing new data in the distributed file system. So, whenever possible, intermediate results are stored on the local file systems of the Map and Reduce workers.

Coordination: the Master node

Note:

1. When a Map task completes, it sends the master the locations and sizes of the R intermediate files it created. Why R intermediate files? One intermediate file is created for each reducer, because the output of the mapper has to be shipped to each of the reducers depending on the key value. So whenever a Map task completes, it stores its R intermediate files on its local file system and lets the master know the names of those files. The master pushes this information to the reducers.

2. Once the reducers know that all the Map tasks have completed, they copy the intermediate files from each of the Map tasks, and then they can proceed with their work.

Handling worker node failures

Map worker failure

Note:

1. If a Map worker fails, then all the Map tasks that were scheduled on that worker may have failed and have to be redone.

2. If a Map worker fails, the node is gone, and all the intermediate output created by the Map tasks that ran on that worker is lost.

Reduce worker failure

Note: only in-progress tasks need to be reset to idle, because the output of a Reduce worker is final output: it is written to the distributed file system, not to the Reduce worker's local file system, so it is not lost even if the Reduce worker fails.

Master failure

Note: the client can then do something like restarting the MapReduce job. The master is typically not replicated in the Map-Reduce system.

Rules of thumb for the number of Map and Reduce tasks

Note:

1. If there is one Map task per node in the cluster and a node fails during processing, that Map task needs to be rescheduled on another node when one becomes available. Since all the other nodes are busy processing, one of them has to finish before this Map task can be scheduled there, and so the entire computation is slowed down.
2. So, instead of one Map task per node, there are many small Map tasks on each node; if a node fails, its Map tasks can be spread across all the available nodes, and the whole job completes much faster.

3. R is usually even smaller than the total number of nodes in the system. The output file is spread across R nodes, where R is the number of reducers, and it is usually more convenient to have the output spread across a small number of nodes rather than a large one.


Combiners and Partition Functions

{Having looked at how the Map-Reduce model is actually implemented, we now look at two refinements: combiners and partition functions.}

Refinement: Combiners

Note:

1. That is, pairs with the same key are aggregated already in the Map phase; the aggregation function (the combiner) is similar to the Reduce function. This reduces the network cost of shipping the pairs to the reducers.

2. The programmer provides a combine function. Instead of a whole bunch of tuples with key k being shipped off to a reducer, a single tuple (k, v2) is shipped. Usually the combiner is the same function as the reducer.

Example with a combiner: word count
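A hedged sketch of word count with a combiner; here the combiner is simply the same summation as the reducer, applied locally on each Map node before anything is shipped over the network:

```python
def map_fn(key, value):
    for word in value.split():
        yield (word, 1)

def combine_fn(word, counts):
    # Runs on the Map node: collapse this node's (word, 1) pairs into one (word, n) pair.
    yield (word, sum(counts))

def reduce_fn(word, counts):
    # Runs on the Reduce node: sum the pre-aggregated counts coming from all Map nodes.
    yield (word, sum(counts))
```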

Properties the Combiner/Reduce function must satisfy

Note:

1. The Reduce function must be commutative and associative, like sum; because sum satisfies both of these properties, it can be used as a combiner as well as a reducer.

2. The average does not satisfy these properties, so it cannot be used directly as a combiner.

But the average can still be computed this way: the combiner emits (sum, count) pairs, and the Reduce function combines them and computes the average at the end.

It is sometimes possible to take a function that is not commutative or associative and break it down into functions that are, like sum and count, and still use the combiner trick to save some network traffic.
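A hedged sketch of that decomposition for the average, assuming the Map output values are numbers; the combiner and the intermediate stages pass along (sum, count) pairs, and only the final Reduce step divides:

```python
def map_fn(key, value):
    yield (key, (value, 1))                  # (partial sum, partial count)

def combine_fn(key, pairs):
    s, c = 0, 0
    for partial_sum, partial_count in pairs:
        s += partial_sum
        c += partial_count
    yield (key, (s, c))                      # sums and counts still add up correctly

def reduce_fn(key, pairs):
    s, c = 0, 0
    for partial_sum, partial_count in pairs:
        s += partial_sum
        c += partial_count
    yield (key, s / c)                       # the average is computed only here
```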

An example where a combiner cannot be used: the median

It turns out, and can be proved mathematically, that there is no way to split the median into a set of purely commutative and associative computations. So you cannot use the combiner trick; you just have to ship all the values to the reducer and compute the median there.

Refinement: the partition function
Recall the MapReduce infrastructure: it applies a hash function to each key in the intermediate key-value set, and this hash function decides which Reduce node that key gets shipped to.

The user can supply a custom hash (partition) function.

Note: this custom partition function tries to send all URLs under the same host to the same Reduce node, instead of having each distinct URL hash to its own Reduce node.
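A hedged sketch of such a custom partition function, again using urlparse to pull out the host (an illustrative choice, not the course's exact code):

```python
from urllib.parse import urlparse

def partition(url_key, R):
    # All URLs of the same host hash to the same Reduce node.
    return hash(urlparse(url_key).netloc) % R
```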

Implementations of MapReduce

Note:

1. Google first implemented the Google File System, a distributed file system that provides stable storage on top of a cluster, and then implemented the MapReduce framework on top of the Google File System.

2. Hadoop is an open-source project that's a reimplementation of Google's MapReduce.[Download]

3. Many use cases of Hadoop involve SQL-like manipulations of data, so there are open-source projects, Hive and Pig, that provide SQL-like abstractions on top of the Hadoop MapReduce layer, so that you do not have to rewrite those operations as Map and Reduce functions.

MapReduce in the cloud

Pointers and Further Reading

Jeffrey Dean and Sanjay Ghemawat: MapReduce: Simplified Data Processing on Large Clusters
http://labs.google.com/papers/mapreduce.html
Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung: The Google File System
http://labs.google.com/papers/gfs.html

Hadoop Wiki Introduction  http://wiki.apache.org/lucene-hadoop/
Getting Started  http://wiki.apache.org/lucene-hadoop/GettingStartedWithHadoop
Map/Reduce Overview  http://wiki.apache.org/lucene-hadoop/HadoopMapReduce  http://wiki.apache.org/lucene-hadoop/HadoopMapRedClasses
Eclipse Environment  http://wiki.apache.org/lucene-hadoop/EclipseEnvironment
Javadoc  http://lucene.apache.org/hadoop/docs/api/

Releases from Apache download mirrors  http://www.apache.org/dyn/closer.cgi/lucene/hadoop/
Nightly builds of source  http://people.apache.org/dist/lucene/hadoop/nightly/
Source code from subversion  http://lucene.apache.org/hadoop/version_control.html

Exercise feedback — Week 1 (Basic)

from:http://blog.csdn.net/pipisorry/article/details/48443533

ref: Mining Massive Datasets

Implementing MapReduce in Python
