Table of Contents

    • Overview
      • Tungsten
    • Broadcast
    • topN
    • Accumulator
    • spark blockmanager
    • API Tuning
    • Parameter Tuning
    • Resource Tuning
    • Miscellaneous
    • Code in Practice
    • Reduce the Size of Data Structures
    • Choose a Data Format
    • Tune the Number of Partitions
    • Resource Tuning
    • Code in Practice
    • Building Applications
  • Common Operators
    • Transformations (return a new RDD)
    • Actions (launch a job to return a value to the user program)
    • shuffle
  • shuffle
    • UnsafeShuffleWriter
    • SortShuffleWriter
    • BypassMergeSortShuffleWriter
  • Global IDs

Overview

Apache Spark is a unified analytics engine for large-scale data processing. It provides high-level APIs in Java, Scala, Python, and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools, including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for incremental computation and stream processing.

Tungsten

Memory Management and Binary Processing: manages memory off-heap to reduce object overhead and eliminate JVM GC pauses.
Cache-aware computation: optimizes data layout to improve CPU L1/L2/L3 cache hit rates.
Code generation: optimizes Spark SQL's code generation to improve CPU utilization.
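
A minimal spark-shell style sketch of enabling Tungsten's off-heap memory; the property names are standard Spark settings, while the application name and size here are illustrative assumptions only:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("OffHeapExample")
  // let Tungsten allocate execution/storage memory off-heap, outside the JVM heap and GC
  .set("spark.memory.offHeap.enabled", "true")
  // an explicit off-heap size must be given when off-heap is enabled (2g is illustrative)
  .set("spark.memory.offHeap.size", "2g")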

Broadcast

Each Executor on each node holds one copy of the broadcast value.
A task first asks its local BlockManager; if the block is missing, the BlockManager fetches it from the Driver (or another node) and saves it locally.
Other nodes can then pull the blocks from those peers instead of from the Driver.
Implementations: HttpBroadcast and TorrentBroadcast.

BitTorrent
Spark distributes broadcast data to the executors with a BitTorrent-style protocol, implemented in the TorrentBroadcast class.
BitTorrent is an efficient distribution and peer-to-peer sharing protocol for large files (such as a movie or TV show) in which every downloader also acts as an upload node. Unlike an ordinary download server that serves every requesting client itself, the seeder sends pieces of the file to some users, who forward the pieces they hold to other users, and so on, until every user has downloaded the complete file.

Counting the number of records a task processes:
The variable is defined on the Driver; the accumulation happens on the Executors, and an Executor cannot read its value.
While accumulation is in progress the Executor side cannot read the value; only the Driver can read it.

A broadcast variable keeps one copy per Executor, shared by all tasks of that Executor, which greatly reduces the number of copies of the variable.

A task first tries to get the variable from the BlockManager of its own Executor. If it is not there, the BlockManager pulls a copy remotely from the Driver or from the BlockManager of another node and then manages it locally; from that point on, all tasks of that Executor read the variable directly from the local BlockManager.
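
A small self-contained sketch (the lookup map and data are hypothetical) of the mechanics described above: the broadcast value is fetched once per Executor and shared by all of its tasks:

import org.apache.spark.{SparkConf, SparkContext}

object BroadcastDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("BroadcastDemo"))
    // the lookup map lives once per Executor instead of once per task
    val cityMap = Map(1 -> "Beijing", 2 -> "Shanghai")
    val bc = sc.broadcast(cityMap)
    val named = sc.parallelize(Seq((1, "a"), (2, "b")))
      .map { case (cityId, user) => (bc.value.getOrElse(cityId, "unknown"), user) }
    named.collect().foreach(println)
    sc.stop()
  }
}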

Kryo requires you to register the classes to be serialized before use, which is less convenient. However, since Spark 2.0.0, shuffling RDDs of simple types, arrays of simple types, and strings uses Kryo serialization by default.

// Java: custom Kryo registrator
public class MyKryoRegistrator implements KryoRegistrator {
    @Override
    public void registerClasses(Kryo kryo) {
        kryo.register(StartupReportLogs.class);
    }
}

// Scala: create the SparkConf
val conf = new SparkConf().setMaster(…).setAppName(…)
// use the Kryo serialization library; comment this line out to fall back to Java serialization
conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
// register the custom classes with Kryo; comment this line out when using Java serialization
conf.set("spark.kryo.registrator", "alex.com.MyKryoRegistrator")

topN

Aggregate the data by key (groupByKey)
Convert the values to an array and sort with sortBy or sortWith (a minimal sketch of this approach follows the list)
Drawback: if a single key has too much data, this will OOM
Collect all the keys
Iterate over the keys, taking one key at a time and sorting its values with Spark's sort operators
Use a custom partitioner that partitions by key, so that different keys go to different partitions
Sort each partition with Spark's sort operators
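
A runnable sketch of the first approach (groupByKey plus an in-memory sort per key); the data and the value of n are made up for illustration:

import org.apache.spark.{SparkConf, SparkContext}

object GroupTopN {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("GroupTopN"))
    val n = 3
    val data = sc.parallelize(Seq(("a", 5), ("a", 9), ("a", 1), ("a", 7), ("b", 2), ("b", 8)))
    // works only while no single key's value list is too large to fit in memory (otherwise OOM)
    val topN = data.groupByKey()
      .mapValues(values => values.toList.sortWith(_ > _).take(n))
    topN.collect().foreach(println)
    sc.stop()
  }
}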

Accumulator

Define a shared variable on the Driver and accumulate data into it.
Typically used for counting.

add() happens on the Executor side; .value is read on the Driver side.
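
A short sketch (with hypothetical log lines) of the usage pattern just described: tasks call add() on the Executors and only the Driver reads value:

import org.apache.spark.{SparkConf, SparkContext}

object AccumulatorDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("AccumulatorDemo"))
    // defined on the Driver
    val errorCount = sc.longAccumulator("errorCount")
    sc.parallelize(Seq("ok", "error", "ok", "error")).foreach { line =>
      if (line == "error") errorCount.add(1) // add() runs on the Executor side
    }
    println(errorCount.value)                // value is read on the Driver side
    sc.stop()
  }
}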

spark blockmanager

Within a Spark application, a BlockManager runs on every node, including the Driver and all Executors, and acts as a local cache. It exposes a uniform get/put interface for local and remote data blocks, and stores the blocks in different backends: memory, disk, and off-heap.

API Tuning

combineByKey: use it to combine elements when the result type differs from the type of the input values.
foldByKey: merges the values for each key using an associative function and a neutral "zero value".
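
A small sketch contrasting the two operators (the pair data is made up): foldByKey keeps the value type, while combineByKey builds a result of a different type, here (sum, count):

import org.apache.spark.{SparkConf, SparkContext}

object FoldCombineDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("FoldCombineDemo"))
    val pairs = sc.parallelize(Seq(("a", 1), ("a", 3), ("b", 5)))
    // foldByKey: merge values per key with an associative function and a zero value
    val sums = pairs.foldByKey(0)(_ + _)
    // combineByKey: the combiner type (sum, count) differs from the input value type Int
    val sumAndCount = pairs.combineByKey(
      (v: Int) => (v, 1),
      (acc: (Int, Int), v: Int) => (acc._1 + v, acc._2 + 1),
      (a: (Int, Int), b: (Int, Int)) => (a._1 + b._1, a._2 + b._2))
    sums.collect().foreach(println)
    sumAndCount.collect().foreach(println)
    sc.stop()
  }
}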
SELECT CAST (VID AS String),
VIN,
TIME,
2201 AS speed,
2614 AS charge,
2615 AS soc,
2613 AS totalVoltage,
2603 AS secondaryCellMaxVoltage,
2606 AS secondaryCellMinVoltage,
2304 AS engineTemp,
2609 AS accuisitionPointMaxTemp,
2612 AS accuisitionPointMinTemp,
2202 AS mileage,
2502 AS longitude,
2503 AS latitude,
CAST(2203 AS INT) AS peaking
FROM $inputTableName
WHERE vid IS NOT NULL AND $whereCondition
cache() – MEMORY_ONLY = persist() = persist(MEMORY_ONLY)
persist() --MEMORY_ONLY | MEMORY_ONLY_SER | MEMORY_AND_DISK | MEMORY_AND_DISK_SER
Checkpoint

Replace joins with broadcast variable + filter (see the sketch below)
Variants: broadcast variable + filter, broadcast variable + map, broadcast variable + flatMap
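
A runnable sketch (hypothetical data) of replacing an inner join with a broadcast of the small side plus filter/map, which avoids shuffling the large RDD:

import org.apache.spark.{SparkConf, SparkContext}

object BroadcastFilterJoin {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("BroadcastFilterJoin"))
    val largeRdd = sc.parallelize(Seq((1, "x"), (2, "y"), (3, "z")))
    val smallRdd = sc.parallelize(Seq((1, "A"), (3, "C")))
    // collect the small side to the Driver and broadcast it as a lookup map
    val bcSmall = sc.broadcast(smallRdd.collectAsMap())
    // broadcast + filter/map behaves like an inner join but without shuffling largeRdd
    val joinedLike = largeRdd
      .filter { case (k, _) => bcSmall.value.contains(k) }
      .map { case (k, v) => (k, (v, bcSmall.value(k))) }
    joinedLike.collect().foreach(println)
    sc.stop()
  }
}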

reduceByKey: this operator has a map-side combiner, so in some scenarios it can replace groupByKey.
combineByKey
aggregateByKey: lets you customize the map-side and reduce-side logic.

Use reduceByKey instead of groupByKey
Use mapPartitions instead of map
Use foreachPartition instead of foreach
After filtering away a large portion of an RDD with filter, use coalesce to reduce the number of partitions
Use repartitionAndSortWithinPartitions instead of repartition followed by a sort operation
Use repartition and coalesce to manage partitions

Prefer strings over objects, primitive types (such as Int and Long) over strings, and arrays over collection types; this keeps memory usage as low as possible, lowers GC frequency, and improves performance.

repartition / coalesce
reduceByKey / groupByKey / join
Partitioner
When reading data in Spark Streaming with the Receiver mode, parallelism is determined by the batch interval and the block interval, whose defaults are 5 seconds and 200 ms respectively.
In Direct mode, the number of partitions of the Kafka topic is the number of RDD partitions, i.e. the parallelism.
MEMORY_AND_DISK_SER
cache / persist / checkpoint

Parameter Tuning

sparkConf
// Java
.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer").registerKryoClasses(new Class[]{SpeedSortKey.class})
.set("spark.kryoserializer.buffer.max", "128m")
// Scala
.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer").registerKryoClasses(Array[Class[_]](classOf[Geometry], classOf[Coordinate], classOf[Point]))

fastutil

spark.locality.wait (the three parameters below default to the value of spark.locality.wait)
• spark.locality.wait.process
• spark.locality.wait.node
• spark.locality.wait.rack

When inspecting the data we found that most request records cannot be joined to error, but more than 98% of them can be joined to response; so do the left join first and the full join afterwards. That way request.request_id, being the key of the left table, is never null and stays unique, and, most importantly, the subsequent full join does not inflate the row count.

Without changing the original SQL order: if the left join key is null, replace it with a random value (a sketch of this follows below).
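
A DataFrame sketch of the null-key trick just described; the request and error DataFrames, their columns, and the salt range of 100 are hypothetical:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object NullKeySalting {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("NullKeySalting").getOrCreate()
    import spark.implicits._
    val request = Seq((Some("r1"), "a"), (None, "b"), (None, "c")).toDF("request_id", "payload")
    val error   = Seq(("r1", "E01")).toDF("request_id", "error_code")
    // null keys never match anyway; salting only changes which tasks those rows land in
    val salted = request.withColumn("join_key",
      when($"request_id".isNull, concat(lit("null_"), (rand() * 100).cast("int").cast("string")))
        .otherwise($"request_id"))
    val joined = salted.join(error, salted("join_key") === error("request_id"), "left")
    joined.show()
    spark.stop()
  }
}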

SortShuffleManager

spark.shuffle.file.buffer (default 32k)
This is the size of the shuffle write buffer. If memory is relatively plentiful, increase it (for example to 64k) so that fewer spills and disk I/O operations happen; a reasonable setting can improve performance by roughly 1%-5%.

spark.reducer.maxSizeInFlight (default 48m)
Each task of a stage pulls all records with the same keys from the outputs of the previous stage on every node over the network, and then aggregates or joins them by key. This parameter controls how much data a reduce task fetches at a time; increasing it appropriately can improve performance by roughly 1%-5%.

spark.shuffle.memoryFraction (default 20%): the fraction of memory used for shuffle aggregation
Shuffle operators write data into an in-memory structure; when the structure reaches a threshold it spills to temporary files. This parameter sets that threshold as a percentage, not the absolute size of the structure. If memory is plentiful and persistence is rarely used, raising this fraction reduces frequent disk I/O from spilling and can improve performance by about 10%. (This setting belongs to the legacy memory manager; unified memory management supersedes it since Spark 1.6.)

spark.shuffle.io.maxRetries: number of retries when fetching shuffle data (default 3)
A task fetches the computation results of the previous stage's tasks; the fetch can fail for reasons such as network problems. By default it is retried three times, and if all retries fail the job fails. Depending on the workload, consider increasing this value (up to around 60) so that fetch failures caused by long JVM GC pauses or an unstable network do not kill the job; this also improves the overall stability of the Spark job.

spark.shuffle.io.retryWait (default 5s): interval between fetch retries
This is the wait time between two fetch attempts. Increasing it (for example to 60s) makes shuffle operations more tolerant of transient failures.

spark.shuffle.consolidateFiles=true (consolidate shuffle output files; applies to the hash shuffle)
spark.shuffle.sort.bypassMergeThreshold (default 200): SortShuffle switches to the bypass mechanism when the number of reduce-side partitions does not exceed this value
spark.core.connection.ack.wait.timeout=300
spark.yarn.executor.memoryOverhead=2048
spark.executor.memoryOverhead
spark.dynamicAllocation.enabled=true enables dynamic resource allocation
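
A consolidated spark-shell style sketch of the shuffle-related settings discussed above; the property names are standard Spark configuration keys, but the concrete values are only the illustrative ones from the text, not universal recommendations:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.shuffle.file.buffer", "64k")                // shuffle write buffer, default 32k
  .set("spark.reducer.maxSizeInFlight", "96m")            // reduce-side fetch size, default 48m
  .set("spark.shuffle.io.maxRetries", "10")               // fetch retries, default 3
  .set("spark.shuffle.io.retryWait", "60s")               // wait between retries, default 5s
  .set("spark.shuffle.sort.bypassMergeThreshold", "400")  // bypass-merge threshold, default 200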

Resource Tuning

executor-cores: num-executors * executor-cores should stay within roughly 1/3 to 1/2 of the queue's total CPU cores.
executor-memory: 4G-8G per Executor is usually appropriate; the total should not exceed roughly 1/3 to 1/2 of the resource queue's maximum memory.
spark.driver.memory: 1G

spark.default.parallelism: 500-1000
i.e. 2-3 times num-executors * executor-cores

spark.sql.shuffle.partitions = 200 (default)
Number of blocks per batch in Receiver mode: N = batchInterval / blockInterval

Allocate more resources to the Spark job: within a certain range, the performance improvement is proportional to the resources allocated.

For example, if the resource queue has 400 GB of memory and 100 CPU cores,
you could request 50 Executors, each with 8 GB of memory and 2 CPU cores.

When resources allow, increasing the number of Executors raises task parallelism.
For example, with 4 Executors of 2 CPU cores each, 8 tasks can run in parallel; increasing to 8 Executors (resources permitting) allows 16 tasks to run in parallel, doubling the parallelism.

When resources allow, increasing the number of CPU cores per Executor also raises task parallelism. For example, with 4 Executors of 2 CPU cores each, 8 tasks can run in parallel; increasing each Executor to 4 CPU cores (resources permitting) allows 16 tasks to run in parallel, doubling the parallelism.

When resources allow, increasing each Executor's memory improves performance in three ways:
1. More data can be cached (i.e. RDD cache), so less data is written to disk, possibly none at all, reducing disk I/O;

2. More memory is available for shuffle operations, i.e. more room to hold the data pulled by the reduce side, so less data is written to disk, possibly none at all, reducing disk I/O;

3. More memory is available for task execution. Tasks may create many objects; with too little memory this triggers frequent GC, whereas more memory avoids frequent GC and improves overall performance.

/usr/local/spark/bin/spark-submit \
--class com.alex.spark.WordCount \
--num-executors 80 \
--driver-memory 6g \
--executor-memory 6g \
--executor-cores 3 \
--master yarn-cluster \
--queue root.default \
--conf spark.yarn.executor.memoryOverhead=2048 \
--conf spark.core.connection.ack.wait.timeout=300 \
/usr/local/spark/spark.jar
--num-executors: 50-100
--driver-memory: 1G-5G
--executor-memory: 6G-10G
--executor-cores: 3
--master: in a real production environment, always use yarn-cluster

Use serialization to reduce the data footprint.
If data reliability requirements are high and memory is plentiful, use the replicated storage levels.

Within the limits of available resources, set the parallelism as high as possible to make full use of the cluster.
The number of tasks should be set to 2-3 times the total number of CPU cores of the Spark job.
val conf = new SparkConf()
.set("spark.default.parallelism", "500")

Miscellaneous

Spark prefers to run a task on the node that holds the data the task needs to compute.

Spark waits for a while (3s by default); if after that the task still cannot be launched on the preferred node, it automatically downgrades and tries to assign the task to a node at a worse locality level, e.g. one close to the data, and computes there; if that level still does not work, it keeps downgrading.

PROCESS_LOCAL: the task and the data are in the same Executor; best performance.
NODE_LOCAL: the task and the data are on the same node but not in the same Executor; the data has to be transferred between processes.
RACK_LOCAL: the task and the data are on two nodes of the same rack; the data is transferred over the network between nodes.
NO_PREF: the data can be fetched equally well from anywhere; there is no better or worse location.
ANY: the task and the data can be anywhere in the cluster, not even on the same rack; worst performance.

Try increasing the locality wait time, check whether the tasks' locality levels improve, and watch whether the Spark job's run time becomes shorter.

val conf = new SparkConf()
.set("spark.locality.wait", "6s")

With mapPartitions, one task processes one RDD partition, so the function is executed only once per task and receives the whole partition's data at once, which is more efficient.

However, if the data volume is very large, the function processes an entire partition at a time; once memory runs out it cannot be reclaimed mid-iteration, so the task may OOM.

So estimate the size of the RDD and of each partition, as well as the memory allocated to each Executor, and consider replacing map with mapPartitions only when resources allow.
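
A short runnable sketch of the map vs mapPartitions trade-off; the data and the per-partition setup (a DecimalFormat) are illustrative stand-ins for genuinely expensive setup work:

import org.apache.spark.{SparkConf, SparkContext}

object MapPartitionsDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("MapPartitionsDemo"))
    val rdd = sc.parallelize(1 to 10, numSlices = 2)
    val result = rdd.mapPartitions { iter =>
      // created once per partition instead of once per record
      val formatter = new java.text.DecimalFormat("#,###")
      iter.map(x => formatter.format(x * 1000L))
    }
    result.collect().foreach(println)
    sc.stop()
  }
}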

Using the foreachPartition operator brings the following benefits (see the sketch after this list):

  1. The function we write processes an entire partition's data at a time;
  2. One database connection is created per partition instead of per record;
  3. Only one SQL statement with a batch of parameter sets is sent to the database.
    In production, database writes are generally done with foreachPartition.
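
A sketch of the foreachPartition pattern described above; the JDBC URL, credentials, table, and the RDD[(String, Int)] element type are all hypothetical:

import java.sql.DriverManager
import org.apache.spark.rdd.RDD

def saveWordCounts(rdd: RDD[(String, Int)]): Unit = {
  rdd.foreachPartition { records =>
    // one connection per partition, not per record
    val conn = DriverManager.getConnection("jdbc:mysql://db-host:3306/test", "user", "pass")
    val stmt = conn.prepareStatement("INSERT INTO word_count(word, cnt) VALUES (?, ?)")
    try {
      records.foreach { case (word, cnt) =>
        stmt.setString(1, word)
        stmt.setInt(2, cnt)
        stmt.addBatch()
      }
      stmt.executeBatch() // one round trip for the whole partition
    } finally {
      stmt.close()
      conn.close()
    }
  }
}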

For the first problem: since each partition now holds less data, redistribute the partition data, for example turning the original 4 partitions into 2, so that only two tasks are needed and no resources are wasted.

For the second problem: the solution is very similar; redistribute the partition data so that every partition holds roughly the same amount, which avoids data skew.

Suppose we want to change the original number of partitions A into B; there are the following cases:
1. A > B (merging many partitions into fewer)
① A and B are close: coalesce alone is enough, no shuffle needed.
② A is much larger than B: coalesce without shuffle would make the merge very slow, so set coalesce's second parameter to true, i.e. enable the shuffle.
2. A < B (splitting fewer partitions into more)
Use repartition; if you use coalesce you must set shuffle to true, otherwise coalesce has no effect.

After a filter, when the partitions end up with very different amounts of data, we can use coalesce to shrink the number of partitions and make each partition's data as even and compact as possible, which helps the downstream tasks and improves performance to a certain extent (a sketch follows below).
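
A small runnable sketch of the filter-then-coalesce pattern; the data, the selectivity, and the target partition count are illustrative:

import org.apache.spark.{SparkConf, SparkContext}

object FilterThenCoalesce {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("FilterThenCoalesce"))
    val rdd = sc.parallelize(1 to 1000000, numSlices = 100)
    val filtered = rdd.filter(_ % 10 == 0)  // keeps roughly 10% of each partition
    // merge the now-small partitions; when only shrinking, coalesce needs no shuffle
    val compacted = filtered.coalesce(25)
    println(compacted.getNumPartitions)
    sc.stop()
  }
}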

local mode simulates a cluster inside a single process and already applies some internal optimization of parallelism and partition counts, so there is no need to set them.

Code in Practice

For Java, the correct imports are:
// class needed to create the SparkContext
import org.apache.spark.api.java.JavaSparkContext
// class needed for RDD operations
import org.apache.spark.api.java.JavaRDD
// class needed to create the SparkConf
import org.apache.spark.SparkConf

For Scala, the correct imports are:
// class needed to create the SparkContext
import org.apache.spark.SparkContext
// implicits needed for RDD operations
import org.apache.spark.SparkContext._
// class needed to create the SparkConf
import org.apache.spark.SparkConf

// when throwing an exception, include a detailed description
throw new IOException("Writing data error! Data: " + data.toString());

Scenario: given user location data, generate the user's trajectory ordered by time. In Scala, the time-ordered sort looks like this.

/* Returns the location trajectory of one user.
 * Parameter trajectory: consists of two parts - the user name and the location points (time, longitude, latitude) */
private def getTimesOfOneUser(trajectory: (String, Seq[(String, Float, Float)]), zone: Zone, arrive: Boolean): Int = {
  // sort the user's location points by time first
  val sorted: Seq[(String, Float, Float)] = trajectory._2.sortBy(x => x._1)
  …
}

To implement the same thing in Java, you must copy trajectory._2 into a new object instead of sorting trajectory._2 in place. Calling Collections.sort(trajectory._2) is not allowed, because it would mutate trajectory._2 itself, which violates the rule that RDD elements must not be modified. The Scala code works because sortBy() produces a new object and does not operate on trajectory._2 directly. The correct Java implementation is listed below.

// copy the user's location points into a new object
List<Tuple3<String, Float, Float>> list = new ArrayList<Tuple3<String, Float, Float>>(trajectory._2);
// sort the copy by time (Tuple3 is not Comparable, so a comparator is required)
Collections.sort(list, Comparator.comparing((Tuple3<String, Float, Float> t) -> t._1()));

In cluster mode, pay attention to how parameters are passed between the Driver and the worker nodes.
In Spark code there is often logic that depends on an input parameter, and a tempting approach is to make the parameter a global variable, initialize it to null, and assign it in main before the SparkContext is instantiated. However, in cluster mode the application jar is shipped to every Worker for execution. If the global variable is only changed on the node running main and is not passed into the function executed by the tasks, a NullPointerException will be thrown. The correct pattern is to pass the value in explicitly, as below.

object Test {
  private var testArg: String = null

  def main(args: Array[String]): Unit = {
    testArg = …
    val sc: SparkContext = new SparkContext(…)
    sc.textFile(…)
      .map(x => testFun(x, testArg))
  }

  // pass testArg in explicitly so that its value reaches the executors
  private def testFun(line: String, testArg: String): String = {
    testArg.split(…)
    return …
  }
}

Reduce the Size of Data Structures

Data flows through Spark as records. A record has two representations: a deserialized Java object representation and a serialized binary representation. In general, Spark uses the deserialized representation for records in memory and the serialized representation for records stored on disk or transferred over the network. For sort-based shuffles, in-memory shuffle data is stored in serialized form.

The spark.serializer property controls the serializer used to convert between these two representations. Cloudera recommends the Kryo serializer, org.apache.spark.serializer.KryoSerializer.

The footprint of your records in these two representations has a substantial impact on Spark performance. Review the data types that are passed around and look for places to trim their size. Large deserialized objects cause Spark to spill data to disk more often and reduce the number of deserialized records Spark can cache (for example at the MEMORY storage levels); the Apache Spark tuning guide describes how to reduce the size of such objects. Large serialized objects result in more disk and network I/O and reduce the number of serialized records Spark can cache (for example at the MEMORY_SER storage levels). Make sure to register any custom classes you use with the SparkConf#registerKryoClasses API.

Choose a Data Format

When storing data on disk, use an extensible binary format such as Avro, Parquet, Thrift, or Protobuf, and store it in a sequence file.

Tune the Number of Partitions

val rdd2 = rdd1.reduceByKey(_ + _,numPartitions = X)
Determining the optimal value of X requires experimentation: find the number of partitions in the parent dataset and keep multiplying it by 1.5 until performance stops improving.

Resource Tuning

Consider a cluster with six hosts running NodeManagers, each equipped with 16 cores and 64 GB of memory.
The NodeManager capacities, yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores, should be set to 63 * 1024 = 64512 (megabytes) and 15 respectively. Avoid allocating 100% of the resources to YARN containers, because the host needs some resources to run the OS and the Hadoop daemons; here, 1 GB and 1 core are reserved for these system processes.
Cloudera Manager accounts for this and configures these YARN properties automatically.

You might consider --num-executors 6 --executor-cores 15 --executor-memory 63G. However, this approach does not work:
63 GB plus the executor memory overhead does not fit within the NodeManager's 63 GB capacity.
The ApplicationMaster takes up a core on one of the hosts, so there is no room for a 15-core executor on that host.
15 cores per executor can also lead to poor HDFS I/O throughput.

Instead, use --num-executors 17 --executor-cores 5 --executor-memory 19G:
This results in three executors on all hosts except the one with the ApplicationMaster, which has two executors.
--executor-memory is derived as 63 GB / 3 executors per host = 21 GB; 21 * 0.07 ≈ 1.47 GB is reserved for memory overhead, and 21 - 1.47 ≈ 19 GB.

Code in Practice

// convert an RDD[String] to a DataFrame
val spark = SparkSession.builder.config(rdd.sparkContext.getConf).getOrCreate()
import spark.implicits._
val wordsDataFrame = rdd.toDF("word")

// hand-written Spark WordCount
val conf: SparkConf = new SparkConf().setMaster("local[*]").setAppName("WordCount")
val sc = new SparkContext(conf)
sc.textFile("/input")
  .flatMap(_.split(" "))
  .map((_, 1))
  .reduceByKey(_ + _)
  .saveAsTextFile("/output")
sc.stop()

// requirement: given (a,1) (a,3) (b,3) (b,5) (c,4), compute the average value for each key
rdd.combineByKey(
    v => (v, 1),
    (acc: (Int, Int), newV) => (acc._1 + newV, acc._2 + 1),
    (acc1: (Int, Int), acc2: (Int, Int)) => (acc1._1 + acc2._1, acc1._2 + acc2._2))
  .map { case (key, (sum, cnt)) => (key, sum.toDouble / cnt) }

Building Applications

Building Spark Applications
You can use Apache Maven to build Spark applications developed in Java and Scala.
For the Maven properties of CDH components, see Using the CDH 6 Maven Repository. For the Maven properties of Kafka, see Maven Artifacts for Kafka.

Best Practices for Building Applications
When building Spark Scala and Java applications, follow these best practices:

Compile against the same version of Spark that you are running.
Build a single assembly JAR ("uber" JAR) that includes all dependencies. In Maven, add the Maven assembly plug-in to build a JAR containing all dependencies:

<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
  <executions>
    <execution>
      <id>make-assembly</id>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
</plugin>

This plug-in manages the merge procedure for all available JAR files during the build. Exclude Spark, Hadoop, and Kafka (CDH 5.5 and higher) classes from the assembly JAR, because they are already available on the cluster and contained in the runtime classpath. In Maven, specify Spark, Hadoop, and Kafka dependencies with scope provided. For example:

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.2.0-cdh6.0.0-beta1</version>
  <scope>provided</scope>
</dependency>

Building Reusable Modules
Using existing Scala and Java classes in the spark-shell requires an effective deployment procedure and dependency management. For simple and reliable reuse of Scala and Java classes and complete third-party libraries, you can use a module, which is a self-contained artifact created by Maven. This module can be shared by multiple users. This topic shows how to use Maven to create a module that includes all of the dependencies.

Create a Maven Project
Use Maven to generate the project directory:
$ mvn archetype:generate -DgroupId=com.mycompany -DartifactId=mylibrary
-DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
Download and Deploy Third-Party Libraries
Prepare a location for all third-party libraries that are not available through Maven Central but are required for the project:
mkdir libs
cd libs
Download the required artifacts.
Use Maven to deploy the library JAR.
Add the library to the dependencies section of the POM file.
Repeat steps 2-4 for each library. For example, to add the JIDT library:
Download and decompress the zip file:
curl http://lizier.me/joseph/software/jidt/download.php?file=infodynamics-dist-1.3.zip > infodynamics-dist-1.3.zip
unzip infodynamics-dist-1.3.zip
Deploy the library JAR:
$ mvn deploy:deploy-file
-Durl=file:///HOME/.m2/repository -Dfile=libs/infodynamics.jar
-DgroupId=org.jlizier.infodynamics -DartifactId=infodynamics -Dpackaging=jar -Dversion=1.3
Add the library to the dependencies section of the POM file:

<dependency>
  <groupId>org.jlizier.infodynamics</groupId>
  <artifactId>infodynamics</artifactId>
  <version>1.3</version>
</dependency>

Add the Maven assembly plug-in to the plugins section in the pom.xml file.
Package the library JARs in a module:
mvn clean package
Run and Test the Spark Module
Run the Spark shell, providing the module JAR in the --jars option:
spark-shell --jars target/mylibrary-1.0-SNAPSHOT-jar-with-dependencies.jar
In the Environment tab of the Spark Web UI application (http://driver_host:4040/environment/), validate that the spark.jars property contains the library.

In the Spark shell, test that you can import some of the required Java classes from the third-party library. For example, if you use the JIDT library, import MatrixUtils:
$ spark-shell

scala> import infodynamics.utils.MatrixUtils;
Packaging Different Versions of Libraries with an Application
To use a version of a library in your application that is different than the version of that library that is shipped with Spark, use the Apache Maven Shade Plugin. This process is technically known as “relocation”, and often referred to as “shading”.

Common Operators

Transformations (return a new RDD)

  /*** :: DeveloperApi ::* Implemented by subclasses to compute a given partition.*/@DeveloperApidef compute(split: Partition, context: TaskContext): Iterator[T]/*** Implemented by subclasses to return the set of partitions in this RDD. This method will only* be called once, so it is safe to implement a time-consuming computation in it.** The partitions in this array must satisfy the following property:*   `rdd.partitions.zipWithIndex.forall { case (partition, index) => partition.index == index }`*/protected def getPartitions: Array[Partition]/*** Implemented by subclasses to return how this RDD depends on parent RDDs. This method will only* be called once, so it is safe to implement a time-consuming computation in it.*/protected def getDependencies: Seq[Dependency[_]] = deps/*** Optionally overridden by subclasses to specify placement preferences.*/protected def getPreferredLocations(split: Partition): Seq[String] = Nil/** Optionally overridden by subclasses to specify how they are partitioned. */@transient val partitioner: Option[Partitioner] = None// =======================================================================// Methods and fields available on all RDDs// =======================================================================/** The SparkContext that created this RDD. */def sparkContext: SparkContext = sc/** A unique ID for this RDD (within its SparkContext). */val id: Int = sc.newRddId()/** A friendly name for this RDD */@transient var name: String = _/** Assign a name to this RDD */def setName(_name: String): this.type = {name = _namethis}/*** Mark this RDD for persisting using the specified level.** @param newLevel the target storage level* @param allowOverride whether to override any existing level with the new one*/private def persist(newLevel: StorageLevel, allowOverride: Boolean): this.type = {// TODO: Handle changes of StorageLevelif (storageLevel != StorageLevel.NONE && newLevel != storageLevel && !allowOverride) {throw new UnsupportedOperationException("Cannot change storage level of an RDD after it was already assigned a level")}// If this is the first time this RDD is marked for persisting, register it// with the SparkContext for cleanups and accounting. Do this only once.if (storageLevel == StorageLevel.NONE) {sc.cleaner.foreach(_.registerRDDForCleanup(this))sc.persistRDD(this)}storageLevel = newLevelthis}/*** Set this RDD's storage level to persist its values across operations after the first time* it is computed. This can only be used to assign a new storage level if the RDD does not* have a storage level set yet. Local checkpointing is an exception.*/def persist(newLevel: StorageLevel): this.type = {if (isLocallyCheckpointed) {// This means the user previously called localCheckpoint(), which should have already// marked this RDD for persisting. 
Here we should override the old storage level with// one that is explicitly requested by the user (after adapting it to use disk).persist(LocalRDDCheckpointData.transformStorageLevel(newLevel), allowOverride = true)} else {persist(newLevel, allowOverride = false)}}/*** Persist this RDD with the default storage level (`MEMORY_ONLY`).*/def persist(): this.type = persist(StorageLevel.MEMORY_ONLY)/*** Persist this RDD with the default storage level (`MEMORY_ONLY`).*/def cache(): this.type = persist()/*** Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.** @param blocking Whether to block until all blocks are deleted (default: false)* @return This RDD.*/def unpersist(blocking: Boolean = false): this.type = {logInfo(s"Removing RDD $id from persistence list")sc.unpersistRDD(id, blocking)storageLevel = StorageLevel.NONEthis}/** Get the RDD's current storage level, or StorageLevel.NONE if none is set. */def getStorageLevel: StorageLevel = storageLevel/*** Lock for all mutable state of this RDD (persistence, partitions, dependencies, etc.).  We do* not use `this` because RDDs are user-visible, so users might have added their own locking on* RDDs; sharing that could lead to a deadlock.** One thread might hold the lock on many of these, for a chain of RDD dependencies; but* because DAGs are acyclic, and we only ever hold locks for one path in that DAG, there is no* chance of deadlock.** Executors may reference the shared fields (though they should never mutate them,* that only happens on the driver).*/private val stateLock = new Serializable {}// Our dependencies and partitions will be gotten by calling subclass's methods below, and will// be overwritten when we're checkpointed@volatile private var dependencies_ : Seq[Dependency[_]] = _// When we overwrite the dependencies we keep a weak reference to the old dependencies// for user controlled cleanup.@volatile @transient private var legacyDependencies: WeakReference[Seq[Dependency[_]]] = _@volatile @transient private var partitions_ : Array[Partition] = _/** An Option holding our checkpoint RDD, if we are checkpointed */private def checkpointRDD: Option[CheckpointRDD[T]] = checkpointData.flatMap(_.checkpointRDD)/*** Get the list of dependencies of this RDD, taking into account whether the* RDD is checkpointed or not.*/final def dependencies: Seq[Dependency[_]] = {checkpointRDD.map(r => List(new OneToOneDependency(r))).getOrElse {if (dependencies_ == null) {stateLock.synchronized {if (dependencies_ == null) {dependencies_ = getDependencies}}}dependencies_}}/*** Get the list of dependencies of this RDD ignoring checkpointing.*/final private def internalDependencies: Option[Seq[Dependency[_]]] = {if (legacyDependencies != null) {legacyDependencies.get} else if (dependencies_ != null) {Some(dependencies_)} else {// This case should be infrequent.stateLock.synchronized {if (dependencies_ == null) {dependencies_ = getDependencies}Some(dependencies_)}}}/*** Get the array of partitions of this RDD, taking into account whether the* RDD is checkpointed or not.*/final def partitions: Array[Partition] = {checkpointRDD.map(_.partitions).getOrElse {if (partitions_ == null) {stateLock.synchronized {if (partitions_ == null) {partitions_ = getPartitionspartitions_.zipWithIndex.foreach { case (partition, index) =>require(partition.index == index,s"partitions($index).partition == ${partition.index}, but it should equal $index")}}}}partitions_}}/*** Returns the number of partitions of this RDD.*/@Since("1.6.0")final def getNumPartitions: 
Int = partitions.length/*** Get the preferred locations of a partition, taking into account whether the* RDD is checkpointed.*/final def preferredLocations(split: Partition): Seq[String] = {checkpointRDD.map(_.getPreferredLocations(split)).getOrElse {getPreferredLocations(split)}}/*** Internal method to this RDD; will read from cache if applicable, or otherwise compute it.* This should ''not'' be called by users directly, but is available for implementors of custom* subclasses of RDD.*/final def iterator(split: Partition, context: TaskContext): Iterator[T] = {if (storageLevel != StorageLevel.NONE) {getOrCompute(split, context)} else {computeOrReadCheckpoint(split, context)}}/*** Return the ancestors of the given RDD that are related to it only through a sequence of* narrow dependencies. This traverses the given RDD's dependency tree using DFS, but maintains* no ordering on the RDDs returned.*/private[spark] def getNarrowAncestors: Seq[RDD[_]] = {val ancestors = new mutable.HashSet[RDD[_]]def visit(rdd: RDD[_]): Unit = {val narrowDependencies = rdd.dependencies.filter(_.isInstanceOf[NarrowDependency[_]])val narrowParents = narrowDependencies.map(_.rdd)val narrowParentsNotVisited = narrowParents.filterNot(ancestors.contains)narrowParentsNotVisited.foreach { parent =>ancestors.add(parent)visit(parent)}}visit(this)// In case there is a cycle, do not include the root itselfancestors.filterNot(_ == this).toSeq}/*** Compute an RDD partition or read it from a checkpoint if the RDD is checkpointing.*/private[spark] def computeOrReadCheckpoint(split: Partition, context: TaskContext): Iterator[T] ={if (isCheckpointedAndMaterialized) {firstParent[T].iterator(split, context)} else {compute(split, context)}}/*** Gets or computes an RDD partition. Used by RDD.iterator() when an RDD is cached.*/private[spark] def getOrCompute(partition: Partition, context: TaskContext): Iterator[T] = {val blockId = RDDBlockId(id, partition.index)var readCachedBlock = true// This method is called on executors, so we need call SparkEnv.get instead of sc.env.SparkEnv.get.blockManager.getOrElseUpdate(blockId, storageLevel, elementClassTag, () => {readCachedBlock = falsecomputeOrReadCheckpoint(partition, context)}) match {// Block hit.case Left(blockResult) =>if (readCachedBlock) {val existingMetrics = context.taskMetrics().inputMetricsexistingMetrics.incBytesRead(blockResult.bytes)new InterruptibleIterator[T](context, blockResult.data.asInstanceOf[Iterator[T]]) {override def next(): T = {existingMetrics.incRecordsRead(1)delegate.next()}}} else {new InterruptibleIterator(context, blockResult.data.asInstanceOf[Iterator[T]])}// Need to compute the block.case Right(iter) =>new InterruptibleIterator(context, iter.asInstanceOf[Iterator[T]])}}/*** Execute a block of code in a scope such that all new RDDs created in this body will* be part of the same scope. 
For more detail, see {{org.apache.spark.rdd.RDDOperationScope}}.** Note: Return statements are NOT allowed in the given body.*/private[spark] def withScope[U](body: => U): U = RDDOperationScope.withScope[U](sc)(body)// Transformations (return a new RDD)/*** Return a new RDD by applying a function to all elements of this RDD.*/def map[U: ClassTag](f: T => U): RDD[U] = withScope {val cleanF = sc.clean(f)new MapPartitionsRDD[U, T](this, (_, _, iter) => iter.map(cleanF))}/***  Return a new RDD by first applying a function to all elements of this*  RDD, and then flattening the results.*/def flatMap[U: ClassTag](f: T => TraversableOnce[U]): RDD[U] = withScope {val cleanF = sc.clean(f)new MapPartitionsRDD[U, T](this, (_, _, iter) => iter.flatMap(cleanF))}/*** Return a new RDD containing only the elements that satisfy a predicate.*/def filter(f: T => Boolean): RDD[T] = withScope {val cleanF = sc.clean(f)new MapPartitionsRDD[T, T](this,(_, _, iter) => iter.filter(cleanF),preservesPartitioning = true)}/*** Return a new RDD containing the distinct elements in this RDD.*/def distinct(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T] = withScope {def removeDuplicatesInPartition(partition: Iterator[T]): Iterator[T] = {// Create an instance of external append only map which ignores values.val map = new ExternalAppendOnlyMap[T, Null, Null](createCombiner = _ => null,mergeValue = (a, b) => a,mergeCombiners = (a, b) => a)map.insertAll(partition.map(_ -> null))map.iterator.map(_._1)}partitioner match {case Some(_) if numPartitions == partitions.length =>mapPartitions(removeDuplicatesInPartition, preservesPartitioning = true)case _ => map(x => (x, null)).reduceByKey((x, _) => x, numPartitions).map(_._1)}}/*** Return a new RDD containing the distinct elements in this RDD.*/def distinct(): RDD[T] = withScope {distinct(partitions.length)}/*** Return a new RDD that has exactly numPartitions partitions.** Can increase or decrease the level of parallelism in this RDD. Internally, this uses* a shuffle to redistribute data.** If you are decreasing the number of partitions in this RDD, consider using `coalesce`,* which can avoid performing a shuffle.*/def repartition(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T] = withScope {coalesce(numPartitions, shuffle = true)}/*** Return a new RDD that is reduced into `numPartitions` partitions.** This results in a narrow dependency, e.g. if you go from 1000 partitions* to 100 partitions, there will not be a shuffle, instead each of the 100* new partitions will claim 10 of the current partitions. If a larger number* of partitions is requested, it will stay at the current number of partitions.** However, if you're doing a drastic coalesce, e.g. to numPartitions = 1,* this may result in your computation taking place on fewer nodes than* you like (e.g. one node in the case of numPartitions = 1). To avoid this,* you can pass shuffle = true. This will add a shuffle step, but means the* current upstream partitions will be executed in parallel (per whatever* the current partitioning is).** @note With shuffle = true, you can actually coalesce to a larger number* of partitions. This is useful if you have a small number of partitions,* say 100, potentially with a few partitions being abnormally large. Calling* coalesce(1000, shuffle = true) will result in 1000 partitions with the* data distributed using a hash partitioner. 
The optional partition coalescer* passed in must be serializable.*/def coalesce(numPartitions: Int, shuffle: Boolean = false,partitionCoalescer: Option[PartitionCoalescer] = Option.empty)(implicit ord: Ordering[T] = null): RDD[T] = withScope {require(numPartitions > 0, s"Number of partitions ($numPartitions) must be positive.")if (shuffle) {/** Distributes elements evenly across output partitions, starting from a random partition. */val distributePartition = (index: Int, items: Iterator[T]) => {var position = new Random(hashing.byteswap32(index)).nextInt(numPartitions)items.map { t =>// Note that the hash code of the key will just be the key itself. The HashPartitioner// will mod it with the number of total partitions.position = position + 1(position, t)}} : Iterator[(Int, T)]// include a shuffle step so that our upstream tasks are still distributednew CoalescedRDD(new ShuffledRDD[Int, T, T](mapPartitionsWithIndexInternal(distributePartition, isOrderSensitive = true),new HashPartitioner(numPartitions)),numPartitions,partitionCoalescer).values} else {new CoalescedRDD(this, numPartitions, partitionCoalescer)}}/*** Return a sampled subset of this RDD.** @param withReplacement can elements be sampled multiple times (replaced when sampled out)* @param fraction expected size of the sample as a fraction of this RDD's size*  without replacement: probability that each element is chosen; fraction must be [0, 1]*  with replacement: expected number of times each element is chosen; fraction must be greater*  than or equal to 0* @param seed seed for the random number generator** @note This is NOT guaranteed to provide exactly the fraction of the count* of the given [[RDD]].*/def sample(withReplacement: Boolean,fraction: Double,seed: Long = Utils.random.nextLong): RDD[T] = {require(fraction >= 0,s"Fraction must be nonnegative, but got ${fraction}")withScope {require(fraction >= 0.0, "Negative fraction value: " + fraction)if (withReplacement) {new PartitionwiseSampledRDD[T, T](this, new PoissonSampler[T](fraction), true, seed)} else {new PartitionwiseSampledRDD[T, T](this, new BernoulliSampler[T](fraction), true, seed)}}}/*** Randomly splits this RDD with the provided weights.** @param weights weights for splits, will be normalized if they don't sum to 1* @param seed random seed** @return split RDDs in an array*/def randomSplit(weights: Array[Double],seed: Long = Utils.random.nextLong): Array[RDD[T]] = {require(weights.forall(_ >= 0),s"Weights must be nonnegative, but got ${weights.mkString("[", ",", "]")}")require(weights.sum > 0,s"Sum of weights must be positive, but got ${weights.mkString("[", ",", "]")}")withScope {val sum = weights.sumval normalizedCumWeights = weights.map(_ / sum).scanLeft(0.0d)(_ + _)normalizedCumWeights.sliding(2).map { x =>randomSampleWithRange(x(0), x(1), seed)}.toArray}}/*** Internal method exposed for Random Splits in DataFrames. 
Samples an RDD given a probability* range.* @param lb lower bound to use for the Bernoulli sampler* @param ub upper bound to use for the Bernoulli sampler* @param seed the seed for the Random number generator* @return A random sub-sample of the RDD without replacement.*/private[spark] def randomSampleWithRange(lb: Double, ub: Double, seed: Long): RDD[T] = {this.mapPartitionsWithIndex( { (index, partition) =>val sampler = new BernoulliCellSampler[T](lb, ub)sampler.setSeed(seed + index)sampler.sample(partition)}, isOrderSensitive = true, preservesPartitioning = true)}/*** Return a fixed-size sampled subset of this RDD in an array** @param withReplacement whether sampling is done with replacement* @param num size of the returned sample* @param seed seed for the random number generator* @return sample of specified size in an array** @note this method should only be used if the resulting array is expected to be small, as* all the data is loaded into the driver's memory.*/def takeSample(withReplacement: Boolean,num: Int,seed: Long = Utils.random.nextLong): Array[T] = withScope {val numStDev = 10.0require(num >= 0, "Negative number of elements requested")require(num <= (Int.MaxValue - (numStDev * math.sqrt(Int.MaxValue)).toInt),"Cannot support a sample size > Int.MaxValue - " +s"$numStDev * math.sqrt(Int.MaxValue)")if (num == 0) {new Array[T](0)} else {val initialCount = this.count()if (initialCount == 0) {new Array[T](0)} else {val rand = new Random(seed)if (!withReplacement && num >= initialCount) {Utils.randomizeInPlace(this.collect(), rand)} else {val fraction = SamplingUtils.computeFractionForSampleSize(num, initialCount,withReplacement)var samples = this.sample(withReplacement, fraction, rand.nextInt()).collect()// If the first sample didn't turn out large enough, keep trying to take samples;// this shouldn't happen often because we use a big multiplier for the initial sizevar numIters = 0while (samples.length < num) {logWarning(s"Needed to re-sample due to insufficient sample size. Repeat #$numIters")samples = this.sample(withReplacement, fraction, rand.nextInt()).collect()numIters += 1}Utils.randomizeInPlace(samples, rand).take(num)}}}}/*** Return the union of this RDD and another one. Any identical elements will appear multiple* times (use `.distinct()` to eliminate them).*/def union(other: RDD[T]): RDD[T] = withScope {sc.union(this, other)}/*** Return the union of this RDD and another one. Any identical elements will appear multiple* times (use `.distinct()` to eliminate them).*/def ++(other: RDD[T]): RDD[T] = withScope {this.union(other)}/*** Return this RDD sorted by the given key function.*/def sortBy[K](f: (T) => K,ascending: Boolean = true,numPartitions: Int = this.partitions.length)(implicit ord: Ordering[K], ctag: ClassTag[K]): RDD[T] = withScope {this.keyBy[K](f).sortByKey(ascending, numPartitions).values}/*** Return the intersection of this RDD and another one. The output will not contain any duplicate* elements, even if the input RDDs did.** @note This method performs a shuffle internally.*/def intersection(other: RDD[T]): RDD[T] = withScope {this.map(v => (v, null)).cogroup(other.map(v => (v, null))).filter { case (_, (leftGroup, rightGroup)) => leftGroup.nonEmpty && rightGroup.nonEmpty }.keys}/*** Return the intersection of this RDD and another one. 
The output will not contain any duplicate* elements, even if the input RDDs did.** @note This method performs a shuffle internally.** @param partitioner Partitioner to use for the resulting RDD*/def intersection(other: RDD[T],partitioner: Partitioner)(implicit ord: Ordering[T] = null): RDD[T] = withScope {this.map(v => (v, null)).cogroup(other.map(v => (v, null)), partitioner).filter { case (_, (leftGroup, rightGroup)) => leftGroup.nonEmpty && rightGroup.nonEmpty }.keys}/*** Return the intersection of this RDD and another one. The output will not contain any duplicate* elements, even if the input RDDs did.  Performs a hash partition across the cluster** @note This method performs a shuffle internally.** @param numPartitions How many partitions to use in the resulting RDD*/def intersection(other: RDD[T], numPartitions: Int): RDD[T] = withScope {intersection(other, new HashPartitioner(numPartitions))}/*** Return an RDD created by coalescing all elements within each partition into an array.*/def glom(): RDD[Array[T]] = withScope {new MapPartitionsRDD[Array[T], T](this, (_, _, iter) => Iterator(iter.toArray))}/*** Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of* elements (a, b) where a is in `this` and b is in `other`.*/def cartesian[U: ClassTag](other: RDD[U]): RDD[(T, U)] = withScope {new CartesianRDD(sc, this, other)}/*** Return an RDD of grouped items. Each group consists of a key and a sequence of elements* mapping to that key. The ordering of elements within each group is not guaranteed, and* may even differ each time the resulting RDD is evaluated.** @note This operation may be very expensive. If you are grouping in order to perform an* aggregation (such as a sum or average) over each key, using `PairRDDFunctions.aggregateByKey`* or `PairRDDFunctions.reduceByKey` will provide much better performance.*/def groupBy[K](f: T => K)(implicit kt: ClassTag[K]): RDD[(K, Iterable[T])] = withScope {groupBy[K](f, defaultPartitioner(this))}/*** Return an RDD of grouped elements. Each group consists of a key and a sequence of elements* mapping to that key. The ordering of elements within each group is not guaranteed, and* may even differ each time the resulting RDD is evaluated.** @note This operation may be very expensive. If you are grouping in order to perform an* aggregation (such as a sum or average) over each key, using `PairRDDFunctions.aggregateByKey`* or `PairRDDFunctions.reduceByKey` will provide much better performance.*/def groupBy[K](f: T => K,numPartitions: Int)(implicit kt: ClassTag[K]): RDD[(K, Iterable[T])] = withScope {groupBy(f, new HashPartitioner(numPartitions))}/*** Return an RDD of grouped items. Each group consists of a key and a sequence of elements* mapping to that key. The ordering of elements within each group is not guaranteed, and* may even differ each time the resulting RDD is evaluated.** @note This operation may be very expensive. 
If you are grouping in order to perform an* aggregation (such as a sum or average) over each key, using `PairRDDFunctions.aggregateByKey`* or `PairRDDFunctions.reduceByKey` will provide much better performance.*/def groupBy[K](f: T => K, p: Partitioner)(implicit kt: ClassTag[K], ord: Ordering[K] = null): RDD[(K, Iterable[T])] = withScope {val cleanF = sc.clean(f)this.map(t => (cleanF(t), t)).groupByKey(p)}/*** Return an RDD created by piping elements to a forked external process.*/def pipe(command: String): RDD[String] = withScope {// Similar to Runtime.exec(), if we are given a single string, split it into words// using a standard StringTokenizer (i.e. by spaces)pipe(PipedRDD.tokenize(command))}/*** Return an RDD created by piping elements to a forked external process.*/def pipe(command: String, env: Map[String, String]): RDD[String] = withScope {// Similar to Runtime.exec(), if we are given a single string, split it into words// using a standard StringTokenizer (i.e. by spaces)pipe(PipedRDD.tokenize(command), env)}/*** Return an RDD created by piping elements to a forked external process. The resulting RDD* is computed by executing the given process once per partition. All elements* of each input partition are written to a process's stdin as lines of input separated* by a newline. The resulting partition consists of the process's stdout output, with* each line of stdout resulting in one element of the output partition. A process is invoked* even for empty partitions.** The print behavior can be customized by providing two functions.** @param command command to run in forked process.* @param env environment variables to set.* @param printPipeContext Before piping elements, this function is called as an opportunity*                         to pipe context data. Print line function (like out.println) will be*                         passed as printPipeContext's parameter.* @param printRDDElement Use this function to customize how to pipe elements. 
This function*                        will be called with each RDD element as the 1st parameter, and the*                        print line function (like out.println()) as the 2nd parameter.*                        An example of pipe the RDD data of groupBy() in a streaming way,*                        instead of constructing a huge String to concat all the elements:*                        {{{*                        def printRDDElement(record:(String, Seq[String]), f:String=>Unit) =*                          for (e <- record._2) {f(e)}*                        }}}* @param separateWorkingDir Use separate working directories for each task.* @param bufferSize Buffer size for the stdin writer for the piped process.* @param encoding Char encoding used for interacting (via stdin, stdout and stderr) with*                 the piped process* @return the result RDD*/def pipe(command: Seq[String],env: Map[String, String] = Map(),printPipeContext: (String => Unit) => Unit = null,printRDDElement: (T, String => Unit) => Unit = null,separateWorkingDir: Boolean = false,bufferSize: Int = 8192,encoding: String = Codec.defaultCharsetCodec.name): RDD[String] = withScope {new PipedRDD(this, command, env,if (printPipeContext ne null) sc.clean(printPipeContext) else null,if (printRDDElement ne null) sc.clean(printRDDElement) else null,separateWorkingDir,bufferSize,encoding)}/*** Return a new RDD by applying a function to each partition of this RDD.** `preservesPartitioning` indicates whether the input function preserves the partitioner, which* should be `false` unless this is a pair RDD and the input function doesn't modify the keys.*/def mapPartitions[U: ClassTag](f: Iterator[T] => Iterator[U],preservesPartitioning: Boolean = false): RDD[U] = withScope {val cleanedF = sc.clean(f)new MapPartitionsRDD(this,(_: TaskContext, _: Int, iter: Iterator[T]) => cleanedF(iter),preservesPartitioning)}/*** [performance] Spark's internal mapPartitionsWithIndex method that skips closure cleaning.* It is a performance API to be used carefully only if we are sure that the RDD elements are* serializable and don't require closure cleaning.** @param preservesPartitioning indicates whether the input function preserves the partitioner,*                              which should be `false` unless this is a pair RDD and the input*                              function doesn't modify the keys.* @param isOrderSensitive whether or not the function is order-sensitive. If it's order*                         sensitive, it may return totally different result when the input order*                         is changed. 
Mostly stateful functions are order-sensitive.*/private[spark] def mapPartitionsWithIndexInternal[U: ClassTag](f: (Int, Iterator[T]) => Iterator[U],preservesPartitioning: Boolean = false,isOrderSensitive: Boolean = false): RDD[U] = withScope {new MapPartitionsRDD(this,(_: TaskContext, index: Int, iter: Iterator[T]) => f(index, iter),preservesPartitioning = preservesPartitioning,isOrderSensitive = isOrderSensitive)}/*** [performance] Spark's internal mapPartitions method that skips closure cleaning.*/private[spark] def mapPartitionsInternal[U: ClassTag](f: Iterator[T] => Iterator[U],preservesPartitioning: Boolean = false): RDD[U] = withScope {new MapPartitionsRDD(this,(_: TaskContext, _: Int, iter: Iterator[T]) => f(iter),preservesPartitioning)}/*** Return a new RDD by applying a function to each partition of this RDD, while tracking the index* of the original partition.** `preservesPartitioning` indicates whether the input function preserves the partitioner, which* should be `false` unless this is a pair RDD and the input function doesn't modify the keys.*/def mapPartitionsWithIndex[U: ClassTag](f: (Int, Iterator[T]) => Iterator[U],preservesPartitioning: Boolean = false): RDD[U] = withScope {val cleanedF = sc.clean(f)new MapPartitionsRDD(this,(_: TaskContext, index: Int, iter: Iterator[T]) => cleanedF(index, iter),preservesPartitioning)}/*** Return a new RDD by applying a function to each partition of this RDD, while tracking the index* of the original partition.** `preservesPartitioning` indicates whether the input function preserves the partitioner, which* should be `false` unless this is a pair RDD and the input function doesn't modify the keys.** `isOrderSensitive` indicates whether the function is order-sensitive. If it is order* sensitive, it may return totally different result when the input order* is changed. Mostly stateful functions are order-sensitive.*/private[spark] def mapPartitionsWithIndex[U: ClassTag](f: (Int, Iterator[T]) => Iterator[U],preservesPartitioning: Boolean,isOrderSensitive: Boolean): RDD[U] = withScope {val cleanedF = sc.clean(f)new MapPartitionsRDD(this,(_: TaskContext, index: Int, iter: Iterator[T]) => cleanedF(index, iter),preservesPartitioning,isOrderSensitive = isOrderSensitive)}/*** Zips this RDD with another one, returning key-value pairs with the first element in each RDD,* second element in each RDD, etc. Assumes that the two RDDs have the *same number of* partitions* and the *same number of elements in each partition* (e.g. one was made through* a map on the other).*/def zip[U: ClassTag](other: RDD[U]): RDD[(T, U)] = withScope {zipPartitions(other, preservesPartitioning = false) { (thisIter, otherIter) =>new Iterator[(T, U)] {def hasNext: Boolean = (thisIter.hasNext, otherIter.hasNext) match {case (true, true) => truecase (false, false) => falsecase _ => throw new SparkException("Can only zip RDDs with " +"same number of elements in each partition")}def next(): (T, U) = (thisIter.next(), otherIter.next())}}}/*** Zip this RDD's partitions with one (or more) RDD(s) and return a new RDD by* applying a function to the zipped partitions. 
Assumes that all the RDDs have the* *same number of partitions*, but does *not* require them to have the same number* of elements in each partition.*/def zipPartitions[B: ClassTag, V: ClassTag](rdd2: RDD[B], preservesPartitioning: Boolean)(f: (Iterator[T], Iterator[B]) => Iterator[V]): RDD[V] = withScope {new ZippedPartitionsRDD2(sc, sc.clean(f), this, rdd2, preservesPartitioning)}def zipPartitions[B: ClassTag, V: ClassTag](rdd2: RDD[B])(f: (Iterator[T], Iterator[B]) => Iterator[V]): RDD[V] = withScope {zipPartitions(rdd2, preservesPartitioning = false)(f)}def zipPartitions[B: ClassTag, C: ClassTag, V: ClassTag](rdd2: RDD[B], rdd3: RDD[C], preservesPartitioning: Boolean)(f: (Iterator[T], Iterator[B], Iterator[C]) => Iterator[V]): RDD[V] = withScope {new ZippedPartitionsRDD3(sc, sc.clean(f), this, rdd2, rdd3, preservesPartitioning)}def zipPartitions[B: ClassTag, C: ClassTag, V: ClassTag](rdd2: RDD[B], rdd3: RDD[C])(f: (Iterator[T], Iterator[B], Iterator[C]) => Iterator[V]): RDD[V] = withScope {zipPartitions(rdd2, rdd3, preservesPartitioning = false)(f)}def zipPartitions[B: ClassTag, C: ClassTag, D: ClassTag, V: ClassTag](rdd2: RDD[B], rdd3: RDD[C], rdd4: RDD[D], preservesPartitioning: Boolean)(f: (Iterator[T], Iterator[B], Iterator[C], Iterator[D]) => Iterator[V]): RDD[V] = withScope {new ZippedPartitionsRDD4(sc, sc.clean(f), this, rdd2, rdd3, rdd4, preservesPartitioning)}def zipPartitions[B: ClassTag, C: ClassTag, D: ClassTag, V: ClassTag](rdd2: RDD[B], rdd3: RDD[C], rdd4: RDD[D])(f: (Iterator[T], Iterator[B], Iterator[C], Iterator[D]) => Iterator[V]): RDD[V] = withScope {zipPartitions(rdd2, rdd3, rdd4, preservesPartitioning = false)(f)}

Actions (launch a job to return a value to the user program)

/*** Applies a function f to all elements of this RDD.*/def foreach(f: T => Unit): Unit = withScope {val cleanF = sc.clean(f)sc.runJob(this, (iter: Iterator[T]) => iter.foreach(cleanF))}/*** Applies a function f to each partition of this RDD.*/def foreachPartition(f: Iterator[T] => Unit): Unit = withScope {val cleanF = sc.clean(f)sc.runJob(this, (iter: Iterator[T]) => cleanF(iter))}/*** Return an array that contains all of the elements in this RDD.** @note This method should only be used if the resulting array is expected to be small, as* all the data is loaded into the driver's memory.*/def collect(): Array[T] = withScope {val results = sc.runJob(this, (iter: Iterator[T]) => iter.toArray)Array.concat(results: _*)}/*** Return an iterator that contains all of the elements in this RDD.** The iterator will consume as much memory as the largest partition in this RDD.** @note This results in multiple Spark jobs, and if the input RDD is the result* of a wide transformation (e.g. join with different partitioners), to avoid* recomputing the input RDD should be cached first.*/def toLocalIterator: Iterator[T] = withScope {def collectPartition(p: Int): Array[T] = {sc.runJob(this, (iter: Iterator[T]) => iter.toArray, Seq(p)).head}partitions.indices.iterator.flatMap(i => collectPartition(i))}/*** Return an RDD that contains all matching values by applying `f`.*/def collect[U: ClassTag](f: PartialFunction[T, U]): RDD[U] = withScope {val cleanF = sc.clean(f)filter(cleanF.isDefinedAt).map(cleanF)}/*** Return an RDD with the elements from `this` that are not in `other`.** Uses `this` partitioner/partition size, because even if `other` is huge, the resulting* RDD will be &lt;= us.*/def subtract(other: RDD[T]): RDD[T] = withScope {subtract(other, partitioner.getOrElse(new HashPartitioner(partitions.length)))}/*** Return an RDD with the elements from `this` that are not in `other`.*/def subtract(other: RDD[T], numPartitions: Int): RDD[T] = withScope {subtract(other, new HashPartitioner(numPartitions))}/*** Return an RDD with the elements from `this` that are not in `other`.*/def subtract(other: RDD[T],p: Partitioner)(implicit ord: Ordering[T] = null): RDD[T] = withScope {if (partitioner == Some(p)) {// Our partitioner knows how to handle T (which, since we have a partitioner, is// really (K, V)) so make a new Partitioner that will de-tuple our fake tuplesval p2 = new Partitioner() {override def numPartitions: Int = p.numPartitionsoverride def getPartition(k: Any): Int = p.getPartition(k.asInstanceOf[(Any, _)]._1)}// Unfortunately, since we're making a new p2, we'll get ShuffleDependencies// anyway, and when calling .keys, will not have a partitioner set, even though// the SubtractedRDD will, thanks to p2's de-tupled partitioning, already be// partitioned by the right/real keys (e.g. 
p).this.map(x => (x, null)).subtractByKey(other.map((_, null)), p2).keys} else {this.map(x => (x, null)).subtractByKey(other.map((_, null)), p).keys}}/*** Reduces the elements of this RDD using the specified commutative and* associative binary operator.*/def reduce(f: (T, T) => T): T = withScope {val cleanF = sc.clean(f)val reducePartition: Iterator[T] => Option[T] = iter => {if (iter.hasNext) {Some(iter.reduceLeft(cleanF))} else {None}}var jobResult: Option[T] = Noneval mergeResult = (_: Int, taskResult: Option[T]) => {if (taskResult.isDefined) {jobResult = jobResult match {case Some(value) => Some(f(value, taskResult.get))case None => taskResult}}}sc.runJob(this, reducePartition, mergeResult)// Get the final result out of our Option, or throw an exception if the RDD was emptyjobResult.getOrElse(throw new UnsupportedOperationException("empty collection"))}/*** Reduces the elements of this RDD in a multi-level tree pattern.** @param depth suggested depth of the tree (default: 2)* @see [[org.apache.spark.rdd.RDD#reduce]]*/def treeReduce(f: (T, T) => T, depth: Int = 2): T = withScope {require(depth >= 1, s"Depth must be greater than or equal to 1 but got $depth.")val cleanF = context.clean(f)val reducePartition: Iterator[T] => Option[T] = iter => {if (iter.hasNext) {Some(iter.reduceLeft(cleanF))} else {None}}val partiallyReduced = mapPartitions(it => Iterator(reducePartition(it)))val op: (Option[T], Option[T]) => Option[T] = (c, x) => {if (c.isDefined && x.isDefined) {Some(cleanF(c.get, x.get))} else if (c.isDefined) {c} else if (x.isDefined) {x} else {None}}partiallyReduced.treeAggregate(Option.empty[T])(op, op, depth).getOrElse(throw new UnsupportedOperationException("empty collection"))}/*** Aggregate the elements of each partition, and then the results for all the partitions, using a* given associative function and a neutral "zero value". The function* op(t1, t2) is allowed to modify t1 and return it as its result value to avoid object* allocation; however, it should not modify t2.** This behaves somewhat differently from fold operations implemented for non-distributed* collections in functional languages like Scala. This fold operation may be applied to* partitions individually, and then fold those results into the final result, rather than* apply the fold to each element sequentially in some defined ordering. For functions* that are not commutative, the result may differ from that of a fold applied to a* non-distributed collection.** @param zeroValue the initial value for the accumulated result of each partition for the `op`*                  operator, and also the initial value for the combine results from different*                  partitions for the `op` operator - this will typically be the neutral*                  element (e.g. 
`Nil` for list concatenation or `0` for summation)* @param op an operator used to both accumulate results within a partition and combine results*                  from different partitions*/def fold(zeroValue: T)(op: (T, T) => T): T = withScope {// Clone the zero value since we will also be serializing it as part of tasksvar jobResult = Utils.clone(zeroValue, sc.env.closureSerializer.newInstance())val cleanOp = sc.clean(op)val foldPartition = (iter: Iterator[T]) => iter.fold(zeroValue)(cleanOp)val mergeResult = (_: Int, taskResult: T) => jobResult = op(jobResult, taskResult)sc.runJob(this, foldPartition, mergeResult)jobResult}/*** Aggregate the elements of each partition, and then the results for all the partitions, using* given combine functions and a neutral "zero value". This function can return a different result* type, U, than the type of this RDD, T. Thus, we need one operation for merging a T into an U* and one operation for merging two U's, as in scala.TraversableOnce. Both of these functions are* allowed to modify and return their first argument instead of creating a new U to avoid memory* allocation.** @param zeroValue the initial value for the accumulated result of each partition for the*                  `seqOp` operator, and also the initial value for the combine results from*                  different partitions for the `combOp` operator - this will typically be the*                  neutral element (e.g. `Nil` for list concatenation or `0` for summation)* @param seqOp an operator used to accumulate results within a partition* @param combOp an associative operator used to combine results from different partitions*/def aggregate[U: ClassTag](zeroValue: U)(seqOp: (U, T) => U, combOp: (U, U) => U): U = withScope {// Clone the zero value since we will also be serializing it as part of tasksvar jobResult = Utils.clone(zeroValue, sc.env.serializer.newInstance())val cleanSeqOp = sc.clean(seqOp)val cleanCombOp = sc.clean(combOp)val aggregatePartition = (it: Iterator[T]) => it.aggregate(zeroValue)(cleanSeqOp, cleanCombOp)val mergeResult = (_: Int, taskResult: U) => jobResult = combOp(jobResult, taskResult)sc.runJob(this, aggregatePartition, mergeResult)jobResult}/*** Aggregates the elements of this RDD in a multi-level tree pattern.* This method is semantically identical to [[org.apache.spark.rdd.RDD#aggregate]].** @param depth suggested depth of the tree (default: 2)*/def treeAggregate[U: ClassTag](zeroValue: U)(seqOp: (U, T) => U,combOp: (U, U) => U,depth: Int = 2): U = withScope {require(depth >= 1, s"Depth must be greater than or equal to 1 but got $depth.")if (partitions.length == 0) {Utils.clone(zeroValue, context.env.closureSerializer.newInstance())} else {val cleanSeqOp = context.clean(seqOp)val cleanCombOp = context.clean(combOp)val aggregatePartition =(it: Iterator[T]) => it.aggregate(zeroValue)(cleanSeqOp, cleanCombOp)var partiallyAggregated: RDD[U] = mapPartitions(it => Iterator(aggregatePartition(it)))var numPartitions = partiallyAggregated.partitions.lengthval scale = math.max(math.ceil(math.pow(numPartitions, 1.0 / depth)).toInt, 2)// If creating an extra level doesn't help reduce// the wall-clock time, we stop tree aggregation.// Don't trigger TreeAggregation when it doesn't save wall-clock timewhile (numPartitions > scale + math.ceil(numPartitions.toDouble / scale)) {numPartitions /= scaleval curNumPartitions = numPartitionspartiallyAggregated = partiallyAggregated.mapPartitionsWithIndex {(i, iter) => iter.map((i % curNumPartitions, _))}.foldByKey(zeroValue, new 
HashPartitioner(curNumPartitions))(cleanCombOp).values}val copiedZeroValue = Utils.clone(zeroValue, sc.env.closureSerializer.newInstance())partiallyAggregated.fold(copiedZeroValue)(cleanCombOp)}}/*** Return the number of elements in the RDD.*/def count(): Long = sc.runJob(this, Utils.getIteratorSize _).sum/*** Approximate version of count() that returns a potentially incomplete result* within a timeout, even if not all tasks have finished.** The confidence is the probability that the error bounds of the result will* contain the true value. That is, if countApprox were called repeatedly* with confidence 0.9, we would expect 90% of the results to contain the* true count. The confidence must be in the range [0,1] or an exception will* be thrown.** @param timeout maximum time to wait for the job, in milliseconds* @param confidence the desired statistical confidence in the result* @return a potentially incomplete result, with error bounds*/def countApprox(timeout: Long,confidence: Double = 0.95): PartialResult[BoundedDouble] = withScope {require(0.0 <= confidence && confidence <= 1.0, s"confidence ($confidence) must be in [0,1]")val countElements: (TaskContext, Iterator[T]) => Long = { (_, iter) =>var result = 0Lwhile (iter.hasNext) {result += 1Liter.next()}result}val evaluator = new CountEvaluator(partitions.length, confidence)sc.runApproximateJob(this, countElements, evaluator, timeout)}/*** Return the count of each unique value in this RDD as a local map of (value, count) pairs.** @note This method should only be used if the resulting map is expected to be small, as* the whole thing is loaded into the driver's memory.* To handle very large results, consider using** {{{* rdd.map(x => (x, 1L)).reduceByKey(_ + _)* }}}** , which returns an RDD[T, Long] instead of a map.*/def countByValue()(implicit ord: Ordering[T] = null): Map[T, Long] = withScope {map(value => (value, null)).countByKey()}/*** Approximate version of countByValue().** @param timeout maximum time to wait for the job, in milliseconds* @param confidence the desired statistical confidence in the result* @return a potentially incomplete result, with error bounds*/def countByValueApprox(timeout: Long, confidence: Double = 0.95)(implicit ord: Ordering[T] = null): PartialResult[Map[T, BoundedDouble]] = withScope {require(0.0 <= confidence && confidence <= 1.0, s"confidence ($confidence) must be in [0,1]")if (elementClassTag.runtimeClass.isArray) {throw new SparkException("countByValueApprox() does not support arrays")}val countPartition: (TaskContext, Iterator[T]) => OpenHashMap[T, Long] = { (_, iter) =>val map = new OpenHashMap[T, Long]iter.foreach {t => map.changeValue(t, 1L, _ + 1L)}map}val evaluator = new GroupedCountEvaluator[T](partitions.length, confidence)sc.runApproximateJob(this, countPartition, evaluator, timeout)}/*** Return approximate number of distinct elements in the RDD.** The algorithm used is based on streamlib's implementation of "HyperLogLog in Practice:* Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm", available* <a href="https://doi.org/10.1145/2452376.2452456">here</a>.** The relative accuracy is approximately `1.054 / sqrt(2^p)`. 
Setting a nonzero (`sp` is greater* than `p`) would trigger sparse representation of registers, which may reduce the memory* consumption and increase accuracy when the cardinality is small.** @param p The precision value for the normal set.*          `p` must be a value between 4 and `sp` if `sp` is not zero (32 max).* @param sp The precision value for the sparse set, between 0 and 32.*           If `sp` equals 0, the sparse representation is skipped.*/def countApproxDistinct(p: Int, sp: Int): Long = withScope {require(p >= 4, s"p ($p) must be >= 4")require(sp <= 32, s"sp ($sp) must be <= 32")require(sp == 0 || p <= sp, s"p ($p) cannot be greater than sp ($sp)")val zeroCounter = new HyperLogLogPlus(p, sp)aggregate(zeroCounter)((hll: HyperLogLogPlus, v: T) => {hll.offer(v)hll},(h1: HyperLogLogPlus, h2: HyperLogLogPlus) => {h1.addAll(h2)h1}).cardinality()}/*** Return approximate number of distinct elements in the RDD.** The algorithm used is based on streamlib's implementation of "HyperLogLog in Practice:* Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm", available* <a href="https://doi.org/10.1145/2452376.2452456">here</a>.** @param relativeSD Relative accuracy. Smaller values create counters that require more space.*                   It must be greater than 0.000017.*/def countApproxDistinct(relativeSD: Double = 0.05): Long = withScope {require(relativeSD > 0.000017, s"accuracy ($relativeSD) must be greater than 0.000017")val p = math.ceil(2.0 * math.log(1.054 / relativeSD) / math.log(2)).toIntcountApproxDistinct(if (p < 4) 4 else p, 0)}/*** Zips this RDD with its element indices. The ordering is first based on the partition index* and then the ordering of items within each partition. So the first item in the first* partition gets index 0, and the last item in the last partition receives the largest index.** This is similar to Scala's zipWithIndex but it uses Long instead of Int as the index type.* This method needs to trigger a spark job when this RDD contains more than one partitions.** @note Some RDDs, such as those returned by groupBy(), do not guarantee order of* elements in a partition. The index assigned to each element is therefore not guaranteed,* and may even change if the RDD is reevaluated. If a fixed ordering is required to guarantee* the same index assignments, you should sort the RDD with sortByKey() or save it to a file.*/def zipWithIndex(): RDD[(T, Long)] = withScope {new ZippedWithIndexRDD(this)}/*** Zips this RDD with generated unique Long ids. Items in the kth partition will get ids k, n+k,* 2*n+k, ..., where n is the number of partitions. So there may exist gaps, but this method* won't trigger a spark job, which is different from [[org.apache.spark.rdd.RDD#zipWithIndex]].** @note Some RDDs, such as those returned by groupBy(), do not guarantee order of* elements in a partition. The unique ID assigned to each element is therefore not guaranteed,* and may even change if the RDD is reevaluated. If a fixed ordering is required to guarantee* the same index assignments, you should sort the RDD with sortByKey() or save it to a file.*/def zipWithUniqueId(): RDD[(T, Long)] = withScope {val n = this.partitions.length.toLongthis.mapPartitionsWithIndex { case (k, iter) =>Utils.getIteratorZipWithIndex(iter, 0L).map { case (item, i) =>(item, i * n + k)}}}/*** Take the first num elements of the RDD. 
It works by first scanning one partition, and use the* results from that partition to estimate the number of additional partitions needed to satisfy* the limit.** @note This method should only be used if the resulting array is expected to be small, as* all the data is loaded into the driver's memory.** @note Due to complications in the internal implementation, this method will raise* an exception if called on an RDD of `Nothing` or `Null`.*/def take(num: Int): Array[T] = withScope {val scaleUpFactor = Math.max(conf.get(RDD_LIMIT_SCALE_UP_FACTOR), 2)if (num == 0) {new Array[T](0)} else {val buf = new ArrayBuffer[T]val totalParts = this.partitions.lengthvar partsScanned = 0while (buf.size < num && partsScanned < totalParts) {// The number of partitions to try in this iteration. It is ok for this number to be// greater than totalParts because we actually cap it at totalParts in runJob.var numPartsToTry = 1Lval left = num - buf.sizeif (partsScanned > 0) {// If we didn't find any rows after the previous iteration, quadruple and retry.// Otherwise, interpolate the number of partitions we need to try, but overestimate// it by 50%. We also cap the estimation in the end.if (buf.isEmpty) {numPartsToTry = partsScanned * scaleUpFactor} else {// As left > 0, numPartsToTry is always >= 1numPartsToTry = Math.ceil(1.5 * left * partsScanned / buf.size).toIntnumPartsToTry = Math.min(numPartsToTry, partsScanned * scaleUpFactor)}}val p = partsScanned.until(math.min(partsScanned + numPartsToTry, totalParts).toInt)val res = sc.runJob(this, (it: Iterator[T]) => it.take(left).toArray, p)res.foreach(buf ++= _.take(num - buf.size))partsScanned += p.size}buf.toArray}}/*** Return the first element in this RDD.*/def first(): T = withScope {take(1) match {case Array(t) => tcase _ => throw new UnsupportedOperationException("empty collection")}}/*** Returns the top k (largest) elements from this RDD as defined by the specified* implicit Ordering[T] and maintains the ordering. This does the opposite of* [[takeOrdered]]. For example:* {{{*   sc.parallelize(Seq(10, 4, 2, 12, 3)).top(1)*   // returns Array(12)**   sc.parallelize(Seq(2, 3, 4, 5, 6)).top(2)*   // returns Array(6, 5)* }}}** @note This method should only be used if the resulting array is expected to be small, as* all the data is loaded into the driver's memory.** @param num k, the number of top elements to return* @param ord the implicit ordering for T* @return an array of top elements*/def top(num: Int)(implicit ord: Ordering[T]): Array[T] = withScope {takeOrdered(num)(ord.reverse)}/*** Returns the first k (smallest) elements from this RDD as defined by the specified* implicit Ordering[T] and maintains the ordering. 
This does the opposite of [[top]].* For example:* {{{*   sc.parallelize(Seq(10, 4, 2, 12, 3)).takeOrdered(1)*   // returns Array(2)**   sc.parallelize(Seq(2, 3, 4, 5, 6)).takeOrdered(2)*   // returns Array(2, 3)* }}}** @note This method should only be used if the resulting array is expected to be small, as* all the data is loaded into the driver's memory.** @param num k, the number of elements to return* @param ord the implicit ordering for T* @return an array of top elements*/def takeOrdered(num: Int)(implicit ord: Ordering[T]): Array[T] = withScope {if (num == 0) {Array.empty} else {val mapRDDs = mapPartitions { items =>// Priority keeps the largest elements, so let's reverse the ordering.val queue = new BoundedPriorityQueue[T](num)(ord.reverse)queue ++= collectionUtils.takeOrdered(items, num)(ord)Iterator.single(queue)}if (mapRDDs.partitions.length == 0) {Array.empty} else {mapRDDs.reduce { (queue1, queue2) =>queue1 ++= queue2queue1}.toArray.sorted(ord)}}}/*** Returns the max of this RDD as defined by the implicit Ordering[T].* @return the maximum element of the RDD* */def max()(implicit ord: Ordering[T]): T = withScope {this.reduce(ord.max)}/*** Returns the min of this RDD as defined by the implicit Ordering[T].* @return the minimum element of the RDD* */def min()(implicit ord: Ordering[T]): T = withScope {this.reduce(ord.min)}/*** @note Due to complications in the internal implementation, this method will raise an* exception if called on an RDD of `Nothing` or `Null`. This may be come up in practice* because, for example, the type of `parallelize(Seq())` is `RDD[Nothing]`.* (`parallelize(Seq())` should be avoided anyway in favor of `parallelize(Seq[T]())`.)* @return true if and only if the RDD contains no elements at all. Note that an RDD*         may be empty even when it has at least 1 partition.*/def isEmpty(): Boolean = withScope {partitions.length == 0 || take(1).length == 0}/*** Save this RDD as a text file, using string representations of elements.*/def saveAsTextFile(path: String): Unit = withScope {saveAsTextFile(path, null)}/*** Save this RDD as a compressed text file, using string representations of elements.*/def saveAsTextFile(path: String, codec: Class[_ <: CompressionCodec]): Unit = withScope {this.mapPartitions { iter =>val text = new Text()iter.map { x =>require(x != null, "text files do not allow null rows")text.set(x.toString)(NullWritable.get(), text)}}.saveAsHadoopFile[TextOutputFormat[NullWritable, Text]](path, codec)}/*** Save this RDD as a SequenceFile of serialized objects.*/def saveAsObjectFile(path: String): Unit = withScope {this.mapPartitions(iter => iter.grouped(10).map(_.toArray)).map(x => (NullWritable.get(), new BytesWritable(Utils.serialize(x)))).saveAsSequenceFile(path)}/*** Creates tuples of the elements in this RDD by applying `f`.*/def keyBy[K](f: T => K): RDD[(K, T)] = withScope {val cleanedF = sc.clean(f)map(x => (cleanedF(x), x))}/** A private method for tests, to look at the contents of each partition */private[spark] def collectPartitions(): Array[Array[T]] = withScope {sc.runJob(this, (iter: Iterator[T]) => iter.toArray)}

shuffle

/*** Return a new RDD that has exactly numPartitions partitions.** Can increase or decrease the level of parallelism in this RDD. Internally, this uses* a shuffle to redistribute data.** If you are decreasing the number of partitions in this RDD, consider using `coalesce`,* which can avoid performing a shuffle.*/def repartition(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T] = withScope {coalesce(numPartitions, shuffle = true)}/*** Return a new RDD that is reduced into `numPartitions` partitions.** This results in a narrow dependency, e.g. if you go from 1000 partitions* to 100 partitions, there will not be a shuffle, instead each of the 100* new partitions will claim 10 of the current partitions. If a larger number* of partitions is requested, it will stay at the current number of partitions.** However, if you're doing a drastic coalesce, e.g. to numPartitions = 1,* this may result in your computation taking place on fewer nodes than* you like (e.g. one node in the case of numPartitions = 1). To avoid this,* you can pass shuffle = true. This will add a shuffle step, but means the* current upstream partitions will be executed in parallel (per whatever* the current partitioning is).** @note With shuffle = true, you can actually coalesce to a larger number* of partitions. This is useful if you have a small number of partitions,* say 100, potentially with a few partitions being abnormally large. Calling* coalesce(1000, shuffle = true) will result in 1000 partitions with the* data distributed using a hash partitioner. The optional partition coalescer* passed in must be serializable.*/def coalesce(numPartitions: Int, shuffle: Boolean = false,partitionCoalescer: Option[PartitionCoalescer] = Option.empty)(implicit ord: Ordering[T] = null): RDD[T] = withScope {require(numPartitions > 0, s"Number of partitions ($numPartitions) must be positive.")if (shuffle) {/** Distributes elements evenly across output partitions, starting from a random partition. */val distributePartition = (index: Int, items: Iterator[T]) => {var position = new Random(hashing.byteswap32(index)).nextInt(numPartitions)items.map { t =>// Note that the hash code of the key will just be the key itself. The HashPartitioner// will mod it with the number of total partitions.position = position + 1(position, t)}} : Iterator[(Int, T)]// include a shuffle step so that our upstream tasks are still distributednew CoalescedRDD(new ShuffledRDD[Int, T, T](mapPartitionsWithIndexInternal(distributePartition, isOrderSensitive = true),new HashPartitioner(numPartitions)),numPartitions,partitionCoalescer).values} else {new CoalescedRDD(this, numPartitions, partitionCoalescer)}}/*** Return the intersection of this RDD and another one. The output will not contain any duplicate* elements, even if the input RDDs did.** @note This method performs a shuffle internally.*/def intersection(other: RDD[T]): RDD[T] = withScope {this.map(v => (v, null)).cogroup(other.map(v => (v, null))).filter { case (_, (leftGroup, rightGroup)) => leftGroup.nonEmpty && rightGroup.nonEmpty }.keys}/*** Return the intersection of this RDD and another one. 
The output will not contain any duplicate* elements, even if the input RDDs did.** @note This method performs a shuffle internally.** @param partitioner Partitioner to use for the resulting RDD*/def intersection(other: RDD[T],partitioner: Partitioner)(implicit ord: Ordering[T] = null): RDD[T] = withScope {this.map(v => (v, null)).cogroup(other.map(v => (v, null)), partitioner).filter { case (_, (leftGroup, rightGroup)) => leftGroup.nonEmpty && rightGroup.nonEmpty }.keys}/*** Return the intersection of this RDD and another one. The output will not contain any duplicate* elements, even if the input RDDs did.  Performs a hash partition across the cluster** @note This method performs a shuffle internally.** @param numPartitions How many partitions to use in the resulting RDD*/def intersection(other: RDD[T], numPartitions: Int): RDD[T] = withScope {intersection(other, new HashPartitioner(numPartitions))}
/*** Generic function to combine the elements for each key using a custom set of aggregation* functions. This method is here for backward compatibility. It does not provide combiner* classtag information to the shuffle.** @see `combineByKeyWithClassTag`*/def combineByKey[C](createCombiner: V => C,mergeValue: (C, V) => C,mergeCombiners: (C, C) => C,partitioner: Partitioner,mapSideCombine: Boolean = true,serializer: Serializer = null): RDD[(K, C)] = self.withScope {combineByKeyWithClassTag(createCombiner, mergeValue, mergeCombiners,partitioner, mapSideCombine, serializer)(null)}/*** Simplified version of combineByKeyWithClassTag that hash-partitions the output RDD.* This method is here for backward compatibility. It does not provide combiner* classtag information to the shuffle.** @see `combineByKeyWithClassTag`*/def combineByKey[C](createCombiner: V => C,mergeValue: (C, V) => C,mergeCombiners: (C, C) => C,numPartitions: Int): RDD[(K, C)] = self.withScope {combineByKeyWithClassTag(createCombiner, mergeValue, mergeCombiners, numPartitions)(null)}/*** Simplified version of combineByKeyWithClassTag that hash-partitions the output RDD.*/def combineByKeyWithClassTag[C](createCombiner: V => C,mergeValue: (C, V) => C,mergeCombiners: (C, C) => C,numPartitions: Int)(implicit ct: ClassTag[C]): RDD[(K, C)] = self.withScope {combineByKeyWithClassTag(createCombiner, mergeValue, mergeCombiners,new HashPartitioner(numPartitions))}/*** Aggregate the values of each key, using given combine functions and a neutral "zero value".* This function can return a different result type, U, than the type of the values in this RDD,* V. Thus, we need one operation for merging a V into a U and one operation for merging two U's,* as in scala.TraversableOnce. The former operation is used for merging values within a* partition, and the latter is used for merging values between partitions. To avoid memory* allocation, both of these functions are allowed to modify and return their first argument* instead of creating a new U.*/def aggregateByKey[U: ClassTag](zeroValue: U, partitioner: Partitioner)(seqOp: (U, V) => U,combOp: (U, U) => U): RDD[(K, U)] = self.withScope {// Serialize the zero value to a byte array so that we can get a new clone of it on each keyval zeroBuffer = SparkEnv.get.serializer.newInstance().serialize(zeroValue)val zeroArray = new Array[Byte](zeroBuffer.limit)zeroBuffer.get(zeroArray)lazy val cachedSerializer = SparkEnv.get.serializer.newInstance()val createZero = () => cachedSerializer.deserialize[U](ByteBuffer.wrap(zeroArray))// We will clean the combiner closure later in `combineByKey`val cleanedSeqOp = self.context.clean(seqOp)combineByKeyWithClassTag[U]((v: V) => cleanedSeqOp(createZero(), v),cleanedSeqOp, combOp, partitioner)}/*** Aggregate the values of each key, using given combine functions and a neutral "zero value".* This function can return a different result type, U, than the type of the values in this RDD,* V. Thus, we need one operation for merging a V into a U and one operation for merging two U's,* as in scala.TraversableOnce. The former operation is used for merging values within a* partition, and the latter is used for merging values between partitions. 
To avoid memory* allocation, both of these functions are allowed to modify and return their first argument* instead of creating a new U.*/def aggregateByKey[U: ClassTag](zeroValue: U, numPartitions: Int)(seqOp: (U, V) => U,combOp: (U, U) => U): RDD[(K, U)] = self.withScope {aggregateByKey(zeroValue, new HashPartitioner(numPartitions))(seqOp, combOp)}/*** Aggregate the values of each key, using given combine functions and a neutral "zero value".* This function can return a different result type, U, than the type of the values in this RDD,* V. Thus, we need one operation for merging a V into a U and one operation for merging two U's,* as in scala.TraversableOnce. The former operation is used for merging values within a* partition, and the latter is used for merging values between partitions. To avoid memory* allocation, both of these functions are allowed to modify and return their first argument* instead of creating a new U.*/def aggregateByKey[U: ClassTag](zeroValue: U)(seqOp: (U, V) => U,combOp: (U, U) => U): RDD[(K, U)] = self.withScope {aggregateByKey(zeroValue, defaultPartitioner(self))(seqOp, combOp)}/*** Merge the values for each key using an associative function and a neutral "zero value" which* may be added to the result an arbitrary number of times, and must not change the result* (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication.).*/def foldByKey(zeroValue: V,partitioner: Partitioner)(func: (V, V) => V): RDD[(K, V)] = self.withScope {// Serialize the zero value to a byte array so that we can get a new clone of it on each keyval zeroBuffer = SparkEnv.get.serializer.newInstance().serialize(zeroValue)val zeroArray = new Array[Byte](zeroBuffer.limit)zeroBuffer.get(zeroArray)// When deserializing, use a lazy val to create just one instance of the serializer per tasklazy val cachedSerializer = SparkEnv.get.serializer.newInstance()val createZero = () => cachedSerializer.deserialize[V](ByteBuffer.wrap(zeroArray))val cleanedFunc = self.context.clean(func)combineByKeyWithClassTag[V]((v: V) => cleanedFunc(createZero(), v),cleanedFunc, cleanedFunc, partitioner)}/*** Merge the values for each key using an associative function and a neutral "zero value" which* may be added to the result an arbitrary number of times, and must not change the result* (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication.).*/def foldByKey(zeroValue: V, numPartitions: Int)(func: (V, V) => V): RDD[(K, V)] = self.withScope {foldByKey(zeroValue, new HashPartitioner(numPartitions))(func)}/*** Merge the values for each key using an associative function and a neutral "zero value" which* may be added to the result an arbitrary number of times, and must not change the result* (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication.).*/def foldByKey(zeroValue: V)(func: (V, V) => V): RDD[(K, V)] = self.withScope {foldByKey(zeroValue, defaultPartitioner(self))(func)}

1. Function definition

def combineByKey[C](
  createCombiner: V => C,
  mergeValue: (C, V) => C,
  mergeCombiners: (C, C) => C,
  partitioner: Partitioner,
  mapSideCombine: Boolean = true,
  serializer: Serializer = null)

Each parameter corresponds to one stage of the aggregation.

2. Parameters
1) createCombiner: V => C — creates the combiner within a partition. It takes the current value as its argument, may apply some extra processing (such as a type conversion), and returns it; this step acts as the initialization.
2) mergeValue: (C, V) => C — merges a value within a partition. It folds an element V into an existing combiner C (the one produced by createCombiner); this happens inside each partition.
3) mergeCombiners: (C, C) => C — merges combiners across partitions. It combines two C values; this happens between partitions.
4) partitioner — the partitioner for the resulting RDD; defaults to HashPartitioner.
5) mapSideCombine — whether to combine on the map side; defaults to true.

3. How it works
1) combineByKey iterates over every element of a partition, so each element's key is either one it has not seen before or one it has already seen in that partition.
2) For a new key, it calls createCombiner to build the initial accumulator (combiner) for that key.
3) For a key already seen while processing the current partition, it calls mergeValue to fold the new value into that key's existing accumulator.
4) After all partitions have been processed, mergeCombiners merges the per-partition accumulators of each key into the final result.

4. Example

val initialScores = Array((("1", "011"), 1), (("1", "012"), 1), (("2", "011"), 1), (("2", "013"), 1), (("2", "014"), 1))
val d1 = sc.parallelize(initialScores)

d1.map(x => (x._1._1, (x._1._2, 1))).combineByKey(
  (v) => (v),
  (acc: (String, Int), v) => (v._1 + ":" + acc._1, v._2 + acc._2),
  (acc1: (String, Int), acc2: (String, Int)) => (acc1._1 + ":" + acc2._1, acc1._2 + acc2._2)
).collect().foreach(println)

1) The map step converts each record into the form ("1", ("011", 1)).
2) (acc: (String, Int), v) => (v._1 + ":" + acc._1, v._2 + acc._2) is the within-partition merge function, so ("011", 1) corresponds to the (String, Int) accumulator type; it joins the first (string) components of values sharing a key with ":" and sums the second (count) components.
3) (acc1: (String, Int), acc2: (String, Int)) => (acc1._1 + ":" + acc2._1, acc1._2 + acc2._2) is the cross-partition merge, extending step 2 across partitions.
4) The final output is:
(1,(012:011,2))
(2,(014:013:011,3))
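As a complement to the example above, here is a second, self-contained sketch of combineByKey computing a per-key average; the object name, sample data, and local master setting are illustrative assumptions rather than part of the original article.

import org.apache.spark.sql.SparkSession

object CombineByKeyAvg {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[2]").appName("combineByKey-avg").getOrCreate()
    val sc = spark.sparkContext

    val scores = sc.parallelize(Seq(("a", 90), ("a", 70), ("b", 80), ("b", 60), ("b", 100)), 2)

    // Combiner type C = (sum, count)
    val avg = scores.combineByKey(
      (v: Int) => (v, 1),                                            // createCombiner: first value seen for a key in a partition
      (acc: (Int, Int), v: Int) => (acc._1 + v, acc._2 + 1),         // mergeValue: fold another value into the partition-local combiner
      (a: (Int, Int), b: (Int, Int)) => (a._1 + b._1, a._2 + b._2)   // mergeCombiners: merge combiners from different partitions
    ).mapValues { case (sum, count) => sum.toDouble / count }

    avg.collect().foreach(println)  // e.g. (a,80.0), (b,80.0)
    spark.stop()
  }
}

Because mergeValue runs on the map side, only one (sum, count) pair per key per partition crosses the shuffle, which is the same reason reduceByKey-style operators are preferred over groupByKey.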

shuffle

unsafeShuffleWriter


SortShuffleWriter

In example 3 above, an array is created in memory during the mapPartitions call and all of the partition's data is buffered in it. If the dataset is large and memory is tight, this leads to an OOM and the task fails. For cases like this, Spark's RDD API does not offer the kind of context-aware write methods that MapReduce has. In fact, 浪尖 suggests an approach that needs no buffering at all: define a custom iterator class, as in the following example:

class CustomIterator(iter: Iterator[Int]) extends Iterator[Int] {
  def hasNext: Boolean = iter.hasNext

  def next: Int = {
    val cur = iter.next
    cur * 3
  }
}

val result = a.mapPartitions(v => new CustomIterator(v))
println(result.collect().mkString(","))
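For context, a minimal self-contained run of the iterator above might look like the following; the object name and the sample RDD standing in for `a` are assumptions for illustration, and the CustomIterator class above is assumed to be on the classpath.

import org.apache.spark.sql.SparkSession

object CustomIteratorDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[2]").appName("custom-iterator").getOrCreate()
    val sc = spark.sparkContext

    // hypothetical input RDD standing in for `a` in the snippet above
    val a = sc.parallelize(1 to 10, 2)

    // The lazy iterator multiplies elements by 3 one at a time,
    // without buffering the whole partition in memory.
    val result = a.mapPartitions(v => new CustomIterator(v))
    println(result.collect().mkString(","))

    spark.stop()
  }
}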


BypassMergeSortShuffleWriter



全局ID

zipWithIndex()

The ordering is first by partition index and then by the order of items within each partition, so the first item of the first partition gets index 0 and the second partition starts right after the last index of the first. IDs start at 0 and are consecutive within a partition. This triggers a Spark job (when the RDD has more than one partition).

zipWithUniqueId()

Each partition gets an arithmetic sequence whose common difference is the number of partitions n and whose first value is the partition index (starting at 0); in the k-th partition the IDs are k, n+k, 2n+k, .... IDs start at 0 but are not consecutive within a partition. This does not trigger a Spark job.

1. Use Redis to generate auto-increment IDs.

Pros: Redis INCR gives atomic increments with no concurrency issues, and a Redis cluster comfortably meets the requirement.

Cons: every ID requires a network round trip between Spark and Redis, anywhere from 10-plus ms to several hundred ms, and it couples Spark to Redis: if Redis goes down, the Spark job breaks. We therefore dropped this option.
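For reference, one way to soften the round-trip cost described above is to reserve a whole block of IDs per partition with a single INCRBY call instead of one call per record. The sketch below is only an illustration under assumptions: it uses the Jedis client, a hypothetical counter key spark:global:id, and placeholder host/port values.

import org.apache.spark.rdd.RDD
import redis.clients.jedis.Jedis

// Sketch: reserve one contiguous block of IDs per partition with a single INCRBY,
// instead of one Redis round trip per record.
def addRedisIds(rdd: RDD[String]): RDD[(Long, String)] = {
  rdd.mapPartitions { iter =>
    val rows = iter.toArray                      // materialize the partition once to learn its size
    val jedis = new Jedis("redis-host", 6379)    // hypothetical host/port
    try {
      // INCRBY is atomic and returns the new upper bound of the reserved range
      val upper: Long = jedis.incrBy("spark:global:id", rows.length)
      val start = upper - rows.length + 1
      rows.iterator.zipWithIndex.map { case (row, i) => (start + i, row) }
    } finally {
      jedis.close()
    }
  }
}

This still buffers each partition in memory to count its rows, so it trades the per-record latency problem for the buffering concern discussed earlier, and it does not remove the dependency on Redis.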

2. Use a Spark SQL window function with withColumn to generate auto-increment IDs.

val newDf = dataFrame.withColumn("id", org.apache.spark.sql.functions.row_number().over(Window.partitionBy(batch).orderBy(index)))

Because the code above produces a new DataFrame with a single partition, it has to be converted back to an RDD and repartitioned:

val newRdd = newDf.rdd.repartition(10)
val df = session.createDataFrame(newRdd,schema)
The repartition step introduces a shuffle, so there is a performance cost; it could not meet our requirement for fast, high-volume loading, so we did not adopt this option either.

3. Use zipWithIndex to generate the ID while building the RDD[Row].


Globally unique auto-increment IDs

If the job runs repeatedly and the IDs must keep increasing across runs, maintain the offset in Redis and pass it as the offset argument when calling addUniqueIdColumn (defined below).

val tempRdd = rdd.zipWithIndex()
val record = tempRdd.map(x => {
  val strArray = x._1.split(",")
  val newArray = strArray.+:(x._2.toString)
  newArray
})

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number
import spark.implicits._   // for the $"lon" column syntax

// Load the data
val dataframe = spark.read.csv(inputFile).toDF("lon", "lat")

/**
 * Configure the window's partitioning and ordering. Because this is a global
 * ordering rather than a grouped one, partitionBy() is left empty; the sort
 * key has no special requirement and can be any column.
 */
val spec = Window.partitionBy().orderBy($"lon")
val df1 = dataframe.withColumn("id", row_number().over(spec))
df1.show()

import org.apache.spark.SparkException
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema
import org.apache.spark.sql.types.LongType
import org.apache.spark.sql.{DataFrame, Row, SparkSession}

object SparkCommon {

  /**
   * Adds a unique id column without changing the number of partitions and without a shuffle.
   *
   * @param offset                 starting value of the auto-increment id, default 0
   * @param isPartitionConsecutive whether ids are consecutive within a partition
   *                               (when false each partition is an arithmetic sequence);
   *                               the consecutive variant triggers a Spark job
   */
  def addUniqueIdColumn(sparkSession: SparkSession, df: DataFrame, uidKey: String,
                        offset: Long = 0, isPartitionConsecutive: Boolean = false): DataFrame = {
    val newSchema = df.head().schema.add(uidKey, LongType)
    val rdd = if (isPartitionConsecutive) df.rdd.zipWithIndex() else df.rdd.zipWithUniqueId()
    val result: RDD[Row] = rdd.map(e => Row.merge(e._1, Row(e._2 + offset)))
    sparkSession.createDataFrame(result, newSchema)
  }
}

import org.junit.{After, Before, Test}
import org.apache.spark.sql.{DataFrame, SparkSession}

class SparkCommonTest {

  @Test def testUid(): Unit = {
    val sparkSession = SparkSession.builder().master("local[2]").getOrCreate()
    var df: DataFrame = sparkSession.read.json("E:\\Data\\input\\JSON\\json4x12.txt")
    df = df.repartitionByRange(4, df("name"))
    SparkCommon.addUniqueIdColumn(sparkSession, df, "uid").sort("name").show()
    SparkCommon.addUniqueIdColumn(sparkSession, df, "pid", 12, true).sort("name").show()
    sparkSession.stop()
  }
}
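To connect the Redis-offset idea above with addUniqueIdColumn, a minimal sketch might look like the following; the counter key spark:uid:offset, the Jedis connection details, and the helper name are assumptions for illustration, not part of the original code.

import redis.clients.jedis.Jedis
import org.apache.spark.sql.{DataFrame, SparkSession}

// Sketch: keep the running offset in Redis so ids keep increasing across runs.
def addIdsAcrossRuns(spark: SparkSession, df: DataFrame): DataFrame = {
  val rowCount = df.count()                       // rows to be id'd in this run
  val jedis = new Jedis("redis-host", 6379)       // hypothetical host/port
  val upper: Long =
    try {
      jedis.incrBy("spark:uid:offset", rowCount)  // atomically reserves rowCount ids
    } finally {
      jedis.close()
    }
  val offset = upper - rowCount
  // Use the consecutive variant so exactly rowCount ids (offset .. offset + rowCount - 1) are assigned.
  SparkCommon.addUniqueIdColumn(spark, df, "uid", offset, isPartitionConsecutive = true)
}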
zipWithIndex

def zipWithIndex(): RDD[(T, Long)]

This function pairs each element of the RDD with that element's ID (index) within the RDD.

scala> var rdd2 = sc.makeRDD(Seq("A","B","R","D","F"),2)
rdd2: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[34] at makeRDD at :21

scala> rdd2.zipWithIndex().collect
res27: Array[(String, Long)] = Array((A,0), (B,1), (R,2), (D,3), (F,4))

zipWithUniqueId

def zipWithUniqueId(): RDD[(T, Long)]

This function pairs each element of the RDD with a unique Long ID. The IDs are generated as follows: the first element of each partition gets the partition index as its ID, and each subsequent element of a partition gets (previous element's ID) + (total number of partitions). See the example below:

scala> var rdd1 = sc.makeRDD(Seq("A","B","C","D","E","F"),2)
rdd1: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[44] at makeRDD at :21

// rdd1 has two partitions
scala> rdd1.zipWithUniqueId().collect
res32: Array[(String, Long)] = Array((A,0), (B,2), (C,4), (D,1), (E,3), (F,5))
// the total number of partitions is 2
// the first element of partition 1 gets ID 0, the first element of partition 2 gets ID 1
// the second element of partition 1 gets 0+2=2, the third gets 2+2=4
// the second element of partition 2 gets 1+2=3, the third gets 3+2=5

Problem: you want to iterate over a sequential collection while also having access to a loop counter, and, most importantly, you do not want to create and maintain that counter by hand.

Solution: use zipWithIndex or zip to create the counter automatically. Suppose you have a sequential collection days; you can use zipWithIndex to print each element together with its counter:

scala> val days = Array("Sunday", "Monday", "Tuesday", "Wednesday","Thursday", "Friday", "Saturday")
days: Array[String] = Array(Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday)

scala> days.zipWithIndex.foreach{case(day,count) => println(s"$count is $day")}
0 is Sunday
1 is Monday
2 is Tuesday
3 is Wednesday
4 is Thursday
5 is Friday
6 is Saturday
Similarly, you can use a for loop to print the counter together with the elements:

scala> for ((day, count) <- days.zipWithIndex) {
     |   println(s"$count is $day")
     | }
0 is Sunday
1 is Monday
2 is Tuesday
3 is Wednesday
4 is Thursday
5 is Friday
6 is Saturday
zipWithIndex counters always start at 0; if you want a different starting value, zip with a Stream instead:

scala> for ((day, count) <- days.zip(Stream from 1)) {
     |   println(s"$count is $day")
     | }
1 is Sunday
2 is Monday
3 is Tuesday
4 is Wednesday
5 is Thursday
6 is Friday
7 is Saturday
Discussion

When zipWithIndex is called on a sequential collection, it returns a sequential collection of tuples:

scala> val list = List("a", "b", "c")
list: List[String] = List(a, b, c)

scala> list.zipWithIndex
res3: List[(String, Int)] = List((a,0), (b,1), (c,2))

Because zipWithIndex builds a new sequential collection on top of an existing one, you can call view before calling zipWithIndex:

scala> val zwv = list.view.zipWithIndex
zwv: scala.collection.SeqView[(String, Int),Seq[_]] = SeqViewZ(...)

As the example shows, this creates a lazy view over the original List, so the tuple collection is not actually built until it is used; because of this, calling view before zipWithIndex is recommended. Both zip and zipWithIndex return a sequential collection of tuples, so the foreach above can also be written as follows, although it is slightly less readable than the version in the Solution:

scala> days.zipWithIndex.foreach(d => println(s"${d._2} is ${d._1}"))
0 is Sunday
1 is Monday
2 is Tuesday
3 is Wednesday
4 is Thursday
5 is Friday
6 is Saturday

As we saw in an earlier example, the counter can also be created with a for loop over a range:

scala> val fruits = Array("apple", "banana", "orange")
fruits: Array[String] = Array(apple, banana, orange)

scala> for (i <- 0 until fruits.size) println(s"element $i is ${fruits(i)}")
element 0 is apple
element 1 is banana
element 2 is orange

from pyspark.sql.types import StructType, StructField, LongType

def dfZipWithIndex(df, offset=1, colName="rowId"):
    '''
    Enumerates dataframe rows in native order, like rdd.zipWithIndex(),
    but on a dataframe, and preserves the schema.

    :param df: source dataframe
    :param offset: adjustment to zipWithIndex()'s index
    :param colName: name of the index column
    '''
    new_schema = StructType(
        [StructField(colName, LongType(), True)]  # new field added in front
        + df.schema.fields                        # previous schema
    )
    zipped_rdd = df.rdd.zipWithIndex()
    new_rdd = zipped_rdd.map(lambda args: [args[1] + offset] + list(args[0]))
    return spark.createDataFrame(new_rdd, new_schema)
