Writing a Spark Program in Scala to Compute the TopN Dwell Times of Mobile Users at Base Stations

1. Requirement: compute the TopN dwell times from cell base-station logs

Our phones can provide mobile communication because there are base stations all over the country. As soon as a phone is switched on, it tries to establish a connection with a nearby base station, and every connect and disconnect event is recorded in the logs of the carrier's base-station servers.

Although we do not know a user's exact position, the location of the base station lets us roughly infer the geographic area the user is in, and merchants can then use this location information for targeted advertising.

To keep things easy to follow, we simulated some mobile-user log data from the base stations. Each record has 4 fields: mobile number, timestamp, base station id, and connection type (1 = connect, 0 = disconnect).

User log for base station A (file 19735E1C66.log):

18688888888,20160327082400,16030401EAFB68F1E3CDF819735E1C66,1
18611132889,20160327082500,16030401EAFB68F1E3CDF819735E1C66,1
18688888888,20160327170000,16030401EAFB68F1E3CDF819735E1C66,0
18611132889,20160327180000,16030401EAFB68F1E3CDF819735E1C66,0

User log for base station B (file DDE7970F68.log):

18611132889,20160327075000,9F36407EAD0629FC166F14DDE7970F68,1
18688888888,20160327075100,9F36407EAD0629FC166F14DDE7970F68,1
18611132889,20160327081000,9F36407EAD0629FC166F14DDE7970F68,0
18688888888,20160327081300,9F36407EAD0629FC166F14DDE7970F68,0
18688888888,20160327175000,9F36407EAD0629FC166F14DDE7970F68,1
18611132889,20160327182000,9F36407EAD0629FC166F14DDE7970F68,1
18688888888,20160327220000,9F36407EAD0629FC166F14DDE7970F68,0
18611132889,20160327230000,9F36407EAD0629FC166F14DDE7970F68,0

User log for base station C (file E549D940E0.log):

18611132889,20160327081100,CC0710CC94ECC657A8561DE549D940E0,1
18688888888,20160327081200,CC0710CC94ECC657A8561DE549D940E0,1
18688888888,20160327081900,CC0710CC94ECC657A8561DE549D940E0,0
18611132889,20160327082000,CC0710CC94ECC657A8561DE549D940E0,0
18688888888,20160327171000,CC0710CC94ECC657A8561DE549D940E0,1
18688888888,20160327171600,CC0710CC94ECC657A8561DE549D940E0,0
18611132889,20160327180500,CC0710CC94ECC657A8561DE549D940E0,1
18611132889,20160327181500,CC0710CC94ECC657A8561DE549D940E0,0

Below is the base-station table (file loc_info.txt), with 4 fields: base station id, longitude, latitude, and signal radiation type (signal level, e.g. 2G, 3G, or 4G):

9F36407EAD0629FC166F14DDE7970F68,116.304864,40.050645,6
CC0710CC94ECC657A8561DE549D940E0,116.303955,40.041935,6
16030401EAFB68F1E3CDF819735E1C66,116.296302,40.032296,6

Based on the data above, compute for each phone number the 2 base-station locations (longitude and latitude) with the longest dwell time.

Approach:

(1) Read the log data and split the fields;
(2) Reshape the fields into tuples keyed by (mobile number, base station id), with the time as the value;
(3) Aggregate by key, accumulating the times (connect timestamps are negated during parsing, so the per-key sum is the total dwell time; see the sketch after this list);
(4) Re-key the data by base station id, with (mobile number, time) as the value, to prepare for the join with the station table;
(5) Read the base-station data and split the fields;
(6) Reshape the fields into tuples keyed by base station id, with the station's longitude and latitude as the value;
(7) Join the two tuple RDDs;
(8) Group the joined result by mobile number;
(9) Convert each group to a List, sort by time, reverse, and take the top 2;
(10) Write the result to HDFS.
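
A minimal sketch of steps (1)-(3), distilled from the full program in section 4 (the input path is hypothetical): connect records (type 1) contribute a negated timestamp and disconnect records a positive one, so summing per (mobile, station) key yields the dwell time.

import org.apache.spark.{SparkConf, SparkContext}

object DwellTimeSketch {
  def main(args: Array[String]) {
    val sc = new SparkContext(new SparkConf().setAppName("DwellTimeSketch").setMaster("local"))
    val dwellTimes = sc.textFile("mobile_logs")      // hypothetical input path
      .map(line => {
        val fields = line.split(",")                 // mobile, timestamp, station id, type
        // negate connect timestamps so the per-key sum equals the dwell time
        val signed = if (fields(3) == "1") -fields(1).toLong else fields(1).toLong
        ((fields(0), fields(2)), signed)             // ((mobile, station id), signed time)
      })
      .reduceByKey(_ + _)                            // ((mobile, station id), dwell time)
    dwellTimes.collect().foreach(println)
    sc.stop()
  }
}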

2. Prepare the test data

1) The mobile-user logs (the three .log files above):

2) The base-station data (the loc_info.txt file above):

3. The pom file

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.xuebusi</groupId>
    <artifactId>spark</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>1.7</maven.compiler.source>
        <maven.compiler.target>1.7</maven.compiler.target>
        <encoding>UTF-8</encoding>

        <!-- Centralized management of dependency versions -->
        <scala.version>2.10.6</scala.version>
        <spark.version>1.6.2</spark.version>
        <hadoop.version>2.6.4</hadoop.version>
    </properties>

    <dependencies>
        <dependency>
            <!-- Scala standard library -->
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <dependency>
            <!-- Spark core -->
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.10</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <dependency>
            <!-- Hadoop client, used to access HDFS -->
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
    </dependencies>

    <build>
        <pluginManagement>

            <plugins>
                <!-- scala-maven-plugin: Maven plugin that compiles Scala sources -->
                <plugin>
                    <groupId>net.alchim31.maven</groupId>
                    <artifactId>scala-maven-plugin</artifactId>
                    <version>3.2.2</version>
                </plugin>
                <!-- maven-compiler-plugin: Maven plugin that compiles Java sources -->
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <version>3.5.1</version>
                </plugin>
            </plugins>
        </pluginManagement>
        <plugins>
            <!-- Configuration for the Scala compiler plugin -->
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <executions>
                    <execution>
                        <id>scala-compile-first</id>
                        <phase>process-resources</phase>
                        <goals>
                            <goal>add-source</goal>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>scala-test-compile</id>
                        <phase>process-test-resources</phase>
                        <goals>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <!-- Configuration for the Java compiler plugin -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <executions>
                    <execution>
                        <phase>compile</phase>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <!-- maven-shade-plugin: Maven plugin that builds the fat jar -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>2.4.3</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <filters>
                                <filter>
                                    <artifact>*:*</artifact>
                                    <excludes>
                                        <exclude>META-INF/*.SF</exclude>
                                        <exclude>META-INF/*.DSA</exclude>
                                        <exclude>META-INF/*.RSA</exclude>
                                    </excludes>
                                </filter>
                            </filters>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

</project>

4. Write the Scala program

Create an object named MobileLocation under src/main/scala/:

Complete code of MobileLocation.scala:

package com.xuebusi.spark

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

/**
 * Compute, for each phone number, the 2 base stations with the longest dwell time.
 * Created by SYJ on 2017/1/24.
 */
object MobileLocation {
  def main(args: Array[String]) {
    /**
     * Create the SparkConf.
     *
     * A few notes:
     * To make debugging in IDEA convenient, the master is set to
     * local mode here, i.e. the Spark program runs on the local
     * machine. One problem with this approach: when reading data
     * from HDFS on Linux while the program runs on Windows, an
     * exception may be thrown, because reading the data relies on
     * some Windows native libraries.
     *
     * The same problem occurs when running MapReduce programs from
     * Eclipse on Windows; it does not occur on Linux or macOS.
     *
     * Hadoop compression and decompression rely on native libraries
     * written in C or C++, and such libraries are not cross-platform,
     * so to debug MapReduce programs on Windows the native libraries
     * must be installed first.
     *
     * A practical workaround is to install a Linux virtual machine
     * (with a desktop environment) on Windows and debug there.
     */
    // Run locally
    val conf: SparkConf = new SparkConf().setAppName("MobileLocation").setMaster("local")
    // Create the SparkConf for running on a cluster (the default)
    //val conf: SparkConf = new SparkConf().setAppName("MobileLocation")

    // Create the SparkContext
    val sc: SparkContext = new SparkContext(conf)

    // Read the log data from the file system
    val lines: RDD[String] = sc.textFile(args(0))

    /**
     * Split the data.
     * The commented-out line below chains two map calls; this is not
     * recommended, since the same work can be done in a single map.
     */
    //lines.map(_.split(",")).map(arr => (arr(0), arr(1).toLong, arr(2), arr(3)))

    // Split each line and assemble a tuple in a single map
    val splited = lines.map(line => {
      val fields: Array[String] = line.split(",")
      val mobile: String = fields(0)
      //val time: Long = fields(1).toLong
      val lac: String = fields(2)
      val tp: String = fields(3)
      // Negate connect timestamps (type "1") so that summing per key gives the dwell time
      val time: Long = if (tp == "1") -fields(1).toLong else fields(1).toLong
      // Assemble the fields into a tuple and return it:
      // ((mobile number, station id), time)
      ((mobile, lac), time)
    })

    // Aggregate by key, accumulating the times
    val reduced: RDD[((String, String), Long)] = splited.reduceByKey(_ + _)

    // Rearrange into tuples keyed by station id, for the upcoming join with the station table
    val lacAndMobieTime = reduced.map(x => {
      // (station id, (mobile number, time))
      (x._1._2, (x._1._1, x._2))
    })

    // Read the base-station data
    val lacInfo: RDD[String] = sc.textFile(args(1))

    // Split the station data in preparation for the join
    val splitedLacInfo = lacInfo.map(line => {
      val fields: Array[String] = line.split(",")
      val id: String = fields(0) // station id
      val x: String = fields(1)  // station longitude
      val y: String = fields(2)  // station latitude
      /**
       * Only key-value data can be joined, so return a tuple
       * keyed by station id with the coordinates as the value:
       *   (station id, (longitude, latitude))
       */
      (id, (x, y))
    })

    // join
    // Returns: RDD[(station id, ((mobile number, time), (longitude, latitude)))]
    val joined: RDD[(String, ((String, Long), (String, String)))] = lacAndMobieTime.join(splitedLacInfo)
    //ArrayBuffer((CC0710CC94ECC657A8561DE549D940E0,((18688888888,1300),(116.303955,40.041935))), (CC0710CC94ECC657A8561DE549D940E0,((18611132889,1900),(116.303955,40.041935))), (16030401EAFB68F1E3CDF819735E1C66,((18688888888,87600),(116.296302,40.032296))), (16030401EAFB68F1E3CDF819735E1C66,((18611132889,97500),(116.296302,40.032296))), (9F36407EAD0629FC166F14DDE7970F68,((18611132889,54000),(116.304864,40.050645))), (9F36407EAD0629FC166F14DDE7970F68,((18688888888,51200),(116.304864,40.050645))))
    //System.out.println(joined.collect().toBuffer)

    // Group by mobile number
    val groupedByMobile = joined.groupBy(_._2._1._1)
    //ArrayBuffer((18688888888,CompactBuffer((CC0710CC94ECC657A8561DE549D940E0,((18688888888,1300),(116.303955,40.041935))), (16030401EAFB68F1E3CDF819735E1C66,((18688888888,87600),(116.296302,40.032296))), (9F36407EAD0629FC166F14DDE7970F68,((18688888888,51200),(116.304864,40.050645))))), (18611132889,CompactBuffer((CC0710CC94ECC657A8561DE549D940E0,((18611132889,1900),(116.303955,40.041935))), (16030401EAFB68F1E3CDF819735E1C66,((18611132889,97500),(116.296302,40.032296))), (9F36407EAD0629FC166F14DDE7970F68,((18611132889,54000),(116.304864,40.050645))))))
    //System.out.println(groupedByMobile.collect().toBuffer)

    /**
     * Convert each group to a List, sort by time, reverse, then take the top 2
     */
    val result = groupedByMobile.mapValues(_.toList.sortBy(_._2._1._2).reverse.take(2))
    //ArrayBuffer((18688888888,List((16030401EAFB68F1E3CDF819735E1C66,((18688888888,87600),(116.296302,40.032296))), (9F36407EAD0629FC166F14DDE7970F68,((18688888888,51200),(116.304864,40.050645))))), (18611132889,List((16030401EAFB68F1E3CDF819735E1C66,((18611132889,97500),(116.296302,40.032296))), (9F36407EAD0629FC166F14DDE7970F68,((18611132889,54000),(116.304864,40.050645))))))
    //System.out.println(result.collect().toBuffer)

    // Write the result to the file system
    result.saveAsTextFile(args(2))

    // Release resources
    sc.stop()
  }
}

5. Local testing

Edit the run configuration:

Click the "+" button to add a configuration:

Select "Application":

Select the class containing the main method:

Name the configuration and, in the Program arguments box, enter 3 arguments: two input directories and one output directory:
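
For example, the three arguments might look like this (hypothetical local paths; the output directory must not exist yet, since saveAsTextFile fails if it does):

C:\mobile\input\mobile_logs C:\mobile\input\loc_logs C:\mobile\output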

Run the program locally:

An insufficient-memory exception is thrown:

17/01/24 17:17:58 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: System memory 259522560 must be at least 4.718592E8. Please use a larger heap size.
	at org.apache.spark.memory.UnifiedMemoryManager$.getMaxMemory(UnifiedMemoryManager.scala:198)
	at org.apache.spark.memory.UnifiedMemoryManager$.apply(UnifiedMemoryManager.scala:180)
	at org.apache.spark.SparkEnv$.create(SparkEnv.scala:354)
	at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:193)
	at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:288)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:457)
	at com.xuebusi.spark.MobileLocation$.main(MobileLocation.scala:37)
	at com.xuebusi.spark.MobileLocation.main(MobileLocation.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)

This can be fixed by adding one line of code to the program:
conf.set("spark.testing.memory", "536870912") // any value larger than 512 MB works
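
A minimal sketch of where the setting goes; it must be applied before the SparkContext is constructed:

val conf: SparkConf = new SparkConf().setAppName("MobileLocation").setMaster("local")
conf.set("spark.testing.memory", "536870912") // any value above 512 MB works
val sc: SparkContext = new SparkContext(conf)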

The approach above is not very flexible, though, so here we pass a JVM parameter instead. Edit the run configuration and add "-Xmx512m" or "-Dspark.testing.memory=536870912" to the VM options box, i.e. 512 MB of memory:

Program output log and results:

6. Submit to the Spark cluster and run

Running locally and submitting to the cluster differ slightly in code; one line needs to change (see the sketch below):
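
Concretely, the change is to stop hard-coding the local master and let spark-submit supply it; the cluster-mode line is already present (commented out) in the program above:

// Local debugging:
//val conf: SparkConf = new SparkConf().setAppName("MobileLocation").setMaster("local")
// Cluster mode: omit setMaster; the master is passed via spark-submit --master
val conf: SparkConf = new SparkConf().setAppName("MobileLocation")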

Package the finished program into a jar using the Maven plugin:
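
With the shade plugin configured in the pom above, a standard Maven build produces the fat jar (run from the project root; the jar lands in target/ by default):

mvn clean package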

Upload the jar to the Spark cluster server:
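
One way to copy it over, assuming the default Maven output path and the /root/ destination used by the submit command below:

scp target/spark-1.0-SNAPSHOT.jar root@hadoop01:/root/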

Also upload the test data to the HDFS cluster:
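
For example, using the HDFS paths the submit command below expects (directory layout assumed; the data files are in the current directory):

hdfs dfs -mkdir -p /mobile/input/mobile_logs /mobile/input/loc_logs
hdfs dfs -put 19735E1C66.log DDE7970F68.log E549D940E0.log /mobile/input/mobile_logs
hdfs dfs -put loc_info.txt /mobile/input/loc_logs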

Run the submit command:

/root/apps/spark/bin/spark-submit \
--master spark://hadoop01:7077,hadoop02:7077 \
--executor-memory 512m \
--total-executor-cores 7 \
--class com.xuebusi.spark.MobileLocation \
/root/spark-1.0-SNAPSHOT.jar \
hdfs://hadoop01:9000/mobile/input/mobile_logs \
hdfs://hadoop01:9000/mobile/input/loc_logs \
hdfs://hadoop01:9000/mobile/output

7. Output log from the program run

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/01/24 11:27:48 INFO SparkContext: Running Spark version 1.6.2
17/01/24 11:27:50 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/01/24 11:27:52 INFO SecurityManager: Changing view acls to: root
17/01/24 11:27:52 INFO SecurityManager: Changing modify acls to: root
17/01/24 11:27:52 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
17/01/24 11:27:54 INFO Utils: Successfully started service 'sparkDriver' on port 41762.
17/01/24 11:27:55 INFO Slf4jLogger: Slf4jLogger started
17/01/24 11:27:55 INFO Remoting: Starting remoting
17/01/24 11:27:55 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.71.11:49399]
17/01/24 11:27:55 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 49399.
17/01/24 11:27:55 INFO SparkEnv: Registering MapOutputTracker
17/01/24 11:27:55 INFO SparkEnv: Registering BlockManagerMaster
17/01/24 11:27:55 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-6c4988a1-3ad2-49ad-8cc2-02390384792b
17/01/24 11:27:55 INFO MemoryStore: MemoryStore started with capacity 517.4 MB
17/01/24 11:27:56 INFO SparkEnv: Registering OutputCommitCoordinator
17/01/24 11:28:02 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/01/24 11:28:02 INFO SparkUI: Started SparkUI at http://192.168.71.11:4040
17/01/24 11:28:02 INFO HttpFileServer: HTTP File server directory is /tmp/spark-eaa76419-9ddc-43c1-ad0e-cc95c0dede7e/httpd-34193396-8d9c-4ea7-9fca-5caf8c712f86
17/01/24 11:28:02 INFO HttpServer: Starting HTTP Server
17/01/24 11:28:02 INFO Utils: Successfully started service 'HTTP file server' on port 58135.
17/01/24 11:28:06 INFO SparkContext: Added JAR file:/root/spark-1.0-SNAPSHOT.jar at http://192.168.71.11:58135/jars/spark-1.0-SNAPSHOT.jar with timestamp 1485286086076
17/01/24 11:28:06 INFO Executor: Starting executor ID driver on host localhost
17/01/24 11:28:06 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 55934.
17/01/24 11:28:06 INFO NettyBlockTransferService: Server created on 55934
17/01/24 11:28:06 INFO BlockManagerMaster: Trying to register BlockManager
17/01/24 11:28:06 INFO BlockManagerMasterEndpoint: Registering block manager localhost:55934 with 517.4 MB RAM, BlockManagerId(driver, localhost, 55934)
17/01/24 11:28:06 INFO BlockManagerMaster: Registered BlockManager
17/01/24 11:28:09 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 153.6 KB, free 153.6 KB)
17/01/24 11:28:09 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 13.9 KB, free 167.5 KB)
17/01/24 11:28:09 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:55934 (size: 13.9 KB, free: 517.4 MB)
17/01/24 11:28:09 INFO SparkContext: Created broadcast 0 from textFile at MobileLocation.scala:40
17/01/24 11:28:12 INFO FileInputFormat: Total input paths to process : 3
17/01/24 11:28:12 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 86.4 KB, free 253.9 KB)
17/01/24 11:28:13 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 19.3 KB, free 273.2 KB)
17/01/24 11:28:13 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:55934 (size: 19.3 KB, free: 517.4 MB)
17/01/24 11:28:13 INFO SparkContext: Created broadcast 1 from textFile at MobileLocation.scala:73
17/01/24 11:28:13 INFO FileInputFormat: Total input paths to process : 1
17/01/24 11:28:13 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
17/01/24 11:28:13 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
17/01/24 11:28:13 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
17/01/24 11:28:13 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
17/01/24 11:28:13 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
17/01/24 11:28:14 INFO SparkContext: Starting job: saveAsTextFile at MobileLocation.scala:110
17/01/24 11:28:14 INFO DAGScheduler: Registering RDD 2 (map at MobileLocation.scala:50)
17/01/24 11:28:14 INFO DAGScheduler: Registering RDD 7 (map at MobileLocation.scala:76)
17/01/24 11:28:14 INFO DAGScheduler: Registering RDD 4 (map at MobileLocation.scala:67)
17/01/24 11:28:14 INFO DAGScheduler: Registering RDD 11 (groupBy at MobileLocation.scala:98)
17/01/24 11:28:14 INFO DAGScheduler: Got job 0 (saveAsTextFile at MobileLocation.scala:110) with 3 output partitions
17/01/24 11:28:14 INFO DAGScheduler: Final stage: ResultStage 4 (saveAsTextFile at MobileLocation.scala:110)
17/01/24 11:28:14 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 3)
17/01/24 11:28:14 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 3)
17/01/24 11:28:14 INFO DAGScheduler: Submitting ShuffleMapStage 1 (MapPartitionsRDD[7] at map at MobileLocation.scala:76), which has no missing parents
17/01/24 11:28:14 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 3.8 KB, free 277.0 KB)
17/01/24 11:28:14 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 2.2 KB, free 279.2 KB)
17/01/24 11:28:14 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:55934 (size: 2.2 KB, free: 517.4 MB)
17/01/24 11:28:14 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1006
17/01/24 11:28:14 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1 (MapPartitionsRDD[7] at map at MobileLocation.scala:76)
17/01/24 11:28:14 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
17/01/24 11:28:14 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[2] at map at MobileLocation.scala:50), which has no missing parents
17/01/24 11:28:14 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 3.9 KB, free 283.1 KB)
17/01/24 11:28:14 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 2.2 KB, free 285.3 KB)
17/01/24 11:28:14 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 0, localhost, partition 0,ANY, 2210 bytes)
17/01/24 11:28:14 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on localhost:55934 (size: 2.2 KB, free: 517.4 MB)
17/01/24 11:28:14 INFO SparkContext: Created broadcast 3 from broadcast at DAGScheduler.scala:1006
17/01/24 11:28:14 INFO DAGScheduler: Submitting 3 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[2] at map at MobileLocation.scala:50)
17/01/24 11:28:14 INFO TaskSchedulerImpl: Adding task set 0.0 with 3 tasks
17/01/24 11:28:14 INFO Executor: Running task 0.0 in stage 1.0 (TID 0)
17/01/24 11:28:14 INFO Executor: Fetching http://192.168.71.11:58135/jars/spark-1.0-SNAPSHOT.jar with timestamp 1485286086076
17/01/24 11:28:15 INFO Utils: Fetching http://192.168.71.11:58135/jars/spark-1.0-SNAPSHOT.jar to /tmp/spark-eaa76419-9ddc-43c1-ad0e-cc95c0dede7e/userFiles-1790c4be-0a0d-45fd-91f9-c66c49e765a5/fetchFileTemp1508611941727652704.tmp
17/01/24 11:28:19 INFO Executor: Adding file:/tmp/spark-eaa76419-9ddc-43c1-ad0e-cc95c0dede7e/userFiles-1790c4be-0a0d-45fd-91f9-c66c49e765a5/spark-1.0-SNAPSHOT.jar to class loader
17/01/24 11:28:19 INFO HadoopRDD: Input split: hdfs://hadoop01:9000/mobile/input/loc_logs/loc_info.txt:0+171
17/01/24 11:28:20 INFO Executor: Finished task 0.0 in stage 1.0 (TID 0). 2255 bytes result sent to driver
17/01/24 11:28:20 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 1, localhost, partition 0,ANY, 2215 bytes)
17/01/24 11:28:20 INFO Executor: Running task 0.0 in stage 0.0 (TID 1)
17/01/24 11:28:20 INFO HadoopRDD: Input split: hdfs://hadoop01:9000/mobile/input/mobile_logs/19735E1C66.log:0+248
17/01/24 11:28:20 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 0) in 5593 ms on localhost (1/1)
17/01/24 11:28:20 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
17/01/24 11:28:20 INFO DAGScheduler: ShuffleMapStage 1 (map at MobileLocation.scala:76) finished in 5.716 s
17/01/24 11:28:20 INFO DAGScheduler: looking for newly runnable stages
17/01/24 11:28:20 INFO DAGScheduler: running: Set(ShuffleMapStage 0)
17/01/24 11:28:20 INFO DAGScheduler: waiting: Set(ShuffleMapStage 2, ShuffleMapStage 3, ResultStage 4)
17/01/24 11:28:20 INFO DAGScheduler: failed: Set()
17/01/24 11:28:20 INFO Executor: Finished task 0.0 in stage 0.0 (TID 1). 2255 bytes result sent to driver
17/01/24 11:28:20 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 2, localhost, partition 1,ANY, 2215 bytes)
17/01/24 11:28:20 INFO Executor: Running task 1.0 in stage 0.0 (TID 2)
17/01/24 11:28:20 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 1) in 401 ms on localhost (1/3)
17/01/24 11:28:20 INFO HadoopRDD: Input split: hdfs://hadoop01:9000/mobile/input/mobile_logs/DDE7970F68.log:0+496
17/01/24 11:28:20 INFO Executor: Finished task 1.0 in stage 0.0 (TID 2). 2255 bytes result sent to driver
17/01/24 11:28:20 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 3, localhost, partition 2,ANY, 2215 bytes)
17/01/24 11:28:20 INFO Executor: Running task 2.0 in stage 0.0 (TID 3)
17/01/24 11:28:20 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 2) in 191 ms on localhost (2/3)
17/01/24 11:28:20 INFO HadoopRDD: Input split: hdfs://hadoop01:9000/mobile/input/mobile_logs/E549D940E0.log:0+496
17/01/24 11:28:20 INFO Executor: Finished task 2.0 in stage 0.0 (TID 3). 2255 bytes result sent to driver
17/01/24 11:28:20 INFO DAGScheduler: ShuffleMapStage 0 (map at MobileLocation.scala:50) finished in 6.045 s
17/01/24 11:28:20 INFO DAGScheduler: looking for newly runnable stages
17/01/24 11:28:20 INFO DAGScheduler: running: Set()
17/01/24 11:28:20 INFO DAGScheduler: waiting: Set(ShuffleMapStage 2, ShuffleMapStage 3, ResultStage 4)
17/01/24 11:28:20 INFO DAGScheduler: failed: Set()
17/01/24 11:28:20 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 3) in 176 ms on localhost (3/3)
17/01/24 11:28:20 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
17/01/24 11:28:20 INFO DAGScheduler: Submitting ShuffleMapStage 2 (MapPartitionsRDD[4] at map at MobileLocation.scala:67), which has no missing parents
17/01/24 11:28:21 INFO MemoryStore: Block broadcast_4 stored as values in memory (estimated size 2.9 KB, free 288.3 KB)
17/01/24 11:28:21 INFO MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 1800.0 B, free 290.0 KB)
17/01/24 11:28:21 INFO BlockManagerInfo: Added broadcast_4_piece0 in memory on localhost:55934 (size: 1800.0 B, free: 517.4 MB)
17/01/24 11:28:21 INFO SparkContext: Created broadcast 4 from broadcast at DAGScheduler.scala:1006
17/01/24 11:28:21 INFO DAGScheduler: Submitting 3 missing tasks from ShuffleMapStage 2 (MapPartitionsRDD[4] at map at MobileLocation.scala:67)
17/01/24 11:28:21 INFO TaskSchedulerImpl: Adding task set 2.0 with 3 tasks
17/01/24 11:28:21 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 4, localhost, partition 0,NODE_LOCAL, 1947 bytes)
17/01/24 11:28:21 INFO Executor: Running task 0.0 in stage 2.0 (TID 4)
17/01/24 11:28:21 INFO ShuffleBlockFetcherIterator: Getting 3 non-empty blocks out of 3 blocks
17/01/24 11:28:21 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 17 ms
17/01/24 11:28:21 INFO Executor: Finished task 0.0 in stage 2.0 (TID 4). 1376 bytes result sent to driver
17/01/24 11:28:21 INFO TaskSetManager: Starting task 1.0 in stage 2.0 (TID 5, localhost, partition 1,NODE_LOCAL, 1947 bytes)
17/01/24 11:28:21 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 4) in 352 ms on localhost (1/3)
17/01/24 11:28:21 INFO Executor: Running task 1.0 in stage 2.0 (TID 5)
17/01/24 11:28:21 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 3 blocks
17/01/24 11:28:21 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 6 ms
17/01/24 11:28:21 INFO Executor: Finished task 1.0 in stage 2.0 (TID 5). 1376 bytes result sent to driver
17/01/24 11:28:21 INFO TaskSetManager: Starting task 2.0 in stage 2.0 (TID 6, localhost, partition 2,NODE_LOCAL, 1947 bytes)
17/01/24 11:28:21 INFO Executor: Running task 2.0 in stage 2.0 (TID 6)
17/01/24 11:28:21 INFO TaskSetManager: Finished task 1.0 in stage 2.0 (TID 5) in 190 ms on localhost (2/3)
17/01/24 11:28:21 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 3 blocks
17/01/24 11:28:21 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
17/01/24 11:28:21 INFO Executor: Finished task 2.0 in stage 2.0 (TID 6). 1376 bytes result sent to driver
17/01/24 11:28:21 INFO DAGScheduler: ShuffleMapStage 2 (map at MobileLocation.scala:67) finished in 0.673 s
17/01/24 11:28:21 INFO DAGScheduler: looking for newly runnable stages
17/01/24 11:28:21 INFO DAGScheduler: running: Set()
17/01/24 11:28:21 INFO DAGScheduler: waiting: Set(ShuffleMapStage 3, ResultStage 4)
17/01/24 11:28:21 INFO DAGScheduler: failed: Set()
17/01/24 11:28:21 INFO TaskSetManager: Finished task 2.0 in stage 2.0 (TID 6) in 162 ms on localhost (3/3)
17/01/24 11:28:21 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
17/01/24 11:28:21 INFO DAGScheduler: Submitting ShuffleMapStage 3 (MapPartitionsRDD[11] at groupBy at MobileLocation.scala:98), which has no missing parents
17/01/24 11:28:21 INFO MemoryStore: Block broadcast_5 stored as values in memory (estimated size 4.1 KB, free 294.2 KB)
17/01/24 11:28:22 INFO MemoryStore: Block broadcast_5_piece0 stored as bytes in memory (estimated size 2.1 KB, free 296.2 KB)
17/01/24 11:28:22 INFO BlockManagerInfo: Added broadcast_5_piece0 in memory on localhost:55934 (size: 2.1 KB, free: 517.4 MB)
17/01/24 11:28:22 INFO SparkContext: Created broadcast 5 from broadcast at DAGScheduler.scala:1006
17/01/24 11:28:22 INFO DAGScheduler: Submitting 3 missing tasks from ShuffleMapStage 3 (MapPartitionsRDD[11] at groupBy at MobileLocation.scala:98)
17/01/24 11:28:22 INFO TaskSchedulerImpl: Adding task set 3.0 with 3 tasks
17/01/24 11:28:22 INFO TaskSetManager: Starting task 0.0 in stage 3.0 (TID 7, localhost, partition 0,PROCESS_LOCAL, 2020 bytes)
17/01/24 11:28:22 INFO Executor: Running task 0.0 in stage 3.0 (TID 7)
17/01/24 11:28:22 INFO ShuffleBlockFetcherIterator: Getting 3 non-empty blocks out of 3 blocks
17/01/24 11:28:22 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
17/01/24 11:28:22 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
17/01/24 11:28:22 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
17/01/24 11:28:22 INFO Executor: Finished task 0.0 in stage 3.0 (TID 7). 1376 bytes result sent to driver
17/01/24 11:28:22 INFO TaskSetManager: Starting task 1.0 in stage 3.0 (TID 8, localhost, partition 1,PROCESS_LOCAL, 2020 bytes)
17/01/24 11:28:22 INFO Executor: Running task 1.0 in stage 3.0 (TID 8)
17/01/24 11:28:22 INFO TaskSetManager: Finished task 0.0 in stage 3.0 (TID 7) in 290 ms on localhost (1/3)
17/01/24 11:28:22 INFO ShuffleBlockFetcherIterator: Getting 3 non-empty blocks out of 3 blocks
17/01/24 11:28:22 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 6 ms
17/01/24 11:28:22 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
17/01/24 11:28:22 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
17/01/24 11:28:22 INFO Executor: Finished task 1.0 in stage 3.0 (TID 8). 1376 bytes result sent to driver
17/01/24 11:28:22 INFO TaskSetManager: Starting task 2.0 in stage 3.0 (TID 9, localhost, partition 2,PROCESS_LOCAL, 2020 bytes)
17/01/24 11:28:22 INFO Executor: Running task 2.0 in stage 3.0 (TID 9)
17/01/24 11:28:22 INFO TaskSetManager: Finished task 1.0 in stage 3.0 (TID 8) in 191 ms on localhost (2/3)
17/01/24 11:28:22 INFO ShuffleBlockFetcherIterator: Getting 3 non-empty blocks out of 3 blocks
17/01/24 11:28:22 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
17/01/24 11:28:22 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
17/01/24 11:28:22 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 3 ms
17/01/24 11:28:22 INFO Executor: Finished task 2.0 in stage 3.0 (TID 9). 1376 bytes result sent to driver
17/01/24 11:28:22 INFO DAGScheduler: ShuffleMapStage 3 (groupBy at MobileLocation.scala:98) finished in 0.606 s
17/01/24 11:28:22 INFO DAGScheduler: looking for newly runnable stages
17/01/24 11:28:22 INFO DAGScheduler: running: Set()
17/01/24 11:28:22 INFO DAGScheduler: waiting: Set(ResultStage 4)
17/01/24 11:28:22 INFO DAGScheduler: failed: Set()
17/01/24 11:28:22 INFO TaskSetManager: Finished task 2.0 in stage 3.0 (TID 9) in 172 ms on localhost (3/3)
17/01/24 11:28:22 INFO TaskSchedulerImpl: Removed TaskSet 3.0, whose tasks have all completed, from pool
17/01/24 11:28:22 INFO DAGScheduler: Submitting ResultStage 4 (MapPartitionsRDD[14] at saveAsTextFile at MobileLocation.scala:110), which has no missing parents
17/01/24 11:28:23 INFO MemoryStore: Block broadcast_6 stored as values in memory (estimated size 66.4 KB, free 362.6 KB)
17/01/24 11:28:23 INFO MemoryStore: Block broadcast_6_piece0 stored as bytes in memory (estimated size 23.0 KB, free 385.6 KB)
17/01/24 11:28:24 INFO BlockManagerInfo: Added broadcast_6_piece0 in memory on localhost:55934 (size: 23.0 KB, free: 517.3 MB)
17/01/24 11:28:24 INFO SparkContext: Created broadcast 6 from broadcast at DAGScheduler.scala:1006
17/01/24 11:28:24 INFO DAGScheduler: Submitting 3 missing tasks from ResultStage 4 (MapPartitionsRDD[14] at saveAsTextFile at MobileLocation.scala:110)
17/01/24 11:28:24 INFO TaskSchedulerImpl: Adding task set 4.0 with 3 tasks
17/01/24 11:28:24 INFO TaskSetManager: Starting task 0.0 in stage 4.0 (TID 10, localhost, partition 0,NODE_LOCAL, 1958 bytes)
17/01/24 11:28:24 INFO Executor: Running task 0.0 in stage 4.0 (TID 10)
17/01/24 11:28:24 INFO ShuffleBlockFetcherIterator: Getting 3 non-empty blocks out of 3 blocks
17/01/24 11:28:24 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 3 ms
17/01/24 11:28:26 INFO FileOutputCommitter: Saved output of task 'attempt_201701241128_0004_m_000000_10' to hdfs://hadoop01:9000/mobile/output/_temporary/0/task_201701241128_0004_m_000000
17/01/24 11:28:26 INFO SparkHadoopMapRedUtil: attempt_201701241128_0004_m_000000_10: Committed
17/01/24 11:28:26 INFO Executor: Finished task 0.0 in stage 4.0 (TID 10). 2080 bytes result sent to driver
17/01/24 11:28:26 INFO TaskSetManager: Starting task 1.0 in stage 4.0 (TID 11, localhost, partition 1,NODE_LOCAL, 1958 bytes)
17/01/24 11:28:26 INFO Executor: Running task 1.0 in stage 4.0 (TID 11)
17/01/24 11:28:26 INFO TaskSetManager: Finished task 0.0 in stage 4.0 (TID 10) in 1823 ms on localhost (1/3)
17/01/24 11:28:26 INFO ShuffleBlockFetcherIterator: Getting 3 non-empty blocks out of 3 blocks
17/01/24 11:28:26 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
17/01/24 11:28:27 INFO BlockManagerInfo: Removed broadcast_2_piece0 on localhost:55934 in memory (size: 2.2 KB, free: 517.3 MB)
17/01/24 11:28:27 INFO BlockManagerInfo: Removed broadcast_5_piece0 on localhost:55934 in memory (size: 2.1 KB, free: 517.3 MB)
17/01/24 11:28:27 INFO BlockManagerInfo: Removed broadcast_4_piece0 on localhost:55934 in memory (size: 1800.0 B, free: 517.3 MB)
17/01/24 11:28:27 INFO BlockManagerInfo: Removed broadcast_3_piece0 on localhost:55934 in memory (size: 2.2 KB, free: 517.4 MB)
17/01/24 11:28:27 INFO FileOutputCommitter: Saved output of task 'attempt_201701241128_0004_m_000001_11' to hdfs://hadoop01:9000/mobile/output/_temporary/0/task_201701241128_0004_m_000001
17/01/24 11:28:27 INFO SparkHadoopMapRedUtil: attempt_201701241128_0004_m_000001_11: Committed
17/01/24 11:28:27 INFO Executor: Finished task 1.0 in stage 4.0 (TID 11). 2080 bytes result sent to driver
17/01/24 11:28:27 INFO TaskSetManager: Starting task 2.0 in stage 4.0 (TID 12, localhost, partition 2,NODE_LOCAL, 1958 bytes)
17/01/24 11:28:27 INFO Executor: Running task 2.0 in stage 4.0 (TID 12)
17/01/24 11:28:27 INFO TaskSetManager: Finished task 1.0 in stage 4.0 (TID 11) in 1675 ms on localhost (2/3)
17/01/24 11:28:27 INFO ShuffleBlockFetcherIterator: Getting 3 non-empty blocks out of 3 blocks
17/01/24 11:28:27 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
17/01/24 11:28:28 INFO FileOutputCommitter: Saved output of task 'attempt_201701241128_0004_m_000002_12' to hdfs://hadoop01:9000/mobile/output/_temporary/0/task_201701241128_0004_m_000002
17/01/24 11:28:28 INFO SparkHadoopMapRedUtil: attempt_201701241128_0004_m_000002_12: Committed
17/01/24 11:28:28 INFO Executor: Finished task 2.0 in stage 4.0 (TID 12). 2080 bytes result sent to driver
17/01/24 11:28:28 INFO DAGScheduler: ResultStage 4 (saveAsTextFile at MobileLocation.scala:110) finished in 3.750 s
17/01/24 11:28:28 INFO TaskSetManager: Finished task 2.0 in stage 4.0 (TID 12) in 288 ms on localhost (3/3)
17/01/24 11:28:28 INFO TaskSchedulerImpl: Removed TaskSet 4.0, whose tasks have all completed, from pool
17/01/24 11:28:28 INFO DAGScheduler: Job 0 finished: saveAsTextFile at MobileLocation.scala:110, took 14.029200 s
17/01/24 11:28:28 INFO SparkUI: Stopped Spark web UI at http://192.168.71.11:4040
17/01/24 11:28:28 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/01/24 11:28:28 INFO MemoryStore: MemoryStore cleared
17/01/24 11:28:28 INFO BlockManager: BlockManager stopped
17/01/24 11:28:28 INFO BlockManagerMaster: BlockManagerMaster stopped
17/01/24 11:28:28 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/01/24 11:28:28 INFO SparkContext: Successfully stopped SparkContext
17/01/24 11:28:28 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
17/01/24 11:28:28 INFO ShutdownHookManager: Shutdown hook called
17/01/24 11:28:28 INFO ShutdownHookManager: Deleting directory /tmp/spark-eaa76419-9ddc-43c1-ad0e-cc95c0dede7e
17/01/24 11:28:28 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
17/01/24 11:28:28 INFO ShutdownHookManager: Deleting directory /tmp/spark-eaa76419-9ddc-43c1-ad0e-cc95c0dede7e/httpd-34193396-8d9c-4ea7-9fca-5caf8c712f86


8. View the output

[root@hadoop01 ~]# hdfs dfs -cat /mobile/output/part-00000
[root@hadoop01 ~]# hdfs dfs -cat /mobile/output/part-00001
(18688888888,List((16030401EAFB68F1E3CDF819735E1C66,((18688888888,87600),(116.296302,40.032296))), (9F36407EAD0629FC166F14DDE7970F68,((18688888888,51200),(116.304864,40.050645)))))
(18611132889,List((16030401EAFB68F1E3CDF819735E1C66,((18611132889,97500),(116.296302,40.032296))), (9F36407EAD0629FC166F14DDE7970F68,((18611132889,54000),(116.304864,40.050645)))))
[root@hadoop01 ~]# hdfs dfs -cat /mobile/output/part-00002
[root@hadoop01 ~]#
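
As a sanity check against the sample data: for 18688888888 at station 16030401EAFB68F1E3CDF819735E1C66, the program sums -20160327082400 (connect) and +20160327170000 (disconnect), giving 87600, which matches the output above; likewise 20160327180000 - 20160327082500 = 97500 for 18611132889. Note that these are raw numeric differences of yyyyMMddHHmmss timestamps, not true durations in seconds, so they are only approximate for ranking. Both result records hashed into the same partition, which is why part-00000 and part-00002 are empty.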

Reposted from: https://www.cnblogs.com/jun1019/p/6347983.html
