WordCount program

File: wordcount.txt

hello wujiadong
hello spark
hello hadoop
hello python

Program example

package wujiadong_sparkCore

import org.apache.spark.{SparkConf, SparkContext}

/**
  * Created by Administrator on 2017/2/25.
  */
object LocalSpark {
  def main(args: Array[String]): Unit = {
    // Step 1: create a SparkConf object holding the application's configuration.
    // setMaster() sets the URL of the master node of the Spark cluster the
    // application connects to; "local" means run locally. setMaster only needs
    // to be set when running inside IDEA.
    val conf = new SparkConf().setAppName("localspark").setMaster("local")

    // Create the SparkContext. SparkContext is the entry point to all of Spark's
    // functionality: it initializes the core components the application needs,
    // including the schedulers (DAGScheduler, TaskScheduler), and registers the
    // application with the Spark master node, among other things.
    val sc = new SparkContext(conf)

    // Local input file
    val file = "C://Users//Administrator.USER-20160219OS//Desktop//wordcount.txt"

    // Create an initial RDD from the input source (an HDFS file, a local file,
    // etc.). An RDD is made up of elements; here each element is one line of the file.
    val lines = sc.textFile(file)

    // Apply transformations to the initial RDD.
    // First, split every line into individual words.
    val wordRDD = lines.flatMap(line => line.split(" "))

    // Map each word to a (word, 1) pair, so that the word can later serve as the
    // key when accumulating each word's occurrence count.
    val wordpair = wordRDD.map(word => (word, 1))

    // Count the occurrences of each word, using the word as the key
    // (a reduce over each word's values).
    val result = wordpair.reduceByKey(_ + _)

    // Finally, trigger the job with an action, e.g. foreach.
    result.foreach(wordNumberPair => println(wordNumberPair._1 + " , " + wordNumberPair._2))
  }
}
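The run below puts the Spark assembly jar on the IDEA classpath by hand. For a standalone build, a minimal build.sbt sketch would look like the following — the project name and version are illustrative, and Scala 2.10.x is assumed because the prebuilt Spark 1.5.1 binaries target Scala 2.10:

    // build.sbt — minimal sketch for building this example with sbt
    name := "wujiadong-spark"

    version := "0.1"

    scalaVersion := "2.10.6"

    // Pull in Spark core 1.5.1, matching the version shown in the run log below.
    libraryDependencies += "org.apache.spark" %% "spark-core" % "1.5.1"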

Run output

"C:\Program Files\Java\jdk1.8.0_101\bin\java" -Dspark.master=local -Didea.launcher.port=7532 "-Didea.launcher.bin.path=C:\Program Files (x86)\JetBrains\IntelliJ IDEA Community Edition 2016.3.3\bin" -Dfile.encoding=UTF-8 -classpath "C:\Program Files\Java\jdk1.8.0_101\jre\lib\charsets.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\deploy.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\access-bridge-64.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\cldrdata.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\dnsns.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\jaccess.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\jfxrt.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\localedata.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\nashorn.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\sunec.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\sunjce_provider.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\sunmscapi.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\sunpkcs11.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\ext\zipfs.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\javaws.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\jce.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\jfr.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\jfxswt.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\jsse.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\management-agent.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\plugin.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\resources.jar;C:\Program Files\Java\jdk1.8.0_101\jre\lib\rt.jar;D:\wujiadong.spark\out\production\wujiadong.spark;C:\Program Files (x86)\scala\lib\scala-library.jar;C:\Program Files (x86)\scala\lib\scala-reflect.jar;F:\spark-assembly-1.5.1-hadoop2.6.0.jar;C:\Program Files (x86)\JetBrains\IntelliJ IDEA Community Edition 2016.3.3\lib\idea_rt.jar" com.intellij.rt.execution.application.AppMain wujiadong_sparkCore.LocalSpark
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/03/04 20:41:21 INFO SparkContext: Running Spark version 1.5.1
17/03/04 20:41:24 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/03/04 20:41:24 INFO SecurityManager: Changing view acls to: Administrator
17/03/04 20:41:24 INFO SecurityManager: Changing modify acls to: Administrator
17/03/04 20:41:24 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(Administrator); users with modify permissions: Set(Administrator)
17/03/04 20:41:27 INFO Slf4jLogger: Slf4jLogger started
17/03/04 20:41:27 INFO Remoting: Starting remoting
17/03/04 20:41:28 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@10.11.25.3:64151]
17/03/04 20:41:29 INFO Utils: Successfully started service 'sparkDriver' on port 64151.
17/03/04 20:41:29 INFO SparkEnv: Registering MapOutputTracker
17/03/04 20:41:30 INFO SparkEnv: Registering BlockManagerMaster
17/03/04 20:41:31 INFO DiskBlockManager: Created local directory at C:\Users\Administrator.USER-20160219OS\AppData\Local\Temp\blockmgr-8339dad4-0230-405c-8ff3-f28fe073b327
17/03/04 20:41:35 INFO MemoryStore: MemoryStore started with capacity 972.5 MB
17/03/04 20:41:38 INFO HttpFileServer: HTTP File server directory is C:\Users\Administrator.USER-20160219OS\AppData\Local\Temp\spark-7aef918f-fd75-4153-833e-f29def7f1805\httpd-e95baaaa-f8c5-43e3-be14-8b45a90fce45
17/03/04 20:41:38 INFO HttpServer: Starting HTTP Server
17/03/04 20:41:40 INFO Utils: Successfully started service 'HTTP file server' on port 64166.
17/03/04 20:41:40 INFO SparkEnv: Registering OutputCommitCoordinator
17/03/04 20:41:42 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/03/04 20:41:42 INFO SparkUI: Started SparkUI at http://10.11.25.3:4040
17/03/04 20:41:43 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
17/03/04 20:41:43 INFO Executor: Starting executor ID driver on host localhost
17/03/04 20:41:47 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 64205.
17/03/04 20:41:47 INFO NettyBlockTransferService: Server created on 64205
17/03/04 20:41:47 INFO BlockManagerMaster: Trying to register BlockManager
17/03/04 20:41:47 INFO BlockManagerMasterEndpoint: Registering block manager localhost:64205 with 972.5 MB RAM, BlockManagerId(driver, localhost, 64205)
17/03/04 20:41:47 INFO BlockManagerMaster: Registered BlockManager
17/03/04 20:41:52 INFO MemoryStore: ensureFreeSpace(130448) called with curMem=0, maxMem=1019782103
17/03/04 20:41:52 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 127.4 KB, free 972.4 MB)
17/03/04 20:41:53 INFO MemoryStore: ensureFreeSpace(14276) called with curMem=130448, maxMem=1019782103
17/03/04 20:41:53 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 13.9 KB, free 972.4 MB)
17/03/04 20:41:53 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:64205 (size: 13.9 KB, free: 972.5 MB)
17/03/04 20:41:53 INFO SparkContext: Created broadcast 0 from textFile at LocalSpark.scala:20
17/03/04 20:41:56 INFO FileInputFormat: Total input paths to process : 1
17/03/04 20:41:56 INFO SparkContext: Starting job: foreach at LocalSpark.scala:29
17/03/04 20:41:58 INFO DAGScheduler: Registering RDD 3 (map at LocalSpark.scala:25)
17/03/04 20:41:58 INFO DAGScheduler: Got job 0 (foreach at LocalSpark.scala:29) with 1 output partitions
17/03/04 20:41:58 INFO DAGScheduler: Final stage: ResultStage 1(foreach at LocalSpark.scala:29)
17/03/04 20:41:58 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
17/03/04 20:41:58 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
17/03/04 20:41:58 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at map at LocalSpark.scala:25), which has no missing parents
17/03/04 20:41:59 INFO MemoryStore: ensureFreeSpace(4120) called with curMem=144724, maxMem=1019782103
17/03/04 20:41:59 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.0 KB, free 972.4 MB)
17/03/04 20:41:59 INFO MemoryStore: ensureFreeSpace(2337) called with curMem=148844, maxMem=1019782103
17/03/04 20:41:59 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.3 KB, free 972.4 MB)
17/03/04 20:41:59 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:64205 (size: 2.3 KB, free: 972.5 MB)
17/03/04 20:41:59 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:861
17/03/04 20:41:59 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[3] at map at LocalSpark.scala:25)
17/03/04 20:41:59 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
17/03/04 20:42:00 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 2164 bytes)
17/03/04 20:42:00 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
17/03/04 20:42:00 INFO HadoopRDD: Input split: file:/C:/Users/Administrator.USER-20160219OS/Desktop/wordcount.txt:0+54
17/03/04 20:42:00 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
17/03/04 20:42:00 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
17/03/04 20:42:00 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
17/03/04 20:42:00 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
17/03/04 20:42:00 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
17/03/04 20:42:01 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 2253 bytes result sent to driver
17/03/04 20:42:01 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1448 ms on localhost (1/1)
17/03/04 20:42:01 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
17/03/04 20:42:01 INFO DAGScheduler: ShuffleMapStage 0 (map at LocalSpark.scala:25) finished in 1.633 s
17/03/04 20:42:01 INFO DAGScheduler: looking for newly runnable stages
17/03/04 20:42:01 INFO DAGScheduler: running: Set()
17/03/04 20:42:01 INFO DAGScheduler: waiting: Set(ResultStage 1)
17/03/04 20:42:01 INFO DAGScheduler: failed: Set()
17/03/04 20:42:01 INFO DAGScheduler: Missing parents for ResultStage 1: List()
17/03/04 20:42:01 INFO DAGScheduler: Submitting ResultStage 1 (ShuffledRDD[4] at reduceByKey at LocalSpark.scala:27), which is now runnable
17/03/04 20:42:01 INFO MemoryStore: ensureFreeSpace(2224) called with curMem=151181, maxMem=1019782103
17/03/04 20:42:01 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 2.2 KB, free 972.4 MB)
17/03/04 20:42:01 INFO MemoryStore: ensureFreeSpace(1380) called with curMem=153405, maxMem=1019782103
17/03/04 20:42:01 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 1380.0 B, free 972.4 MB)
17/03/04 20:42:01 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:64205 (size: 1380.0 B, free: 972.5 MB)
17/03/04 20:42:01 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:861
17/03/04 20:42:01 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (ShuffledRDD[4] at reduceByKey at LocalSpark.scala:27)
17/03/04 20:42:01 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
17/03/04 20:42:01 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, PROCESS_LOCAL, 1901 bytes)
17/03/04 20:42:01 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
17/03/04 20:42:02 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
17/03/04 20:42:02 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 55 ms
spark , 1
wujiadong , 1
hadoop , 1
python , 1
hello , 4
17/03/04 20:42:02 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 1165 bytes result sent to driver
17/03/04 20:42:02 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 367 ms on localhost (1/1)
17/03/04 20:42:02 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
17/03/04 20:42:02 INFO DAGScheduler: ResultStage 1 (foreach at LocalSpark.scala:29) finished in 0.370 s
17/03/04 20:42:02 INFO DAGScheduler: Job 0 finished: foreach at LocalSpark.scala:29, took 5.915115 s
17/03/04 20:42:02 INFO SparkContext: Invoking stop() from shutdown hook
17/03/04 20:42:02 INFO SparkUI: Stopped Spark web UI at http://10.11.25.3:4040
17/03/04 20:42:02 INFO DAGScheduler: Stopping DAGScheduler
17/03/04 20:42:02 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/03/04 20:42:03 INFO MemoryStore: MemoryStore cleared
17/03/04 20:42:03 INFO BlockManager: BlockManager stopped
17/03/04 20:42:03 INFO BlockManagerMaster: BlockManagerMaster stopped
17/03/04 20:42:03 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/03/04 20:42:03 INFO SparkContext: Successfully stopped SparkContext
17/03/04 20:42:03 INFO ShutdownHookManager: Shutdown hook called
17/03/04 20:42:03 INFO ShutdownHookManager: Deleting directory C:\Users\Administrator.USER-20160219OS\AppData\Local\Temp\spark-7aef918f-fd75-4153-833e-f29def7f1805

Process finished with exit code 0
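The five word counts are interleaved with the INFO lines because foreach runs inside the task. In local mode the task executes in the driver JVM, so the println output lands in the same console; on a real cluster it would go to the executors' stdout instead. A common alternative, sketched below, is to collect the results back to the driver before printing:

    // collect() is an action that brings the entire result RDD back to the
    // driver, so the printing happens driver-side even on a cluster. This is
    // only safe when the result is small, as it is for this word count.
    result.collect().foreach(wordNumberPair => println(wordNumberPair._1 + " , " + wordNumberPair._2))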

Reposted from: https://www.cnblogs.com/wujiadong2014/p/6389064.html
