Created by Wang, Jerry on Aug 16, 2015

./spark-submit --class "org.apache.spark.examples.JavaWordCount" --master spark://NKGV50849583FV1:7077 /root/devExpert/spark-1.4.1/example-java-build/JavaWordCount/target/JavaWordCount-1.jar /root/devExpert/spark-1.4.1/bin/test.txt
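The log that follows references source line numbers inside JavaWordCount.java (textFile at line 45, mapToPair at line 54, reduceByKey at line 61, collect at line 68). For orientation, here is a minimal Java 7 sketch approximating the Spark 1.4.1 JavaWordCount example; treat it as an illustrative reconstruction rather than the verbatim shipped source. The comments map each call to the log lines below.

// A minimal sketch approximating the Spark 1.4.1 JavaWordCount example
// (illustrative reconstruction, not the verbatim shipped source).
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;

import scala.Tuple2;

public final class JavaWordCount {
  private static final Pattern SPACE = Pattern.compile(" ");

  public static void main(String[] args) throws Exception {
    if (args.length < 1) {
      System.err.println("Usage: JavaWordCount <file>");
      System.exit(1);
    }
    SparkConf sparkConf = new SparkConf().setAppName("JavaWordCount");
    JavaSparkContext ctx = new JavaSparkContext(sparkConf);

    // textFile -> "Created broadcast 0 from textFile at JavaWordCount.java:45"
    JavaRDD<String> lines = ctx.textFile(args[0], 1);

    // flatMap + mapToPair -> ShuffleMapStage 0 ("mapToPair at JavaWordCount.java:54")
    JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
      @Override
      public Iterable<String> call(String s) {
        return Arrays.asList(SPACE.split(s));
      }
    });
    JavaPairRDD<String, Integer> ones = words.mapToPair(
        new PairFunction<String, String, Integer>() {
          @Override
          public Tuple2<String, Integer> call(String s) {
            return new Tuple2<String, Integer>(s, 1);
          }
        });

    // reduceByKey -> ResultStage 1 ("reduceByKey at JavaWordCount.java:61")
    JavaPairRDD<String, Integer> counts = ones.reduceByKey(
        new Function2<Integer, Integer, Integer>() {
          @Override
          public Integer call(Integer i1, Integer i2) {
            return i1 + i2;
          }
        });

    // collect -> job 0 ("collect at JavaWordCount.java:68"); this loop prints
    // the "word: count" lines seen in the result block further down.
    List<Tuple2<String, Integer>> output = counts.collect();
    for (Tuple2<?, ?> tuple : output) {
      System.out.println(tuple._1() + ": " + tuple._2());
    }
    ctx.stop();
  }
}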
added by Jerry: loading load-spark-env.sh !!! !!!1
added by Jerry:…
/root/devExpert/spark-1.4.1/conf
added by Jerry, number of Jars: 1
added by Jerry, launch_classpath: /root/devExpert/spark-1.4.1/assembly/target/scala-2.10/spark-assembly-1.4.1-hadoop2.4.0.jar
added by Jerry,RUNNER:/usr/jdk1.7.0_79/bin/java
added by Jerry, printf argument list: org.apache.spark.deploy.SparkSubmit --class org.apache.spark.examples.JavaWordCount --master spark://NKGV50849583FV1:7077 /root/devExpert/spark-1.4.1/example-java-build/JavaWordCount/target/JavaWordCount-1.jar /root/devExpert/spark-1.4.1/bin/test.txt
added by Jerry, I am in if-else branch: /usr/jdk1.7.0_79/bin/java -cp /root/devExpert/spark-1.4.1/conf/:/root/devExpert/spark-1.4.1/assembly/target/scala-2.10/spark-assembly-1.4.1-hadoop2.4.0.jar:/root/devExpert/spark-1.4.1/lib_managed/jars/datanucleus-rdbms-3.2.9.jar:/root/devExpert/spark-1.4.1/lib_managed/jars/datanucleus-core-3.2.10.jar:/root/devExpert/spark-1.4.1/lib_managed/jars/datanucleus-api-jdo-3.2.6.jar -Xms512m -Xmx512m -XX:MaxPermSize=256m org.apache.spark.deploy.SparkSubmit --master spark://NKGV50849583FV1:7077 --class org.apache.spark.examples.JavaWordCount /root/devExpert/spark-1.4.1/example-java-build/JavaWordCount/target/JavaWordCount-1.jar /root/devExpert/spark-1.4.1/bin/test.txt

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/08/16 13:05:16 INFO SparkContext: Running Spark version 1.4.1
15/08/16 13:05:17 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/08/16 13:05:17 WARN Utils: Your hostname, NKGV50849583FV1 resolves to a loopback address: 127.0.0.1; using 10.128.184.131 instead (on interface eth0)
15/08/16 13:05:17 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/08/16 13:05:17 INFO SecurityManager: Changing view acls to: root
15/08/16 13:05:17 INFO SecurityManager: Changing modify acls to: root
15/08/16 13:05:17 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/08/16 13:05:18 INFO Slf4jLogger: Slf4jLogger started
15/08/16 13:05:18 INFO Remoting: Starting remoting
15/08/16 13:05:18 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@10.128.184.131:57470]
15/08/16 13:05:18 INFO Utils: Successfully started service 'sparkDriver' on port 57470.
15/08/16 13:05:18 INFO SparkEnv: Registering MapOutputTracker
15/08/16 13:05:18 INFO SparkEnv: Registering BlockManagerMaster
15/08/16 13:05:18 INFO DiskBlockManager: Created local directory at /tmp/spark-041d2460-36a1-4550-82b8-98957d92629b/blockmgr-818a4676-a915-4c14-8bff-36d423a7031a
15/08/16 13:05:18 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
15/08/16 13:05:18 INFO HttpFileServer: HTTP File server directory is /tmp/spark-041d2460-36a1-4550-82b8-98957d92629b/httpd-a7bdeea3-f3f9-4589-9817-94e5bca11d6a
15/08/16 13:05:18 INFO HttpServer: Starting HTTP Server
15/08/16 13:05:19 INFO Utils: Successfully started service 'HTTP file server' on port 38889.
15/08/16 13:05:19 INFO SparkEnv: Registering OutputCommitCoordinator
15/08/16 13:05:19 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/08/16 13:05:19 INFO SparkUI: Started SparkUI at http://10.128.184.131:4040
15/08/16 13:05:19 INFO SparkContext: Added JAR file:/root/devExpert/spark-1.4.1/example-java-build/JavaWordCount/target/JavaWordCount-1.jar at http://10.128.184.131:38889/jars/JavaWordCount-1.jar with timestamp 1439701519360
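The application jar passed to spark-submit is published through the driver's HTTP file server (the service started on port 38889 above) so that executors can download it. The same effect can be achieved programmatically; a minimal sketch, assuming ctx is the JavaSparkContext from the sketch above and the jar path is taken from this log:

// Inside main(): ship an extra jar to executors at runtime,
// equivalent to listing it with --jars on spark-submit.
ctx.addJar("/root/devExpert/spark-1.4.1/example-java-build/JavaWordCount/target/JavaWordCount-1.jar");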
15/08/16 13:05:19 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@NKGV50849583FV1:7077/user/Master...
15/08/16 13:05:19 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20150816130519-0000 - Jerry: this could be observed in Spark master Webui
15/08/16 13:05:19 INFO AppClient$ClientActor: Executor added: app-20150816130519-0000/0 on worker-20150816130347-10.128.184.131-60921 (10.128.184.131:60921) with 8 cores
15/08/16 13:05:19 INFO SparkDeploySchedulerBackend: Granted executor ID app-20150816130519-0000/0 on hostPort 10.128.184.131:60921 with 8 cores, 512.0 MB RAM
15/08/16 13:05:19 INFO AppClient$ClientActor: Executor updated: app-20150816130519-0000/0 is now RUNNING
15/08/16 13:05:19 INFO AppClient$ClientActor: Executor updated: app-20150816130519-0000/0 is now LOADING

15/08/16 13:05:19 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 43227.
15/08/16 13:05:19 INFO NettyBlockTransferService: Server created on 43227
15/08/16 13:05:19 INFO BlockManagerMaster: Trying to register BlockManager
15/08/16 13:05:19 INFO BlockManagerMasterEndpoint: Registering block manager 10.128.184.131:43227 with 265.4 MB RAM, BlockManagerId(driver, 10.128.184.131, 43227)
15/08/16 13:05:19 INFO BlockManagerMaster: Registered BlockManager
15/08/16 13:05:19 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
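The 0.0 logged here is the standalone-mode default of spark.scheduler.minRegisteredResourcesRatio, meaning the scheduler starts submitting tasks without waiting for executors to register. A minimal sketch of raising it, with an illustrative value of 0.8:

// Wait until 80% of the requested executor resources have registered
// before scheduling begins (0.8 is an assumed example value; the
// standalone default is 0.0 as logged above).
SparkConf conf = new SparkConf()
    .setAppName("JavaWordCount")
    .set("spark.scheduler.minRegisteredResourcesRatio", "0.8");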
15/08/16 13:05:20 INFO MemoryStore: ensureFreeSpace(143840) called with curMem=0, maxMem=278302556
15/08/16 13:05:20 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 140.5 KB, free 265.3 MB)
15/08/16 13:05:20 INFO MemoryStore: ensureFreeSpace(12633) called with curMem=143840, maxMem=278302556
15/08/16 13:05:20 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 12.3 KB, free 265.3 MB)
15/08/16 13:05:20 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.128.184.131:43227 (size: 12.3 KB, free: 265.4 MB)
15/08/16 13:05:20 INFO SparkContext: Created broadcast 0 from textFile at JavaWordCount.java:45
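broadcast_0 here is Spark-internal: textFile broadcasts the Hadoop configuration to the executors, and broadcast_1/broadcast_2 below carry the serialized task binaries of the two stages. The same mechanism is exposed to user code; a minimal sketch, assuming ctx from the JavaWordCount sketch above:

// Inside main(): a user-level broadcast ships a read-only value to each
// executor once, instead of with every task. Illustrative only -- the
// broadcasts in this log are created internally by Spark, not by the app.
// Requires: import org.apache.spark.broadcast.Broadcast;
Broadcast<List<String>> stopWords =
    ctx.broadcast(Arrays.asList("a", "is", "this"));
System.out.println(stopWords.value());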

15/08/16 13:05:20 INFO FileInputFormat: Total input paths to process : 1
15/08/16 13:05:20 INFO SparkContext: Starting job: collect at JavaWordCount.java:68

15/08/16 13:05:20 INFO DAGScheduler: Registering RDD 3 (mapToPair at JavaWordCount.java:54)

15/08/16 13:05:20 INFO DAGScheduler: Got job 0 (collect at JavaWordCount.java:68) with 1 output partitions (allowLocal=false)
15/08/16 13:05:20 INFO DAGScheduler: Final stage: ResultStage 1 (collect at JavaWordCount.java:68)
15/08/16 13:05:20 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
15/08/16 13:05:20 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
15/08/16 13:05:20 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at mapToPair at JavaWordCount.java:54), which has no missing parents
15/08/16 13:05:20 INFO MemoryStore: ensureFreeSpace(4768) called with curMem=156473, maxMem=278302556
15/08/16 13:05:20 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.7 KB, free 265.3 MB)
15/08/16 13:05:20 INFO MemoryStore: ensureFreeSpace(2678) called with curMem=161241, maxMem=278302556
15/08/16 13:05:20 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.6 KB, free 265.3 MB)
15/08/16 13:05:20 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.128.184.131:43227 (size: 2.6 KB, free: 265.4 MB)
15/08/16 13:05:20 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:874
15/08/16 13:05:20 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[3] at mapToPair at JavaWordCount.java:54)
15/08/16 13:05:20 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
15/08/16 13:05:22 INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@10.128.184.131:50029/user/Executor#-1235463794]) with ID 0
15/08/16 13:05:22 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 10.128.184.131, PROCESS_LOCAL, 1469 bytes)
15/08/16 13:05:22 INFO BlockManagerMasterEndpoint: Registering block manager 10.128.184.131:43013 with 265.4 MB RAM, BlockManagerId(0, 10.128.184.131, 43013)
15/08/16 13:05:23 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.128.184.131:43013 (size: 2.6 KB, free: 265.4 MB)
15/08/16 13:05:23 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.128.184.131:43013 (size: 12.3 KB, free: 265.4 MB)
15/08/16 13:05:23 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1072 ms on 10.128.184.131 (1/1)
15/08/16 13:05:23 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/08/16 13:05:23 INFO DAGScheduler: ShuffleMapStage 0 (mapToPair at JavaWordCount.java:54) finished in 3.163 s
15/08/16 13:05:23 INFO DAGScheduler: looking for newly runnable stages
15/08/16 13:05:23 INFO DAGScheduler: running: Set()
15/08/16 13:05:23 INFO DAGScheduler: waiting: Set(ResultStage 1)
15/08/16 13:05:23 INFO DAGScheduler: failed: Set()
15/08/16 13:05:23 INFO DAGScheduler: Missing parents for ResultStage 1: List()
15/08/16 13:05:23 INFO DAGScheduler: Submitting ResultStage 1 (ShuffledRDD[4] at reduceByKey at JavaWordCount.java:61), which is now runnable
15/08/16 13:05:23 INFO MemoryStore: ensureFreeSpace(2408) called with curMem=163919, maxMem=278302556
15/08/16 13:05:23 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 2.4 KB, free 265.3 MB)
15/08/16 13:05:23 INFO MemoryStore: ensureFreeSpace(1458) called with curMem=166327, maxMem=278302556
15/08/16 13:05:23 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 1458.0 B, free 265.2 MB)
15/08/16 13:05:23 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 10.128.184.131:43227 (size: 1458.0 B, free: 265.4 MB)
15/08/16 13:05:23 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:874
15/08/16 13:05:23 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (ShuffledRDD[4] at reduceByKey at JavaWordCount.java:61)
15/08/16 13:05:23 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
15/08/16 13:05:23 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, 10.128.184.131, PROCESS_LOCAL, 1227 bytes)
15/08/16 13:05:23 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 10.128.184.131:43013 (size: 1458.0 B, free: 265.4 MB)
15/08/16 13:05:23 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 0 to 10.128.184.131:50029
15/08/16 13:05:23 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 0 is 143 bytes
15/08/16 13:05:24 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 107 ms on 10.128.184.131 (1/1)
15/08/16 13:05:24 INFO DAGScheduler: ResultStage 1 (collect at JavaWordCount.java:68) finished in 0.107 s
15/08/16 13:05:24 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
15/08/16 13:05:24 INFO DAGScheduler: Job 0 finished: collect at JavaWordCount.java:68, took 3.374647 s
/* result ****************************************
this: 2
guideline: 1
is: 2
long: 1
Hello: 1
a: 1
development: 1
file!: 1
Fiori: 1
test: 2
world: 1

result ****************************************/
15/08/16 13:05:24 INFO BlockManagerInfo: Removed broadcast_2_piece0 on 10.128.184.131:43227 in memory (size: 1458.0 B, free: 265.4 MB)
15/08/16 13:05:24 INFO BlockManagerInfo: Removed broadcast_2_piece0 on 10.128.184.131:43013 in memory (size: 1458.0 B, free: 265.4 MB)
15/08/16 13:05:24 INFO BlockManagerInfo: Removed broadcast_1_piece0 on 10.128.184.131:43227 in memory (size: 2.6 KB, free: 265.4 MB)
15/08/16 13:05:24 INFO BlockManagerInfo: Removed broadcast_1_piece0 on 10.128.184.131:43013 in memory (size: 2.6 KB, free: 265.4 MB)
15/08/16 13:05:24 INFO SparkUI: Stopped Spark web UI at http://10.128.184.131:4040
15/08/16 13:05:24 INFO DAGScheduler: Stopping DAGScheduler
15/08/16 13:05:24 INFO SparkDeploySchedulerBackend: Shutting down all executors
15/08/16 13:05:24 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
15/08/16 13:05:24 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
15/08/16 13:05:24 INFO Utils: path = /tmp/spark-041d2460-36a1-4550-82b8-98957d92629b/blockmgr-818a4676-a915-4c14-8bff-36d423a7031a, already present as root for deletion.
15/08/16 13:05:24 INFO MemoryStore: MemoryStore cleared
15/08/16 13:05:24 INFO BlockManager: BlockManager stopped
15/08/16 13:05:24 INFO BlockManagerMaster: BlockManagerMaster stopped
15/08/16 13:05:24 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
15/08/16 13:05:24 INFO SparkContext: Successfully stopped SparkContext
15/08/16 13:05:24 INFO Utils: Shutdown hook called
15/08/16 13:05:24 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/08/16 13:05:24 INFO Utils: Deleting directory /tmp/spark-041d2460-36a1-4550-82b8-98957d92629b
15/08/16 13:05:24 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon

For more of Jerry's original articles, follow the WeChat public account "汪子熙".
