Spark Study (Part 2): Building a Spark High-Availability Cluster
1. Download the Spark installation package
Official download page: http://spark.apache.org/downloads.html
2. Spark installation
2.1. Upload and extract
[potter@potter2 ~]$ tar -zxvf spark-2.3.0-bin-hadoop2.7.tgz -C apps/
2.2. Edit the configuration files
(1) Go to the configuration directory
/home/potter/apps/spark-2.3.0-bin-hadoop2.7/conf
- [potter@potter2 conf]$ ll
- total 36
- -rw-r--r-- 1 potter potter  996 Feb 23 03:42 docker.properties.template
- -rw-r--r-- 1 potter potter 1105 Feb 23 03:42 fairscheduler.xml.template
- -rw-r--r-- 1 potter potter 2025 Feb 23 03:42 log4j.properties.template
- -rw-r--r-- 1 potter potter 7801 Feb 23 03:42 metrics.properties.template
- -rw-r--r-- 1 potter potter  865 Feb 23 03:42 slaves.template
- -rw-r--r-- 1 potter potter 1292 Feb 23 03:42 spark-defaults.conf.template
- -rwxr-xr-x 1 potter potter 4221 Feb 23 03:42 spark-env.sh.template
(2) Edit the spark-env.sh file
Copy spark-env.sh.template, rename it spark-env.sh, and append the following at the end of the file:
- [potter@potter2 conf]$ cp spark-env.sh.template spark-env.sh
- [potter@potter2 conf]$ vi spark-env.sh
- export JAVA_HOME=/usr/local/java/jdk1.8.0_73
- #export SCALA_HOME=/usr/share/scala
- export HADOOP_HOME=/home/potter/apps/hadoop-2.7.5
- export HADOOP_CONF_DIR=/home/potter/apps/hadoop-2.7.5/etc/hadoop
- export SPARK_WORKER_MEMORY=500m
- export SPARK_WORKER_CORES=1
- export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=potter2:2181,potter3:2181,potter4:2181,potter5:2181 -Dspark.deploy.zookeeper.dir=/spark"
Notes:
- The line "export SPARK_MASTER_IP=hadoop1" must be commented out.
- The memory/core values here may differ from other cluster guides; they are deliberately small, because an over-large memory setting makes a personal machine crawl.
Option details:
- -Dspark.deploy.recoveryMode=ZOOKEEPER: the whole cluster's state is maintained, and recovered, through ZooKeeper. This is what gives Spark its HA: when the active Master dies, a standby Master reads the full cluster state from ZooKeeper and restores all Worker, Driver, and Application state before becoming the active Master.
- -Dspark.deploy.zookeeper.url=potter2:2181,potter3:2181,potter4:2181,potter5:2181: list every machine that runs ZooKeeper and could become the active Master (4 machines are used here, so 4 are listed).
- -Dspark.deploy.zookeeper.dir=/spark: the ZooKeeper znode under which Spark keeps its recovery metadata (job and cluster state). Note this is a path inside ZooKeeper, not to be confused with dataDir in zoo.cfg, which is a local filesystem directory where ZooKeeper stores its own data. Under this znode ZooKeeper holds all of the cluster's state: every Worker, every Application, and every Driver.
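The manual edit above can equally be scripted. A minimal sketch (the paths and hostnames are the ones used in this walkthrough) that appends the same settings to spark-env.sh with a heredoc:

```shell
# Append the HA-related settings to spark-env.sh.
# SPARK_CONF_DIR defaults to the current directory for this sketch;
# on the cluster it would be the Spark conf directory.
SPARK_CONF_DIR=${SPARK_CONF_DIR:-.}
cat >> "$SPARK_CONF_DIR/spark-env.sh" <<'EOF'
export JAVA_HOME=/usr/local/java/jdk1.8.0_73
export HADOOP_HOME=/home/potter/apps/hadoop-2.7.5
export HADOOP_CONF_DIR=/home/potter/apps/hadoop-2.7.5/etc/hadoop
export SPARK_WORKER_MEMORY=500m
export SPARK_WORKER_CORES=1
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=potter2:2181,potter3:2181,potter4:2181,potter5:2181 -Dspark.deploy.zookeeper.dir=/spark"
EOF
```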
(3) Copy slaves.template to slaves
- [potter@potter2 conf]$ cp slaves.template slaves
- [potter@potter2 conf]$ vi slaves
Add the following:
- potter2
- potter3
- potter4
- potter5
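Spark's start scripts read conf/slaves one hostname per line, ignoring blank lines and `#` comments. A small illustrative helper that mimics that parsing (the helper name and demo file are ours, not Spark's):

```shell
# Print the worker hostnames from a slaves-style file,
# skipping comment lines and blank lines.
list_workers() {
  grep -v '^[[:space:]]*#' "$1" | sed '/^[[:space:]]*$/d'
}

# Demo on the worker list configured above, plus a comment line:
printf 'potter2\npotter3\n# spare\npotter4\npotter5\n' > /tmp/slaves.demo
list_workers /tmp/slaves.demo
```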
(4) Distribute the installation directory to the other nodes
- [potter@potter2 apps]$ scp -r spark-2.3.0-bin-hadoop2.7/ potter3:$PWD
- [potter@potter2 apps]$ scp -r spark-2.3.0-bin-hadoop2.7/ potter4:$PWD
- [potter@potter2 apps]$ scp -r spark-2.3.0-bin-hadoop2.7/ potter5:$PWD
2.3. Configure environment variables
[potter@potter2 ~]$ vi .bashrc
- export SPARK_HOME=/home/potter/apps/spark-2.3.0-bin-hadoop2.7
- export PATH=$PATH:$SPARK_HOME/bin
Save, then make it take effect immediately:
[potter@potter2 ~]$ source .bashrc
2.4. Configure spark-defaults.conf
Make a copy of the spark-defaults.conf template:
[potter@potter2 conf]$ cp spark-defaults.conf.template spark-defaults.conf
[potter@potter2 conf]$ vi spark-defaults.conf
- # This is useful for setting default environmental settings.
- # Example:
- spark.master spark://potter2:7077,potter3:7077,potter4:7077,potter5:7077
- # spark.eventLog.enabled true
- # spark.eventLog.dir hdfs://namenode:8021/directory
- # spark.serializer org.apache.spark.serializer.KryoSerializer
- # spark.driver.memory 5g
- # spark.executor.extraJavaOptions -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
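The spark.master value above is a comma-separated list of every candidate Master; a client tries the endpoints until it reaches the live one. A quick sanity check on the list (the URL is copied from the config above):

```shell
# Split the HA master URL into its endpoints, one per line.
MASTER_URL='spark://potter2:7077,potter3:7077,potter4:7077,potter5:7077'
echo "${MASTER_URL#spark://}" | tr ',' '\n'
```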
- [potter@potter2 conf]$ scp -r spark-defaults.conf potter3:$PWD
- [potter@potter2 conf]$ scp -r spark-defaults.conf potter4:$PWD
- [potter@potter2 conf]$ scp -r spark-defaults.conf potter5:$PWD
3. Startup
3.1. Start the ZooKeeper cluster first
Run on every node:
- [potter@potter2 ~]$ zkServer.sh start
- ZooKeeper JMX enabled by default
- Using config: /home/potter/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
- Starting zookeeper ... already running as process 3703.
- [potter@potter2 ~]$ zkServer.sh status
- ZooKeeper JMX enabled by default
- Using config: /home/potter/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
- Mode: follower
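Checking each node by hand gets tedious. On a live cluster you could run zkServer.sh status over ssh for every host and pull out the Mode line; the ssh part is left as a comment (it assumes the passwordless ssh already set up between these nodes), and the parsing helper is demonstrated on the output captured above:

```shell
# Extract the "Mode:" value from zkServer.sh status output.
zk_mode() { awk -F': ' '/^Mode:/ {print $2}'; }

# On the cluster, for each host h:  ssh "$h" zkServer.sh status | zk_mode
# Demo on the output captured from potter2 above:
zk_mode <<'EOF'
ZooKeeper JMX enabled by default
Using config: /home/potter/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
EOF
```

This prints `follower`; exactly one node in the ensemble should report `leader`.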
3.2. Start the HDFS cluster
Run on any one node:
[potter@potter2 ~]$ start-dfs.sh
3.3. Then start the Spark cluster
- [potter@potter2 ~]$ cd apps/spark-2.3.0-bin-hadoop2.7/sbin/
- [potter@potter2 sbin]$ ./start-all.sh
3.4. Check the processes
- [potter@potter2 sbin]$ jps
- 6464 Master
- 6528 Worker
- 6561 Jps
- 6562 Jps
- 3909 NameNode
- 6565 Jps
- 3703 QuorumPeerMain
- 5047 NodeManager
- 4412 DFSZKFailoverController
- 4204 JournalNode
- 4014 DataNode
- [potter@potter3 conf]$ jps
- 4609 Jps
- 3441 DataNode
- 3284 QuorumPeerMain
- 4581 Worker
- 3879 NodeManager
- 3576 JournalNode
- 3372 NameNode
- 3676 DFSZKFailoverController
- [potter@potter4 conf]$ jps
- 3456 JournalNode
- 3607 NodeManager
- 4123 Jps
- 3356 DataNode
- 3260 QuorumPeerMain
- 4095 Worker
- [potter@potter5 conf]$ jps
- 3216 QuorumPeerMain
- 3447 NodeManager
- 3304 DataNode
- 3945 Jps
- 3916 Worker
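Reading four jps listings by eye is error-prone; tallying the daemon names per node makes a missing process stand out. An illustrative helper (ours, not part of the JDK), demonstrated on potter5's captured output above:

```shell
# Count occurrences of each daemon name in jps output.
count_procs() { awk '{print $2}' | sort | uniq -c | sort -rn; }

count_procs <<'EOF'
3216 QuorumPeerMain
3447 NodeManager
3304 DataNode
3945 Jps
3916 Worker
EOF
```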
3.5. Start Spark
Startup is successful once the page below appears (screenshot omitted).
3.6. Problem
Checking the processes shows that only potter2 started a Master; the other three nodes did not, so their Masters must be started by hand. On each of the three nodes, go to the Spark sbin directory (/home/potter/apps/spark-2.3.0-bin-hadoop2.7/sbin) and run the following:
- [potter@potter3 ~]$ cd apps/spark-2.3.0-bin-hadoop2.7/sbin/
- [potter@potter3 sbin]$ ./start-master.sh
- [potter@potter4 ~]$ cd apps/spark-2.3.0-bin-hadoop2.7/sbin/
- [potter@potter4 sbin]$ ./start-master.sh
- [potter@potter5 ~]$ cd apps/spark-2.3.0-bin-hadoop2.7/sbin/
- [potter@potter5 sbin]$ ./start-master.sh
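The three manual start-master.sh runs can also be driven from one node over ssh (assuming the passwordless ssh used elsewhere in this setup). A sketch; with DRY_RUN=1 it only prints the commands, unset it to actually run them:

```shell
SPARK_SBIN=/home/potter/apps/spark-2.3.0-bin-hadoop2.7/sbin
DRY_RUN=1
for h in potter3 potter4 potter5; do
  cmd="ssh $h $SPARK_SBIN/start-master.sh"
  if [ -n "$DRY_RUN" ]; then
    echo "$cmd"    # print only; unset DRY_RUN to run for real
  else
    $cmd
  fi
done
```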
3.7. Check the processes again
Both the Master and Worker processes are now running everywhere.
- [potter@potter3 sbin]$ jps
- 3441 DataNode
- 3284 QuorumPeerMain
- 4581 Worker
- 3576 JournalNode
- 3372 NameNode
- 3676 DFSZKFailoverController
- 5020 Jps
- 4894 Master
- [potter@potter4 sbin]$ jps
- 3456 JournalNode
- 4179 Master
- 3356 DataNode
- 3260 QuorumPeerMain
- 4238 Jps
- 4095 Worker
- [potter@potter5 sbin]$ jps
- 3216 QuorumPeerMain
- 3304 DataNode
- 4057 Jps
- 3916 Worker
- 3998 Master
4. Verification
4.1. Check Master status in the web UI
potter2 is ALIVE; potter3, potter4, and potter5 are all STANDBY (screenshots below).
potter2
potter3
potter4
potter5
4.2. Manually kill the Master process on potter2 and watch whether failover happens automatically
- [potter@potter2 ~]$ jps
- 6464 Master
- 6528 Worker
- 6901 Jps
- 3909 NameNode
- 3703 QuorumPeerMain
- 4412 DFSZKFailoverController
- 4204 JournalNode
- 4014 DataNode
- [potter@potter2 ~]$ kill -9 6464
- [potter@potter2 ~]$
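Rather than reading the PID out of the jps listing by eye, the Master can be picked out by name: jps prints "<pid> <ClassName>", so an exact match on the second field is enough. A sketch (the helper name is ours):

```shell
# Print the PID of the standalone Master, if one runs locally.
master_pid() { jps 2>/dev/null | awk '$2 == "Master" {print $1}'; }

pid=$(master_pid)
if [ -n "$pid" ]; then
  kill -9 "$pid"   # same effect as the manual "kill -9 6464" above
fi
```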
After killing the Master on potter2, check the web UI again:
The potter2 page can no longer be reached, since its Master process is gone.
On potter3, the standby Master has successfully taken over and is now in the ALIVE state.
potter4
potter5
5. Running Spark programs on Standalone
(1) Run the first Spark program
- [potter@potter2 apps]$ cd
- [potter@potter2 ~]$ /home/potter/apps/spark-2.3.0-bin-hadoop2.7/bin/spark-submit \
- > --class org.apache.spark.examples.SparkPi \
- > --master spark://potter2:7077 \
- > --executor-memory 500m \
- > --total-executor-cores 1 \
- > /home/potter/apps/spark-2.3.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.3.0.jar \
- > 100
The address spark://potter2:7077 is the one shown in the Master web UI above.
Result:
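SparkPi's answer is easy to miss among the INFO logs. Log4j output goes to stderr while the result is printed to stdout, so appending `2>/dev/null | grep 'Pi is roughly'` to the spark-submit command above isolates it. The filter is demonstrated below on a sample log excerpt (the numeric value is illustrative, not from this run):

```shell
# Keep only the result line from a SparkPi run.
grep 'Pi is roughly' <<'EOF'
2018-04-22 17:55:01 INFO DAGScheduler:54 - Job 0 finished: reduce at SparkPi.scala:38
Pi is roughly 3.1413955141395514
EOF
```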
(2) Start spark-shell
- /home/potter/apps/spark-2.3.0-bin-hadoop2.7/bin/spark-shell \
- --master spark://potter2:7077 \
- --executor-memory 500m \
- --total-executor-cores 1
Parameter notes:
--master spark://potter2:7077 specifies the Master address; --executor-memory 500m gives each executor 500 MB of memory; --total-executor-cores 1 caps the whole application at 1 CPU core across the cluster.
Note:
If spark-shell is started without a master address it still starts and runs programs normally, but it is actually running in Spark's local mode: a single process on the local machine, with no connection to the cluster.
spark-shell initializes a SparkContext as the object sc by default; user code can use sc directly.
It likewise initializes a SparkSession (the entry point to Spark SQL) as the object spark, which can also be used directly.
(3) Write a word-count program in spark-shell
1) Create a workcount.txt file and upload it to the / directory on HDFS
- [potter@potter2 ~]$ vi workcount.txt
- [potter@potter2 ~]$ hadoop fs -put workcount.txt /
Contents of workcount.txt:
- you,jump
- i,jump
2) Write the Spark program in Scala inside spark-shell
scala> sc.textFile("/workcount.txt").flatMap(_.split(",")).map((_,1)).reduceByKey(_+_).collect
Explanation:
sc is the SparkContext object, the entry point for submitting Spark programs; textFile("/workcount.txt") reads the file from HDFS; flatMap(_.split(",")) splits each line on commas and flattens the result; map((_,1)) turns each word into a (word, 1) tuple; reduceByKey(_+_) groups by key and sums the values; collect returns the result to the driver (use saveAsTextFile("/spark/out") to write it to HDFS instead).
Result:
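As a cross-check of the Spark result, the same count can be computed locally in plain shell: split each line on commas into one token per line, then tally. For the two-line file above, jump appears twice and i and you once each.

```shell
# Local word count over the same data: tokenize, sort, count.
printf 'you,jump\ni,jump\n' | tr ',' '\n' | sort | uniq -c
```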
6. Running Spark programs on YARN
(1) Prerequisites
The ZooKeeper, HDFS, and YARN clusters must all be running.
(2) Start Spark on YARN
[potter@potter2 ~]$ spark-shell --master yarn --deploy-mode client
The following error appears (screenshot omitted).
Cause: the memory given to the container is too small, so YARN kills the process outright, which surfaces as RPC connection failures, ClosedChannelException, and similar errors.
Solution:
Stop the YARN service first, then edit yarn-site.xml and add the following:
- <property>
- <name>yarn.nodemanager.vmem-check-enabled</name>
- <value>false</value>
- <description>Whether virtual memory limits will be enforced for containers</description>
- </property>
- <property>
- <name>yarn.nodemanager.vmem-pmem-ratio</name>
- <value>4</value>
- <description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
- </property>
Distribute the new yarn-site.xml to the corresponding directory on the other Hadoop nodes, then restart YARN.
Run the following command again to start Spark on YARN:
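The distribution step can be looped from one node (paths as used in this walkthrough; with DRY_RUN=1 the sketch only prints the copies instead of performing them):

```shell
HADOOP_CONF=/home/potter/apps/hadoop-2.7.5/etc/hadoop
DRY_RUN=1
for h in potter3 potter4 potter5; do
  cmd="scp $HADOOP_CONF/yarn-site.xml $h:$HADOOP_CONF/"
  if [ -n "$DRY_RUN" ]; then echo "$cmd"; else $cmd; fi
done
```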
[potter@potter2 hadoop]$ spark-shell --master yarn --deploy-mode client
(3) Open the YARN web UI
Open the YARN web UI at http://potter4:8088
The Spark shell application can be seen running.
Click the application ID link to see the application's details.
Click the "ApplicationMaster" link.
(4) Run a program
- scala> val array = Array(1,2,3,4,5)
- array: Array[Int] = Array(1, 2, 3, 4, 5)
- scala> val rdd = sc.makeRDD(array)
- rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at makeRDD at <console>:26
- scala> rdd.count
- res0: Long = 5
- scala>
Check the YARN web UI again.
(5) Run Spark's bundled SparkPi example
- spark-submit --class org.apache.spark.examples.SparkPi \
- --master yarn \
- --deploy-mode cluster \
- --driver-memory 500m \
- --executor-memory 500m \
- --executor-cores 1 \
- /home/potter/apps/spark-2.3.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.3.0.jar \
- 10
Execution log:
- 2018-04-22 18:16:08 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
- 2018-04-22 18:16:13 INFO Client:54 - Requesting a new application from cluster with 4 NodeManagers
- 2018-04-22 18:16:14 INFO Client:54 - Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
- 2018-04-22 18:16:14 INFO Client:54 - Will allocate AM container, with 884 MB memory including 384 MB overhead
- 2018-04-22 18:16:14 INFO Client:54 - Setting up container launch context for our AM
- 2018-04-22 18:16:14 INFO Client:54 - Setting up the launch environment for our AM container
- 2018-04-22 18:16:14 INFO Client:54 - Preparing resources for our AM container
- 2018-04-22 18:16:18 WARN Client:66 - Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
- 2018-04-22 18:16:26 INFO Client:54 - Uploading resource file:/tmp/spark-e645ee0d-099c-4b22-8729-cb77babf5e0a/__spark_libs__3299474380368175903.zip -> hdfs://myha01/user/potter/.sparkStaging/application_1524389838076_0006/__spark_libs__3299474380368175903.zip
- 2018-04-22 18:17:04 INFO Client:54 - Uploading resource file:/home/potter/apps/spark-2.3.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.3.0.jar -> hdfs://myha01/user/potter/.sparkStaging/application_1524389838076_0006/spark-examples_2.11-2.3.0.jar
- 2018-04-22 18:17:05 INFO Client:54 - Uploading resource file:/tmp/spark-e645ee0d-099c-4b22-8729-cb77babf5e0a/__spark_conf__7169638864757569614.zip -> hdfs://myha01/user/potter/.sparkStaging/application_1524389838076_0006/spark_conf.zip
- 2018-04-22 18:17:05 INFO SecurityManager:54 - Changing view acls to: potter
- 2018-04-22 18:17:05 INFO SecurityManager:54 - Changing modify acls to: potter
- 2018-04-22 18:17:05 INFO SecurityManager:54 - Changing view acls groups to:
- 2018-04-22 18:17:05 INFO SecurityManager:54 - Changing modify acls groups to:
- 2018-04-22 18:17:05 INFO SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(potter); groups with view permissions: Set(); users with modify permissions: Set(potter); groups with modify permissions: Set()
- 2018-04-22 18:17:06 INFO Client:54 - Submitting application application_1524389838076_0006 to ResourceManager
- 2018-04-22 18:17:06 INFO YarnClientImpl:273 - Submitted application application_1524389838076_0006
- 2018-04-22 18:17:07 INFO Client:54 - Application report for application_1524389838076_0006 (state: ACCEPTED)
- 2018-04-22 18:17:07 INFO Client:54 -
- client token: N/A
- diagnostics: N/A
- ApplicationMaster host: N/A
- ApplicationMaster RPC port: -1
- queue: default
- start time: 1524392325362
- final status: UNDEFINED
- tracking URL: http://potter4:8088/proxy/application_1524389838076_0006/
- user: potter
- (the ACCEPTED report line repeats roughly once per second until the state changes to RUNNING at 18:18:08)
- 2018-04-22 18:18:08 INFO Client:54 - Application report for application_1524389838076_0006 (state: RUNNING)
- 2018-04-22 18:18:08 INFO Client:54 -
- client token: N/A
- diagnostics: N/A
- ApplicationMaster host: 192.168.123.102
- ApplicationMaster RPC port: 0
- queue: default
- start time: 1524392325362
- final status: UNDEFINED
- tracking URL: http://potter4:8088/proxy/application_1524389838076_0006/
- user: potter
- (the RUNNING report line repeats roughly once per second until the state changes to FINISHED at 18:19:06)
- 2018-04-22 18:19:06 INFO Client:54 - Application report for application_1524389838076_0006 (state: FINISHED)
- 2018-04-22 18:19:06 INFO Client:54 -
- client token: N/A
- diagnostics: N/A
- ApplicationMaster host: 192.168.123.102
- ApplicationMaster RPC port: 0
- queue: default
- start time: 1524392325362
- final status: SUCCEEDED
- tracking URL: http://potter4:8088/proxy/application_1524389838076_0006/
- user: potter
- 2018-04-22 18:19:08 INFO ShutdownHookManager:54 - Shutdown hook called
- 2018-04-22 18:19:09 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-6f009c18-9d50-460b-b480-77b0ca856369
- 2018-04-22 18:19:09 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-e645ee0d-099c-4b22-8729-cb77babf5e0a
- [potter@potter2 ~]$
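Note that the log above ends with final status SUCCEEDED but never shows the Pi value: in cluster mode the driver runs inside the ApplicationMaster container on YARN, so its stdout lands in the container logs rather than the local console. The logs can be fetched afterwards with `yarn logs` and the application id printed above; the filtering step is demonstrated here on a sample line (the numeric value is illustrative):

```shell
# Keep only the SparkPi result line from driver/container logs.
fetch_pi() { grep 'Pi is roughly'; }

# On the cluster:
#   yarn logs -applicationId application_1524389838076_0006 | fetch_pi
# Demo on a sample line:
echo 'Pi is roughly 3.1417591417591418' | fetch_pi
```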