USDP's Default Environment Variables

┌──────────────────────────────────────────────────────────────────────┐
│                 • MobaXterm Personal Edition v21.4 •                 │
│               (SSH client, X server and network tools)               │
│                                                                      │
│ ➤ SSH session to root@192.168.88.101                                 │
│   • Direct SSH      :  ✔                                             │
│   • SSH compression :  ✔                                             │
│   • SSH-browser     :  ✔                                             │
│   • X11-forwarding  :  ✔  (remote display is forwarded through SSH)  │
│                                                                      │
│ ➤ For more info, ctrl+click on help or visit our website.            │
└──────────────────────────────────────────────────────────────────────┘
Last login: Sat Mar 12 02:05:37 2022 from zhiyong5
[root@zhiyong2 ~]# java -version
java version "1.8.0_202"
Java(TM) SE Runtime Environment (build 1.8.0_202-b08)
Java HotSpot(TM) 64-Bit Server VM (build 25.202-b08, mixed mode)
[root@zhiyong2 ~]# scala
-bash: scala: command not found
[root@zhiyong2 ~]# which java
/usr/java/jdk1.8.0_202/bin/java
[root@zhiyong2 ~]# which hadoop
/srv/udp/2.0.0.0/yarn/bin/hadoop
[root@zhiyong2 ~]# cat /etc/profile
# /etc/profile

# System wide environment and startup programs, for login setup
# Functions and aliases go in /etc/bashrc

# It's NOT a good idea to change this file unless you know what you
# are doing. It's much better to create a custom.sh shell script in
# /etc/profile.d/ to make custom changes to your environment, as this
# will prevent the need for merging in future updates.

pathmunge () {
    case ":${PATH}:" in
        *:"$1":*)
            ;;
        *)
            if [ "$2" = "after" ] ; then
                PATH=$PATH:$1
            else
                PATH=$1:$PATH
            fi
    esac
}

if [ -x /usr/bin/id ]; then
    if [ -z "$EUID" ]; then
        # ksh workaround
        EUID=`/usr/bin/id -u`
        UID=`/usr/bin/id -ru`
    fi
    USER="`/usr/bin/id -un`"
    LOGNAME=$USER
    MAIL="/var/spool/mail/$USER"
fi

# Path manipulation
if [ "$EUID" = "0" ]; then
    pathmunge /usr/sbin
    pathmunge /usr/local/sbin
else
    pathmunge /usr/local/sbin after
    pathmunge /usr/sbin after
fi

HOSTNAME=`/usr/bin/hostname 2>/dev/null`
HISTSIZE=1000
if [ "$HISTCONTROL" = "ignorespace" ] ; then
    export HISTCONTROL=ignoreboth
else
    export HISTCONTROL=ignoredups
fi

export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL

# By default, we want umask to get set. This sets it for login shell
# Current threshold for system reserved uid/gids is 200
# You could check uidgid reservation validity in
# /usr/share/doc/setup-*/uidgid file
if [ $UID -gt 199 ] && [ "`/usr/bin/id -gn`" = "`/usr/bin/id -un`" ]; then
    umask 002
else
    umask 022
fi

for i in /etc/profile.d/*.sh /etc/profile.d/sh.local ; do
    if [ -r "$i" ]; then
        if [ "${-#*i}" != "$-" ]; then
            . "$i"
        else
            . "$i" >/dev/null
        fi
    fi
done

unset i
unset -f pathmunge
export JAVA_HOME=/usr/java/jdk1.8.0_202
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
SERVICE_BIN=`/bin/bash /srv/.service_env`
export PATH=${SERVICE_BIN}:$PATH
[root@zhiyong2 ~]# cd /srv/.service_env
-bash: cd: /srv/.service_env: Not a directory
[root@zhiyong2 ~]# cat /srv/.service_env
#!/bin/bash
# PARENT_PATH
export UDP_SERVICE_PATH=/srv/udp/2.0.0.0
# HOME
export FLINK_HOME=${UDP_SERVICE_PATH}/flink
export FLUME_HOME=${UDP_SERVICE_PATH}/flume
export HIVE_HOME=${UDP_SERVICE_PATH}/hive
export IMPALA_HOME=${UDP_SERVICE_PATH}/impala
export KYLIN_HOME=${UDP_SERVICE_PATH}/kylin
export LIVY_HOME=${UDP_SERVICE_PATH}/livy
export PHOENIX_HOME=${UDP_SERVICE_PATH}/phoenix
export PRESTO_HOME=${UDP_SERVICE_PATH}/presto
export TRINO_HOME=${UDP_SERVICE_PATH}/trino
export SPARK_HOME=${UDP_SERVICE_PATH}/spark
export SQOOP_HOME=${UDP_SERVICE_PATH}/sqoop
export DATAX_HOME=${UDP_SERVICE_PATH}/datax
export TEZ_HOME=${UDP_SERVICE_PATH}/tez
export YARN_HOME=${UDP_SERVICE_PATH}/yarn
export ELASTICSEARCH_HOME=${UDP_SERVICE_PATH}/elasticsearch
export HBASE_HOME=${UDP_SERVICE_PATH}/hbase
export HDFS_HOME=${UDP_SERVICE_PATH}/hdfs
export KAFKA_HOME=${UDP_SERVICE_PATH}/kafka
export KUDU_HOME=${UDP_SERVICE_PATH}/kudu
export ZOOKEEPER_HOME=${UDP_SERVICE_PATH}/zookeeper
export HUE_HOME=${UDP_SERVICE_PATH}/hue
export KAFKAEAGLE_HOME=${UDP_SERVICE_PATH}/kafkaeagle
export KIBANA_HOME=${UDP_SERVICE_PATH}/kibana
export ZEPPELIN_HOME=${UDP_SERVICE_PATH}/zeppelin
export ZKUI_HOME=${UDP_SERVICE_PATH}/zkui
export AIRFLOW_HOME=${UDP_SERVICE_PATH}/airflow
export OOZIE_HOME=${UDP_SERVICE_PATH}/oozie
export UDS_HOME=${UDP_SERVICE_PATH}/uds
export DOLPHINSCHEDULER_HOME=${UDP_SERVICE_PATH}/dolphinscheduler
export RANGER_HOME=${UDP_SERVICE_PATH}/ranger
export ATLAS_HOME=${UDP_SERVICE_PATH}/atlas
export NEO4J_HOME=${UDP_SERVICE_PATH}/neo4j
# BIN
export FLINK_BIN=${FLINK_HOME}/bin
export FLUME_BIN=${FLUME_HOME}/bin
export HIVE_BIN=${HIVE_HOME}/bin
export IMPALA_BIN=${IMPALA_HOME}/bin
export KYLIN_BIN=${KYLIN_HOME}/bin
export LIVY_BIN=${LIVY_HOME}/bin
export PHOENIX_BIN=${PHOENIX_HOME}/bin
export PRESTO_BIN=${PRESTO_HOME}/bin
export TRINO_BIN=${TRINO_HOME}/bin
export SPARK_BIN=${SPARK_HOME}/bin
export SQOOP_BIN=${SQOOP_HOME}/bin
export DATAX_BIN=${DATAX_HOME}/bin
export TEZ_BIN=${TEZ_HOME}/bin
export YARN_BIN=${YARN_HOME}/bin
export ELASTICSEARCH_BIN=${ELASTICSEARCH_HOME}/bin
export HBASE_BIN=${HBASE_HOME}/bin
export HDFS_BIN=${HDFS_HOME}/bin
export KAFKA_BIN=${KAFKA_HOME}/bin
export KUDU_BIN=${KUDU_HOME}/bin
export ZOOKEEPER_BIN=${ZOOKEEPER_HOME}/bin
export HUE_BIN=${HUE_HOME}/bin
export KAFKAEAGLE_BIN=${KAFKAEAGLE_HOME}/bin
export KIBANA_BIN=${KIBANA_HOME}/bin
export ZEPPELIN_BIN=${ZEPPELIN_HOME}/bin
export ZKUI_BIN=${ZKUI_HOME}/bin
export AIRFLOW_BIN=${AIRFLOW_HOME}/bin
export OOZIE_BIN=${OOZIE_HOME}/bin
export UDS_BIN=${UDS_HOME}/bin
export DOLPHINSCHEDULER_BIN=${DOLPHINSCHEDULER_HOME}/bin
export RANGER_BIN=${RANGER_HOME}/bin
export ATLAS_BIN=${ATLAS_HOME}/bin
export NEO4J_BIN=${NEO4J_HOME}/bin
SERVICE_BIN=${FLINK_BIN}:${FLUME_BIN}:${HIVE_BIN}:${IMPALA_BIN}:${KYLIN_BIN}:${LIVY_BIN}:${PHOENIX_BIN}:${PRESTO_BIN}:${SPARK_BIN}:${SQOOP_BIN}:${DATAX_BIN}:${TEZ_BIN}:${YARN_BIN}:${ELASTICSEARCH_BIN}:${HBASE_BIN}:${HDFS_BIN}:${KAFKA_BIN}:${KUDU_BIN}:${ZOOKEEPER_BIN}:${HUE_BIN}:${KAFKAEAGLE_BIN}:${KIBANA_BIN}:${ZEPPELIN_BIN}:${ZKUI_BIN}:${AIRFLOW_BIN}:${OOZIE_BIN}:${UDS_BIN}:${DOLPHINSCHEDULER_BIN}:${RANGER_BIN}:${ATLAS_BIN}:${TRINO_BIN}:${NEO4J_BIN}
echo ${SERVICE_BIN}
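
The mechanism is simple: /etc/profile runs this script with bash and captures the echoed colon-separated list into SERVICE_BIN, which it then prepends to PATH. A quick sanity check (suggested commands, not part of the original session):

[root@zhiyong2 ~]# bash /srv/.service_env           # prints the colon-separated list of service bin directories
[root@zhiyong2 ~]# echo $PATH | tr ':' '\n' | head  # those directories should lead the PATH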

As you can see, USDP already sets up a great many environment variables by default, which is quite friendly to beginners and non-specialist operators!
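
With these variables in place, any service home or bin directory can be referenced directly. For example (illustrative only, assuming the services are installed under those paths):

[root@zhiyong2 ~]# echo $SPARK_HOME        # /srv/udp/2.0.0.0/spark
[root@zhiyong2 ~]# ls $KAFKA_BIN           # the Kafka CLI scripts, already on the PATH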

Testing Word Count with the Bundled Flink 1.13

[root@zhiyong2 ~]# cd
[root@zhiyong2 ~]# ll
total 20
-rw-------. 1 root root  1639 Mar  1 05:40 anaconda-ks.cfg
drwxr-xr-x. 2 root root    60 Mar  1 23:11 logs
-rw-r--r--  1 root root 14444 Mar 11 22:36 test1.txt
[root@zhiyong2 ~]# touch wordtest1.txt
[root@zhiyong2 ~]# ll
total 20
-rw-------. 1 root root  1639 Mar  1 05:40 anaconda-ks.cfg
drwxr-xr-x. 2 root root    60 Mar  1 23:11 logs
-rw-r--r--  1 root root 14444 Mar 11 22:36 test1.txt
-rw-r--r--  1 root root     0 Mar 14 22:18 wordtest1.txt
[root@zhiyong2 ~]# vim wordtest1.txt
[root@zhiyong2 ~]# cat wordtest1.txt
hello
word
world
hehe
haha
haha
haha
hehe
digital
monster
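
Instead of editing interactively with vim, the same test file could be produced non-interactively with a here-document (equivalent sketch):

[root@zhiyong2 ~]# cat > /root/wordtest1.txt <<'EOF'
hello
word
world
hehe
haha
haha
haha
hehe
digital
monster
EOF
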
[root@zhiyong2 ~]# which flink
/srv/udp/2.0.0.0/flink/bin/flink
[root@zhiyong2 batch]# cd /srv/udp/2.0.0.0/flink/examples/batch
[root@zhiyong2 batch]# ll
total 144
-rwxr-xr-x. 1 hadoop hadoop 14542 Oct  9 17:26 ConnectedComponents.jar
-rwxr-xr-x. 1 hadoop hadoop 15872 Oct  9 17:26 DistCp.jar
-rwxr-xr-x. 1 hadoop hadoop 17240 Oct  9 17:26 EnumTriangles.jar
-rwxr-xr-x. 1 hadoop hadoop 19668 Oct  9 17:26 KMeans.jar
-rwxr-xr-x. 1 hadoop hadoop 16005 Oct  9 17:26 PageRank.jar
-rwxr-xr-x. 1 hadoop hadoop 12535 Oct  9 17:26 TransitiveClosure.jar
-rwxr-xr-x. 1 hadoop hadoop 26331 Oct  9 17:26 WebLogAnalysis.jar
-rwxr-xr-x. 1 hadoop hadoop 10432 Oct  9 17:26 WordCount.jar

Running it directly is guaranteed to fail:

[root@zhiyong2 ~]# flink run /srv/udp/2.0.0.0/flink/examples/batch/WordCount.jar --input /root/wordtest1.txt
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/flink/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/yarn/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Printing result to stdout. Use --output to specify output path.

------------------------------------------------------------
 The program finished with the following exception:

org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: java.util.concurrent.ExecutionException: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372)
	at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
	at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
	at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:812)
	at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:246)
	at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1054)
	at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
	at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
	at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132)
Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
	at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:316)
	at org.apache.flink.api.java.ExecutionEnvironment.executeAsync(ExecutionEnvironment.java:1061)
	at org.apache.flink.client.program.ContextEnvironment.executeAsync(ContextEnvironment.java:129)
	at org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:70)
	at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:942)
	at org.apache.flink.api.java.DataSet.collect(DataSet.java:417)
	at org.apache.flink.api.java.DataSet.print(DataSet.java:1748)
	at org.apache.flink.examples.java.wordcount.WordCount.main(WordCount.java:96)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355)
	... 11 more
Caused by: java.util.concurrent.ExecutionException: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
	at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
	at org.apache.flink.api.java.ExecutionEnvironment.executeAsync(ExecutionEnvironment.java:1056)
	... 22 more
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
	at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$9(RestClusterClient.java:405)
	at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
	at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
	at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
	at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$9(FutureUtils.java:390)
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
	at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
	at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$1(RestClient.java:430)
	at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
	at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:570)
	at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:549)
	at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
	at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
	at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608)
	at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
	at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:321)
	at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:337)
	at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
	at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
	at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
	at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
	at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.runtime.concurrent.FutureUtils$RetryException: Could not complete the operation. Number of retries has been exhausted.
	at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$9(FutureUtils.java:386)
	... 21 more
Caused by: java.util.concurrent.CompletionException: org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:8081
	at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
	at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
	at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:943)
	at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
	... 19 more
Caused by: org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:8081
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
	at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
	at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
	at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
	at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
	at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
	at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.lang.Thread.run(Thread.java:748)
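
The final "Connection refused: localhost/127.0.0.1:8081" is the key line: nothing is listening on the JobManager REST port. A quick way to check before submitting (assuming the default rest.port of 8081):

[root@zhiyong2 ~]# curl -s http://localhost:8081/overview || echo "no JobManager listening on 8081"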

In fact, the cluster must be started before a job can run:

[root@zhiyong2 ~]# start-cluster.sh
Starting cluster.
Starting standalonesession daemon on host zhiyong2.
Starting taskexecutor daemon on host zhiyong2.
[root@zhiyong2 ~]# flink run /srv/udp/2.0.0.0/flink/examples/batch/WordCount.jar --input /root/wordtest1.txt
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/flink/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/yarn/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Printing result to stdout. Use --output to specify output path.
Job has been submitted with JobID 332d103c5c3d037924473730e0a4155c
Program execution finished
Job with JobID 332d103c5c3d037924473730e0a4155c has finished.
Job Runtime: 1794 ms
Accumulator Results:
- dbb2606e28c38627e089d3f0e6864a3d (java.util.ArrayList) [7 elements]
(digital,1)
(haha,3)
(hehe,2)
(hello,1)
(monster,1)
(word,1)
(world,1)
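
The counts are easy to cross-check with plain coreutils (sketch; ordering differs from Flink's output):

[root@zhiyong2 ~]# sort /root/wordtest1.txt | uniq -c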

Open the web UI:

http://zhiyong2:8081/#/overview

You can see the job was submitted successfully:

You can inspect the processes:

[root@zhiyong2 ~]# ps aux | grep flink

When it is no longer needed, shut Flink down:

[root@zhiyong2 ~]# stop-cluster.sh
Stopping taskexecutor daemon (pid: 284215) on host zhiyong2.
Stopping standalonesession daemon (pid: 283695) on host zhiyong2.
[root@zhiyong2 ~]# ps aux | grep flink
root     311122  0.0  0.0 112824   980 pts/0    R+   22:44   0:00 grep --color=auto flink

The Flink bundled with USDP is 1.13, which is a bit dated, and its configuration also has problems. Installing the newer 1.14 also lets us enjoy unified stream-batch processing.

Simple Deployment of Flink 1.14.3

Prepare the installation paths:

[root@zhiyong2 ~]# mkdir -p /export/software/
[root@zhiyong2 ~]# mkdir -p /export/server/

Upload the tarball:

[root@zhiyong2 ~]# ls /export/software/
flink-1.14.3-bin-scala_2.12.tgz
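
If the machine has internet access, the tarball can also be fetched straight from the Apache archive instead of being uploaded manually (mirror URL assumed; verify before use):

[root@zhiyong2 ~]# wget -P /export/software/ https://archive.apache.org/dist/flink/flink-1.14.3/flink-1.14.3-bin-scala_2.12.tgz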

Extract it:

[root@zhiyong2 ~]# tar -zxvf /export/software/flink-1.14.3-bin-scala_2.12.tgz -C /export/server/

Test Word Count:

[root@zhiyong2 ~]# /export/server/flink-1.14.3/bin/start-cluster.sh
Starting cluster.
Starting standalonesession daemon on host zhiyong2.
Starting taskexecutor daemon on host zhiyong2.
[root@zhiyong2 ~]# /export/server/flink-1.14.3/bin/flink run /srv/udp/2.0.0.0/flink/examples/batch/WordCount.jar --input /root/wordtest1.txt
Printing result to stdout. Use --output to specify output path.
Job has been submitted with JobID 98d71fc1eb07630148e3709db89e3fd0
Program execution finished
Job with JobID 98d71fc1eb07630148e3709db89e3fd0 has finished.
Job Runtime: 2085 ms
Accumulator Results:
- 147191476201ac1195539c611aeb794c (java.util.ArrayList) [7 elements]
(digital,1)
(haha,3)
(hehe,2)
(hello,1)
(monster,1)
(word,1)
(world,1)

Open the web UI:

http://zhiyong2:8081/#/overview

You can see:

This is the genuine Flink 1.14!!! Seeing the big squirrel means the downloaded package is fine.

To keep using the environment variables and config-sync features USDP has already set up, we might as well replace USDP's Flink 1.13 outright; 1.14 is a milestone release anyway, and the old 1.13 will most likely never be needed again... Stop the currently running Flink before replacing it:

[root@zhiyong2 ~]# /export/server/flink-1.14.3/bin/stop-cluster.sh
Stopping taskexecutor daemon (pid: 317225) on host zhiyong2.
Stopping standalonesession daemon (pid: 316935) on host zhiyong2.

Replacing the Bundled Flink 1.13

Back Up the Old Flink

Back up the original files first so they can be restored if anything goes wrong:

[root@zhiyong2 ~]# cd /export/
[root@zhiyong2 export]# chmod 777 /export/software
[root@zhiyong2 export]# chmod 777 /export/server/
[root@zhiyong2 export]# ll
total 0
drwxrwxrwx 3 root root 26 Mar 14 22:32 server
drwxrwxrwx 2 root root 45 Mar 14 22:30 software
[root@zhiyong2 export]# cd
[root@zhiyong2 software]# cd
[root@zhiyong2 ~]# su - hadoop
Last login: Mon Mar 14 22:59:03 CST 2022
[hadoop@zhiyong2 ~]$ cp -r /srv/udp/2.0.0.0/flink /export/software/
[hadoop@zhiyong2 ~]$ cd /export/software/
[hadoop@zhiyong2 software]$ ll
total 331744
drwxr-xr-x 11 hadoop hadoop       157 Mar 14 23:00 flink
-rw-r--r--  1 root   root   339705501 Mar 14 22:30 flink-1.14.3-bin-scala_2.12.tgz
[hadoop@zhiyong2 software]$ exit
logout
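
A plain directory copy works; a compressed tarball would be a more compact rollback artifact (alternative sketch):

[hadoop@zhiyong2 ~]$ tar -czf /export/software/flink-1.13-backup.tgz -C /srv/udp/2.0.0.0 flink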

Replace Flink on zhiyong2

[root@zhiyong2 ~]# rm -rf /srv/udp/2.0.0.0/flink
[root@zhiyong2 ~]# chmod 777 -R /srv/udp/2.0.0.0/
[root@zhiyong2 ~]# cd /export/server/
[root@zhiyong2 server]# ll
total 0
drwxr-xr-x 10 501 games 156 Jan 11 07:45 flink-1.14.3
[root@zhiyong2 server]# su - hadoop
Last login: Mon Mar 14 23:13:04 CST 2022
[hadoop@zhiyong2 ~]$ cd /export/server/
[hadoop@zhiyong2 server]$ ll
total 0
drwxr-xr-x 10 501 games 156 Jan 11 07:45 flink-1.14.3
[hadoop@zhiyong2 server]$ cp -r ./flink-1.14.3/ ./flink
[hadoop@zhiyong2 server]$ ll
total 0
drwxr-xr-x 10 hadoop hadoop 156 Mar 14 23:14 flink
drwxr-xr-x 10    501 games  156 Jan 11 07:45 flink-1.14.3
[hadoop@zhiyong2 server]$ cp -r ./flink /srv/udp/2.0.0.0/flink
[hadoop@zhiyong2 flink]$ cd /srv/udp/2.0.0.0
[hadoop@zhiyong2 2.0.0.0]$ ll
total 24
drwxrwxrwx. 11 hadoop  hadoop   169 Mar  1 23:12 dolphinscheduler
drwxrwxrwx. 11 elastic elastic  261 Mar  1 23:09 elasticsearch
drwxr-xr-x  10 hadoop  hadoop   156 Mar 14 23:15 flink
drwxrwxrwx.  7 hadoop  hadoop   186 Mar  1 23:10 flume
drwxrwxrwx   8 hadoop  hadoop   201 Mar  3 00:17 hbase
drwxrwxrwx. 12 hadoop  hadoop   206 Mar  1 23:06 hdfs
drwxrwxrwx. 13 hadoop  hadoop   229 Mar  1 23:08 hive
drwxrwxrwx. 15 hue     hue     4096 Mar  1 23:11 hue
drwxrwxrwx.  8 hadoop  hadoop   120 Mar  1 23:09 kafka
drwxrwxrwx.  3 root    root      67 Mar  1 23:05 node_exporter
drwxrwxrwx   3 root    root      17 Mar 11 22:19 old
drwxrwxrwx.  6 hadoop  hadoop  4096 Mar  1 23:07 phoenix
drwxrwxrwx. 11 hadoop  hadoop  4096 Mar  1 23:11 ranger
drwxrwxrwx. 12 hadoop  hadoop   170 Mar  1 23:08 spark
drwxrwxrwx. 10 hadoop  hadoop  4096 Mar  1 23:08 sqoop
drwxrwxrwx.  6 hadoop  hadoop  4096 Mar  1 23:07 tez
drwxrwxrwx. 12 hadoop  hadoop   206 Mar  1 23:07 yarn
drwxrwxrwx. 12 hadoop  hadoop  4096 Mar  1 23:05 zookeeper
[hadoop@zhiyong2 2.0.0.0]$ exit
logout
[root@zhiyong2 server]# chmod 777 -R /srv/udp/2.0.0.0/
[root@zhiyong2 server]# cd /srv/udp/2.0.0.0/
[root@zhiyong2 2.0.0.0]# ll
total 24
drwxrwxrwx. 11 hadoop  hadoop   169 Mar  1 23:12 dolphinscheduler
drwxrwxrwx. 11 elastic elastic  261 Mar  1 23:09 elasticsearch
drwxrwxrwx  10 hadoop  hadoop   156 Mar 14 23:15 flink
drwxrwxrwx.  7 hadoop  hadoop   186 Mar  1 23:10 flume
drwxrwxrwx   8 hadoop  hadoop   201 Mar  3 00:17 hbase
drwxrwxrwx. 12 hadoop  hadoop   206 Mar  1 23:06 hdfs
drwxrwxrwx. 13 hadoop  hadoop   229 Mar  1 23:08 hive
drwxrwxrwx. 15 hue     hue     4096 Mar  1 23:11 hue
drwxrwxrwx.  8 hadoop  hadoop   120 Mar  1 23:09 kafka
drwxrwxrwx.  3 root    root      67 Mar  1 23:05 node_exporter
drwxrwxrwx   3 root    root      17 Mar 11 22:19 old
drwxrwxrwx.  6 hadoop  hadoop  4096 Mar  1 23:07 phoenix
drwxrwxrwx. 11 hadoop  hadoop  4096 Mar  1 23:11 ranger
drwxrwxrwx. 12 hadoop  hadoop   170 Mar  1 23:08 spark
drwxrwxrwx. 10 hadoop  hadoop  4096 Mar  1 23:08 sqoop
drwxrwxrwx.  6 hadoop  hadoop  4096 Mar  1 23:07 tez
drwxrwxrwx. 12 hadoop  hadoop   206 Mar  1 23:07 yarn
drwxrwxrwx. 12 hadoop  hadoop  4096 Mar  1 23:05 zookeeper
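
Before starting anything, a quick way to confirm the replacement really is 1.14.3 is the dist jar, whose file name carries the version (sketch):

[root@zhiyong2 2.0.0.0]# ls /srv/udp/2.0.0.0/flink/lib/ | grep flink-dist   # expect flink-dist_2.12-1.14.3.jar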

Test Word Count:

[root@zhiyong2 ~]# start-cluster.sh
Starting cluster.
Starting standalonesession daemon on host zhiyong2.
Starting taskexecutor daemon on host zhiyong2.
[root@zhiyong2 ~]# flink run /srv/udp/2.0.0.0/flink/examples/batch/WordCount.jar --input /root/wordtest1.txt
Printing result to stdout. Use --output to specify output path.
Job has been submitted with JobID b62bdeda4d704b20948eb1587d8035bd
Program execution finished
Job with JobID b62bdeda4d704b20948eb1587d8035bd has finished.
Job Runtime: 3427 ms
Accumulator Results:
- 0ad1f1ef00a3af540a29da2754412f2b (java.util.ArrayList) [7 elements]
(digital,1)
(haha,3)
(hehe,2)
(hello,1)
(monster,1)
(word,1)
(world,1)
[root@zhiyong2 ~]# stop-cluster.sh
Stopping taskexecutor daemon (pid: 464306) on host zhiyong2.
Stopping standalonesession daemon (pid: 463969) on host zhiyong2.

Before shutting the cluster down, the web UI can again be opened:

http://zhiyong2:8081/#/overview

and shows:

The Flink replacement on zhiyong2 is complete.

Synchronize the Flink Version Across the 3 Machines

Component versions within a cluster must be kept consistent!!! Otherwise all kinds of strange errors will occur. Since the zhiyong-2 cluster is a standby cluster and will most likely never run compute components, only the zhiyong-1 cluster is replaced this time.

┌──────────────────────────────────────────────────────────────────────┐
│                 • MobaXterm Personal Edition v21.4 •                 │
│               (SSH client, X server and network tools)               │
│                                                                      │
│ ➤ SSH session to root@192.168.88.102                                 │
│   • Direct SSH      :  ✔                                             │
│   • SSH compression :  ✔                                             │
│   • SSH-browser     :  ✔                                             │
│   • X11-forwarding  :  ✔  (remote display is forwarded through SSH)  │
│                                                                      │
│ ➤ For more info, ctrl+click on help or visit our website.            │
└──────────────────────────────────────────────────────────────────────┘
Last login: Mon Mar 14 22:43:16 2022 from 192.168.88.1
[root@zhiyong3 ~]# cd /srv/udp/2.0.0.0/
[root@zhiyong3 2.0.0.0]# ll
total 16
drwxr-xr-x. 11 hadoop  hadoop   169 Mar  1 23:12 dolphinscheduler
drwxr-xr-x. 11 elastic elastic 4096 Mar  1 23:52 elasticsearch
drwxr-xr-x. 11 hadoop  hadoop   157 Mar  1 23:09 flink
drwxr-xr-x.  7 hadoop  hadoop   186 Mar  1 23:10 flume
drwxr-xr-x   8 hadoop  hadoop   201 Mar  3 00:17 hbase
drwxr-xr-x. 12 hadoop  hadoop   206 Mar  1 23:06 hdfs
drwxrwxr-x. 13 hadoop  hadoop   229 Mar  1 23:08 hive
drwxr-xr-x.  8 hadoop  hadoop   120 Mar  1 23:09 kafka
drwxrwxrwx.  3 root    root      67 Mar  1 23:05 node_exporter
drwxr-xr-x   3 root    root      17 Mar 11 22:19 old
drwxr-xr-x.  6 hadoop  hadoop  4096 Mar  1 23:07 phoenix
drwxrwxr-x. 12 hadoop  hadoop   170 Mar  1 23:08 spark
drwxr-xr-x. 10 hadoop  hadoop  4096 Mar  1 23:08 sqoop
drwxr-xr-x. 12 hadoop  hadoop   206 Mar  1 23:07 yarn
drwxrwxrwx.  5 hadoop  hadoop    56 Mar  1 23:05 zkui
drwxrwxrwx. 12 hadoop  hadoop  4096 Mar  1 23:05 zookeeper
[root@zhiyong3 2.0.0.0]# rm -rf flink
[root@zhiyong3 2.0.0.0]# ssh zhiyong4
Warning: Permanently added 'zhiyong4,192.168.88.103' (ECDSA) to the list of known hosts.
Last login: Sun Mar 13 23:09:11 2022 from zhiyong2
[root@zhiyong4 ~]# rm -rf /srv/udp/2.0.0.0/flink
[root@zhiyong4 ~]# exit
logout
Connection to zhiyong4 closed.

Run scp:

[root@zhiyong2 2.0.0.0]# cd /srv/udp/2.0.0.0/
[root@zhiyong2 2.0.0.0]# ll
total 24
drwxrwxrwx. 11 hadoop  hadoop   169 Mar  1 23:12 dolphinscheduler
drwxrwxrwx. 11 elastic elastic  261 Mar  1 23:09 elasticsearch
drwxrwxrwx  10 hadoop  hadoop   156 Mar 14 23:15 flink
drwxrwxrwx.  7 hadoop  hadoop   186 Mar  1 23:10 flume
drwxrwxrwx   8 hadoop  hadoop   201 Mar  3 00:17 hbase
drwxrwxrwx. 12 hadoop  hadoop   206 Mar  1 23:06 hdfs
drwxrwxrwx. 13 hadoop  hadoop   229 Mar  1 23:08 hive
drwxrwxrwx. 15 hue     hue     4096 Mar  1 23:11 hue
drwxrwxrwx.  8 hadoop  hadoop   120 Mar  1 23:09 kafka
drwxrwxrwx.  3 root    root      67 Mar  1 23:05 node_exporter
drwxrwxrwx   3 root    root      17 Mar 11 22:19 old
drwxrwxrwx.  6 hadoop  hadoop  4096 Mar  1 23:07 phoenix
drwxrwxrwx. 11 hadoop  hadoop  4096 Mar  1 23:11 ranger
drwxrwxrwx. 12 hadoop  hadoop   170 Mar  1 23:08 spark
drwxrwxrwx. 10 hadoop  hadoop  4096 Mar  1 23:08 sqoop
drwxrwxrwx.  6 hadoop  hadoop  4096 Mar  1 23:07 tez
drwxrwxrwx. 12 hadoop  hadoop   206 Mar  1 23:07 yarn
drwxrwxrwx. 12 hadoop  hadoop  4096 Mar  1 23:05 zookeeper
[root@zhiyong2 2.0.0.0]# scp -r ./flink root@zhiyong3:$PWD
[root@zhiyong2 2.0.0.0]# scp -r ./flink root@zhiyong4:$PWD

You can see the scp copy succeeded:

[root@zhiyong3 2.0.0.0]# pwd
/srv/udp/2.0.0.0
[root@zhiyong3 2.0.0.0]# ll
total 16
drwxr-xr-x. 11 hadoop  hadoop   169 Mar  1 23:12 dolphinscheduler
drwxr-xr-x. 11 elastic elastic 4096 Mar  1 23:52 elasticsearch
drwxrwxrwx  10 root    root     156 Mar 14 23:29 flink
drwxr-xr-x.  7 hadoop  hadoop   186 Mar  1 23:10 flume
drwxr-xr-x   8 hadoop  hadoop   201 Mar  3 00:17 hbase
drwxr-xr-x. 12 hadoop  hadoop   206 Mar  1 23:06 hdfs
drwxrwxr-x. 13 hadoop  hadoop   229 Mar  1 23:08 hive
drwxr-xr-x.  8 hadoop  hadoop   120 Mar  1 23:09 kafka
drwxrwxrwx.  3 root    root      67 Mar  1 23:05 node_exporter
drwxr-xr-x   3 root    root      17 Mar 11 22:19 old
drwxr-xr-x.  6 hadoop  hadoop  4096 Mar  1 23:07 phoenix
drwxrwxr-x. 12 hadoop  hadoop   170 Mar  1 23:08 spark
drwxr-xr-x. 10 hadoop  hadoop  4096 Mar  1 23:08 sqoop
drwxr-xr-x. 12 hadoop  hadoop   206 Mar  1 23:07 yarn
drwxrwxrwx.  5 hadoop  hadoop    56 Mar  1 23:05 zkui
drwxrwxrwx. 12 hadoop  hadoop  4096 Mar  1 23:05 zookeeper
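
Note that after scp the flink directory on zhiyong3/zhiyong4 is owned by root, since scp preserves permission bits but not ownership. If the hadoop user later hits permission errors, ownership can be aligned with something like:

[root@zhiyong2 2.0.0.0]# for h in zhiyong3 zhiyong4; do ssh $h "chown -R hadoop:hadoop /srv/udp/2.0.0.0/flink"; done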

However, doing it this way leaves USDP unable to start its Flink History Server, so the missing wrapper scripts also have to be copied over:

[root@zhiyong2 bin]# cd /export/software/flink/bin/
[root@zhiyong2 bin]# ll
total 2104
-rwxr-xr-x 1 hadoop hadoop 2010313 Mar 14 23:00 bash-java-utils.jar
-rwxr-xr-x 1 hadoop hadoop     449 Mar 14 23:00 check-flink-logs-dir.sh
-rwxr-xr-x 1 hadoop hadoop   20867 Mar 14 23:00 config.sh
-rwxr-xr-x 1 hadoop hadoop    1318 Mar 14 23:00 find-flink-home.sh
-rwxr-xr-x 1 hadoop hadoop    2381 Mar 14 23:00 flink
-rwxr-xr-x 1 hadoop hadoop    4137 Mar 14 23:00 flink-console.sh
-rwxr-xr-x 1 hadoop hadoop    6571 Mar 14 23:00 flink-daemon.sh
-rwxr-xr-x 1 hadoop hadoop    1959 Mar 14 23:00 historyserver.sh
-rwxr-xr-x 1 hadoop hadoop    2295 Mar 14 23:00 jobmanager.sh
-rwxr-xr-x 1 hadoop hadoop    1650 Mar 14 23:00 kubernetes-jobmanager.sh
-rwxr-xr-x 1 hadoop hadoop    1717 Mar 14 23:00 kubernetes-session.sh
-rwxr-xr-x 1 hadoop hadoop    1770 Mar 14 23:00 kubernetes-taskmanager.sh
-rwxr-xr-x 1 hadoop hadoop    1133 Mar 14 23:00 mesos-appmaster-job.sh
-rwxr-xr-x 1 hadoop hadoop    1137 Mar 14 23:00 mesos-appmaster.sh
-rwxr-xr-x 1 hadoop hadoop    1958 Mar 14 23:00 mesos-jobmanager.sh
-rwxr-xr-x 1 hadoop hadoop    1891 Mar 14 23:00 mesos-taskmanager.sh
drwxr-xr-x 2 hadoop hadoop      96 Mar 14 23:00 old
-rwxr-xr-x 1 hadoop hadoop    2994 Mar 14 23:00 pyflink-shell.sh
-rwxr-xr-x 1 hadoop hadoop    3742 Mar 14 23:00 sql-client.sh
-rwxr-xr-x 1 hadoop hadoop    2006 Mar 14 23:00 standalone-job.sh
-rwxr-xr-x 1 hadoop hadoop    1837 Mar 14 23:00 start-cluster.sh
-rwxr-xr-x 1 hadoop hadoop      90 Mar 14 23:00 start-flink-historyserver.sh
-rwxr-xr-x 1 hadoop hadoop    3380 Mar 14 23:00 start-scala-shell.sh
-rwxr-xr-x 1 hadoop hadoop    1854 Mar 14 23:00 start-zookeeper-quorum.sh
-rwxr-xr-x 1 hadoop hadoop    1617 Mar 14 23:00 stop-cluster.sh
-rwxr-xr-x 1 hadoop hadoop      89 Mar 14 23:00 stop-flink-historyserver.sh
-rwxr-xr-x 1 hadoop hadoop    1845 Mar 14 23:00 stop-zookeeper-quorum.sh
-rwxr-xr-x 1 hadoop hadoop    2960 Mar 14 23:00 taskmanager.sh
-rwxr-xr-x 1 hadoop hadoop    1725 Mar 14 23:00 yarn-session.sh
-rwxr-xr-x 1 hadoop hadoop    2405 Mar 14 23:00 zookeeper.sh
[root@zhiyong2 bin]# cp ./start-flink-historyserver.sh /srv/udp/2.0.0.0/flink/bin
[root@zhiyong2 bin]# cd /srv/udp/2.0.0.0/flink/bin
[root@zhiyong2 bin]# ll
total 2352
-rwxrwxrwx 1 hadoop hadoop 2290643 Mar 14 23:15 bash-java-utils.jar
-rwxrwxrwx 1 hadoop hadoop   20576 Mar 14 23:15 config.sh
-rwxrwxrwx 1 hadoop hadoop    1318 Mar 14 23:15 find-flink-home.sh
-rwxrwxrwx 1 hadoop hadoop    2381 Mar 14 23:15 flink
-rwxrwxrwx 1 hadoop hadoop    4247 Mar 14 23:15 flink-console.sh
-rwxrwxrwx 1 hadoop hadoop    6584 Mar 14 23:15 flink-daemon.sh
-rwxrwxrwx 1 hadoop hadoop    1564 Mar 14 23:15 historyserver.sh
-rwxrwxrwx 1 hadoop hadoop    2295 Mar 14 23:15 jobmanager.sh
-rwxrwxrwx 1 hadoop hadoop    1650 Mar 14 23:15 kubernetes-jobmanager.sh
-rwxrwxrwx 1 hadoop hadoop    1717 Mar 14 23:15 kubernetes-session.sh
-rwxrwxrwx 1 hadoop hadoop    1770 Mar 14 23:15 kubernetes-taskmanager.sh
-rwxrwxrwx 1 hadoop hadoop    2994 Mar 14 23:15 pyflink-shell.sh
-rwxrwxrwx 1 hadoop hadoop    3742 Mar 14 23:15 sql-client.sh
-rwxrwxrwx 1 hadoop hadoop    2006 Mar 14 23:15 standalone-job.sh
-rwxrwxrwx 1 hadoop hadoop    1837 Mar 14 23:15 start-cluster.sh
-rwxr-xr-x 1 root   root        90 Mar 14 23:41 start-flink-historyserver.sh
-rwxrwxrwx 1 hadoop hadoop    1854 Mar 14 23:15 start-zookeeper-quorum.sh
-rwxrwxrwx 1 hadoop hadoop    1617 Mar 14 23:15 stop-cluster.sh
-rwxrwxrwx 1 hadoop hadoop    1845 Mar 14 23:15 stop-zookeeper-quorum.sh
-rwxrwxrwx 1 hadoop hadoop    2960 Mar 14 23:15 taskmanager.sh
-rwxrwxrwx 1 hadoop hadoop    1725 Mar 14 23:15 yarn-session.sh
-rwxrwxrwx 1 hadoop hadoop    2405 Mar 14 23:15 zookeeper.sh
[root@zhiyong2 bin]# scp ./start-flink-historyserver.sh root@zhiyong3:$PWD
start-flink-historyserver.sh                                                                                                100%   90    79.5KB/s   00:00
[root@zhiyong2 bin]# scp ./start-flink-historyserver.sh root@zhiyong4:$PWD
start-flink-historyserver.sh                                                                                                100%   90    26.5KB/s   00:00

Now USDP's History Server can be started:

You also need to copy USDP's wrapper script for stopping the history service; otherwise starting or stopping the Flink History Server from the USDP Web UI will report errors (vanilla Apache Flink lacks USDP's own wrapper scripts).

[root@zhiyong2 bin]# cd /export/software/flink/bin/
[root@zhiyong2 bin]# cp ./stop-flink-historyserver.sh /srv/udp/2.0.0.0/flink/bin
[root@zhiyong2 bin]# cd /srv/udp/2.0.0.0/flink/bin
[root@zhiyong2 bin]# scp ./stop-flink-historyserver.sh root@zhiyong3:$PWD
stop-flink-historyserver.sh                                                                                                 100%   89    40.1KB/s   00:00
[root@zhiyong2 bin]# scp ./stop-flink-historyserver.sh root@zhiyong4:$PWD
stop-flink-historyserver.sh                                                                                                 100%   89    37.5KB/s   00:00

The contents of these two scripts:

[root@zhiyong2 bin]# cat start-flink-historyserver.sh
#!/bin/bash
su -s /bin/bash hadoop -c "/srv/udp/2.0.0.0/flink/bin/historyserver.sh start"
[root@zhiyong2 bin]#
[root@zhiyong2 bin]# cat stop-flink-historyserver.sh
#!/bin/bash
su -s /bin/bash hadoop -c "/srv/udp/2.0.0.0/flink/bin/historyserver.sh stop"
[root@zhiyong2 bin]#
[root@zhiyong2 bin]#
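
The wrappers simply drop from root to the hadoop user via su before calling Flink's own historyserver.sh. The same thing can be exercised by hand to verify the scripts work (sketch; as noted later, the history server may still fail for configuration reasons):

[root@zhiyong2 bin]# ./start-flink-historyserver.sh
[root@zhiyong2 bin]# su -s /bin/bash hadoop -c "jps" | grep HistoryServer   # listed only if startup actually succeeded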

Copy the configuration:

[root@zhiyong2 conf]# cd /srv/udp/2.0.0.0/flink/conf/
[root@zhiyong2 conf]# ll
total 52
-rwxrwxrwx 1 hadoop hadoop 10615 Mar 15 00:02 flink-conf.yaml
-rwxrwxrwx 1 hadoop hadoop  2917 Mar 14 23:15 log4j-cli.properties
-rwxrwxrwx 1 hadoop hadoop  3041 Mar 14 23:15 log4j-console.properties
-rwxrwxrwx 1 hadoop hadoop  2694 Mar 14 23:15 log4j.properties
-rwxrwxrwx 1 hadoop hadoop  2041 Mar 14 23:15 log4j-session.properties
-rwxrwxrwx 1 hadoop hadoop  2740 Mar 15 00:02 logback-console.xml
-rwxrwxrwx 1 hadoop hadoop  1550 Mar 14 23:15 logback-session.xml
-rwxrwxrwx 1 hadoop hadoop  2327 Mar 15 00:02 logback.xml
-rwxrwxrwx 1 hadoop hadoop    15 Mar 14 23:15 masters
drwxr-xr-x 2 root   root      81 Mar 15 00:02 old
-rwxrwxrwx 1 hadoop hadoop    10 Mar 14 23:15 workers
-rwxrwxrwx 1 hadoop hadoop  1434 Mar 14 23:15 zoo.cfg
[root@zhiyong2 conf]# cp /export/software/flink/conf/hive-site.xml /srv/udp/2.0.0.0/flink/conf/
[root@zhiyong2 conf]# ll
total 60
-rwxrwxrwx 1 hadoop hadoop 10615 Mar 15 00:02 flink-conf.yaml
-rwxr-xr-x 1 root   root    4469 Mar 15 00:05 hive-site.xml
-rwxrwxrwx 1 hadoop hadoop  2917 Mar 14 23:15 log4j-cli.properties
-rwxrwxrwx 1 hadoop hadoop  3041 Mar 14 23:15 log4j-console.properties
-rwxrwxrwx 1 hadoop hadoop  2694 Mar 14 23:15 log4j.properties
-rwxrwxrwx 1 hadoop hadoop  2041 Mar 14 23:15 log4j-session.properties
-rwxrwxrwx 1 hadoop hadoop  2740 Mar 15 00:02 logback-console.xml
-rwxrwxrwx 1 hadoop hadoop  1550 Mar 14 23:15 logback-session.xml
-rwxrwxrwx 1 hadoop hadoop  2327 Mar 15 00:02 logback.xml
-rwxrwxrwx 1 hadoop hadoop    15 Mar 14 23:15 masters
drwxr-xr-x 2 root   root      81 Mar 15 00:02 old
-rwxrwxrwx 1 hadoop hadoop    10 Mar 14 23:15 workers
-rwxrwxrwx 1 hadoop hadoop  1434 Mar 14 23:15 zoo.cfg
[root@zhiyong2 conf]# scp ./hive-site.xml root@zhiyong3:$PWD
hive-site.xml                                                                                                               100% 4469     1.9MB/s   00:00
[root@zhiyong2 conf]# scp ./hive-site.xml root@zhiyong4:$PWD
hive-site.xml                                                                                                               100% 4469     2.0MB/s   00:00

This fixes the immediate errors; after later changes, or when bugs surface, the various XML config files will still need to be checked by hand. That is the downside of big data components: the endless configuration is tedious.
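
One low-effort way to catch config drift across nodes is to compare checksums (illustrative sketch; matching sums mean identical copies):

[root@zhiyong2 conf]# md5sum ./hive-site.xml
[root@zhiyong2 conf]# ssh zhiyong3 md5sum /srv/udp/2.0.0.0/flink/conf/hive-site.xml
[root@zhiyong2 conf]# ssh zhiyong4 md5sum /srv/udp/2.0.0.0/flink/conf/hive-site.xml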

Modify the Flink Configuration

At this point USDP's config-editing and config-sync features can be used:

In fact, USDP's bundled YARN configuration is missing settings its History Server requires, so that server cannot even start itself, which in turn means Flink's history server will not come up either!!! Still, with the start/stop scripts copied into place, the Web UI no longer reports errors. This part of the configuration will be reworked gradually later; for now it is usable enough.
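
For later reference, Flink's history server is driven by a handful of flink-conf.yaml keys along these lines (the values here are illustrative placeholders, not USDP's actual settings; adjust before use):

[root@zhiyong2 conf]# cat >> /srv/udp/2.0.0.0/flink/conf/flink-conf.yaml <<'EOF'
# where the JobManager archives finished jobs
jobmanager.archive.fs.dir: hdfs:///flink/completed-jobs/
# where the history server reads archives from, and how often it rescans (ms)
historyserver.archive.fs.dir: hdfs:///flink/completed-jobs/
historyserver.archive.fs.refresh-interval: 10000
# bind address/port of the history server web UI
historyserver.web.address: 0.0.0.0
historyserver.web.port: 8082
EOF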
