Table of Contents

  • 01 Introduction
  • 02 Local (single-machine) mode
    • 2.1 How it works
    • 2.2 Installation and deployment
    • 2.3 Verification
  • 03 Standalone cluster mode
    • 3.1 How it works
    • 3.2 Installation and deployment
    • 3.3 Verification
  • 04 Standalone-HA (high-availability cluster) mode
    • 4.1 How it works
    • 4.2 Installation and deployment
    • 4.3 Verification
  • 05 Flink on YARN mode
    • 5.1 Why YARN
    • 5.2 How it works
    • 5.3 Two submission modes
      • 5.3.1 Session mode
      • 5.3.2 Per-Job mode
    • 5.4 Installation and deployment
    • 5.5 Verification
      • 5.5.1 Session mode
      • 5.5.2 Per-Job (detached) mode
  • 06 Parameter reference
  • 07 Closing notes

01 Introduction

In earlier posts we built a first impression of Flink; interested readers can refer to:

  • Flink Tutorial (01) - Flink Knowledge Map
  • Flink Tutorial (02) - Getting Started with Flink

Before learning Flink you need a working Flink environment, so this post walks through setting one up.

From the physical deployment layer covered in Flink Tutorial (01) - Flink Knowledge Map, we know Flink offers several deployment modes, split by local versus cluster:

  • Local (single-machine mode): for learning and local testing
  • Standalone (self-contained cluster mode): Flink's built-in cluster, for development and test environments
  • Standalone-HA (self-contained cluster with high availability): Flink's built-in cluster, for development and test environments
  • On YARN (compute resources managed by Hadoop YARN): for production

This post covers each of them.

02 Local (single-machine) mode

2.1 How it works


The flow shown in the figure above:

  1. The Flink program is submitted by the JobClient;
  2. the JobClient submits the job to the JobManager;
  3. the JobManager coordinates resource allocation and job execution; once resources are allocated, the task is handed to the appropriate TaskManager;
  4. the TaskManager starts a thread to execute the task and reports status changes (e.g. started, in progress, finished) back to the JobManager;
  5. when the job completes, the result is sent back to the client (JobClient).

2.2 Installation and deployment

step1: download the release tarball

  • https://archive.apache.org/dist/flink/

step2: upload flink-1.12.0-bin-scala_2.12.tgz to the target directory on node1

step3: extract it

tar -zxvf flink-1.12.0-bin-scala_2.12.tgz

step4: fix ownership

chown -R root:root /export/server/flink-1.12.0

step5: rename the directory, or create a symlink

mv flink-1.12.0 flink
ln -s /export/server/flink-1.12.0 /export/server/flink

2.3 Verification

1. Prepare the file /root/words.txt

vim /root/words.txt

with the following content:

hello me you her
hello me you
hello me
hello

2. Start the local Flink "cluster"

/export/server/flink/bin/start-cluster.sh

3. jps should now show these two processes:

  - TaskManagerRunner
  - StandaloneSessionClusterEntrypoint

4. Open the Flink Web UI: http://node1:8081/#/overview


A slot in Flink can be thought of as a resource group: Flink runs a program in parallel by splitting it into subtasks and scheduling those subtasks into slots.
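You can see slots and parallelism interact by querying the slot counts through the Web UI's REST API and overriding a job's parallelism with -p. A minimal sketch (assuming the local cluster from step 2 is running and curl is available; the /root/out2 output path is just an example):

# Total vs. available slots, as reported by the REST API behind the Web UI
curl -s http://node1:8081/overview

# Run the bundled WordCount with an explicit parallelism of 1;
# if -p exceeds the free slot count, the job cannot be scheduled
/export/server/flink/bin/flink run -p 1 \
  /export/server/flink/examples/batch/WordCount.jar \
  --input /root/words.txt --output /root/out2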

5. Run the official example:

/export/server/flink/bin/flink run \
/export/server/flink/examples/batch/WordCount.jar \
--input /root/words.txt --output /root/out

6. Stop Flink

/export/server/flink/bin/stop-cluster.sh

To start an interactive Scala shell (note: none of the Scala 2.12 builds currently ship the Scala shell, so this requires a Scala 2.11 build):

/export/server/flink/bin/start-scala-shell.sh local

Run the following in the shell:

benv.readTextFile("/root/words.txt").flatMap(_.split(" ")).map((_,1)).groupBy(0).sum(1).print()

Exit the shell:

:quit

03 Standalone cluster mode

3.1 How it works


Workflow:

  1. The client submits the job to the JobManager;
  2. the JobManager requests the resources the job needs and manages both jobs and resources;
  3. the JobManager dispatches tasks to the TaskManagers for execution;
  4. the TaskManagers periodically report their status to the JobManager.
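Once the cluster from section 3.2 is running, a quick way to confirm that each role landed on the planned host is to run jps over ssh. A small sketch, assuming passwordless ssh between the nodes:

# Expect StandaloneSessionClusterEntrypoint on the master (node1)
# and TaskManagerRunner on every worker
for host in node1 node2 node3; do
  echo "== $host =="
  ssh "$host" jps
done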

3.2 Installation and deployment

step1: plan the cluster

  • node1 (Master + Slave): JobManager + TaskManager
  • node2 (Slave): TaskManager
  • node3 (Slave): TaskManager

step2: edit flink-conf.yaml

vim /export/server/flink/conf/flink-conf.yaml

Set the following:

jobmanager.rpc.address: node1
taskmanager.numberOfTaskSlots: 2
web.submit.enable: true

# History server
jobmanager.archive.fs.dir: hdfs://node1:8020/flink/completed-jobs/
historyserver.web.address: node1
historyserver.web.port: 8082
historyserver.archive.fs.dir: hdfs://node1:8020/flink/completed-jobs/
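Both fs.dir entries point at HDFS, so it is worth creating the archive directory up front; the JobManager writes finished jobs there and the HistoryServer reads them back. A sketch, assuming HDFS is already running on node1:

# Create the shared job-archive directory on HDFS
hdfs dfs -mkdir -p /flink/completed-jobs/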

step3: edit masters

vim /export/server/flink/conf/masters

Content:

node1:8081

step4: edit the workers file (named slaves in older Flink versions)

vim /export/server/flink/conf/workers

Content:

node1
node2
node3

step5: add the HADOOP_CONF_DIR environment variable

vim /etc/profile

Append:

export HADOOP_CONF_DIR=/export/server/hadoop/etc/hadoop

step6: distribute everything to the other nodes

scp -r /export/server/flink node2:/export/server/flink
scp -r /export/server/flink node3:/export/server/flink
scp /etc/profile node2:/etc/profile
scp /etc/profile node3:/etc/profile

Or, equivalently for the Flink directory, from inside /export/server:

for i in {2..3}; do scp -r flink node$i:$PWD; done

step7: reload the profile (on every node)

source /etc/profile
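To double-check that the variable is visible on every node, a small sketch (assuming passwordless ssh; the explicit source is needed because non-interactive ssh shells may not read /etc/profile):

for host in node1 node2 node3; do
  ssh "$host" 'source /etc/profile; echo "$HOSTNAME: $HADOOP_CONF_DIR"'
done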

3.3 Verification

1. Start the cluster; on node1 run:

/export/server/flink/bin/start-cluster.sh

Or start the daemons individually:

/export/server/flink/bin/jobmanager.sh ((start|start-foreground) cluster)|stop|stop-all
/export/server/flink/bin/taskmanager.sh start|start-foreground|stop|stop-all

2. Start the history server

/export/server/flink/bin/historyserver.sh start

3. Open the Flink UIs, or check with jps

  • http://node1:8081/#/overview
  • http://node1:8082/#/overview

The TaskManager page shows how many TaskManagers the cluster currently has, and each TaskManager's slots, memory, and CPU cores.

4. Run the official example (note that --parallelism is a flink run option, so it must come before the jar file)

/export/server/flink/bin/flink run --parallelism 2 \
/export/server/flink/examples/batch/WordCount.jar \
--input hdfs://node1:8020/wordcount/input/words.txt \
--output hdfs://node1:8020/wordcount/output/result.txt

5. Check the job history

  • http://node1:50070/explorer.html#/flink/completed-jobs
  • http://node1:8082/#/overview

6. Stop the cluster

/export/server/flink/bin/stop-cluster.sh

04 Standalone-HA (high-availability cluster) mode

4.1 How it works


The architecture above has an obvious single point of failure (SPOF): the JobManager. Since the JobManager handles task scheduling and resource allocation, its loss takes down the whole cluster.

How HA works:

  • With ZooKeeper's help, a standalone Flink cluster runs several live JobManagers at once, only one of which is active; the rest are on standby.
  • When the active JobManager becomes unreachable (e.g. it crashes or the machine goes down), ZooKeeper elects one of the standby JobManagers to take over the cluster.
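Once HA is set up (see 4.2), each master serves the cluster's REST API, so you can confirm both are alive and inspect their effective configuration from the command line. A sketch, assuming the Web UIs from step6 are reachable:

# Fetch the JobManager configuration from each master's REST endpoint
for host in node1 node2; do
  echo "== $host =="
  curl -s "http://$host:8081/jobmanager/config"
done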

4.2 Installation and deployment

step1: plan the cluster

  • node1 (Master + Slave): JobManager + TaskManager
  • node2 (Master + Slave): JobManager + TaskManager
  • node3 (Slave): TaskManager

step2: start ZooKeeper

zkServer.sh status
zkServer.sh stop
zkServer.sh start

step3: start HDFS

/export/server/hadoop/sbin/start-dfs.sh

step4: stop the Flink cluster

/export/server/flink/bin/stop-cluster.sh

step5: edit flink-conf.yaml

vim /export/server/flink/conf/flink-conf.yaml

Add the following:

state.backend: filesystem
state.backend.fs.checkpointdir: hdfs://node1:8020/flink-checkpoints
high-availability: zookeeper
high-availability.storageDir: hdfs://node1:8020/flink/ha/
high-availability.zookeeper.quorum: node1:2181,node2:2181,node3:2181

The same settings, annotated:

# Use the filesystem state backend for snapshots
state.backend: filesystem
# Enable checkpoints and store the snapshots in HDFS
state.backend.fs.checkpointdir: hdfs://node1:8020/flink-checkpoints
# Use ZooKeeper for high availability
high-availability: zookeeper
# Store JobManager metadata in HDFS
high-availability.storageDir: hdfs://node1:8020/flink/ha/
# ZooKeeper quorum addresses
high-availability.zookeeper.quorum: node1:2181,node2:2181,node3:2181
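Since both the checkpoint directory and the HA storage directory live on HDFS, creating them ahead of time avoids first-start failures. A sketch, assuming HDFS is running:

# Pre-create the HDFS paths referenced in flink-conf.yaml
hdfs dfs -mkdir -p /flink-checkpoints /flink/ha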

step6: edit masters

vim /export/server/flink/conf/masters

node1:8081
node2:8081

step7: sync the changed files

scp -r /export/server/flink/conf/flink-conf.yaml node2:/export/server/flink/conf/
scp -r /export/server/flink/conf/flink-conf.yaml node3:/export/server/flink/conf/
scp -r /export/server/flink/conf/masters node2:/export/server/flink/conf/
scp -r /export/server/flink/conf/masters node3:/export/server/flink/conf/

step8: on node2, edit flink-conf.yaml

vim /export/server/flink/conf/flink-conf.yaml

and change:

jobmanager.rpc.address: node2

step9: restart the Flink cluster; on node1 run

/export/server/flink/bin/stop-cluster.sh
/export/server/flink/bin/start-cluster.sh


step10: check with jps — no Flink processes were started

step11: inspect the log

cat /export/server/flink/log/flink-root-standalonesession-0-node1.log

It contains an error caused by a missing dependency: starting with Flink 1.8, the official binaries no longer bundle the jar that integrates with HDFS.

step12: download that jar, drop it into Flink's lib directory, and distribute it so Flink can talk to Hadoop

  • Download: https://flink.apache.org/downloads.html
  • Place it in the lib directory (cd /export/server/flink/lib)
  • Distribute it (for i in {2..3}; do scp -r flink-shaded-hadoop-2-uber-2.7.5-10.0.jar node$i:$PWD; done)
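Before restarting, it can save a round trip to verify that the jar actually landed in lib on all three nodes. A sketch, assuming passwordless ssh:

for host in node1 node2 node3; do
  echo "== $host =="
  ssh "$host" 'ls /export/server/flink/lib | grep shaded-hadoop'
done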

step13: restart the Flink cluster; on node1 run

/export/server/flink/bin/start-cluster.sh

step14: jps now shows the expected processes on all three machines

4.3 Verification

1. Open the Web UIs

  • http://node1:8081/#/job-manager/config
  • http://node2:8081/#/job-manager/config

2. Run the WordCount job

/export/server/flink/bin/flink run \
/export/server/flink/examples/batch/WordCount.jar

3. Kill one of the masters (the active JobManager)
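A sketch for killing the active JobManager on whichever node currently hosts it (the process name is the one jps showed earlier; -9 simulates a hard crash):

# Find the standalone JobManager process and kill it
jps | grep StandaloneSessionClusterEntrypoint
kill -9 $(jps | awk '/StandaloneSessionClusterEntrypoint/{print $1}')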

4. Re-run WordCount; it still executes normally

/export/server/flink/bin/flink run \
/export/server/flink/examples/batch/WordCount.jar

5. Stop the cluster

/export/server/flink/bin/stop-cluster.sh

05 Flink on YARN mode

5.1 Why YARN

In real-world development, Flink is most often run on YARN, for the following reasons:

Reason 1: YARN allocates resources on demand, which improves overall cluster utilization.

Reason 2: YARN jobs carry priorities, and jobs run according to them.

Reason 3: the YARN scheduling system handles failover for every role automatically:

  • Both the JobManager process and the TaskManager processes are supervised by the YARN NodeManager;
  • if the JobManager process exits abnormally, the YARN ResourceManager reschedules a JobManager onto another machine;
  • if a TaskManager process exits abnormally, the JobManager is notified and requests resources from the YARN ResourceManager again to restart the TaskManager.

5.2 How it works


The flow:

  1. The client uploads the Flink jars and configuration files to HDFS;
  2. the client submits the job to the YARN ResourceManager and requests resources;
  3. the ResourceManager allocates a container and launches the ApplicationMaster;
  4. the ApplicationMaster loads the Flink jars and configuration, builds the environment, and starts the JobManager; the JobManager and the ApplicationMaster run in the same container;
  5. once they are up, the ApplicationMaster knows the JobManager's address (its own machine) and generates a new Flink configuration file for the TaskManagers (so that they can connect to the JobManager); this file is also uploaded to HDFS;
  6. the ApplicationMaster's container also serves Flink's web interface; all ports allocated by YARN are ephemeral, which lets users run several Flink clusters side by side;
  7. the ApplicationMaster requests worker resources from the ResourceManager, and the NodeManagers load the Flink jars and configuration, build the environment, and start the TaskManagers;
  8. after starting, each TaskManager sends heartbeats to the JobManager and waits for tasks to be assigned.
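Whichever way the cluster is started, the result is an ordinary YARN application, so the standard yarn CLI can be used to inspect it. A sketch (run on any node with the Hadoop client configured; the application id is a placeholder for whatever -list prints):

# Flink sessions/jobs show up as YARN applications
yarn application -list

# Aggregated logs for one application
yarn logs -applicationId application_XXXXXXXXXXXXX_NNNN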

5.3 Two submission modes

5.3.1 Session mode



Characteristics: resources are requested up front; the JobManager and TaskManagers are started ahead of time.
Pros: jobs reuse the pre-allocated resources instead of requesting them on every submission, which improves execution efficiency.
Cons: resources are not released when a job finishes, so they stay occupied.
Use cases: frequent job submissions; workloads with many small jobs.

5.3.2 Per-Job mode



Characteristics: every job submission requests its own resources.
Pros: resources are released as soon as the job finishes; nothing stays occupied.
Cons: each submission pays the resource-allocation latency, which hurts execution efficiency.
Use cases: infrequent submissions; large jobs.

5.4 Installation and deployment

step1: disable YARN's memory checks

vim /export/server/hadoop/etc/hadoop/yarn-site.xml

Add:

<!-- Disable YARN memory checks -->
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>

Notes:

  • These switches control whether a thread checks each task's physical/virtual memory usage and kills any container that exceeds its allocation; both default to true.
  • We disable them here because Flink on YARN easily exceeds the limits, in which case YARN would kill the job.

step2: sync the file

scp -r /export/server/hadoop/etc/hadoop/yarn-site.xml node2:/export/server/hadoop/etc/hadoop/yarn-site.xml
scp -r /export/server/hadoop/etc/hadoop/yarn-site.xml node3:/export/server/hadoop/etc/hadoop/yarn-site.xml

step3: restart YARN

/export/server/hadoop/sbin/stop-yarn.sh
/export/server/hadoop/sbin/start-yarn.sh

5.5 Verification

5.5.1 Session mode

yarn-session.sh (allocate the resources) + flink run (submit jobs)

1. Start a Flink session on YARN; on node1 run:

/export/server/flink/bin/yarn-session.sh -n 2 -tm 800 -s 1 -d

This requests 2 CPUs and 1600 MB of memory in total:

# -n  number of containers, i.e. how many TaskManagers
# -tm memory per TaskManager (MB)
# -s  number of slots per TaskManager
# -d  run detached, as a background process

The following warning can be ignored:

WARN org.apache.hadoop.hdfs.DFSClient - Caught exception
java.lang.InterruptedException
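If you later need to get back to a detached session (for example to stop it cleanly), the session CLI can re-attach by application id. A sketch; the id is a placeholder for the one printed at startup or by yarn application -list:

# Re-attach to a running Flink session on YARN
/export/server/flink/bin/yarn-session.sh -id application_XXXXXXXXXXXXX_NNNN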

2. Open the YARN UI: http://node1:8088/cluster

3. Submit jobs with flink run

/export/server/flink/bin/flink run \
/export/server/flink/examples/batch/WordCount.jar

After it finishes you can keep submitting further small jobs:

/export/server/flink/bin/flink run \
/export/server/flink/examples/batch/WordCount.jar

4. The ApplicationMaster link at the top of the YARN application page opens Flink's web UI


5. Close the yarn-session and clean up the local session metadata:

yarn application -kill application_1599402747874_0001

rm -rf /tmp/.yarn-properties-root

5.5.2 Per-Job (detached) mode

1. Submit the job directly

/export/server/flink/bin/flink run -m yarn-cluster -yjm 1024 -ytm 1024 \
/export/server/flink/examples/batch/WordCount.jar
# -m   JobManager address; set to yarn-cluster for YARN execution mode
# -yjm memory for the JobManager container (MB)
# -ytm memory for each TaskManager container (MB)
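For long-running jobs you usually don't want the client to block until the job finishes; adding -d submits in detached mode. A sketch using the same example jar:

# Detached per-job submission: the client returns once the job is accepted
/export/server/flink/bin/flink run -m yarn-cluster -d -yjm 1024 -ytm 1024 \
/export/server/flink/examples/batch/WordCount.jar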

2. Open the YARN UI: http://node1:8088/cluster

3. Note

In older versions, after running Flink on YARN, switching back to standalone mode could fail; if it does, delete /tmp/.yarn-properties-root, i.e. run rm -rf /tmp/.yarn-properties-root.
This is because, by default, the client looks up the JobManager from the yarn-session metadata left behind for the current YARN cluster.

06 Parameter reference

[root@node1 bin]# /export/server/flink/bin/flink --help
./flink <ACTION> [OPTIONS] [ARGUMENTS]

The following actions are available:

Action "run" compiles and runs a program.

  Syntax: run [OPTIONS] <jar-file> <arguments>
  "run" action options:
    -c,--class <classname>              Class with the program entry point
                                        ("main()" method). Only needed if the
                                        JAR file does not specify the class in
                                        its manifest.
    -C,--classpath <url>                Adds a URL to each user code
                                        classloader on all nodes in the
                                        cluster. The paths must specify a
                                        protocol (e.g. file://) and be
                                        accessible on all nodes (e.g. by means
                                        of a NFS share). You can use this
                                        option multiple times for specifying
                                        more than one URL. The protocol must
                                        be supported by the {@link
                                        java.net.URLClassLoader}.
    -d,--detached                       If present, runs the job in detached
                                        mode
    -n,--allowNonRestoredState          Allow to skip savepoint state that
                                        cannot be restored. You need to allow
                                        this if you removed an operator from
                                        your program that was part of the
                                        program when the savepoint was
                                        triggered.
    -p,--parallelism <parallelism>      The parallelism with which to run the
                                        program. Optional flag to override the
                                        default value specified in the
                                        configuration.
    -py,--python <pythonFile>           Python script with the program entry
                                        point. The dependent resources can be
                                        configured with the `--pyFiles`
                                        option.
    -pyarch,--pyArchives <arg>          Add python archive files for job. The
                                        archive files will be extracted to the
                                        working directory of the python UDF
                                        worker. Currently only zip-format is
                                        supported. For each archive file, a
                                        target directory can be specified; if
                                        it is, the archive is extracted to a
                                        directory with that name, otherwise to
                                        a directory with the same name as the
                                        archive file. Files uploaded via this
                                        option are accessible via relative
                                        path. '#' is used as the separator
                                        between the archive file path and the
                                        target directory name. Comma (',')
                                        separates multiple archive files. This
                                        option can be used to upload a virtual
                                        environment or data files used in a
                                        Python UDF (e.g.: --pyArchives
                                        file:///tmp/py37.zip,file:///tmp/data.zip#data
                                        --pyExecutable py37.zip/py37/bin/python).
                                        The data files can then be accessed in
                                        the Python UDF, e.g.:
                                        f = open('data/data.txt', 'r').
    -pyexec,--pyExecutable <arg>        Specify the path of the python
                                        interpreter used to execute the python
                                        UDF worker (e.g.: --pyExecutable
                                        /usr/local/bin/python3). The python
                                        UDF worker depends on Python 3.5+,
                                        Apache Beam (version == 2.23.0), Pip
                                        (version >= 7.1.0) and SetupTools
                                        (version >= 37.0.0). Please ensure
                                        that the specified environment meets
                                        the above requirements.
    -pyfs,--pyFiles <pythonFiles>       Attach custom python files for job.
                                        These files will be added to the
                                        PYTHONPATH of both the local client
                                        and the remote python UDF worker. The
                                        standard python resource file suffixes
                                        such as .py/.egg/.zip or directory are
                                        all supported. Comma (',') separates
                                        multiple files (e.g.: --pyFiles
                                        file:///tmp/myresource.zip,hdfs:///$namenode_address/myresource2.zip).
    -pym,--pyModule <pythonModule>      Python module with the program entry
                                        point. This option must be used in
                                        conjunction with `--pyFiles`.
    -pyreq,--pyRequirements <arg>       Specify a requirements.txt file which
                                        defines the third-party dependencies.
                                        These dependencies will be installed
                                        and added to the PYTHONPATH of the
                                        python UDF worker. A directory which
                                        contains the installation packages of
                                        these dependencies can optionally be
                                        specified; use '#' as the separator if
                                        the optional parameter exists (e.g.:
                                        --pyRequirements
                                        file:///tmp/requirements.txt#file:///tmp/cached_dir).
    -s,--fromSavepoint <savepointPath>  Path to a savepoint to restore the job
                                        from (for example
                                        hdfs:///flink/savepoint-1537).
    -sae,--shutdownOnAttachedExit       If the job is submitted in attached
                                        mode, perform a best-effort cluster
                                        shutdown when the CLI is terminated
                                        abruptly, e.g., in response to a user
                                        interrupt, such as typing Ctrl + C.

  Options for Generic CLI mode:
    -D <property=value>   Allows specifying multiple generic configuration
                          options. The available options can be found at
                          https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
    -e,--executor <arg>   DEPRECATED: Please use the -t option instead, which
                          is also available with the "Application Mode". The
                          name of the executor to be used for executing the
                          given job, equivalent to the "execution.target"
                          config option. The currently available executors
                          are: "remote", "local", "kubernetes-session",
                          "yarn-per-job", "yarn-session".
    -t,--target <arg>     The deployment target for the given application,
                          equivalent to the "execution.target" config option.
                          For the "run" action the currently available targets
                          are: "remote", "local", "kubernetes-session",
                          "yarn-per-job", "yarn-session". For the
                          "run-application" action the currently available
                          targets are: "kubernetes-application",
                          "yarn-application".

  Options for yarn-cluster mode:
    -d,--detached                       If present, runs the job in detached
                                        mode
    -m,--jobmanager <arg>               Set to yarn-cluster to use YARN
                                        execution mode.
    -yat,--yarnapplicationType <arg>    Set a custom application type for the
                                        application on YARN
    -yD <property=value>                use value for given property
    -yd,--yarndetached                  If present, runs the job in detached
                                        mode (deprecated; use the non-YARN
                                        specific option instead)
    -yh,--yarnhelp                      Help for the Yarn session CLI.
    -yid,--yarnapplicationId <arg>      Attach to running YARN session
    -yj,--yarnjar <arg>                 Path to Flink jar file
    -yjm,--yarnjobManagerMemory <arg>   Memory for JobManager Container with
                                        optional unit (default: MB)
    -ynl,--yarnnodeLabel <arg>          Specify YARN node label for the YARN
                                        application
    -ynm,--yarnname <arg>               Set a custom name for the application
                                        on YARN
    -yq,--yarnquery                     Display available YARN resources
                                        (memory, cores)
    -yqu,--yarnqueue <arg>              Specify YARN queue.
    -ys,--yarnslots <arg>               Number of slots per TaskManager
    -yt,--yarnship <arg>                Ship files in the specified directory
                                        (t for transfer)
    -ytm,--yarntaskManagerMemory <arg>  Memory per TaskManager Container with
                                        optional unit (default: MB)
    -yz,--yarnzookeeperNamespace <arg>  Namespace to create the Zookeeper
                                        sub-paths for high availability mode
    -z,--zookeeperNamespace <arg>       Namespace to create the Zookeeper
                                        sub-paths for high availability mode

  Options for default mode:
    -D <property=value>             Allows specifying multiple generic
                                    configuration options. The available
                                    options can be found at
                                    https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
    -m,--jobmanager <arg>           Address of the JobManager to which to
                                    connect. Use this flag to connect to a
                                    different JobManager than the one
                                    specified in the configuration.
                                    Attention: this option is respected only
                                    if the high-availability configuration is
                                    NONE.
    -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper
                                    sub-paths for high availability mode

Action "run-application" runs an application in Application Mode.

  Syntax: run-application [OPTIONS] <jar-file> <arguments>
  Options for Generic CLI mode: same -D, -e/--executor and -t/--target options
  as for "run".

Action "info" shows the optimized execution plan of the program (JSON).

  Syntax: info [OPTIONS] <jar-file> <arguments>
  "info" action options:
    -c,--class <classname>           Class with the program entry point
                                     ("main()" method). Only needed if the JAR
                                     file does not specify the class in its
                                     manifest.
    -p,--parallelism <parallelism>   The parallelism with which to run the
                                     program. Optional flag to override the
                                     default value specified in the
                                     configuration.

Action "list" lists running and scheduled programs.

  Syntax: list [OPTIONS]
  "list" action options:
    -a,--all         Show all programs and their JobIDs
    -r,--running     Show only running programs and their JobIDs
    -s,--scheduled   Show only scheduled programs and their JobIDs
  Options for Generic CLI mode: same as for "run".
  Options for yarn-cluster mode:
    -m,--jobmanager <arg>            Set to yarn-cluster to use YARN execution
                                     mode.
    -yid,--yarnapplicationId <arg>   Attach to running YARN session
    -z,--zookeeperNamespace <arg>    Namespace to create the Zookeeper
                                     sub-paths for high availability mode
  Options for default mode: same as for "run".

Action "stop" stops a running program with a savepoint (streaming jobs only).

  Syntax: stop [OPTIONS] <Job ID>
  "stop" action options:
    -d,--drain                           Send MAX_WATERMARK before taking the
                                         savepoint and stopping the pipeline.
    -p,--savepointPath <savepointPath>   Path to the savepoint (for example
                                         hdfs:///flink/savepoint-1537). If no
                                         directory is specified, the
                                         configured default will be used
                                         ("state.savepoints.dir").
  Options for Generic CLI mode: same as for "run".
  Options for yarn-cluster mode: same as for "list".
  Options for default mode: same as for "run".

Action "cancel" cancels a running program.

  Syntax: cancel [OPTIONS] <Job ID>
  "cancel" action options:
    -s,--withSavepoint <targetDirectory>   **DEPRECATION WARNING**: Cancelling
                                           a job with savepoint is deprecated;
                                           use "stop" instead. Trigger a
                                           savepoint and cancel the job. The
                                           target directory is optional; if no
                                           directory is specified, the
                                           configured default directory
                                           (state.savepoints.dir) is used.
  Options for Generic CLI mode: same as for "run".
  Options for yarn-cluster mode: same as for "list".
  Options for default mode: same as for "run".

Action "savepoint" triggers savepoints for a running job or disposes existing ones.

  Syntax: savepoint [OPTIONS] <Job ID> [<target directory>]
  "savepoint" action options:
    -d,--dispose <arg>       Path of savepoint to dispose.
    -j,--jarfile <jarfile>   Flink program JAR file.
  Options for Generic CLI mode: same as for "run".
  Options for yarn-cluster mode: same as for "list".
  Options for default mode: same as for "run".

07 Closing notes

This post walked through installing and deploying Flink both locally and as a cluster. Thanks for reading!
