Big Data Flink 2021 (7): Parameter Summary

The complete output of `flink --help` is reproduced below; it covers every action of the Flink CLI (run, run-application, info, list, stop, cancel, savepoint) and the options each one accepts.

[root@node1 bin]# /export/server/flink/bin/flink --help
./flink <ACTION> [OPTIONS] [ARGUMENTS]

The following actions are available:

Action "run" compiles and runs a program.

  Syntax: run [OPTIONS] <jar-file> <arguments>
  "run" action options:
     -c,--class <classname>               Class with the program entry point ("main()" method). Only needed if the JAR file does not specify the class in its manifest.
     -C,--classpath <url>                 Adds a URL to each user code classloader on all nodes in the cluster. The paths must specify a protocol (e.g. file://) and be accessible on all nodes (e.g. by means of a NFS share). You can use this option multiple times for specifying more than one URL. The protocol must be supported by the {@link java.net.URLClassLoader}.
     -d,--detached                        If present, runs the job in detached mode
     -n,--allowNonRestoredState           Allow to skip savepoint state that cannot be restored. You need to allow this if you removed an operator from your program that was part of the program when the savepoint was triggered.
     -p,--parallelism <parallelism>       The parallelism with which to run the program. Optional flag to override the default value specified in the configuration.
     -py,--python <pythonFile>            Python script with the program entry point. The dependent resources can be configured with the `--pyFiles` option.
     -pyarch,--pyArchives <arg>           Add python archive files for job. The archive files will be extracted to the working directory of the python UDF worker. Currently only zip-format is supported. For each archive file, a target directory can be specified. If the target directory name is specified, the archive file will be extracted to a directory with the specified name. Otherwise, the archive file will be extracted to a directory with the same name as the archive file. The files uploaded via this option are accessible via relative path. '#' could be used as the separator of the archive file path and the target directory name. Comma (',') could be used as the separator to specify multiple archive files. This option can be used to upload the virtual environment, the data files used in Python UDF (e.g.: --pyArchives file:///tmp/py37.zip,file:///tmp/data.zip#data --pyExecutable py37.zip/py37/bin/python). The data files could be accessed in Python UDF, e.g.: f = open('data/data.txt', 'r').
     -pyexec,--pyExecutable <arg>         Specify the path of the python interpreter used to execute the python UDF worker (e.g.: --pyExecutable /usr/local/bin/python3). The python UDF worker depends on Python 3.5+, Apache Beam (version == 2.23.0), Pip (version >= 7.1.0) and SetupTools (version >= 37.0.0). Please ensure that the specified environment meets the above requirements.
     -pyfs,--pyFiles <pythonFiles>        Attach custom python files for job. These files will be added to the PYTHONPATH of both the local client and the remote python UDF worker. The standard python resource file suffixes such as .py/.egg/.zip or directory are all supported. Comma (',') could be used as the separator to specify multiple files (e.g.: --pyFiles file:///tmp/myresource.zip,hdfs:///$namenode_address/myresource2.zip).
     -pym,--pyModule <pythonModule>       Python module with the program entry point. This option must be used in conjunction with `--pyFiles`.
     -pyreq,--pyRequirements <arg>        Specify a requirements.txt file which defines the third-party dependencies. These dependencies will be installed and added to the PYTHONPATH of the python UDF worker. A directory which contains the installation packages of these dependencies could be specified optionally. Use '#' as the separator if the optional parameter exists (e.g.: --pyRequirements file:///tmp/requirements.txt#file:///tmp/cached_dir).
     -s,--fromSavepoint <savepointPath>   Path to a savepoint to restore the job from (for example hdfs:///flink/savepoint-1537).
     -sae,--shutdownOnAttachedExit        If the job is submitted in attached mode, perform a best-effort cluster shutdown when the CLI is terminated abruptly, e.g., in response to a user interrupt, such as typing Ctrl + C.

  Options for Generic CLI mode:
     -D <property=value>   Allows specifying multiple generic configuration options. The available options can be found at https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
     -e,--executor <arg>   DEPRECATED: Please use the -t option instead, which is also available with the "Application Mode". The name of the executor to be used for executing the given job, which is equivalent to the "execution.target" config option. The currently available executors are: "remote", "local", "kubernetes-session", "yarn-per-job", "yarn-session".
     -t,--target <arg>     The deployment target for the given application, which is equivalent to the "execution.target" config option. For the "run" action the currently available targets are: "remote", "local", "kubernetes-session", "yarn-per-job", "yarn-session". For the "run-application" action the currently available targets are: "kubernetes-application", "yarn-application".

  Options for yarn-cluster mode:
     -d,--detached                        If present, runs the job in detached mode
     -m,--jobmanager <arg>                Set to yarn-cluster to use YARN execution mode.
     -yat,--yarnapplicationType <arg>     Set a custom application type for the application on YARN
     -yD <property=value>                 Use value for given property
     -yd,--yarndetached                   If present, runs the job in detached mode (deprecated; use the non-YARN-specific option instead)
     -yh,--yarnhelp                       Help for the Yarn session CLI.
     -yid,--yarnapplicationId <arg>       Attach to running YARN session
     -yj,--yarnjar <arg>                  Path to Flink jar file
     -yjm,--yarnjobManagerMemory <arg>    Memory for JobManager Container with optional unit (default: MB)
     -ynl,--yarnnodeLabel <arg>           Specify YARN node label for the YARN application
     -ynm,--yarnname <arg>                Set a custom name for the application on YARN
     -yq,--yarnquery                      Display available YARN resources (memory, cores)
     -yqu,--yarnqueue <arg>               Specify YARN queue.
     -ys,--yarnslots <arg>                Number of slots per TaskManager
     -yt,--yarnship <arg>                 Ship files in the specified directory (t for transfer)
     -ytm,--yarntaskManagerMemory <arg>   Memory per TaskManager Container with optional unit (default: MB)
     -yz,--yarnzookeeperNamespace <arg>   Namespace to create the Zookeeper sub-paths for high availability mode
     -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper sub-paths for high availability mode

  Options for default mode:
     -D <property=value>             Allows specifying multiple generic configuration options. The available options can be found at https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
     -m,--jobmanager <arg>           Address of the JobManager to which to connect. Use this flag to connect to a different JobManager than the one specified in the configuration. Attention: This option is respected only if the high-availability configuration is NONE.
     -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper sub-paths for high availability mode


Action "run-application" runs an application in Application Mode.

  Syntax: run-application [OPTIONS] <jar-file> <arguments>
  Options for Generic CLI mode:
     -D <property=value>   Allows specifying multiple generic configuration options. The available options can be found at https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
     -e,--executor <arg>   DEPRECATED: Please use the -t option instead, which is also available with the "Application Mode". The name of the executor to be used for executing the given job, which is equivalent to the "execution.target" config option. The currently available executors are: "remote", "local", "kubernetes-session", "yarn-per-job", "yarn-session".
     -t,--target <arg>     The deployment target for the given application, which is equivalent to the "execution.target" config option. For the "run" action the currently available targets are: "remote", "local", "kubernetes-session", "yarn-per-job", "yarn-session". For the "run-application" action the currently available targets are: "kubernetes-application", "yarn-application".


Action "info" shows the optimized execution plan of the program (JSON).

  Syntax: info [OPTIONS] <jar-file> <arguments>
  "info" action options:
     -c,--class <classname>           Class with the program entry point ("main()" method). Only needed if the JAR file does not specify the class in its manifest.
     -p,--parallelism <parallelism>   The parallelism with which to run the program. Optional flag to override the default value specified in the configuration.


Action "list" lists running and scheduled programs.

  Syntax: list [OPTIONS]
  "list" action options:
     -a,--all         Show all programs and their JobIDs
     -r,--running     Show only running programs and their JobIDs
     -s,--scheduled   Show only scheduled programs and their JobIDs

  Options for Generic CLI mode:
     -D <property=value>   Allows specifying multiple generic configuration options. The available options can be found at https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
     -e,--executor <arg>   DEPRECATED: Please use the -t option instead, which is also available with the "Application Mode". The name of the executor to be used for executing the given job, which is equivalent to the "execution.target" config option. The currently available executors are: "remote", "local", "kubernetes-session", "yarn-per-job", "yarn-session".
     -t,--target <arg>     The deployment target for the given application, which is equivalent to the "execution.target" config option. For the "run" action the currently available targets are: "remote", "local", "kubernetes-session", "yarn-per-job", "yarn-session". For the "run-application" action the currently available targets are: "kubernetes-application", "yarn-application".

  Options for yarn-cluster mode:
     -m,--jobmanager <arg>            Set to yarn-cluster to use YARN execution mode.
     -yid,--yarnapplicationId <arg>   Attach to running YARN session
     -z,--zookeeperNamespace <arg>    Namespace to create the Zookeeper sub-paths for high availability mode

  Options for default mode:
     -D <property=value>             Allows specifying multiple generic configuration options. The available options can be found at https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
     -m,--jobmanager <arg>           Address of the JobManager to which to connect. Use this flag to connect to a different JobManager than the one specified in the configuration. Attention: This option is respected only if the high-availability configuration is NONE.
     -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper sub-paths for high availability mode


Action "stop" stops a running program with a savepoint (streaming jobs only).

  Syntax: stop [OPTIONS] <Job ID>
  "stop" action options:
     -d,--drain                           Send MAX_WATERMARK before taking the savepoint and stopping the pipeline.
     -p,--savepointPath <savepointPath>   Path to the savepoint (for example hdfs:///flink/savepoint-1537). If no directory is specified, the configured default will be used ("state.savepoints.dir").

  Options for Generic CLI mode:
     -D <property=value>   Allows specifying multiple generic configuration options. The available options can be found at https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
     -e,--executor <arg>   DEPRECATED: Please use the -t option instead, which is also available with the "Application Mode". The name of the executor to be used for executing the given job, which is equivalent to the "execution.target" config option. The currently available executors are: "remote", "local", "kubernetes-session", "yarn-per-job", "yarn-session".
     -t,--target <arg>     The deployment target for the given application, which is equivalent to the "execution.target" config option. For the "run" action the currently available targets are: "remote", "local", "kubernetes-session", "yarn-per-job", "yarn-session". For the "run-application" action the currently available targets are: "kubernetes-application", "yarn-application".

  Options for yarn-cluster mode:
     -m,--jobmanager <arg>            Set to yarn-cluster to use YARN execution mode.
     -yid,--yarnapplicationId <arg>   Attach to running YARN session
     -z,--zookeeperNamespace <arg>    Namespace to create the Zookeeper sub-paths for high availability mode

  Options for default mode:
     -D <property=value>             Allows specifying multiple generic configuration options. The available options can be found at https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
     -m,--jobmanager <arg>           Address of the JobManager to which to connect. Use this flag to connect to a different JobManager than the one specified in the configuration. Attention: This option is respected only if the high-availability configuration is NONE.
     -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper sub-paths for high availability mode


Action "cancel" cancels a running program.

  Syntax: cancel [OPTIONS] <Job ID>
  "cancel" action options:
     -s,--withSavepoint <targetDirectory>   **DEPRECATION WARNING**: Cancelling a job with savepoint is deprecated. Use "stop" instead. Trigger savepoint and cancel job. The target directory is optional. If no directory is specified, the configured default directory (state.savepoints.dir) is used.

  Options for Generic CLI mode:
     -D <property=value>   Allows specifying multiple generic configuration options. The available options can be found at https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
     -e,--executor <arg>   DEPRECATED: Please use the -t option instead, which is also available with the "Application Mode". The name of the executor to be used for executing the given job, which is equivalent to the "execution.target" config option. The currently available executors are: "remote", "local", "kubernetes-session", "yarn-per-job", "yarn-session".
     -t,--target <arg>     The deployment target for the given application, which is equivalent to the "execution.target" config option. For the "run" action the currently available targets are: "remote", "local", "kubernetes-session", "yarn-per-job", "yarn-session". For the "run-application" action the currently available targets are: "kubernetes-application", "yarn-application".

  Options for yarn-cluster mode:
     -m,--jobmanager <arg>            Set to yarn-cluster to use YARN execution mode.
     -yid,--yarnapplicationId <arg>   Attach to running YARN session
     -z,--zookeeperNamespace <arg>    Namespace to create the Zookeeper sub-paths for high availability mode

  Options for default mode:
     -D <property=value>             Allows specifying multiple generic configuration options. The available options can be found at https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
     -m,--jobmanager <arg>           Address of the JobManager to which to connect. Use this flag to connect to a different JobManager than the one specified in the configuration. Attention: This option is respected only if the high-availability configuration is NONE.
     -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper sub-paths for high availability mode


Action "savepoint" triggers savepoints for a running job or disposes existing ones.

  Syntax: savepoint [OPTIONS] <Job ID> [<target directory>]
  "savepoint" action options:
     -d,--dispose <arg>       Path of savepoint to dispose.
     -j,--jarfile <jarfile>   Flink program JAR file.

  Options for Generic CLI mode:
     -D <property=value>   Allows specifying multiple generic configuration options. The available options can be found at https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
     -e,--executor <arg>   DEPRECATED: Please use the -t option instead, which is also available with the "Application Mode". The name of the executor to be used for executing the given job, which is equivalent to the "execution.target" config option. The currently available executors are: "remote", "local", "kubernetes-session", "yarn-per-job", "yarn-session".
     -t,--target <arg>     The deployment target for the given application, which is equivalent to the "execution.target" config option. For the "run" action the currently available targets are: "remote", "local", "kubernetes-session", "yarn-per-job", "yarn-session". For the "run-application" action the currently available targets are: "kubernetes-application", "yarn-application".

  Options for yarn-cluster mode:
     -m,--jobmanager <arg>            Set to yarn-cluster to use YARN execution mode.
     -yid,--yarnapplicationId <arg>   Attach to running YARN session
     -z,--zookeeperNamespace <arg>    Namespace to create the Zookeeper sub-paths for high availability mode

  Options for default mode:
     -D <property=value>             Allows specifying multiple generic configuration options. The available options can be found at https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
     -m,--jobmanager <arg>           Address of the JobManager to which to connect. Use this flag to connect to a different JobManager than the one specified in the configuration. Attention: This option is respected only if the high-availability configuration is NONE.
     -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper sub-paths for high availability mode
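To make the most common "run" flags concrete, here is a minimal sketch of a detached YARN submission. The JAR path and entry class are hypothetical; the command is echoed rather than executed so the sketch can be dry-run without a live cluster (drop the echo, or run the printed line, to actually submit).

```shell
# Install path taken from the prompt above; JAR and class are hypothetical examples.
FLINK=/export/server/flink/bin/flink

# -t picks the deployment target, -d detaches the CLI after submission,
# -p overrides the default parallelism, and -c names the entry class in
# case the JAR manifest does not declare one.
cmd="$FLINK run -t yarn-per-job -d -p 4 -c com.example.WordCount /root/jobs/wordcount.jar"
echo "$cmd"
```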
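The Python options combine the same way. A sketch with hypothetical file names, assuming the shipped archive contains a top-level venv/ directory (again echoed instead of executed):

```shell
FLINK=/export/server/flink/bin/flink

# -py is the entry script; -pyfs adds extra files to the PYTHONPATH;
# -pyarch ships an archive, with '#' renaming its extraction directory to "venv";
# -pyexec then points the UDF worker at the interpreter inside that archive.
cmd="$FLINK run -py job.py -pyfs file:///tmp/deps.zip -pyarch file:///tmp/venv.zip#venv -pyexec venv/venv/bin/python"
echo "$cmd"
```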
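The savepoint lifecycle ties "stop" and "run -s" together: stop takes a savepoint and then halts the streaming job, and a later run restores from that savepoint. The job id, savepoint directory, and JAR below are hypothetical placeholders; commands are echoed so the sketch runs standalone.

```shell
FLINK=/export/server/flink/bin/flink
JOB_ID=a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4   # obtained from "flink list"

# stop: trigger a savepoint under the -p directory, then stop the job.
stop_cmd="$FLINK stop -p hdfs:///flink/savepoints $JOB_ID"
# run -s: resume a new submission from the savepoint that stop produced.
resume_cmd="$FLINK run -s hdfs:///flink/savepoints/savepoint-1537 /root/jobs/wordcount.jar"
echo "$stop_cmd"
echo "$resume_cmd"
```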
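Application Mode uses the "run-application" action with one of its two targets; any key from config.html can be set at submission time via the generic -D option. The memory sizes and JAR here are example values only:

```shell
FLINK=/export/server/flink/bin/flink

# -t yarn-application runs the application's main() on the cluster instead of
# in the client; the -D flags override configuration for this submission only.
cmd="$FLINK run-application -t yarn-application -Djobmanager.memory.process.size=2048m -Dtaskmanager.memory.process.size=4096m /root/jobs/wordcount.jar"
echo "$cmd"
```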
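Finally, inspection and manual savepoints: "list" shows job ids, and "savepoint" triggers or disposes savepoints for them. The job id and paths are hypothetical; commands are echoed:

```shell
FLINK=/export/server/flink/bin/flink
JOB_ID=a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4

list_cmd="$FLINK list -r"                                                  # only running jobs
trigger_cmd="$FLINK savepoint $JOB_ID hdfs:///flink/savepoints"            # manual savepoint
dispose_cmd="$FLINK savepoint -d hdfs:///flink/savepoints/savepoint-1537"  # delete one
echo "$list_cmd"
echo "$trigger_cmd"
echo "$dispose_cmd"
```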
