1. Overview

Reprinted from: https://zhangboyi.blog.csdn.net/article/details/114631783 (will be removed on request in case of infringement).

1.1. Invocation Relationships

| Service | Entry class | Notes |
| --- | --- | --- |
| taskexecutor | org.apache.flink.runtime.taskexecutor.TaskManagerRunner | |
| standalonesession | org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint | |
| standalonejob | org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint | |
| historyserver | org.apache.flink.runtime.webmonitor.history.HistoryServer | |
| zookeeper | org.apache.flink.runtime.zookeeper.FlinkZooKeeperQuorumPeer | |
| flink run -t yarn-per-job | org.apache.flink.client.cli.CliFrontend | submits jobs in yarn-per-job mode |
| yarn-session.sh | org.apache.flink.yarn.cli.FlinkYarnSessionCli | entry point for yarn-session mode |

2. start-cluster.sh

start-cluster.sh is Flink's cluster startup script, so let's look at what it actually does.
It performs three steps:

  • Source config.sh to load the functions and configuration the script needs.
  • Start the JobManager instance(s).
  • Start the TaskManager instance(s).

2.1. Loading shared configuration and functions: config.sh

This file mainly defines environment variables and a set of shared helper functions.

Exported environment variables:

### Exported environment variables ###
export FLINK_CONF_DIR
export FLINK_BIN_DIR
export FLINK_PLUGINS_DIR
# export /lib dir to access it during deployment of the Yarn staging files
export FLINK_LIB_DIR
# export /opt dir to access it for the SQL client
export FLINK_OPT_DIR

readMasters reads the conf/masters configuration file to find out where the master services are installed:

readMasters() {
    MASTERS_FILE="${FLINK_CONF_DIR}/masters"

    if [[ ! -f "${MASTERS_FILE}" ]]; then
        echo "No masters file. Please specify masters in 'conf/masters'."
        exit 1
    fi

    MASTERS=()
    WEBUIPORTS=()

    MASTERS_ALL_LOCALHOST=true
    GOON=true
    while $GOON; do
        read line || GOON=false
        HOSTWEBUIPORT=$( extractHostName $line)
        if [ -n "$HOSTWEBUIPORT" ]; then
            HOST=$(echo $HOSTWEBUIPORT | cut -f1 -d:)
            WEBUIPORT=$(echo $HOSTWEBUIPORT | cut -s -f2 -d:)
            MASTERS+=(${HOST})

            if [ -z "$WEBUIPORT" ]; then
                WEBUIPORTS+=(0)
            else
                WEBUIPORTS+=(${WEBUIPORT})
            fi

            if [ "${HOST}" != "localhost" ] && [ "${HOST}" != "127.0.0.1" ] ; then
                MASTERS_ALL_LOCALHOST=false
            fi
        fi
    done < "$MASTERS_FILE"
}
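The host/port split in readMasters relies on `cut`; the `-s` flag makes `cut` print nothing for lines that contain no delimiter, which is how an entry without a `:webuiport` suffix ends up with the default port 0. The behavior in isolation:

```shell
# Same splitting logic as readMasters, on a standalone line.
line="master1:8082"
HOST=$(echo $line | cut -f1 -d:)
WEBUIPORT=$(echo $line | cut -s -f2 -d:)

# Without a ":port" suffix, `cut -s` suppresses the whole line,
# so WEBUIPORT stays empty and the script falls back to 0.
bare="master2"
BARE_HOST=$(echo $bare | cut -f1 -d:)
BARE_PORT=$(echo $bare | cut -s -f2 -d:)
```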

readWorkers reads the conf/workers configuration file to find out where the worker nodes are installed:

readWorkers() {
    WORKERS_FILE="${FLINK_CONF_DIR}/workers"

    if [[ ! -f "$WORKERS_FILE" ]]; then
        echo "No workers file. Please specify workers in 'conf/workers'."
        exit 1
    fi

    WORKERS=()
    WORKERS_ALL_LOCALHOST=true
    GOON=true
    while $GOON; do
        read line || GOON=false
        HOST=$( extractHostName $line)
        if [ -n "$HOST" ] ; then
            WORKERS+=(${HOST})
            if [ "${HOST}" != "localhost" ] && [ "${HOST}" != "127.0.0.1" ] ; then
                WORKERS_ALL_LOCALHOST=false
            fi
        fi
    done < "$WORKERS_FILE"
}
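Both readers funnel each line through `extractHostName`, which is defined elsewhere in config.sh and is not shown in this post. A simplified, hypothetical stand-in for its essential behavior (the real function also handles a network-hierarchy prefix) is the `cut`-based comment stripping below:

```shell
# Simplified stand-in for config.sh's extractHostName:
# drop everything after the first '#', so commented lines are ignored.
extract_host() {
    echo $1 | cut -d'#' -f 1
}
```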

2.2. Starting the JobManager

The script loops over the configured masters and starts each one.
If a master is local, jobmanager.sh is invoked directly; otherwise it is launched remotely over ssh…

for ((i=0;i<${#MASTERS[@]};++i)); do
    master=${MASTERS[i]}
    webuiport=${WEBUIPORTS[i]}

    if [ ${MASTERS_ALL_LOCALHOST} = true ] ; then
        "${FLINK_BIN_DIR}"/jobmanager.sh start "${master}" "${webuiport}"
    else
        ssh -n $FLINK_SSH_OPTS $master -- "nohup /bin/bash -l \"${FLINK_BIN_DIR}/jobmanager.sh\" start ${master} ${webuiport} &"
    fi
done

Contents of the masters configuration file:

localhost:8081

2.3. Starting the TaskManager instances

In start-cluster.sh, starting the TaskManager instances takes exactly one line:

TMWorkers start

The corresponding function lives in config.sh.
If a worker is local, taskmanager.sh is invoked directly; otherwise it is launched remotely via ssh (or pdsh, when available)…

# starts or stops TMs on all workers
# TMWorkers start|stop
TMWorkers() {
    CMD=$1

    readWorkers

    if [ ${WORKERS_ALL_LOCALHOST} = true ] ; then
        # all-local setup
        for worker in ${WORKERS[@]}; do
            "${FLINK_BIN_DIR}"/taskmanager.sh "${CMD}"
        done
    else
        # non-local setup
        # start/stop TaskManager instance(s) using pdsh (Parallel Distributed Shell) when available
        command -v pdsh >/dev/null 2>&1
        if [[ $? -ne 0 ]]; then
            for worker in ${WORKERS[@]}; do
                ssh -n $FLINK_SSH_OPTS $worker -- "nohup /bin/bash -l \"${FLINK_BIN_DIR}/taskmanager.sh\" \"${CMD}\" &"
            done
        else
            PDSH_SSH_ARGS="" PDSH_SSH_ARGS_APPEND=$FLINK_SSH_OPTS pdsh -w $(IFS=, ; echo "${WORKERS[*]}") \
                "nohup /bin/bash -l \"${FLINK_BIN_DIR}/taskmanager.sh\" \"${CMD}\""
        fi
    fi
}
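The pdsh branch builds its `-w` host list by joining the WORKERS array with commas: `$(IFS=, ; echo "${WORKERS[*]}")` overrides IFS inside a subshell, so `"${WORKERS[*]}"` expands comma-separated without the IFS change leaking out. The trick in isolation:

```shell
WORKERS=(node1 node2 node3)

# Join the array elements with commas, exactly as TMWorkers does for `pdsh -w`.
# The subshell confines the IFS override to the command substitution.
HOST_LIST=$(IFS=, ; echo "${WORKERS[*]}")
```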

Contents of the workers configuration file:

localhost

2.4. The complete script

#!/usr/bin/env bash
################################################################################
#  Licensed to the Apache Software Foundation (ASF) under one
#  or more contributor license agreements.  See the NOTICE file
#  distributed with this work for additional information
#  regarding copyright ownership.  The ASF licenses this file
#  to you under the Apache License, Version 2.0 (the
#  "License"); you may not use this file except in compliance
#  with the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

# 1. Load shared configuration properties and helper functions
. "$bin"/config.sh

# 2. Start the JobManager instance(s)
# Tell the shell to ignore case when matching strings
shopt -s nocasematch
if [[ $HIGH_AVAILABILITY == "zookeeper" ]]; then
    # HA mode
    # Read conf/masters to get the master hosts
    readMasters

    echo "Starting HA cluster with ${#MASTERS[@]} masters."

    for ((i=0;i<${#MASTERS[@]};++i)); do
        master=${MASTERS[i]}
        webuiport=${WEBUIPORTS[i]}

        if [ ${MASTERS_ALL_LOCALHOST} = true ] ; then
            "${FLINK_BIN_DIR}"/jobmanager.sh start "${master}" "${webuiport}"
        else
            ssh -n $FLINK_SSH_OPTS $master -- "nohup /bin/bash -l \"${FLINK_BIN_DIR}/jobmanager.sh\" start ${master} ${webuiport} &"
        fi
    done
else
    echo "Starting cluster."

    # Start single JobManager on this machine
    "$FLINK_BIN_DIR"/jobmanager.sh start
fi
shopt -u nocasematch

# 3. Start the TaskManager instance(s)
TMWorkers start

3. jobmanager.sh

The jobmanager.sh script starts and stops the JobManager service.
Usage:

Usage: jobmanager.sh ((start|start-foreground) [host] [webui-port])|stop|stop-all

Like start-cluster.sh, it first sources config.sh to load the required functions and configuration.
It then assembles its arguments and finally delegates to flink-daemon.sh…
A sample of the resulting command:

${FLINK_HOME}/bin/flink-daemon.sh start standalonesession \
    --configDir /opt/tools/flink-1.12.2/conf \
    --executionMode cluster \
    -D jobmanager.memory.off-heap.size=134217728b \
    -D jobmanager.memory.jvm-overhead.min=201326592b \
    -D jobmanager.memory.jvm-metaspace.size=268435456b \
    -D jobmanager.memory.heap.size=1073741824b \
    -D jobmanager.memory.jvm-overhead.max=201326592b

The complete script:

#!/usr/bin/env bash
################################################################################
#  Licensed to the Apache Software Foundation (ASF) under one
#  or more contributor license agreements.  See the NOTICE file
#  distributed with this work for additional information
#  regarding copyright ownership.  The ASF licenses this file
#  to you under the Apache License, Version 2.0 (the
#  "License"); you may not use this file except in compliance
#  with the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

# Start/stop a Flink JobManager.
USAGE="Usage: jobmanager.sh ((start|start-foreground) [host] [webui-port])|stop|stop-all"

STARTSTOP=$1
HOST=$2 # optional when starting multiple instances
WEBUIPORT=$3 # optional when starting multiple instances

if [[ $STARTSTOP != "start" ]] && [[ $STARTSTOP != "start-foreground" ]] && [[ $STARTSTOP != "stop" ]] && [[ $STARTSTOP != "stop-all" ]]; then
  echo $USAGE
  exit 1
fi

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

. "$bin"/config.sh

ENTRYPOINT=standalonesession

if [[ $STARTSTOP == "start" ]] || [[ $STARTSTOP == "start-foreground" ]]; then
    # Add JobManager-specific JVM options
    export FLINK_ENV_JAVA_OPTS="${FLINK_ENV_JAVA_OPTS} ${FLINK_ENV_JAVA_OPTS_JM}"
    parseJmArgsAndExportLogs "${ARGS[@]}"

    args=("--configDir" "${FLINK_CONF_DIR}" "--executionMode" "cluster")
    if [ ! -z $HOST ]; then
        args+=("--host")
        args+=("${HOST}")
    fi

    if [ ! -z $WEBUIPORT ]; then
        args+=("--webui-port")
        args+=("${WEBUIPORT}")
    fi

    if [ ! -z "${DYNAMIC_PARAMETERS}" ]; then
        args+=(${DYNAMIC_PARAMETERS[@]})
    fi
fi

if [[ $STARTSTOP == "start-foreground" ]]; then
    exec "${FLINK_BIN_DIR}"/flink-console.sh $ENTRYPOINT "${args[@]}"
else
    echo "${FLINK_BIN_DIR}"/flink-daemon.sh $STARTSTOP $ENTRYPOINT "${args[@]}"
    ## "${FLINK_BIN_DIR}"/flink-daemon.sh $STARTSTOP $ENTRYPOINT "${args[@]}"
fi

4. taskmanager.sh

The taskmanager.sh script starts and stops the TaskManager service.
Usage:

Usage: taskmanager.sh (start|start-foreground|stop|stop-all)

Like start-cluster.sh, it first sources config.sh to load the required functions and configuration.
It then assembles its arguments and finally delegates to flink-daemon.sh…
A sample of the resulting command:

${FLINK_HOME}/bin/flink-daemon.sh start taskexecutor \
    --configDir /opt/tools/flink-1.12.2/conf \
    -D taskmanager.memory.framework.off-heap.size=134217728b \
    -D taskmanager.memory.network.max=134217730b \
    -D taskmanager.memory.network.min=134217730b \
    -D taskmanager.memory.framework.heap.size=134217728b \
    -D taskmanager.memory.managed.size=536870920b \
    -D taskmanager.cpu.cores=1.0 \
    -D taskmanager.memory.task.heap.size=402653174b \
    -D taskmanager.memory.task.off-heap.size=0b \
    -D taskmanager.memory.jvm-metaspace.size=268435456b \
    -D taskmanager.memory.jvm-overhead.max=201326592b \
    -D taskmanager.memory.jvm-overhead.min=201326592b

The complete script:

#!/usr/bin/env bash
################################################################################
#  Licensed to the Apache Software Foundation (ASF) under one
#  or more contributor license agreements.  See the NOTICE file
#  distributed with this work for additional information
#  regarding copyright ownership.  The ASF licenses this file
#  to you under the Apache License, Version 2.0 (the
#  "License"); you may not use this file except in compliance
#  with the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

# Start/stop a Flink TaskManager.
USAGE="Usage: taskmanager.sh (start|start-foreground|stop|stop-all)"

STARTSTOP=$1

ARGS=("${@:2}")

if [[ $STARTSTOP != "start" ]] && [[ $STARTSTOP != "start-foreground" ]] && [[ $STARTSTOP != "stop" ]] && [[ $STARTSTOP != "stop-all" ]]; then
  echo $USAGE
  exit 1
fi

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

. "$bin"/config.sh

ENTRYPOINT=taskexecutor

if [[ $STARTSTOP == "start" ]] || [[ $STARTSTOP == "start-foreground" ]]; then
    # if no other JVM options are set, set the GC to G1
    if [ -z "${FLINK_ENV_JAVA_OPTS}" ] && [ -z "${FLINK_ENV_JAVA_OPTS_TM}" ]; then
        export JVM_ARGS="$JVM_ARGS -XX:+UseG1GC"
    fi

    # Add TaskManager-specific JVM options
    export FLINK_ENV_JAVA_OPTS="${FLINK_ENV_JAVA_OPTS} ${FLINK_ENV_JAVA_OPTS_TM}"

    # Startup parameters
    java_utils_output=$(runBashJavaUtilsCmd GET_TM_RESOURCE_PARAMS "${FLINK_CONF_DIR}" "$FLINK_BIN_DIR/bash-java-utils.jar:$(findFlinkDistJar)" "${ARGS[@]}")
    logging_output=$(extractLoggingOutputs "${java_utils_output}")
    params_output=$(extractExecutionResults "${java_utils_output}" 2)

    if [[ $? -ne 0 ]]; then
        echo "[ERROR] Could not get JVM parameters and dynamic configurations properly."
        echo "[ERROR] Raw output from BashJavaUtils:"
        echo "$java_utils_output"
        exit 1
    fi

    jvm_params=$(echo "${params_output}" | head -n 1)
    export JVM_ARGS="${JVM_ARGS} ${jvm_params}"

    IFS=$" " dynamic_configs=$(echo "${params_output}" | tail -n 1)
    ARGS=("--configDir" "${FLINK_CONF_DIR}" ${dynamic_configs[@]} "${ARGS[@]}")

    export FLINK_INHERITED_LOGS="
$FLINK_INHERITED_LOGS

TM_RESOURCE_PARAMS extraction logs:
jvm_params: $jvm_params
dynamic_configs: $dynamic_configs
logs: $logging_output
"
fi

if [[ $STARTSTOP == "start-foreground" ]]; then
    exec "${FLINK_BIN_DIR}"/flink-console.sh $ENTRYPOINT "${ARGS[@]}"
else
    if [[ $FLINK_TM_COMPUTE_NUMA == "false" ]]; then
        # Start a single TaskManager
        echo "${FLINK_BIN_DIR}"/flink-daemon.sh $STARTSTOP $ENTRYPOINT "${ARGS[@]}"
        # "${FLINK_BIN_DIR}"/flink-daemon.sh $STARTSTOP $ENTRYPOINT "${ARGS[@]}"
    else
        # Example output from `numactl --show` on an AWS c4.8xlarge:
        #
        # policy: default
        # preferred node: current
        # physcpubind: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
        # cpubind: 0 1
        # nodebind: 0 1
        # membind: 0 1
        read -ra NODE_LIST <<< $(numactl --show | grep "^nodebind: ")
        for NODE_ID in "${NODE_LIST[@]:1}"; do
            # Start a TaskManager for each NUMA node
            numactl --membind=$NODE_ID --cpunodebind=$NODE_ID -- "${FLINK_BIN_DIR}"/flink-daemon.sh $STARTSTOP $ENTRYPOINT "${ARGS[@]}"
        done
    fi
fi
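In the NUMA branch of taskmanager.sh, `read -ra` splits a line such as `nodebind: 0 1` into an array whose first element is the `nodebind:` label, and the `"${NODE_LIST[@]:1}"` slice skips that label so the loop sees only the node IDs. The same parsing on a canned line:

```shell
# Simulate the output of `numactl --show | grep "^nodebind: "`.
read -ra NODE_LIST <<< "nodebind: 0 1"

# Element 0 is the "nodebind:" label; the :1 slice yields only the node IDs,
# which is what taskmanager.sh loops over to start one TaskManager per node.
NODE_IDS=()
for NODE_ID in "${NODE_LIST[@]:1}"; do
    NODE_IDS+=("$NODE_ID")
done
```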

5. flink-daemon.sh

flink-daemon.sh is the common launcher used to start the taskexecutor, zookeeper, historyserver, standalonesession, and standalonejob services.

Usage: flink-daemon.sh (start|stop|stop-all) (taskexecutor|zookeeper|historyserver|standalonesession|standalonejob) [args]

| Service | Entry class |
| --- | --- |
| taskexecutor | org.apache.flink.runtime.taskexecutor.TaskManagerRunner |
| standalonesession | org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint |
| standalonejob | org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint |
| historyserver | org.apache.flink.runtime.webmonitor.history.HistoryServer |
| zookeeper | org.apache.flink.runtime.zookeeper.FlinkZooKeeperQuorumPeer |

5.1. JobManager launch command

${JAVA_HOME}/bin/java
-Xmx1073741824
-Xms1073741824
-XX:MaxMetaspaceSize=268435456
# Logging options
-Dlog.file=${FLINK_HOME}/log/flink-sysadmin-standalonesession-1-BoYi-Pro.local.log
-Dlog4j.configuration=file:${FLINK_HOME}/conf/log4j.properties
-Dlog4j.configurationFile=file:${FLINK_HOME}/conf/log4j.properties
-Dlogback.configurationFile=file:${FLINK_HOME}/conf/logback.xml
# Classpath
-classpath ${FLINK_HOME}/lib/flink-csv-1.12.2.jar:${FLINK_HOME}/lib/flink-json-1.12.2.jar:${FLINK_HOME}/lib/flink-shaded-zookeeper-3.4.14.jar:${FLINK_HOME}/lib/flink-table-blink_2.12-1.12.2.jar:${FLINK_HOME}/lib/flink-table_2.12-1.12.2.jar:${FLINK_HOME}/lib/log4j-1.2-api-2.12.1.jar:${FLINK_HOME}/lib/log4j-api-2.12.1.jar:${FLINK_HOME}/lib/log4j-core-2.12.1.jar:${FLINK_HOME}/lib/log4j-slf4j-impl-2.12.1.jar:${FLINK_HOME}/lib/flink-dist_2.12-1.12.2.jar::/opt/tools/hadoop-3.2.1/etc/hadoop::/opt/tools/hbase-2.0.2/conf
# Main class [the core part]
org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint
--configDir ${FLINK_HOME}/conf
--executionMode cluster
-D jobmanager.memory.off-heap.size=134217728b
-D jobmanager.memory.jvm-overhead.min=201326592b
-D jobmanager.memory.jvm-metaspace.size=268435456b
-D jobmanager.memory.heap.size=1073741824b
-D jobmanager.memory.jvm-overhead.max=201326592b
# Output redirection
> ${FLINK_HOME}/log/flink-sysadmin-standalonesession-1-BoYi-Pro.local.out 200<&- 2>&1 < /dev/null
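These byte values are not arbitrary: the JVM flags are derived from the `-D jobmanager.memory.*` settings (`-Xmx`/`-Xms` equal `jobmanager.memory.heap.size`, `-XX:MaxMetaspaceSize` equals the metaspace setting), and the four components sum to 1600 MiB, matching `jobmanager.memory.process.size: 1600m`, the default shipped in flink-conf.yaml. A quick arithmetic check:

```shell
heap=1073741824       # jobmanager.memory.heap.size          -> -Xmx / -Xms
off_heap=134217728    # jobmanager.memory.off-heap.size
metaspace=268435456   # jobmanager.memory.jvm-metaspace.size -> -XX:MaxMetaspaceSize
overhead=201326592    # jvm-overhead.min == jvm-overhead.max in this setup

total=$((heap + off_heap + metaspace + overhead))
total_mib=$((total / 1024 / 1024))   # total process memory in MiB
```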

5.2. TaskManager launch command

${JAVA_HOME}/bin/java
-XX:+UseG1GC
-Xmx536870902
-Xms536870902
-XX:MaxDirectMemorySize=268435458
-XX:MaxMetaspaceSize=268435456
# Logging options
-Dlog.file=${FLINK_HOME}/log/flink-sysadmin-taskexecutor-1-BoYi-Pro.local.log
-Dlog4j.configuration=file:${FLINK_HOME}/conf/log4j.properties
-Dlog4j.configurationFile=file:${FLINK_HOME}/conf/log4j.properties
-Dlogback.configurationFile=file:${FLINK_HOME}/conf/logback.xml
# Classpath
-classpath ${FLINK_HOME}/lib/flink-csv-1.12.2.jar:${FLINK_HOME}/lib/flink-json-1.12.2.jar:${FLINK_HOME}/lib/flink-shaded-zookeeper-3.4.14.jar:${FLINK_HOME}/lib/flink-table-blink_2.12-1.12.2.jar:${FLINK_HOME}/lib/flink-table_2.12-1.12.2.jar:${FLINK_HOME}/lib/log4j-1.2-api-2.12.1.jar:${FLINK_HOME}/lib/log4j-api-2.12.1.jar:${FLINK_HOME}/lib/log4j-core-2.12.1.jar:${FLINK_HOME}/lib/log4j-slf4j-impl-2.12.1.jar:${FLINK_HOME}/lib/flink-dist_2.12-1.12.2.jar::/opt/tools/hadoop-3.2.1/etc/hadoop::/opt/tools/hbase-2.0.2/conf
# Main class [the core part]
org.apache.flink.runtime.taskexecutor.TaskManagerRunner
--configDir ${FLINK_HOME}/conf
-D taskmanager.memory.framework.off-heap.size=134217728b
-D taskmanager.memory.network.max=134217730b
-D taskmanager.memory.network.min=134217730b
-D taskmanager.memory.framework.heap.size=134217728b
-D taskmanager.memory.managed.size=536870920b
-D taskmanager.cpu.cores=1.0
-D taskmanager.memory.task.heap.size=402653174b
-D taskmanager.memory.task.off-heap.size=0b
-D taskmanager.memory.jvm-metaspace.size=268435456b
-D taskmanager.memory.jvm-overhead.max=201326592b
-D taskmanager.memory.jvm-overhead.min=201326592b
# Output redirection
> ${FLINK_HOME}/log/flink-sysadmin-taskexecutor-1-BoYi-Pro.local.out 200<&- 2>&1 < /dev/null
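The TaskManager flags follow the same derivation: `-Xmx` is `framework.heap.size + task.heap.size`, `-XX:MaxDirectMemorySize` is `framework.off-heap.size + network + task.off-heap.size`, and adding managed memory gives 1280 MiB of total Flink memory (the effective `taskmanager.memory.flink.size` for this setup). Checking with shell arithmetic:

```shell
framework_heap=134217728      # taskmanager.memory.framework.heap.size
task_heap=402653174           # taskmanager.memory.task.heap.size
framework_off_heap=134217728  # taskmanager.memory.framework.off-heap.size
network=134217730             # network.min == network.max here
task_off_heap=0               # taskmanager.memory.task.off-heap.size
managed=536870920             # taskmanager.memory.managed.size

xmx=$((framework_heap + task_heap))                       # -> -Xmx536870902
direct=$((framework_off_heap + network + task_off_heap))  # -> -XX:MaxDirectMemorySize=268435458
flink_total=$((xmx + direct + managed))                   # total Flink memory in bytes
```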

6. yarn-per-job mode

The command to run:

cd ${FLINK_HOME}
flink run -t yarn-per-job -c org.apache.flink.streaming.examples.socket.SocketWindowWordCount examples/streaming/SocketWindowWordCount.jar --port 9999

The job is submitted through the flink run command, and the final java command it produces is:

${JAVA_HOME}/bin/java
-Dlog.file=${FLINK_HOME}/log/flink-sysadmin-client-BoYi-Pro.local.log
-Dlog4j.configuration=file:${FLINK_HOME}/conf/log4j-cli.properties
-Dlog4j.configurationFile=file:${FLINK_HOME}/conf/log4j-cli.properties
-Dlogback.configurationFile=file:${FLINK_HOME}/conf/logback.xml
-classpath ${FLINK_HOME}/*.jar::/opt/tools/hadoop-3.2.1/etc/hadoop::/opt/tools/hbase-2.0.2/conf
org.apache.flink.client.cli.CliFrontend
run
-t yarn-per-job
-c org.apache.flink.streaming.examples.socket.SocketWindowWordCount
examples/streaming/SocketWindowWordCount.jar --port 9999

So the entry point for program submission is:

org.apache.flink.client.cli.CliFrontend

7. yarn-session.sh

Entry class:

org.apache.flink.yarn.cli.FlinkYarnSessionCli
