Setting Up a Flink Session Cluster on Kubernetes

Part 1: Prerequisites

  • Linux environment: CentOS 8, with the yum package manager installed
  • Install docker-ce
yum install -y docker-ce
  • Start the Docker service
systemctl start docker
  • Check the Docker service status
systemctl status docker

Expected status output:

[root@INMS-T ~]# systemctl status docker
* docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2021-08-31 16:54:22 CST; 3h 12min left
     Docs: https://docs.docker.com
 Main PID: 1339 (dockerd)
    Tasks: 34
   Memory: 198.2M
   CGroup: /system.slice/docker.service
           `-1339 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
  • Install a Kubernetes cluster

(omitted)

  • Install Istio, the gateway access-control component

(omitted)

  • Check that Kubernetes is running
systemctl status kubelet
[root@INMS-T ~]# systemctl status kubelet
* kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           `-10-kubeadm.conf
   Active: active (running) since Tue 2021-08-31 16:54:00 CST; 3h 5min left
     Docs: http://kubernetes.io/docs/
 Main PID: 880 (kubelet)
    Tasks: 17 (limit: 62498)
   Memory: 161.7M
   CGroup: /system.slice/kubelet.service
           `-880 /var/lib/minikube/binaries/v1.21.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=systemd --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override>
Aug 31 13:48:26 INMS-T kubelet[880]: I0831 13:48:26.257187     880 scope.go:111] "RemoveContainer" containerID="6e76783c081c87e40fddbadb2ad8c22b8ebed5794bcaa3921f3a2b532c5622e3"
Aug 31 13:48:27 INMS-T kubelet[880]: I0831 13:48:27.017515     880 scope.go:111] "RemoveContainer" containerID="1c4b10aa4887dd7027c90a81f0ebff18bc5c6dfb6f29fcaadd1f5480f231533e"

Part 2: Building the Flink Session Cluster

1. Build the Flink image

Write the Dockerfile. The image is built from OpenJDK 8 and the Flink 1.13.2 binary release for Scala 2.12.


FROM openjdk:8-jre

# Install dependencies
RUN set -ex; \
  apt-get update; \
  apt-get -y install libsnappy1v5 gettext-base libjemalloc-dev; \
  rm -rf /var/lib/apt/lists/*

# Grab gosu for easy step-down from root
ENV GOSU_VERSION 1.11
RUN set -ex; \
  wget -nv -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture)"; \
  wget -nv -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture).asc"; \
  export GNUPGHOME="$(mktemp -d)"; \
  for server in ha.pool.sks-keyservers.net $(shuf -e \
          hkp://p80.pool.sks-keyservers.net:80 \
          keyserver.ubuntu.com \
          hkp://keyserver.ubuntu.com:80 \
          pgp.mit.edu) ; do \
      gpg --batch --keyserver "$server" --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 && break || : ; \
  done && \
  gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
  gpgconf --kill all; \
  rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
  chmod +x /usr/local/bin/gosu; \
  gosu nobody true

# Configure Flink version
ENV FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.13.2/flink-1.13.2-bin-scala_2.12.tgz \
    FLINK_ASC_URL=https://www.apache.org/dist/flink/flink-1.13.2/flink-1.13.2-bin-scala_2.12.tgz.asc \
    GPG_KEY=78A306590F1081CC6794DC7F62DAD618E07CF996 \
    CHECK_GPG=true

# Prepare environment
ENV FLINK_HOME=/opt/flink
ENV PATH=$FLINK_HOME/bin:$PATH
RUN groupadd --system --gid=9999 flink && \
    useradd --system --home-dir $FLINK_HOME --uid=9999 --gid=flink flink
WORKDIR $FLINK_HOME

# Install Flink
RUN set -ex; \
  wget -nv -O flink.tgz "$FLINK_TGZ_URL"; \
  \
  if [ "$CHECK_GPG" = "true" ]; then \
    wget -nv -O flink.tgz.asc "$FLINK_ASC_URL"; \
    export GNUPGHOME="$(mktemp -d)"; \
    for server in ha.pool.sks-keyservers.net $(shuf -e \
            hkp://p80.pool.sks-keyservers.net:80 \
            keyserver.ubuntu.com \
            hkp://keyserver.ubuntu.com:80 \
            pgp.mit.edu) ; do \
        gpg --batch --keyserver "$server" --recv-keys "$GPG_KEY" && break || : ; \
    done && \
    gpg --batch --verify flink.tgz.asc flink.tgz; \
    gpgconf --kill all; \
    rm -rf "$GNUPGHOME" flink.tgz.asc; \
  fi; \
  \
  tar -xf flink.tgz --strip-components=1; \
  rm flink.tgz; \
  \
  chown -R flink:flink .;

# Configure container
COPY docker-entrypoint.sh /
COPY flink-console.sh /opt/flink/bin/
# entry-point startup script
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 6123 8081
CMD ["help"]
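Both RUN steps above rely on the same keyserver retry idiom: iterate over a shuffled list of servers, stop at the first successful key fetch, and swallow individual failures with `|| :` so `set -e` does not abort the build. A self-contained sketch of that pattern, using a hypothetical `fetch_key` function in place of `gpg --recv-keys`:

```shell
#!/usr/bin/env bash
# Sketch of the Dockerfile's keyserver retry loop: try each server in turn,
# break on the first success, tolerate failures of individual servers.
set -u

attempt=0
fetch_key() {  # hypothetical stand-in for `gpg --batch --keyserver "$1" --recv-keys ...`
  attempt=$((attempt + 1))
  [ "$attempt" -ge 3 ]  # simulate: the first two "servers" fail, the third succeeds
}

for server in server-a server-b server-c server-d; do
  fetch_key "$server" && { echo "got key from $server"; break; } || : ;
done
```

The trailing `|| :` is what keeps a failed attempt from terminating the whole `RUN` command; only if every server fails does the subsequent `gpg --batch --verify` step fail the build.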

Write docker-entrypoint.sh:

#!/usr/bin/env bash
###############################################################################
#  Licensed to the Apache Software Foundation (ASF) under one
#  or more contributor license agreements.  See the NOTICE file
#  distributed with this work for additional information
#  regarding copyright ownership.  The ASF licenses this file
#  to you under the Apache License, Version 2.0 (the
#  "License"); you may not use this file except in compliance
#  with the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
#  limitations under the License.
###############################################################################

COMMAND_STANDALONE="standalone-job"
COMMAND_HISTORY_SERVER="history-server"

# If unspecified, the hostname of the container is taken as the JobManager address
JOB_MANAGER_RPC_ADDRESS=${JOB_MANAGER_RPC_ADDRESS:-$(hostname -f)}
CONF_FILE="${FLINK_HOME}/conf/flink-conf.yaml"

drop_privs_cmd() {
    if [ $(id -u) != 0 ]; then
        # Don't need to drop privs if EUID != 0
        return
    elif [ -x /sbin/su-exec ]; then
        # Alpine
        echo su-exec flink
    else
        # Others
        echo gosu flink
    fi
}

copy_plugins_if_required() {
  if [ -z "$ENABLE_BUILT_IN_PLUGINS" ]; then
    return 0
  fi

  echo "Enabling required built-in plugins"
  for target_plugin in $(echo "$ENABLE_BUILT_IN_PLUGINS" | tr ';' ' '); do
    echo "Linking ${target_plugin} to plugin directory"
    plugin_name=${target_plugin%.jar}

    mkdir -p "${FLINK_HOME}/plugins/${plugin_name}"
    if [ ! -e "${FLINK_HOME}/opt/${target_plugin}" ]; then
      echo "Plugin ${target_plugin} does not exist. Exiting."
      exit 1
    else
      ln -fs "${FLINK_HOME}/opt/${target_plugin}" "${FLINK_HOME}/plugins/${plugin_name}"
      echo "Successfully enabled ${target_plugin}"
    fi
  done
}

set_config_option() {
  local option=$1
  local value=$2

  # escape periods for usage in regular expressions
  local escaped_option=$(echo ${option} | sed -e "s/\./\\\./g")

  # either override an existing entry, or append a new one
  if grep -E "^${escaped_option}:.*" "${CONF_FILE}" > /dev/null; then
        sed -i -e "s/${escaped_option}:.*/$option: $value/g" "${CONF_FILE}"
  else
        echo "${option}: ${value}" >> "${CONF_FILE}"
  fi
}

prepare_configuration() {
    set_config_option jobmanager.rpc.address ${JOB_MANAGER_RPC_ADDRESS}
    set_config_option blob.server.port 6124
    set_config_option query.server.port 6125

    TASK_MANAGER_NUMBER_OF_TASK_SLOTS=${TASK_MANAGER_NUMBER_OF_TASK_SLOTS:-1}
    set_config_option taskmanager.numberOfTaskSlots ${TASK_MANAGER_NUMBER_OF_TASK_SLOTS}

    if [ -n "${FLINK_PROPERTIES}" ]; then
        echo "${FLINK_PROPERTIES}" >> "${CONF_FILE}"
    fi
    envsubst < "${CONF_FILE}" > "${CONF_FILE}.tmp" && mv "${CONF_FILE}.tmp" "${CONF_FILE}"
}

maybe_enable_jemalloc() {
    if [ "${DISABLE_JEMALLOC:-false}" == "false" ]; then
        export LD_PRELOAD=$LD_PRELOAD:/usr/lib/x86_64-linux-gnu/libjemalloc.so
    fi
}

maybe_enable_jemalloc

copy_plugins_if_required

prepare_configuration

args=("$@")
if [ "$1" = "help" ]; then
    printf "Usage: $(basename "$0") (jobmanager|${COMMAND_STANDALONE}|taskmanager|${COMMAND_HISTORY_SERVER})\n"
    printf "    Or $(basename "$0") help\n\n"
    printf "By default, Flink image adopts jemalloc as default memory allocator. This behavior can be disabled by setting the 'DISABLE_JEMALLOC' environment variable to 'true'.\n"
    exit 0
elif [ "$1" = "jobmanager" ]; then
    args=("${args[@]:1}")
    echo "Starting Job Manager"
    # exec $(drop_privs_cmd) "$FLINK_HOME/bin/jobmanager.sh" start-foreground "${args[@]}"
    exec $(drop_privs_cmd) "$FLINK_HOME/bin/jobmanager.sh" start "${args[@]}"
elif [ "$1" = "jobmanagerv10" ]; then
    args=("${args[@]:1}")
    echo "Starting Job Manager"
    exec $(drop_privs_cmd) "$FLINK_HOME/bin/jobmanager.sh" start-foreground "${args[@]}"
    # exec $(drop_privs_cmd) "$FLINK_HOME/bin/jobmanager.sh" start "${args[@]}"
elif [ "$1" = ${COMMAND_STANDALONE} ]; then
    args=("${args[@]:1}")
    echo "Starting Job Manager"
    exec $(drop_privs_cmd) "$FLINK_HOME/bin/standalone-job.sh" start-foreground "${args[@]}"
elif [ "$1" = ${COMMAND_HISTORY_SERVER} ]; then
    args=("${args[@]:1}")
    echo "Starting History Server"
    exec $(drop_privs_cmd) "$FLINK_HOME/bin/historyserver.sh" start-foreground "${args[@]}"
elif [ "$1" = "taskmanager" ]; then
    args=("${args[@]:1}")
    echo "Starting Task Manager"
    # exec $(drop_privs_cmd) "$FLINK_HOME/bin/taskmanager.sh" start-foreground "${args[@]}"
    exec $(drop_privs_cmd) "$FLINK_HOME/bin/taskmanager.sh" start "${args[@]}"
elif [ "$1" = "taskmanagerv10" ]; then
    args=("${args[@]:1}")
    echo "Starting Task Manager"
    exec $(drop_privs_cmd) "$FLINK_HOME/bin/taskmanager.sh" start-foreground "${args[@]}"
    # exec $(drop_privs_cmd) "$FLINK_HOME/bin/taskmanager.sh" start "${args[@]}"
fi

args=("${args[@]}")

# Running command in pass-through mode
exec $(drop_privs_cmd) "${args[@]}"
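The `set_config_option` function above is the core of the entrypoint: it rewrites `flink-conf.yaml` in place, overriding a key if it already exists and appending it otherwise. A minimal standalone sketch against a temporary config file:

```shell
#!/usr/bin/env bash
# Minimal sketch of the entrypoint's set_config_option behavior:
# override an existing flink-conf.yaml key in place, or append a new one.
CONF_FILE="$(mktemp)"
printf 'jobmanager.rpc.address: localhost\ntaskmanager.numberOfTaskSlots: 1\n' > "$CONF_FILE"

set_config_option() {
  local option=$1 value=$2
  # escape periods so the key can be used in a regular expression
  local escaped_option=$(echo ${option} | sed -e "s/\./\\\./g")
  if grep -E "^${escaped_option}:.*" "$CONF_FILE" > /dev/null; then
    sed -i -e "s/${escaped_option}:.*/$option: $value/g" "$CONF_FILE"
  else
    echo "${option}: ${value}" >> "$CONF_FILE"
  fi
}

set_config_option jobmanager.rpc.address flink-jm-rpc-service.istio-app  # overrides
set_config_option blob.server.port 6124                                  # appends
cat "$CONF_FILE"
```

This is why setting `JOB_MANAGER_RPC_ADDRESS` via the ConfigMap later in this article is enough to point every container at the JobManager: the entrypoint folds the environment variable into `flink-conf.yaml` before starting the process.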

Write flink-console.sh:

#!/usr/bin/env bash
################################################################################
#  Licensed to the Apache Software Foundation (ASF) under one
#  or more contributor license agreements.  See the NOTICE file
#  distributed with this work for additional information
#  regarding copyright ownership.  The ASF licenses this file
#  to you under the Apache License, Version 2.0 (the
#  "License"); you may not use this file except in compliance
#  with the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

# Start a Flink service as a console application. Must be stopped with Ctrl-C
# or with SIGTERM by kill or the controlling process.
USAGE="Usage: flink-console.sh (taskexecutor|zookeeper|historyserver|standalonesession|standalonejob|kubernetes-session|kubernetes-application|kubernetes-taskmanager) [args]"

SERVICE=$1
ARGS=("${@:2}") # get remaining arguments as array

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

. "$bin"/config.sh

case $SERVICE in
    (taskexecutor)
        CLASS_TO_RUN=org.apache.flink.runtime.taskexecutor.TaskManagerRunner
    ;;
    (historyserver)
        CLASS_TO_RUN=org.apache.flink.runtime.webmonitor.history.HistoryServer
    ;;
    (zookeeper)
        CLASS_TO_RUN=org.apache.flink.runtime.zookeeper.FlinkZooKeeperQuorumPeer
    ;;
    (standalonesession)
        CLASS_TO_RUN=org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint
    ;;
    (standalonejob)
        CLASS_TO_RUN=org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint
    ;;
    (kubernetes-session)
        CLASS_TO_RUN=org.apache.flink.kubernetes.entrypoint.KubernetesSessionClusterEntrypoint
    ;;
    (kubernetes-application)
        CLASS_TO_RUN=org.apache.flink.kubernetes.entrypoint.KubernetesApplicationClusterEntrypoint
    ;;
    (kubernetes-taskmanager)
        CLASS_TO_RUN=org.apache.flink.kubernetes.taskmanager.KubernetesTaskExecutorRunner
    ;;
    (*)
        echo "Unknown service '${SERVICE}'. $USAGE."
        exit 1
    ;;
esac

FLINK_TM_CLASSPATH=`constructFlinkClassPath`

if [ "$FLINK_IDENT_STRING" = "" ]; then
    FLINK_IDENT_STRING="$USER"
fi

pid=$FLINK_PID_DIR/flink-$FLINK_IDENT_STRING-$SERVICE.pid
mkdir -p "$FLINK_PID_DIR"
# The lock needs to be released after use because this script is started foreground
command -v flock >/dev/null 2>&1
flock_exist=$?
if [[ ${flock_exist} -eq 0 ]]; then
    exec 200<"$FLINK_PID_DIR"
    flock 200
fi
# Remove the pid file when all the processes are dead
if [ -f "$pid" ]; then
    all_dead=0
    while read each_pid; do
        # Check whether the process is still running
        kill -0 $each_pid > /dev/null 2>&1
        [[ $? -eq 0 ]] && all_dead=1
    done < "$pid"
    [ ${all_dead} -eq 0 ] && rm $pid
fi
id=$([ -f "$pid" ] && echo $(wc -l < "$pid") || echo "0")

FLINK_LOG_PREFIX="${FLINK_LOG_DIR}/flink-${FLINK_IDENT_STRING}-${SERVICE}-${id}-${HOSTNAME}"
log="${FLINK_LOG_PREFIX}.log"
out="${FLINK_LOG_PREFIX}.out"  # added: write STDOUT to a file

log_setting=("-Dlog.file=${log}" "-Dlog4j.configuration=file:${FLINK_CONF_DIR}/log4j-console.properties" "-Dlog4j.configurationFile=file:${FLINK_CONF_DIR}/log4j-console.properties" "-Dlogback.configurationFile=file:${FLINK_CONF_DIR}/logback-console.xml")

echo "Starting $SERVICE as a console application on host $HOSTNAME."

# Add the current process id to pid file
echo $$ >> "$pid" 2>/dev/null

# Release the lock because the java process runs in the foreground and would block other processes from modifying the pid file
[[ ${flock_exist} -eq 0 ]] && flock -u 200

exec "$JAVA_RUN" $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}" -classpath "`manglePathList "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN} "${ARGS[@]}" > "$out" 200<&- 2>&1 < /dev/null
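The pid-file bookkeeping in flink-console.sh can be tried in isolation. This sketch (with a fabricated PID list) shows why the file survives when at least one recorded process is still alive, and how the per-instance log suffix `id` is derived from it:

```shell
#!/usr/bin/env bash
# Sketch of flink-console.sh's stale-pid-file handling: keep the file only if
# at least one PID recorded in it still belongs to a live process.
pid="$(mktemp)"
echo 99999999 > "$pid"   # above the kernel's pid range: no such process
echo $$      >> "$pid"   # this shell itself: definitely alive

all_dead=0
while read each_pid; do
  # kill -0 probes for process existence without delivering a real signal
  kill -0 "$each_pid" > /dev/null 2>&1
  [[ $? -eq 0 ]] && all_dead=1   # despite the name, 1 means "someone is alive"
done < "$pid"
[ ${all_dead} -eq 0 ] && rm "$pid"

# The log suffix is just the number of PIDs already recorded in the file
id=$([ -f "$pid" ] && echo $(wc -l < "$pid") || echo "0")
echo "id=$id"
```

Because one live PID is present, the file is kept and `id` becomes 2, so a second service instance on the same host writes to a distinct `flink-...-2-<host>.log` file instead of clobbering the first.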

2. Build the Flink Docker image

Run the build command to produce the Flink image:

docker build -f ./flink-1.13-scala_2.12-java8 -t flink:1.13.2 .
Sending build context to Docker daemon  2.114MB
Step 1/15 : FROM openjdk:8-jre
 ---> da343308793f
Step 2/15 : RUN set -ex;   apt-get update;   apt-get -y install libsnappy1v5 gettext-base libjemalloc-dev;   rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> aad9db3002e7
Step 3/15 : ENV GOSU_VERSION 1.11
 ---> Using cache
 ---> 34ee8367ae28
Step 4/15 : RUN set -ex;   wget -nv -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture)";   wget -nv -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture).asc";   export GNUPGHOME="$(mktemp -d)";   for server in ha.pool.sks-keyservers.net $(shuf -e hkp://p80.pool.sks-keyservers.net:80 keyserver.ubuntu.com hkp://keyserver.ubuntu.com:80 pgp.mit.edu) ; do gpg --batch --keyserver "$server" --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 && break || : ; done &&   gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu;   gpgconf --kill all;   rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc;   chmod +x /usr/local/bin/gosu;   gosu nobody true
 ---> Using cache
 ---> ec528cf7b71a
Step 5/15 : ENV FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.13.2/flink-1.13.2-bin-scala_2.12.tgz     FLINK_ASC_URL=https://www.apache.org/dist/flink/flink-1.13.2/flink-1.13.2-bin-scala_2.12.tgz.asc     GPG_KEY=78A306590F1081CC6794DC7F62DAD618E07CF996     CHECK_GPG=true
 ---> Using cache
 ---> b020ef331780
Step 6/15 : ENV FLINK_HOME=/opt/flink
 ---> Using cache
 ---> e9a25984a5ac
Step 7/15 : ENV PATH=$FLINK_HOME/bin:$PATH
 ---> Using cache
 ---> 8a3fb6109932
Step 8/15 : RUN groupadd --system --gid=9999 flink &&     useradd --system --home-dir $FLINK_HOME --uid=9999 --gid=flink flink
 ---> Using cache
 ---> cc208dfcf4c2
Step 9/15 : WORKDIR $FLINK_HOME
 ---> Using cache
 ---> b2773e788cd5
Step 10/15 : RUN set -ex;   wget -nv -O flink.tgz "$FLINK_TGZ_URL";     if [ "$CHECK_GPG" = "true" ]; then wget -nv -O flink.tgz.asc "$FLINK_ASC_URL";     export GNUPGHOME="$(mktemp -d)";     for server in ha.pool.sks-keyservers.net $(shuf -e hkp://p80.pool.sks-keyservers.net:80 keyserver.ubuntu.com hkp://keyserver.ubuntu.com:80 pgp.mit.edu) ; do gpg --batch --keyserver "$server" --recv-keys "$GPG_KEY" && break || : ; done &&     gpg --batch --verify flink.tgz.asc flink.tgz;     gpgconf --kill all;     rm -rf "$GNUPGHOME" flink.tgz.asc;   fi;     tar -xf flink.tgz --strip-components=1;   rm flink.tgz;     chown -R flink:flink .;
 ---> Using cache
 ---> 0cebe1ff3a6b
Step 11/15 : COPY docker-entrypoint.sh /
 ---> Using cache
 ---> 89211b835ea6
Step 12/15 : COPY flink-console.sh /opt/flink/bin/
 ---> Using cache
 ---> b3dca2180744
Step 13/15 : ENTRYPOINT ["/docker-entrypoint.sh"]
 ---> Using cache
 ---> 7ed0e9daaecc
Step 14/15 : EXPOSE 6123 8081
 ---> Using cache
 ---> 92a85c820d14
Step 15/15 : CMD ["help"]
 ---> Using cache
 ---> ab7b8b26049b
Successfully built ab7b8b26049b
Successfully tagged flink:1.13.2

View the generated Flink image:

[root@INMS-T bin]# docker images flink:1.13.2
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
flink        1.13.2    ab7b8b26049b   38 hours ago   625MB
[root@INMS-T bin]# 

For more on running Flink on Docker, see the official documentation: https://ci.apache.org/projects/flink/flink-docs-release-1.13/zh/ops/

3. Provide the Flink environment variables via a Kubernetes ConfigMap

Write flink-env-config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: flink-env-config
  namespace: istio-app
data:
  JOB_MANAGER_RPC_ADDRESS: flink-jm-rpc-service.istio-app
  LOG: /opt/flink/log

Run the command to create the ConfigMap:

kubectl apply -f flink-env-config.yaml

Check that flink-env-config appears in the ConfigMap list:

[root@INMS-T ~]# kubectl get cm -A
NAMESPACE         NAME                                  DATA   AGE
default           flink-env-config                      2      3d16h
default           istio-ca-root-cert                    1      3d16h
default           kube-root-ca.crt                      1      3d16h
istio-system      istio                                 2      3d16h
istio-system      istio-ca-root-cert                    1      3d16h
istio-system      istio-gateway-leader                  0      3d16h
istio-system      istio-leader                          0      3d16h
istio-system      istio-namespace-controller-election   0      3d16h
istio-system      istio-sidecar-injector                2      3d16h
istio-system      kube-root-ca.crt                      1      3d16h
kube-node-lease   kube-root-ca.crt                      1      3d16h
kube-public       cluster-info                          1      3d16h
kube-public       kube-root-ca.crt                      1      3d16h
kube-system       coredns                               1      3d16h
kube-system       extension-apiserver-authentication    6      3d16h
kube-system       kube-proxy                            2      3d16h
kube-system       kube-root-ca.crt                      1      3d16h
kube-system       kubeadm-config                        2      3d16h
kube-system       kubelet-config-1.21                   1      3d16h

Verify that the flink-env-config keys and values are correct:

[root@INMS-T ~]# kubectl describe cm flink-env-config
Name:         flink-env-config
Namespace:    istio-app
Labels:       <none>
Annotations:  <none>

Data
====
JOB_MANAGER_RPC_ADDRESS:
----
flink-jm-rpc-service.default
LOG:
----
/opt/flink/log

BinaryData
====

Events:  <none>

4. Create the JobManager RPC Service (JobManager-TaskManager communication)

Write the Service YAML:

[root@INMS-T ~]# cat  flink-rpc-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: flink-jm-rpc-service
  namespace: istio-app
spec:
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 6123
    targetPort: 6123
    name: rpc
  - protocol: TCP
    port: 6124
    targetPort: 6124
    name: blob
  - protocol: TCP
    port: 6125
    targetPort: 6125
    name: query
  selector:
    app: flink
    component: jobmanager

Run the command to create the Service:

kubectl apply -f flink-rpc-service.yaml
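The value stored in the ConfigMap earlier, flink-jm-rpc-service.istio-app, is simply the `<service>.<namespace>` short form of this Service's in-cluster DNS name; the fully qualified form appends `svc.cluster.local` (assuming the default cluster domain). A hypothetical helper illustrates the naming convention:

```shell
#!/usr/bin/env bash
# Hypothetical helper: derive the DNS names under which a ClusterIP Service
# is reachable inside the cluster (assumes the default "cluster.local" domain).
svc_dns_short() { echo "$1.$2"; }                    # <service>.<namespace>
svc_dns_fqdn()  { echo "$1.$2.svc.cluster.local"; }  # fully qualified form

svc_dns_short flink-jm-rpc-service istio-app
svc_dns_fqdn  flink-jm-rpc-service istio-app
```

Either form resolves from any namespace via cluster DNS, which is why TaskManager pods in istio-app can reach the JobManager through the short name placed in JOB_MANAGER_RPC_ADDRESS.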

5. Create and start the Flink JobManager

Write flink-jobmanager.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flink-jobmanager
  namespace: istio-app
  labels:
    app: flink
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flink
      component: jobmanager
  template:
    metadata:
      labels:
        app: flink
        component: jobmanager
    spec:
      containers:
      - name: jobmanager
        image: flink:1.13.2
        imagePullPolicy: IfNotPresent
        workingDir: /opt/flink
        args: ["jobmanagerv10"]
        ports:
        - containerPort: 6123
          name: rpc
        - containerPort: 6124
          name: blob
        - containerPort: 6125
          name: query
        - containerPort: 8081
          name: ui
        env:
        - name: JOB_MANAGER_RPC_ADDRESS
          valueFrom:
            configMapKeyRef:
              name: flink-env-config
              key: JOB_MANAGER_RPC_ADDRESS
        - name: FLINK_LOG_DIR
          valueFrom:
            configMapKeyRef:
              name: flink-env-config
              key: LOG
        - name: TZ
          value: Asia/Shanghai

Apply flink-jobmanager.yaml to create and start the JobManager:

kubectl apply -f flink-jobmanager.yaml
[root@INMS-T ~]# kubectl get pod -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
flink-jobmanager-86d8d7df5d-hdd2k      1/1     Running   0          84m   172.17.0.2   inms-t   <none>           <none>
flink-taskmanager-1-7cf4c7b8f-tg4f8    1/1     Running   0          84m   172.17.0.3   inms-t   <none>           <none>
flink-taskmanager-2-7949bdb4b4-lbwr7   1/1     Running   0          84m   172.17.0.5   inms-t   <none>           <none>

6. Create the TaskManagers

Write flink-taskmanager.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flink-taskmanager-1
  namespace: istio-app
  labels:
    app: flink
spec:
  replicas: 2
  selector:
    matchLabels:
      app: flink
      component: taskmanager
  template:
    metadata:
      labels:
        app: flink
        component: taskmanager
    spec:
      containers:
      - name: taskmanager
        image: flink:1.13.2
        imagePullPolicy: IfNotPresent
        args: ["taskmanagerv10"]
        ports:
        - containerPort: 6122
          name: rpc
        - containerPort: 6125
          name: query
        env:
        - name: JOB_MANAGER_RPC_ADDRESS
          valueFrom:
            configMapKeyRef:
              name: flink-env-config
              key: JOB_MANAGER_RPC_ADDRESS
        - name: FLINK_LOG_DIR
          valueFrom:
            configMapKeyRef:
              name: flink-env-config
              key: LOG
        - name: TZ
          value: Asia/Shanghai
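With `replicas: 2` and the entrypoint's default of one slot per TaskManager (TASK_MANAGER_NUMBER_OF_TASK_SLOTS falls back to 1 in docker-entrypoint.sh), the session cluster offers two task slots in total. A quick sanity calculation:

```shell
#!/usr/bin/env bash
# Total parallelism available to the session cluster =
# TaskManager replicas x slots per TaskManager.
replicas=2
slots_per_tm=${TASK_MANAGER_NUMBER_OF_TASK_SLOTS:-1}  # entrypoint default
total_slots=$((replicas * slots_per_tm))
echo "total task slots: $total_slots"
```

To run jobs with higher parallelism, either raise `replicas` or set TASK_MANAGER_NUMBER_OF_TASK_SLOTS in the Deployment's env section.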

Apply flink-taskmanager.yaml to create and start the TaskManagers:

kubectl apply -f flink-taskmanager.yaml
[root@INMS-T ~]# kubectl get pod -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
flink-jobmanager-86d8d7df5d-hdd2k      1/1     Running   0          84m   172.17.0.2   inms-t   <none>           <none>
flink-taskmanager-1-7cf4c7b8f-tg4f8    1/1     Running   0          84m   172.17.0.3   inms-t   <none>           <none>
flink-taskmanager-2-7949bdb4b4-lbwr7   1/1     Running   0          84m   172.17.0.5   inms-t   <none>           <none>

7. Create the web UI Service

Write flink-web-ui.yaml:

apiVersion: v1
kind: Service
metadata:
  name: flink-jobmanager-rest
  namespace: istio-app
spec:
  ports:
  - name: rest
    port: 8081
    targetPort: 8081
  selector:
    app: flink
    component: jobmanager

Apply the YAML:

kubectl apply -f flink-web-ui.yaml

8. Add the Istio Gateway and VirtualService

Write flink-gateway-vs.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: flink-gateway
  namespace: istio-app
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 8081
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flink-vs
  namespace: istio-app
spec:
  hosts:
  - "*"
  gateways:
  - flink-gateway
  http:
  - route:
    - destination:
        host: flink-jobmanager-rest
        port:
          number: 8081

Apply the YAML:

kubectl apply -f flink-gateway-vs.yaml

9. Expose the Flink UI service

Edit the Istio ingress gateway Service so that port 8081 is reachable from outside the cluster (for example, by adjusting the Service type or its port mappings):

kubectl edit svc istio-ingressgateway -n istio-system

Open http://10.0.2.153:8081 in a browser.


The web UI shows one JobManager and two TaskManagers.
This completes the setup of the Flink session cluster on Kubernetes.
