1. Initialization

Reference: https://kubernetes.io/docs/reference/setup-tools/kubeadm/

1.1 Configuration requirements

  • Nodes with at least 2 CPUs and 4 GB of RAM
  • Full network connectivity between all machines in the cluster
  • hostname and MAC address unique within the cluster (see the verification sketch below)
  • CentOS 7, ideally 7.9; older point releases should be updated
  • All nodes can reach the Internet to pull images
  • Swap disabled
  • Note: this walkthrough installs a single-master cluster; a multi-master install works the same way
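For the uniqueness and swap items above, a hedged verification sketch using standard CentOS tools (run on every host and compare the output across nodes):

ip link show | awk '/ether/ {print $2}'   # MAC addresses must differ across nodes
cat /sys/class/dmi/id/product_uuid        # product_uuid must differ across nodes
swapon -s                                 # swap must be off: this should list no entries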

1.2 Initialization

1. Run on every host
systemctl disable firewalld;systemctl stop firewalld
swapoff -a ;sed -i '/swap/s/^/#/g'  /etc/fstab
setenforce 0;sed -i 's/SELINUX=enforcing/SELINUX=disabled/g'  /etc/selinux/config
2. Set the hostnames
hostnamectl set-hostname <hostname>  # set a different hostname on each host
[root@master1 $]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.74.128  master1 registry.mt.com
192.168.74.129  master2
192.168.74.130  master3
3. Kernel parameters - all hosts
[root@master1 yaml]# cat /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-iptables=1  # only present after the br_netfilter module is loaded (modprobe br_netfilter)
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0 # avoid swap; it is only used when the system is about to OOM
vm.overcommit_memory=1 # do not check whether physical memory is sufficient
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=655360000
fs.nr_open=655360000
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
[root@master1 yaml]# sysctl --system
4. Time synchronization
ntpdate time.windows.com
5. Resource limits
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536"  >> /etc/security/limits.conf
echo "* hard nproc 65536"  >> /etc/security/limits.conf
echo "* soft  memlock  unlimited"  >> /etc/security/limits.conf
echo "* hard memlock  unlimited"  >> /etc/security/limits.conf6、开机加载模块
[root@master1 ~]# cat /etc/modules-load.d/k8s.conf
ip_vs
ip_vs_rr
br_netfilter
ip_vs_wrr
ip_vs_sh
[root@master1 ~]# modprobe ip_vs
[root@master1 ~]# modprobe ip_vs_rr
[root@master1 ~]# modprobe br_netfilter
[root@master1 ~]# modprobe ip_vs_wrr
[root@master1 ~]# modprobe ip_vs_sh
[root@iZ2zef0llgs69lx3vc9rfgZ ~]# lsmod |grep vs
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          137239  1 ip_vs
libcrc32c              12644  2 ip_vs,nf_conntrack
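A hedged check that the sysctl values and modules actually took effect (the expected values are the ones set in the config above):

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both should print 1
lsmod | grep -e br_netfilter -e ip_vs                           # all listed modules should appear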

2. Installation

2.1 Install Docker

# Run on all nodes:
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum list docker-ce.x86_64 --showduplicates | sort -r  # lists the installable docker-ce versions; pick one of them
yum install docker-ce-18.06.1.ce-3.el7
# Edit the /etc/docker/daemon.json config file
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "insecure-registries": ["www.mt.com:9500"]
}
Note: registry-mirrors is the Aliyun mirror-accelerator address, used to speed up pulls of images hosted abroad; without it downloads are very slow.
# After editing, verify with docker info
systemctl restart docker;systemctl enable docker
[root@master1 $]# docker info |tail -10
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 www.mt.com:9500
 127.0.0.0/8
Registry Mirrors:
 https://b9pmyelo.mirror.aliyuncs.com/
Live Restore Enabled: false

2.2 Install kubectl/kubeadm/kubelet

# Create the repo file /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
#baseurl=https://mirrors.tuna.tsinghua.edu.cn/kubernetes/yum/repos/kubernetes-el7-$basearch
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
# Install
yum install kubeadm-1.18.0 kubectl-1.18.0 kubelet-1.18.0
# Start the service
systemctl start kubelet;systemctl enable kubelet
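Until kubeadm init has written /var/lib/kubelet/config.yaml, the kubelet exits and is restarted in a loop by systemd; that is expected at this stage. A quick check, as a sketch:

systemctl status kubelet                   # "activating (auto-restart)" is normal before kubeadm init
journalctl -u kubelet --no-pager | tail    # shows the missing-config error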

2.3 Initialization

Note: initialization differs for multi-master setups. // Not used in this lab
kubeadm init --kubernetes-version=<version> --apiserver-advertise-address=<this-master-ip> \
--control-plane-endpoint=<this-master-ip>:6443 \  # this line is required
--pod-network-cidr=10.244.0.0/16 \
--image-repository registry.aliyuncs.com/google_containers  \
--service-cidr=10.96.0.0/12
Run this on the first node only; subsequent nodes join the control plane with kubeadm join (a hedged example follows).
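A hedged sketch of that control-plane join; the token, hash, and certificate key below are placeholders, and the real values come from the output of kubeadm init run with --upload-certs:

kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>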
# Run on the master only
[root@master1 ~]# kubeadm init --apiserver-advertise-address=192.168.74.128 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
W0503 16:48:24.338256   38262 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks  # 1. preflight checks and image pulls
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"  # 2. kubelet bootstrap config
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"  # 3. certificate generation
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.74.128]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master1 localhost] and IPs [192.168.74.128 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master1 localhost] and IPs [192.168.74.128 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"  # 4. write kubeconfig files
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests" # 5. write control-plane static Pod manifests
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0503 16:49:05.984728   38262 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0503 16:49:05.985466   38262 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.002641 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the label "node-role.kubernetes.io/master=''"  # 6. label and taint the node
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: i91fyq.rp8jv1t086utyxif  # 7. bootstrap tokens and RBAC
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS  # 8. addon installation
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:
# Create the kubectl config for apiserver access. Run on the master:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Create the network
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:https://kubernetes.io/docs/concepts/cluster-administration/addons/Then you can join any number of worker nodes by running the following on each as root:#加入集群使用
kubeadm join 192.168.74.128:6443 --token i91fyq.rp8jv1t086utyxif \--discovery-token-ca-cert-hash sha256:a31d6b38fec404eda854586a4140069bc6e3154241b59f40a612e24e9b89bf37[root@master1 ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.18.0             43940c34f24f        13 months ago       117MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.18.0             74060cea7f70        13 months ago       173MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.18.0             d3e55153f52f        13 months ago       162MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.18.0             a31f78c7c8ce        13 months ago       95.3MB
registry.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        14 months ago       683kB
registry.aliyuncs.com/google_containers/coredns                   1.6.7               67da37a9a360        15 months ago       43.8MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        18 months ago       288MB
k8s.gcr.io/etcd                                                   3.2.24              3cab8e1b9802        2 years ago         220MB
k8s.gcr.io/coredns                                                1.2.2               367cdc8433a4        2 years ago         39.2MB
k8s.gcr.io/kubernetes-dashboard-amd64                             v1.10.0             0dab2435c100        2 years ago         122MB
k8s.gcr.io/flannel                                                v0.10.0-amd64       f0fad859c909        3 years ago         44.6MB
k8s.gcr.io/pause                                                  3.1                 da86e6ba6ca1        3 years ago         742kB
[root@master1 ~]#
# Create the kubectl config file:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

2.4 Join the cluster

[root@master1 ~]# kubectl  get nodes
NAME      STATUS     ROLES    AGE   VERSION
master1   NotReady   master   36m   v1.18.0

# Run on master2 and master3:
kubeadm join 192.168.74.128:6443 --token i91fyq.rp8jv1t086utyxif --discovery-token-ca-cert-hash sha256:a31d6b38fec404eda854586a4140069bc6e3154241b59f40a612e24e9b89bf37

[root@master1 ~]# kubectl  get nodes
NAME      STATUS     ROLES    AGE   VERSION
master1   NotReady   master   40m   v1.18.0
master2   NotReady   <none>   78s   v1.18.0
master3   NotReady   <none>   31s   v1.18.0
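The bootstrap token printed by kubeadm init expires after 24h by default. If a node needs to join later than that, a fresh join command can be generated on the master, as a sketch:

[root@master1 ~]# kubeadm token create --print-join-command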

2.5 Install the network plugin

[root@master1 yaml]# kubectl apply -f  https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
[root@master1 yaml]# kubectl  get pods -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-l9h6x          1/1     Running   0          60m
coredns-7ff77c879f-lp9p2          1/1     Running   0          60m
etcd-master1                      1/1     Running   0          60m
kube-apiserver-master1            1/1     Running   0          60m
kube-controller-manager-master1   1/1     Running   0          60m
kube-flannel-ds-524n6             1/1     Running   0          45s
kube-flannel-ds-8gqwp             1/1     Running   0          45s
kube-flannel-ds-fgvpf             1/1     Running   0          45s
kube-proxy-2fz26                  1/1     Running   0          21m
kube-proxy-46psf                  1/1     Running   0          20m
kube-proxy-p7jnd                  1/1     Running   0          60m
kube-scheduler-master1            1/1     Running   0          60m
[root@master1 yaml]# kubectl  get nodes  # before the network plugin is installed, every node's STATUS is NotReady
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   60m   v1.18.0
master2   Ready    <none>   21m   v1.18.0
master3   Ready    <none>   20m   v1.18.0
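ROLES shows <none> for master2/master3 because kubeadm only labels control-plane nodes; the column is driven by node-role.kubernetes.io/* labels. A hedged, purely cosmetic fix:

kubectl label node master2 node-role.kubernetes.io/worker=
kubectl label node master3 node-role.kubernetes.io/worker=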

3. Verification

[root@master1 yaml]# kubectl  create deployment nginx --image=nginx
[root@master1 yaml]# kubectl  expose deployment  nginx --port=80 --type=NodePort
[root@master1 yaml]# kubectl  get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
nginx-f89759699-fvlfg   1/1     Running   0          99s   10.244.2.3   master3   <none>           <none>
[root@master1 yaml]# kubectl  get svc/nginx
NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.105.249.76   <none>        80:30326/TCP   85s
[root@master1 yaml]# curl -s -I master2:30326  # port 30326 is reachable via any node in the cluster
HTTP/1.1 200 OK
Server: nginx/1.19.10
Date: Mon, 03 May 2021 09:58:47 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
Connection: keep-alive
ETag: "6075b537-264"
Accept-Ranges: bytes

[root@master1 yaml]# curl -s -I master3:30326
HTTP/1.1 200 OK
Server: nginx/1.19.10
Date: Mon, 03 May 2021 09:58:50 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
Connection: keep-alive
ETag: "6075b537-264"
Accept-Ranges: bytes

[root@master1 yaml]# curl -s -I master1:30326  # master1 is the control-plane node, not a worker; see the taint note after the image listing
^C

Images:
[root@master1 yaml]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
quay.io/coreos/flannel                                            v0.14.0-rc1         0a1a2818ce59        2 weeks ago         67.9MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.18.0             43940c34f24f        13 months ago       117MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.18.0             d3e55153f52f        13 months ago       162MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.18.0             74060cea7f70        13 months ago       173MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.18.0             a31f78c7c8ce        13 months ago       95.3MB
registry.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        14 months ago       683kB
registry.aliyuncs.com/google_containers/coredns                   1.6.7               67da37a9a360        15 months ago       43.8MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        18 months ago       288MB

[root@master2 ~]# docker images
REPOSITORY                                           TAG                 IMAGE ID            CREATED             SIZE
quay.io/coreos/flannel                               v0.14.0-rc1         0a1a2818ce59        2 weeks ago         67.9MB
registry.aliyuncs.com/google_containers/kube-proxy   v1.18.0             43940c34f24f        13 months ago       117MB
registry.aliyuncs.com/google_containers/pause        3.2                 80d28bedfe5d        14 months ago       683kB
registry.aliyuncs.com/google_containers/coredns      1.6.7               67da37a9a360        15 months ago       43.8MB

[root@master3 ~]# docker images
REPOSITORY                                           TAG                 IMAGE ID            CREATED             SIZE
quay.io/coreos/flannel                               v0.14.0-rc1         0a1a2818ce59        2 weeks ago         67.9MB
nginx                                                latest              62d49f9bab67        2 weeks ago         133MB
registry.aliyuncs.com/google_containers/kube-proxy   v1.18.0             43940c34f24f        13 months ago       117MB
registry.aliyuncs.com/google_containers/pause        3.2                 80d28bedfe5d        14 months ago       683kB
registry.aliyuncs.com/google_containers/coredns      1.6.7               67da37a9a360        15 months ago       43.8MB
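On the interrupted curl against master1 above: master1 carries the control-plane NoSchedule taint, so the nginx pod could not be scheduled there (whether NodePort traffic still answers on a tainted master depends on the proxy and network setup). A hedged way to inspect the taint, and to remove it if you deliberately want workloads on the master:

kubectl describe node master1 | grep Taints
# optional, e.g. for single-node labs:
kubectl taint node master1 node-role.kubernetes.io/master:NoSchedule-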

4. kubeadm usage

4.1 init

Purpose: initialize a control-plane (master) node.

4.1.1 Overview

preflight                    Run pre-flight checks
certs                        Certificate generation
  /ca                          Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components
  /apiserver                   Generate the certificate for serving the Kubernetes API
  /apiserver-kubelet-client    Generate the certificate for the API server to connect to kubelet
  /front-proxy-ca              Generate the self-signed CA to provision identities for front proxy
  /front-proxy-client          Generate the certificate for the front proxy client
  /etcd-ca                     Generate the self-signed CA to provision identities for etcd
  /etcd-server                 Generate the certificate for serving etcd
  /etcd-peer                   Generate the certificate for etcd nodes to communicate with each other
  /etcd-healthcheck-client     Generate the certificate for liveness probes to healthcheck etcd
  /apiserver-etcd-client       Generate the certificate the apiserver uses to access etcd
  /sa                          Generate a private key for signing service account tokens along with its public key
kubeconfig                   Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
  /admin                       Generate a kubeconfig file for the admin to use and for kubeadm itself
  /kubelet                     Generate a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes
  /controller-manager          Generate a kubeconfig file for the controller manager to use
  /scheduler                   Generate a kubeconfig file for the scheduler to use
kubelet-start                Write kubelet settings and (re)start the kubelet
control-plane                Generate all static Pod manifest files necessary to establish the control plane
  /apiserver                   Generates the kube-apiserver static Pod manifest
  /controller-manager          Generates the kube-controller-manager static Pod manifest
  /scheduler                   Generates the kube-scheduler static Pod manifest
etcd                         Generate static Pod manifest file for local etcd
  /local                       Generate the static Pod manifest file for a local, single-node local etcd instance
upload-config                Upload the kubeadm and kubelet configuration to a ConfigMap
  /kubeadm                     Upload the kubeadm ClusterConfiguration to a ConfigMap
  /kubelet                     Upload the kubelet component config to a ConfigMap
upload-certs                 Upload certificates to kubeadm-certs
mark-control-plane           Mark a node as a control-plane
bootstrap-token              Generates bootstrap tokens used to join a node to a cluster
kubelet-finalize             Updates settings relevant to the kubelet after TLS bootstrap
  /experimental-cert-rotation  Enable kubelet client certificate rotation
addon                        Install required addons for passing conformance tests
  /coredns                     Install the CoreDNS addon to a Kubernetes cluster
  /kube-proxy                  Install the kube-proxy addon to a Kubernetes cluster

4.1.2 kubeadm init workflow

  • 1) Preflight checks: inspect the system state and emit warnings or exit on errors, until the problems are fixed or --ignore-preflight-errors=<list-of-errors> is passed (example after this list).

  • 2) Generate a self-signed CA to establish identities for every cluster component. The user can supply their own CA cert and key via --cert-dir (default /etc/kubernetes/pki). The API server certificate will carry an additional SAN (Subject Alternative Name) entry for each --apiserver-cert-extra-sans value, lowercased where necessary.

  • 3) Write kubeconfig files into /etc/kubernetes/ so that the kubelet, controller-manager, and scheduler can reach the apiserver, each with its own identity; also generate a standalone kubeconfig named admin.conf for administrative use.

  • 4) Generate static Pod manifests for the apiserver, controller-manager, and scheduler (default /etc/kubernetes/manifests); unless an external etcd service is provided, an additional static Pod manifest is generated for etcd. The kubelet watches /etc/kubernetes/manifests and creates the pods at startup.

  • 5) Label and taint the control-plane node so that no workloads are scheduled onto it.

  • 6) Generate the token that additional nodes use to join.

  • 7) Prepare everything nodes need to join via Bootstrap Tokens and TLS Bootstrap: create a ConfigMap with the information required to join, configure the related RBAC rules, allow bootstrap tokens to access the CSR signing API, and configure automatic approval of new CSRs.

  • 8) Install the CoreDNS and kube-proxy addons via the apiserver. The CoreDNS pods are only scheduled once a CNI plugin has been deployed.
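For step 1 above, a hedged example of demoting specific preflight failures to warnings (Swap and NumCPU are two commonly hit check names):

kubeadm init --ignore-preflight-errors=Swap,NumCPU <other flags>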

4.1.3 Creating the control plane in phases

  • kubeadm can create a control-plane node in stages with the kubeadm init phase command
[root@master1 ~]# kubeadm init phase --help
Use this command to invoke single phase of the init workflow

Usage:
  kubeadm init phase [command]

Available Commands:
  addon              Install required addons for passing Conformance tests
  bootstrap-token    Generates bootstrap tokens used to join a node to a cluster
  certs              Certificate generation
  control-plane      Generate all static Pod manifest files necessary to establish the control plane
  etcd               Generate static Pod manifest file for local etcd
  kubeconfig         Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
  kubelet-finalize   Updates settings relevant to the kubelet after TLS bootstrap
  kubelet-start      Write kubelet settings and (re)start the kubelet
  mark-control-plane Mark a node as a control-plane
  preflight          Run pre-flight checks
  upload-certs       Upload certificates to kubeadm-certs
  upload-config      Upload the kubeadm and kubelet configuration to a ConfigMap
...
  • kubeadm init also exposes a --skip-phases flag for skipping certain phases. It accepts a list of phase names, which can be taken from the ordered list above: sudo kubeadm init --skip-phases=control-plane,etcd --config=configfile.yaml
  • See the reference docs for kube-proxy in kubeadm
  • See the reference docs for IPVS in kubeadm
  • See the docs on passing custom command-line flags to control-plane components
  • Using custom images:
    • Use a different imageRepository instead of k8s.gcr.io
    • Set useHyperKubeImage to true to use the HyperKube image
    • Provide a specific imageRepository and imageTag for etcd or the DNS addon
  • Set the node name with --hostname-override
  • ...

4.2 join

4.2.1 Phases

When a node joins a kubeadm-initialized cluster, mutual trust has to be established. The process breaks down into discovery (having the joining node trust the Kubernetes control plane) and TLS bootstrap (having the Kubernetes control plane trust the joining node).

There are two main discovery schemes (choose one):

  • 1. Use a shared token together with the API server's IP address. Pass --discovery-token-ca-cert-hash to validate the public key of the root certificate authority (CA) presented by the Kubernetes control-plane node. The value takes the form "<type>:<value>", where the supported hash type is "sha256". The hash is computed over the bytes of the Subject Public Key Info (SPKI) object (as in RFC 7469). It can be taken from the output of "kubeadm init" or computed with standard tools. --discovery-token-ca-cert-hash may be repeated to allow multiple public keys.
  • 2. Provide a file - a subset of a standard kubeconfig file - either local or downloaded via an HTTPS URL. The forms are:
    • kubeadm join --discovery-token abcdef.1234567890abcdef 1.2.3.4:6443
    • kubeadm join --discovery-file path/to/file.conf
    • kubeadm join --discovery-file https://url/file.conf  # must be https here

TLS bootstrap phase

  • Also driven by the shared token, which temporarily authenticates the node to the Kubernetes control-plane node so it can submit a certificate signing request (CSR) for a locally created key pair. By default, kubeadm sets up the control-plane node to approve these signing requests automatically. The token is passed with --tls-bootstrap-token abcdef.1234567890abcdef. After the cluster is installed, auto-approval can be switched off in favor of manually approving CSRs.

Usually both parts use the same token, in which case the --token flag can be used instead of specifying each token separately.

4.2.2 Command usage

"join [api-server-endpoint]" 命令执行下列阶段:preflight              Run join pre-flight checks
control-plane-prepare  Prepare the machine for serving a control plane/download-certs        [EXPERIMENTAL] Download certificates shared among control-plane nodes from the kubeadm-certs Secret/certs                 Generate the certificates for the new control plane components/kubeconfig            Generate the kubeconfig for the new control plane components/control-plane         Generate the manifests for the new control plane components
kubelet-start          Write kubelet settings, certificates and (re)start the kubelet
control-plane-join     Join a machine as a control plane instance/etcd                  Add a new local etcd member/update-status         Register the new control-plane node into the ClusterStatus maintained in the kubeadm-config ConfigMap/mark-control-plane    Mark a node as a control-plane[flag]  #这里只列出常用的参数
--apiserver-advertise-address string  #如果该节点托管一个新的控制平面实例,则 API 服务器将公布其正在侦听的 IP 地址。如果未设置,则使用默认网络接口
--apiserver-bind-port int32     默认值: 6443 #如果节点应该托管新的控制平面实例,则为 API 服务器要绑定的端口。
--control-plane 在此节点上创建一个新的控制平面实例
--discovery-file string 对于基于文件的发现,给出用于加载集群信息的文件或者 URL。
--discovery-token string    对于基于令牌的发现,该令牌用于验证从 API 服务器获取的集群信息。
--discovery-token-ca-cert-hash stringSlice  对基于令牌的发现,验证根 CA 公钥是否与此哈希匹配 (格式: "<type>:<value>")。
--tls-bootstrap-token string    指定在加入节点时用于临时通过 Kubernetes 控制平面进行身份验证的令牌。
--token string  如果未提供这些值,则将它们用于 discovery-token 令牌和 tls-bootstrap 令牌。

4.2.3 Join workflow

  • kubeadm downloads the necessary cluster information from the apiserver. By default it uses the bootstrap token and the CA key hash to verify the authenticity of that data. The root CA can also be discovered directly via a file or URL.

  • Once the cluster information is known, the kubelet can start the TLS bootstrap process.

    The TLS bootstrap uses the shared token to authenticate temporarily with the Kubernetes API server and submit a certificate signing request (CSR); by default the control plane signs this CSR automatically.

  • Finally, kubeadm configures the local kubelet to connect to the API server using the definitive identity assigned to the node.

Joining as an additional control-plane (master) node requires extra steps:

  • Download the certificates shared among control-plane nodes from the cluster (if explicitly requested by the user).
  • Generate the control-plane component manifests, certificates, and kubeconfig.
  • Add a new local etcd member.
  • Add this node to the kubeadm cluster's ClusterStatus.

4.2.4 Discovering the cluster CA to trust

There are several ways to discover the cluster CA:

Option 1: token-based discovery with CA pinning

The default mode in Kubernetes 1.8 and above. In this mode kubeadm downloads the cluster configuration (including the root CA), validates it with the token, and additionally verifies that the root CA public key matches the provided hash and that the API server certificate is valid under that root CA.

The CA key hash has the format sha256:<hex_encoded_hash>. By default the hash is printed in the kubeadm join command at the end of kubeadm init, and in the output of kubeadm token create --print-join-command. It uses a standard format (see RFC 7469) and can also be computed by third-party tools or provisioning systems:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

For worker nodes:
kubeadm join --discovery-token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef 1.2.3.4:6443
For control-plane nodes:
kubeadm join --discovery-token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef --control-plane 1.2.3.4:6443

Note: the CA hash is normally not known until the control-plane node has been provisioned, which makes it harder to build automated provisioning tools on top of kubeadm. By pre-generating the CA you can work around this limitation (a hedged sketch follows).
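A hedged sketch of that pre-generation: create the CA on the future master first, then compute the hash with the openssl pipeline above before kubeadm init ever runs:

kubeadm init phase certs ca
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'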

Option 2: token-based discovery without CA pinning

The default in Kubernetes 1.7 and earlier; use it with some important caveats in mind. This mode relies only on the symmetric token to sign (HMAC-SHA256) the discovery information that establishes the root of trust for the control plane. The --discovery-token-unsafe-skip-ca-verification flag is still available in Kubernetes 1.8 and above, but consider one of the other modes if possible.

kubeadm join --token abcdef.1234567890abcdef --discovery-token-unsafe-skip-ca-verification 1.2.3.4:6443

Option 3: HTTPS or file-based discovery

This scheme provides an out-of-band way to establish a root of trust between the control-plane node and the bootstrapping node. Consider this mode if you are building automated provisioning on top of kubeadm. The discovery file format is a regular Kubernetes kubeconfig file.

If the discovery file contains no credentials, the TLS discovery token is used instead.

Example kubeadm join commands:

  • kubeadm join --discovery-file path/to/file.conf (local file)
  • kubeadm join --discovery-file https://url/file.conf (remote HTTPS URL)

4.2.5 Hardening

# Turn off automatic CSR approval
kubectl delete clusterrolebinding kubeadm:node-autoapprove-bootstrap

# Turn off public access to the cluster-info ConfigMap
kubectl -n kube-public get cm cluster-info -o yaml | grep "kubeconfig:" -A11 | grep "apiVersion" -A10 | sed "s/    //" | tee cluster-info.yaml
Use the resulting cluster-info.yaml file as the kubeadm join --discovery-file argument, then:
kubectl -n kube-public delete rolebinding kubeadm:bootstrap-signer-clusterinfo
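With auto-approval disabled, each joining node's CSR stays Pending until an administrator approves it by hand, as a sketch:

kubectl get csr                         # look for Pending entries
kubectl certificate approve <csr-name>  # <csr-name> taken from the first column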

4.3 config

[root@master1 pki]# kubeadm  config --help
There is a ConfigMap in the kube-system namespace called "kubeadm-config" that kubeadm uses to store internal configuration about the
cluster. kubeadm CLI v1.8.0+ automatically creates this ConfigMap with the config used with 'kubeadm init', but if you
initialized your cluster using kubeadm v1.7.x or lower, you must use the 'config upload' command to create this
ConfigMap. This is required so that 'kubeadm upgrade' can configure your upgraded cluster correctly.

Usage:
  kubeadm config [flags]
  kubeadm config [command]

Available Commands:
  images      Interact with container images used by kubeadm
  migrate     Read an older version of the kubeadm configuration API types from a file, and output the similar config object for the newer version
  print       Print configuration
  view        View the kubeadm configuration stored inside the cluster
# 1) images: inspect and pull the images kubeadm uses
list #Print a list of images kubeadm will use. The configuration file is used in case any images or image repositories are customized
pull #Pull images used by kubeadm

[root@master1 yaml]# kubeadm config images  list
I0503 22:55:00.007902  111954 version.go:252] remote version is much newer: v1.21.0; falling back to: stable-1.18
W0503 22:55:01.118197  111954 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.18
k8s.gcr.io/kube-controller-manager:v1.18.18
k8s.gcr.io/kube-scheduler:v1.18.18
k8s.gcr.io/kube-proxy:v1.18.18
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

# 2) view: show the kubeadm configuration stored in the cluster
[root@master1 yaml]# kubeadm  config view
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}

You can edit the config view output and initialize a cluster from that file: kubeadm init --config kubeadm.yml (see the reference for the fields the config file supports)
# 3) print init-defaults
Note: the init-defaults output can be used as the kubeadm.yml config file.
[root@master1 yaml]# kubeadm config print  'init-defaults'
W0503 23:00:53.662149  113332 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

[root@master1 yaml]# kubeadm config print  init-defaults  --component-configs KubeProxyConfiguration
[root@master1 yaml]# kubeadm config print  init-defaults  --component-configs KubeletConfiguration

# 4) print join-defaults
[root@master1 yaml]# kubeadm config print  'join-defaults'
apiVersion: kubeadm.k8s.io/v1beta2
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
  bootstrapToken:
    apiServerEndpoint: kube-apiserver:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
  timeout: 5m0s
  tlsBootstrapToken: abcdef.0123456789abcdef
kind: JoinConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1
  taints: null
[root@master1 yaml]# kubeadm config print  join-defaults  --component-configs KubeProxyConfiguration
[root@master1 yaml]# kubeadm config print  join-defaults  --component-configs KubeletConfiguration

4.4 certs

[root@master1 pki]# kubeadm alpha  certs --help
Commands related to handling kubernetes certificates

Usage:
  kubeadm alpha certs [command]

Aliases:
  certs, certificates

Available Commands:
  certificate-key  Generate certificate keys                               # generate a certificate key
  check-expiration Check certificates expiration for a Kubernetes cluster  # check for expiry
  renew            Renew certificates for a Kubernetes cluster             # renew certificates

[root@master1 pki]# kubeadm  alpha certs  check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 May 03, 2022 08:49 UTC   362d                                    no
apiserver                  May 03, 2022 08:49 UTC   362d            ca                      no
apiserver-etcd-client      May 03, 2022 08:49 UTC   362d            etcd-ca                 no
apiserver-kubelet-client   May 03, 2022 08:49 UTC   362d            ca                      no
controller-manager.conf    May 03, 2022 08:49 UTC   362d                                    no
etcd-healthcheck-client    May 03, 2022 08:49 UTC   362d            etcd-ca                 no
etcd-peer                  May 03, 2022 08:49 UTC   362d            etcd-ca                 no
etcd-server                May 03, 2022 08:49 UTC   362d            etcd-ca                 no
front-proxy-client         May 03, 2022 08:49 UTC   362d            front-proxy-ca          no
scheduler.conf             May 03, 2022 08:49 UTC   362d                                    no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      May 01, 2031 08:49 UTC   9y              no
etcd-ca                 May 01, 2031 08:49 UTC   9y              no
front-proxy-ca          May 01, 2031 08:49 UTC   9y              no
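The leaf certificates above expire after one year. A hedged renewal sketch using the same subcommand family (the control-plane static pods must be restarted afterwards to pick up the new certificates):

kubeadm alpha certs renew all
kubeadm alpha certs check-expiration   # confirm the new dates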

4.5 kubeconfig

Used to manage kubeconfig files.

[root@master1 pki]# kubeadm alpha kubeconfig  user --client-name=kube-apiserver-etcd-clientI0505 23:28:58.999787   49317 version.go:252] remote version is much newer: v1.21.0; falling back to: stable-1.18W0505 23:29:01.024319   49317 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]apiVersion: v1clusters:- cluster:    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXdNekE0TkRrd00xb1hEVE14TURVd01UQTRORGt3TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnRpCkpkNW41YlhIMURISm9hTWpxQ3Z6bytjNHZHQmVsYkxjaFhrL3p2eUpaNk5vQU1PdG9PRWQwMC9YWmlYSktJUEEKWm5xbUJzZktvV2FIdEpUWDViSzlyZVdEaXJPM3Q2QUtMRG5tMzFiRHZuZjF6c0Z3dnlCU3RhUkFOUTRzZE5tbApCQWg1SFlpdmJzbjc2S0piVk1jdlIvSHJjcmw5U0IyWEpCMkFRUHRuOHloay94MUhONGxma01KVU5CTnk4bjV0ClMzUmRjZ1lTVEhJQ0FGLzFxdkl6c01ONjNEdHBIai9LNjJBNTAwN3RXUEF4MVpPdGFwL1pxM1UyeTk3S1gxYnEKdk1SanFyM0lXZW9ZMWdsWnJvdXRjcVpoelh2VmVKRE9DY1pVNFEyNmlFMzJyYWRFT3Y0bTVvZEUzSDdEYjBRWgpYMVNUNzY4SGRYTWtZWEJxOWJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNZ0cyeVFzb0dyYkpDRE91ZXVWdVMyV2hlRHUKVnBkQlFIdFFxSkFGSlNSQ1M4aW9yQzE5K2gxeWgxOEhJNWd4eTljUGFBTXkwSmczRVlwUUZqbTkxQWpJWFBOQgp3Y2h0R1NLa1BpaTdjempoV2VpdXdJbDE5eElrM2lsY1dtNTBIU2FaVWpzMXVCUmpMajNsOFBFOEJITlNhYXBDCmE5NkFjclpnVmtVOFFpNllPaUxDOVpoMExiQkNWaDZIV2pRMGJUVnV2aHVlY0x5WlJpTlovUEVmdmhIa0Q3ZFIKK0RyVnBoNXVHMXdmL3ZqQXBhQmMvTWZ2TGswMlJ4dTNZR0VNbGd2TndPMStoOXJ3anRJbkNWdVIyZ3hIYjJtZgpvb0RMRHZTMDk0dXBMNkR6bXZqdkZSU1pjWElLaklQZ2xaN0VGdlprWWYrcjUybW5aeVE0NVVPcUlwcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=    server: https://192.168.74.133:6443  name: kubernetescontexts:- context:    cluster: kubernetes    user: kube-apiserver-etcd-client  name: kube-apiserver-etcd-client@kubernetescurrent-context: kube-apiserver-etcd-client@kuberneteskind: Configpreferences: {}users:- name: kube-apiserver-etcd-client  user:    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM0ekNDQWN1Z0F3SUJBZ0lJTlZmQkNWZFNrYTR3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBMU1ETXdPRFE1TUROYUZ3MHlNakExTURVeE5USTVNREZhTUNVeApJekFoQmdOVkJBTVRHbXQxWW1VdFlYQnBjMlZ5ZG1WeUxXVjBZMlF0WTJ4cFpXNTBNSUlCSWpBTkJna3Foa2lHCjl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUFyTGc2cG9EK28zMnZpLytMVkhoV0FSbHZBZHFrWFRFYjAvR0QKY0xDeFlraDdhYUVVVmxKTjBHUFFkTkJhb3R5TGtvak9jL3Y4WEdxcVh2bGRpOUN0eVFMR2EydGNuNWlKZkQweAp1bjFNQXZIT1cxaUVDM2RLR1BXeXIraUZBVFExakk2ZXA2aDNidEVsSHkwSVVabEhQUDJ3WW5JQnV1NCtsLy9xCmsrc1lMQjZ0N0ZoalVQbmhnaHk4T2dMa0o3UmRjMWNBSDN3ejR2L0xoYy9yK0ppc0kvZnlRTXpiYVdqdE5GRTEKYk1MdnlYM0RXbmhlVnlod3EyUTZEbHhMaGVFWUJRWWxhbjNjdVQ3aG5YSm9NTGJKUnRhSVFWbktVOVJzYVlSUgpVVnRvdUZ2QkN5d21qOGlvOUdJakV6M2dMa0JPTk1BMERvVTlFZjhBcFpuZFY4VmN5d0lEQVFBQm95Y3dKVEFPCkJnTlZIUThCQWY4RUJBTUNCYUF3RXdZRFZSMGxCQXd3Q2dZSUt3WUJCUVVIQXdJd0RRWUpLb1pJaHZjTkFRRUwKQlFBRGdnRUJBTGw5NURPVlFmRTkxejgwQUFCOE5mR2x3d200ZnNHM0hjRXh2cXBudjdhb2I4WTQ3QWRvMkNBaApNY1pUdUxWQ3B2SHRUYk15YXhoRE41VzNmdnU2NXRuUVJKM21RNzhBSllsWjZzVmZWODNTNzgzOGRnYWtKckluCjRtNm5KME5sRjRyeFpZbEVwRU5yWUxBOUlYM25oZndVY0FpbG9uWG03cmlaOVIvV1FJZHZ1WXVQMFdDMUI1RUoKY1ZiTWN4dUJOZGwwTHpKa1dYWUc3Y3ZaYjR5NmR5U2FPZnBranNKdFFudzlzbm9nNHVBUW1DMENnZFZpcUx5ZwpscExuYVExR3BVeVF5bTlDSTVvRlMzSThrS2RaQmV5d1duQURCYXFjS3BKTnFRWkFRWnRQYzhXSjRIczREYlVMCjM3YnlPSEp6VUZkbWxPMU9ubDRhQWRsVXp3Y0IxemM9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K    
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBckxnNnBvRCtvMzJ2aS8rTFZIaFdBUmx2QWRxa1hURWIwL0dEY0xDeFlraDdhYUVVClZsSk4wR1BRZE5CYW90eUxrb2pPYy92OFhHcXFYdmxkaTlDdHlRTEdhMnRjbjVpSmZEMHh1bjFNQXZIT1cxaUUKQzNkS0dQV3lyK2lGQVRRMWpJNmVwNmgzYnRFbEh5MElVWmxIUFAyd1luSUJ1dTQrbC8vcWsrc1lMQjZ0N0ZoagpVUG5oZ2h5OE9nTGtKN1JkYzFjQUgzd3o0di9MaGMvcitKaXNJL2Z5UU16YmFXanRORkUxYk1MdnlYM0RXbmhlClZ5aHdxMlE2RGx4TGhlRVlCUVlsYW4zY3VUN2huWEpvTUxiSlJ0YUlRVm5LVTlSc2FZUlJVVnRvdUZ2QkN5d20KajhpbzlHSWpFejNnTGtCT05NQTBEb1U5RWY4QXBabmRWOFZjeXdJREFRQUJBb0lCQUJ1R1VISndaQ1FSeDRQNworV3hBc1JRRHhaajZDdTkrLy94S3BMTzB0TkFBMVFvRVRZVmtJRnB4VGFzUCtTR3pHOXNDU2tSWmgrSUNiWnd0CkNTZGEzaGNHaGpCZ0w2YVBYSG1jRnV5dFF3dkZGU21oZFltT1BSUzFNd0N0Z1dTcnVVem8vWWVpWlVZWHRsNjkKZ25IZWgyZkUxZk1hVUFSR0sxdDF3U0JKZXRTczIrdG5wcnhFZ3E5a1BxRkpFUEJ3SG8vRElkQ3gzRWJLZWY2NQpSSzRhMURDcWIzLzNrYzE2dGoweWwrNXFZKzVFQ2xBMjZhbTNuSFJHWVdUQ0QyUEVsNnZoUXVZMHN5bHh3djAwClgwLzFxakNVeU1DWkJVYlo1RkNpM25yOE5HMnBETnFGbEtnZnJXTXVkMkdoTEMxWUp5ekdIU3AyTm91L0U3L2cKRmVzV2ZVRUNnWUVBMVlTWUtJeS9wa3pKZFhCTVhBOUVoYnZaQ3hudk40bkdjNXQxdXZxb2h6WVBVbjhIbDRmNwpxN1Q0d2tpZEtDN29CMXNIUVdCcmg5d3UvRXVyQ0ZRYzRremtHQmZJNkwzSzBHalg3VXVCc3VOODQyelBZbmtvCjFQakJTTXdLQlVpb0ZlMnlJWnZmOU53V0FLczA3VU9VMTNmeVpodHVVcTVyYklrOVE2VXRGeU1DZ1lFQXp4V1oKSjQwa3lPTmliSml5ZFFtYkd4UG9sK0dzYjFka1JNUlVCMjk4K3l4L0JnOE4zOWppVko1bjZkbmVpazc5MklYdQprU0k5VENJdkh4ZlBkUTR0Q1JpdnFWUVcza0xvSk5jVkg3UW8rWnIzd3JOYmZrOWR3MDJ1WGZqWlpNTmx3ZmgwCkR3S0NnSFBIVVBucU4wcW1RUzVva3A0QUM4WmVKOHQ2S3FHR1Vqa0NnWUVBeTM4VStjaXpPNDhCanBFWjViK1QKWWhZWGxQSUJ3Ui9wYVBOb2NHMUhRNTZ0V2NYQitaVGJzdG5ISUh2T2RLYkg4NEs1Vm9ETDIyOXB4SUZsbjRsegpBZWVnbUtuS2pLK2VaYVVXN28xQkxycUxvOEZub2dXeGVkRWZmZjhoS2NvR2tPZTdGemNWYXF4N3QrVjBpeEVYCkFZakxHSy9hSktraHJ3N1p1ZWZxSXBzQ2dZQUpLcWFWNXB5TE8rMStheC96S0ZLeVZ5WkRtdHk4TFAwbVFoNksKR2JoSmtnV3BhZjh1T25hQ1VtUzlLRVMra0pLU0JCTzBYdlNocXgyMDNhUDBSWVZlMHJYcjQrb0RPcWoyQUlOUgozUEszWWRHM3o2S3NLNjAxMlBsdjlYVUNEZGd5UnVJMFMrTWs5bnNMTFpUZGo3TmVUVVNad042MXBybENQN0tQCnNvaTBtUUtCZ0hPRC9ZV2h0NENLNWpXMlA0UWQyeUJmL1M1UUlPakRiMk45dVgvcEk2OFNYcVNYUWQrYXZ3cjkKRjN2MFkxd3gvVXc3U1lqVy8wLytNaE4ycW5ZYWZibDI2UEtyRTM1ZTZNYXhudFMxNTJHTzVaTHFPWVByOHpDVAp6Nk9MMko0a0lONHJ5QUhsdkFqc1htUVAzTG4yZ0JESVFhb3dZWUtTa2phZE5jc1lMUWhhCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
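The command above prints a complete, self-contained kubeconfig to stdout. A hedged usage sketch (what the identity may actually do is still governed by RBAC for that client name):

kubeadm alpha kubeconfig user --client-name=kube-apiserver-etcd-client > etcd-client.conf
kubectl --kubeconfig etcd-client.conf version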

4.6 reset

Used to reset a node, e.g. after a failed kubeadm init.

--ignore-preflight-errors stringSlice  A list of checks whose errors will be shown as warnings, e.g. IsPrivilegedUser,Swap. The value 'all' ignores errors from all checks.

[root@master1 pki]# kubeadm  reset phase --help
Use this command to invoke single phase of the reset workflow

Usage:
  kubeadm reset phase [command]

Available Commands:
  cleanup-node          Run cleanup node.                                # clean up the node
  preflight             Run reset pre-flight checks                      # pre-checks
  remove-etcd-member    Remove a local etcd member.                      # remove the local etcd member
  update-cluster-status Remove this node from the ClusterStatus object.  # update the cluster status
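A hedged end-to-end reset sketch; the iptables/IPVS cleanup mirrors the reminder kubeadm reset itself prints, since reset does not touch those rules:

kubeadm reset -f
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear        # only if kube-proxy ran in IPVS mode
rm -f $HOME/.kube/config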


5. Appendix

[root@master1 ~]# tree /etc/kubernetes/
/etc/kubernetes/
├── admin.conf
├── controller-manager.conf
├── kubelet.conf
├── manifests
│   ├── etcd.yaml
│   ├── kube-apiserver.yaml
│   ├── kube-controller-manager.yaml
│   └── kube-scheduler.yaml
├── pki
│   ├── apiserver.crt
│   ├── apiserver-etcd-client.crt
│   ├── apiserver-etcd-client.key
│   ├── apiserver.key
│   ├── apiserver-kubelet-client.crt
│   ├── apiserver-kubelet-client.key
│   ├── ca.crt
│   ├── ca.key
│   ├── etcd
│   │   ├── ca.crt
│   │   ├── ca.key
│   │   ├── healthcheck-client.crt
│   │   ├── healthcheck-client.key
│   │   ├── peer.crt
│   │   ├── peer.key
│   │   ├── server.crt
│   │   └── server.key
│   ├── front-proxy-ca.crt
│   ├── front-proxy-ca.key
│   ├── front-proxy-client.crt
│   ├── front-proxy-client.key
│   ├── sa.key
│   └── sa.pub
└── scheduler.conf

3 directories, 30 files
[root@master2 ~]# tree /etc/kubernetes/
/etc/kubernetes/
├── kubelet.conf
├── manifests
└── pki
    └── ca.crt

2 directories, 2 files
# same as master1

5.1 conf files

To inspect a crt file: openssl x509 -in apiserver.crt -text

5.1.1 admin.conf

[root@master1 kubernetes]# cat admin.conf
apiVersion: v1
clusters:
- cluster:certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXdNekE0TkRrd00xb1hEVE14TURVd01UQTRORGt3TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnRpCkpkNW41YlhIMURISm9hTWpxQ3Z6bytjNHZHQmVsYkxjaFhrL3p2eUpaNk5vQU1PdG9PRWQwMC9YWmlYSktJUEEKWm5xbUJzZktvV2FIdEpUWDViSzlyZVdEaXJPM3Q2QUtMRG5tMzFiRHZuZjF6c0Z3dnlCU3RhUkFOUTRzZE5tbApCQWg1SFlpdmJzbjc2S0piVk1jdlIvSHJjcmw5U0IyWEpCMkFRUHRuOHloay94MUhONGxma01KVU5CTnk4bjV0ClMzUmRjZ1lTVEhJQ0FGLzFxdkl6c01ONjNEdHBIai9LNjJBNTAwN3RXUEF4MVpPdGFwL1pxM1UyeTk3S1gxYnEKdk1SanFyM0lXZW9ZMWdsWnJvdXRjcVpoelh2VmVKRE9DY1pVNFEyNmlFMzJyYWRFT3Y0bTVvZEUzSDdEYjBRWgpYMVNUNzY4SGRYTWtZWEJxOWJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNZ0cyeVFzb0dyYkpDRE91ZXVWdVMyV2hlRHUKVnBkQlFIdFFxSkFGSlNSQ1M4aW9yQzE5K2gxeWgxOEhJNWd4eTljUGFBTXkwSmczRVlwUUZqbTkxQWpJWFBOQgp3Y2h0R1NLa1BpaTdjempoV2VpdXdJbDE5eElrM2lsY1dtNTBIU2FaVWpzMXVCUmpMajNsOFBFOEJITlNhYXBDCmE5NkFjclpnVmtVOFFpNllPaUxDOVpoMExiQkNWaDZIV2pRMGJUVnV2aHVlY0x5WlJpTlovUEVmdmhIa0Q3ZFIKK0RyVnBoNXVHMXdmL3ZqQXBhQmMvTWZ2TGswMlJ4dTNZR0VNbGd2TndPMStoOXJ3anRJbkNWdVIyZ3hIYjJtZgpvb0RMRHZTMDk0dXBMNkR6bXZqdkZSU1pjWElLaklQZ2xaN0VGdlprWWYrcjUybW5aeVE0NVVPcUlwcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=server: https://192.168.74.128:6443name: kubernetescontexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-adminuser:client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJVFVrUFJHZUxsYTB3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBMU1ETXdPRFE1TUROYUZ3MHlNakExTURNd09EUTVNRFZhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTJnbUFsZmVCZ3FsWG5OeEIKSVZ2V040WnBUTHNOMTRsN0I1c0hnTUQ4UUQzNEsyOUpmOFVaV2ZhUlJPR3IwK0hadXhhdldCM1F4TGQ3SDQ1RwpkTkFMMFNxWmlqRUlPL1JCc1BhS0tQcEYvdGIzaVlrWGk5Y0tzL3UxOVBLMVFHb29kOWwrUzR1Vzh1OG9tTHA2CldJQ3VjUWYwa2sxOTVJNnVHVXBRYmZpY1BRYVdLWC9yK1lYbDFhbUl2YTlGOEVJZlEzVjZQU0Jmb3BBajNpVjkKVU1Ic0dIWU1mLzlKcThscGNxNWxSZGZISkNJaVRPQ21SSjZhekIrRGpVLytES0RiRG5FaEJ2b2ZYQldYczZPWQpJbVdodEhFbENVZ3BnQWZBNjRyd3ZEaVlteERjbitLUWd5dk1GamwzNDMzMy9yWTZDNWZoUmVjSmtranJtNHI1CjFRVmUyd0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFOREJBVy9qTVNRYlE4TGd5cktTRDZZYmtZY3ZYWHRaNUVodAovUXBsZnNTS1Frd1IvNzhnQjRjUTFQMHFXMmZrWnhhZjZTODJ3cGxGcEQ0SktId3VkNG1SZy9mTU5oYVdiY0tDCnRJejMydnpsY2dlMm9ydEdMSmU3MVVPcVIxY0l4c25qZHhocFk3SkRWeFNDcE9vbDkzc1ZUT3hNaTRwYjErNXEKL1MxWERLdk1FZmxQTEtQcERpTGVLZFBVVHo4Y21qdlhFUndvS0NLOHYvUVY3YStoN0gvKzlYZHc2bEkyOXpRawp6V01kamxua2RJVzQ2L1Q1RDVUMnNQVkZKak9nZEhLR2loMG9pU1ZBZWZiVzVwdTFlTktrb2h3SWJ1Q0RRM0oyCmk5ZUFsbmFvYWU0QkU0bTBzVGpTR0FLTTZqTUxTZXZiaTY1dnZCUGw3OU9CSXQ2WXZjVT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMmdtQWxmZUJncWxYbk54QklWdldONFpwVExzTjE0bDdCNXNIZ01EOFFEMzRLMjlKCmY4VVpXZmFSUk9HcjArSFp1eGF2V0IzUXhMZDdINDVHZE5BTDBTcVppakVJTy9SQnNQYUtLUHBGL3RiM2lZa1gKaTljS3MvdTE5UEsxUUdvb2Q5bCtTNHVXOHU4b21McDZXSUN1Y1FmMGtrMTk1STZ1R1VwUWJmaWNQUWFXS1gvcgorWVhsMWFtSXZhOUY4RUlmUTNWNlBTQmZvcEFqM2lWOVVNSHNHSFlNZi85SnE4bHBjcTVsUmRmSEpDSWlUT0NtClJKNmF6QitEalUvK0RLRGJEbkVoQnZvZlhCV1hzNk9ZSW1XaHRIRWxDVWdwZ0FmQTY0cnd2RGlZbXhEY24rS1EKZ3l2TUZqbDM0MzMzL3JZNkM1ZmhSZWNKa2tqcm00cjUxUVZlMndJREFRQUJBb0lCQUhQNjdBQloyUFZVK1ByQwptbzZSR0dFT3lZSjhXYitXTFBCOXdiNzJhUGdQUHF4MEZTZTNBMlk4WjBlNXR6b05BRkdwbm5vRDJpSlo2MDk4CjBmT2ZHem9YSy9jN1g4THNpZWtGSzdiaWNrczl0QXpmOUx0NUZ3Tm9XSURFZmkrV2lKSkFDaE5MWEc4N1VsL3oKaWRMOEdFNmR5Ylh0TEpOZ1pqR2p1eWJVUU4rZ1h1VTFZdUo1NEtEWnFFblp4Uk1PT0pqMm1YU2dZWW9kNTFqMQpyZkZ0Z0xidGVLckhBSFFubzhheDdOQjVudzh4U3lBTDdVbGxnNWl5MDRwNHpzcFBRZnpOckhwdytjMG5FR3lICmdIa3pkdC94RUJKNkg0YWk1bnRMdVJGSU54ZE5oeEFEM2d5YU1SRFBzLzF2N3E5SFNTem91ejR1Ri94WURaYXgKenIrWjlNa0NnWUVBNXhOUXl6SUsvSFAwOUYxdHpDMFgzUDg0eEc5YWNXTUxMYkxzdGdJYWpwSWxzQ1JGMzV4cgpYeUtSeFpOb0JBS1M4RnhlZ1ZkbmJKbUNvL0FwZHVtb2ZJZGovYXEwQ1VHK3FVV3dFcUVlYXRMSWhUdUE0aFFkClg4dEJLSStndFU1alpqdmMzSjlFSGczWnpKSFUwc0UyKzAyTDVjb3BZZWRZMGczaTJsWmIwOTBDZ1lFQThZNG8KeUZtMFkxZjdCamUvdGNnYTVEMWEyaG5POWozUjNic1lMb0xrVE52MWpmMm9PTG0vMTBKQnhVVkFQd1lYbm8zUAo0U3FPallsQlNxUjRKOTZCd1hOaGRmenoxS0MwSXpXNUZ2T1lHRFpXSXBuMFMxZy9KTnRqOVdyS1RpWjNEenU0CjBQbGx3dzZXaE5hWmlpeWs0SDYvT0Z4NFR6MzE5NzBWelRHWVRoY0NnWUFOMGtueTNYdHF2a1RZbVA0SVNHbzAKL2M4WGNOR29GcFNFbHo4eFk4N1MyRXNJemlLZnpXdGV0V0tpdnI1cC92MXJBeHRrQVNaZWlKQVgzaldjdHowcwp0YXgxYjlCMC9VbTZOa0RoM0dGRlluWThBZU1qb3JCZkduazdROXdJL0RkVjFoN1AwM2J2bFVTQngvZEM0K3UxCi9GMXgwVFhJZFY0S3NtbnZSVnNZd1FLQmdRRFpyTlBQaUJib2x6WWM2a3dXVWhiNXF0aWVSamVjNnlTZC9hWFMKOUIwcnJlUGdhcjhYTHp4VGpOK2NGOFhIaFlQdlc3Z0RIc2lMZnk2WlJ4RUlUSmo5YlM1Y2x2QmJvZDN6Qk15ZwpoQytCVWlYWTFJZXpCZmtSQzZ0T1UwZXZtVFlkUWlKUUh3NjI4Z1J0L0wwc0tRTURVdlNhbzZtL0x3VGlsVUI2ClFzRVBUUUtCZ0ZpWUpLZklnZ0NqamltRWZZb1ZxaXp6MnVUMmtPWHhCNlhycUJDbGlXclNCUElFT2lmcjIraU4KTmx6
MTBIcTkzRWVUd0VlbUhpQy9DVmR5amh3N2JjY2gvQWZWLzZJSzhoWHRlUmhRcDZvdnpFSUtVa0NYQWx0Mwo2RzA3RUw2Qk9JVFgwbFBIWlYzUDBDRGVvRnFXSUd6RHpnUHA2ak40SVhPdGQwc0RPbFZkCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

5.1.2 controller-manager.conf

[root@master1 kubernetes]# cat controller-manager.conf
apiVersion: v1
clusters:
- cluster:certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXdNekE0TkRrd00xb1hEVE14TURVd01UQTRORGt3TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnRpCkpkNW41YlhIMURISm9hTWpxQ3Z6bytjNHZHQmVsYkxjaFhrL3p2eUpaNk5vQU1PdG9PRWQwMC9YWmlYSktJUEEKWm5xbUJzZktvV2FIdEpUWDViSzlyZVdEaXJPM3Q2QUtMRG5tMzFiRHZuZjF6c0Z3dnlCU3RhUkFOUTRzZE5tbApCQWg1SFlpdmJzbjc2S0piVk1jdlIvSHJjcmw5U0IyWEpCMkFRUHRuOHloay94MUhONGxma01KVU5CTnk4bjV0ClMzUmRjZ1lTVEhJQ0FGLzFxdkl6c01ONjNEdHBIai9LNjJBNTAwN3RXUEF4MVpPdGFwL1pxM1UyeTk3S1gxYnEKdk1SanFyM0lXZW9ZMWdsWnJvdXRjcVpoelh2VmVKRE9DY1pVNFEyNmlFMzJyYWRFT3Y0bTVvZEUzSDdEYjBRWgpYMVNUNzY4SGRYTWtZWEJxOWJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNZ0cyeVFzb0dyYkpDRE91ZXVWdVMyV2hlRHUKVnBkQlFIdFFxSkFGSlNSQ1M4aW9yQzE5K2gxeWgxOEhJNWd4eTljUGFBTXkwSmczRVlwUUZqbTkxQWpJWFBOQgp3Y2h0R1NLa1BpaTdjempoV2VpdXdJbDE5eElrM2lsY1dtNTBIU2FaVWpzMXVCUmpMajNsOFBFOEJITlNhYXBDCmE5NkFjclpnVmtVOFFpNllPaUxDOVpoMExiQkNWaDZIV2pRMGJUVnV2aHVlY0x5WlJpTlovUEVmdmhIa0Q3ZFIKK0RyVnBoNXVHMXdmL3ZqQXBhQmMvTWZ2TGswMlJ4dTNZR0VNbGd2TndPMStoOXJ3anRJbkNWdVIyZ3hIYjJtZgpvb0RMRHZTMDk0dXBMNkR6bXZqdkZSU1pjWElLaklQZ2xaN0VGdlprWWYrcjUybW5aeVE0NVVPcUlwcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=server: https://192.168.74.128:6443name: kubernetescontexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager@kubernetes
current-context: system:kube-controller-manager@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-controller-manageruser:client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lJVWpSRlRjUG9MWUl3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBMU1ETXdPRFE1TUROYUZ3MHlNakExTURNd09EUTVNRFZhTUNreApKekFsQmdOVkJBTVRIbk41YzNSbGJUcHJkV0psTFdOdmJuUnliMnhzWlhJdGJXRnVZV2RsY2pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxGd3NVRWU3am5nNGQwVHAwclAxSklMVzJ2aFNIMzgKY09hdnhIRnMzOForZHhXNVJ5NDB2TTRyeldmakhtRUhyQXpydmxUWjhHL1FKS0xWSlYrd0R5c2MraEJxSEhYegpyUzF0MHlOTWZERktHQU04dDRSQjQxSjVCaUhIcFFOYWl4cVk4WGhPbnZsLzJaV3dhVHNoMGVEcFdpKy9YbUFpCm9oc2xjL2U0Nk1Rb1hSdmlGK0Uva0o0K05YeEFQdXVCeVBNQVd1Y0VUeTlrRXU0d1ladkp6bVJEcnY4SjdIam0KQVNkL2JoYjlzc0s1QlFITlE5QncyaUMvUWp0YTZZVTRlQ2l3RTFEOGNndUszbzMvcVFsMkVkdGNoSEtHMGxmTgoyY3JvQW1yMWFqZkRwUFpvRERxa0lDU0x0STdtL3dNZ1QybUdyaDgzUFcxTFdjSjVyR1FRWU1FQ0F3RUFBYU1uCk1DVXdEZ1lEVlIwUEFRSC9CQVFEQWdXZ01CTUdBMVVkSlFRTU1Bb0dDQ3NHQVFVRkJ3TUNNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFBelBjUUNPOElHN0lhSmdqdlN4RWYzU0NER3RXT0JZd0ZOYUtZck9LUDVKZGlJN01ZZQpiTnBEZWZKR2szd1V6bEVMYm5NOTQ1NkpjNExaYXp1WFFwUkF5d0VJelVTUHdKV2JicERsQlpIWnV3Ym5wd1lJCkdhZExyUVRXVXBBV3RaSXhOb1RjbzJlK09menVack9IS0YzYjR2akljWGdUV2VtVmJqazJ4RUd6dHpDMG5UZ04KZFR1UW1GaDNJRnl1VmdjbjNqZytTeTZQb3ZKV1lianFBeW9MWlFkUGJUQnl0YXZQcWcrNjhRNXdZamVpZXhTZwo1bHdlMmVrcHB3VUkxVU1oZlk5a2ZBY1Bma0NWbjdxKzEveGtpdlF0dTJ0UTN6dDR0dzVnaks5ZkR5NTROejlPCkJJd1ZiYTBRdVgySW1OemVCR2EvYzVNUjV2S09tcFpHSnkrZwotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBc1hDeFFSN3VPZURoM1JPblNzL1VrZ3RiYStGSWZmeHc1cS9FY1d6ZnhuNTNGYmxICkxqUzh6aXZOWitNZVlRZXNET3UrVk5ud2I5QWtvdFVsWDdBUEt4ejZFR29jZGZPdExXM1RJMHg4TVVvWUF6eTMKaEVIalVua0dJY2VsQTFxTEdwanhlRTZlK1gvWmxiQnBPeUhSNE9sYUw3OWVZQ0tpR3lWejk3am94Q2hkRytJWAo0VCtRbmo0MWZFQSs2NEhJOHdCYTV3UlBMMlFTN2pCaG04bk9aRU91L3duc2VPWUJKMzl1RnYyeXdya0ZBYzFECjBIRGFJTDlDTzFycGhUaDRLTEFUVVB4eUM0cmVqZitwQ1hZUjIxeUVjb2JTVjgzWnl1Z0NhdlZxTjhPazltZ00KT3FRZ0pJdTBqdWIvQXlCUGFZYXVIemM5YlV0WndubXNaQkJnd1FJREFRQUJBb0lCQVFDVUV5b280UW9Hek85UAowYzNpOWFzOE1UUWF4QWI5OUVPM2oyak5Dd0Yzb1NQNXdnTnZ3TnpxNU16bWJEZDIyN010bVRIZGwzNDVvU1poCnFLUW14VUx6Ukp3K1JINzV3OTk2TU5ObytyUU5ZZnJHQU01WkZhOEJyVE43enlLYXVOMnExWVYxVTQ4QlFUc3YKMnVjR1RNUGNBSUNkcGdLNUVVM2NmNVhXWGI0SnF3MjlzdWJER0ZtL2kzUlpiTzlTejFBZUFEU0tXN1lGS2thMwpQRzFsWklUenlndkRjV20zK2U5TlRYR3VKTVNHK1FXOGlSUWJkZk9HbGdtRDNUa0FzUGpxYkphZ0Z3NGpoVEErCjJwODhDNVRvVVVkakRhS0d3RTBWcmpsWUUxandBRnRoWTY4dXd0T0l1MHlWYlN6R3RnOWxlNUVVMEsvWGdnekcKOGd5TWZPQUJBb0dCQU5oQmRjNm9SdVZ6ZmpsV3NQK0QybExjbTZFVDZwNU15T1A0WlJ4TU0yejJoUTR2UGNRZQorTXd4UHA3YUJWczQvVlJadk5JSWJIcTdHZCtKdXNFOFdyeHQ5Z0xBcDU3QTdTUXdkeWUvMGtuZk5tSDFWdTNuCkxDM3AybWU3SnhETlF6bTdRMVpRWVM5TTl4RHBpak9oc2RHY3BRN3hMODk3cUV2aFBaSUtQL0pCQW9HQkFOSU4KQStWNVNsRTYwWDNtZHhPMHEzV0ZCaEFKMWtMNnhHNThjNnpYdGgvYnBXdFRlNXF2Q1FteUhyZ1I1a2pvSzlsZgpYeUdRTEtIMHhRRVBzRG94cGpjMmZzemhXN1cxNVJSRkpyMlM1Q0kybndZSVdUUG5vMXBEbG8rWXJ5TXAxWnZDCkxrVlpHOFpZeURPeURIY3VueStMSHM2Z3NseUdpMTliK2dhZEo4NkJBb0dBR0x0THhNbWI2Z3ZPU0xKd1paaG4KdEloRVNDU2w5VnFrc3VXcWNwVUlZSkxFM3IxcVcrNksxNWRlS1A2WUZEbXRSeU5JSStFUXZ1eDg1Z0t6Vi93VwpDR3l1OE51bGo5TlNpNHY3WkpGY2RGUlJ2Tnc1QjlZalNGRHhTR0d2OHd6MmZqaTdWN2l6bEp4QnVTNXNQc0ZrCk82dWxlTkwrZThVUmx6UDRQYVpzYjhFQ2dZQWVtOExybDRjYTJ5Vlg0Vk9NelpFR3FRRy9LSS9PWnRoaytVR3AKK0MwVDYxL3BpZHJES2FwNWZUazR2WEwvUU1YVEFURE5wVUs3dnYxT01Fa1AwZGhVeDE0bTROZ0tYSjBySFFDTwpNMitIQk1xYmlHL25QbVB4YlZQdFRPU0lqVG9SWG5SN3FvWi9tc1JodEJwWTY3UktxMDByOHdMS3ROaHVadXJDCk4vaHJBUUtCZ0VHalk3U0ZrbCtudGgvUkh0aWFublpSallicnFaNXdUbk1EYjMwTEJKOFVwd01zQUJiWnYweFgKdk9wVXROdW
VMV243QWtEVCtRYWtIanBDbmVDQUNRcE55VVhRZmJtYVgwaEFkbDVKVlRrWUZHaDUzcmhOL1UzRAowc3FZbDRjOWJjYWVua2xZMUdqbUNUcmlwTDFsVjFjZlA1bklXTTRVbUlCSm9GV1RYS2VECi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

5.1.3 kubelet.conf

[root@master1 kubernetes]# cat kubelet.conf
apiVersion: v1
clusters:
- cluster:certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXdNekE0TkRrd00xb1hEVE14TURVd01UQTRORGt3TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnRpCkpkNW41YlhIMURISm9hTWpxQ3Z6bytjNHZHQmVsYkxjaFhrL3p2eUpaNk5vQU1PdG9PRWQwMC9YWmlYSktJUEEKWm5xbUJzZktvV2FIdEpUWDViSzlyZVdEaXJPM3Q2QUtMRG5tMzFiRHZuZjF6c0Z3dnlCU3RhUkFOUTRzZE5tbApCQWg1SFlpdmJzbjc2S0piVk1jdlIvSHJjcmw5U0IyWEpCMkFRUHRuOHloay94MUhONGxma01KVU5CTnk4bjV0ClMzUmRjZ1lTVEhJQ0FGLzFxdkl6c01ONjNEdHBIai9LNjJBNTAwN3RXUEF4MVpPdGFwL1pxM1UyeTk3S1gxYnEKdk1SanFyM0lXZW9ZMWdsWnJvdXRjcVpoelh2VmVKRE9DY1pVNFEyNmlFMzJyYWRFT3Y0bTVvZEUzSDdEYjBRWgpYMVNUNzY4SGRYTWtZWEJxOWJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNZ0cyeVFzb0dyYkpDRE91ZXVWdVMyV2hlRHUKVnBkQlFIdFFxSkFGSlNSQ1M4aW9yQzE5K2gxeWgxOEhJNWd4eTljUGFBTXkwSmczRVlwUUZqbTkxQWpJWFBOQgp3Y2h0R1NLa1BpaTdjempoV2VpdXdJbDE5eElrM2lsY1dtNTBIU2FaVWpzMXVCUmpMajNsOFBFOEJITlNhYXBDCmE5NkFjclpnVmtVOFFpNllPaUxDOVpoMExiQkNWaDZIV2pRMGJUVnV2aHVlY0x5WlJpTlovUEVmdmhIa0Q3ZFIKK0RyVnBoNXVHMXdmL3ZqQXBhQmMvTWZ2TGswMlJ4dTNZR0VNbGd2TndPMStoOXJ3anRJbkNWdVIyZ3hIYjJtZgpvb0RMRHZTMDk0dXBMNkR6bXZqdkZSU1pjWElLaklQZ2xaN0VGdlprWWYrcjUybW5aeVE0NVVPcUlwcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=server: https://192.168.74.128:6443name: kubernetescontexts:
- context:
    cluster: kubernetes
    user: system:node:master1
  name: system:node:master1@kubernetes
current-context: system:node:master1@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:master1
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem

# master2's kubelet.conf
[root@master2 kubernetes]# cat kubelet.conf
apiVersion: v1
clusters:
- cluster:    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXdNekE0TkRrd00xb1hEVE14TURVd01UQTRORGt3TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnRpCkpkNW41YlhIMURISm9hTWpxQ3Z6bytjNHZHQmVsYkxjaFhrL3p2eUpaNk5vQU1PdG9PRWQwMC9YWmlYSktJUEEKWm5xbUJzZktvV2FIdEpUWDViSzlyZVdEaXJPM3Q2QUtMRG5tMzFiRHZuZjF6c0Z3dnlCU3RhUkFOUTRzZE5tbApCQWg1SFlpdmJzbjc2S0piVk1jdlIvSHJjcmw5U0IyWEpCMkFRUHRuOHloay94MUhONGxma01KVU5CTnk4bjV0ClMzUmRjZ1lTVEhJQ0FGLzFxdkl6c01ONjNEdHBIai9LNjJBNTAwN3RXUEF4MVpPdGFwL1pxM1UyeTk3S1gxYnEKdk1SanFyM0lXZW9ZMWdsWnJvdXRjcVpoelh2VmVKRE9DY1pVNFEyNmlFMzJyYWRFT3Y0bTVvZEUzSDdEYjBRWgpYMVNUNzY4SGRYTWtZWEJxOWJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNZ0cyeVFzb0dyYkpDRE91ZXVWdVMyV2hlRHUKVnBkQlFIdFFxSkFGSlNSQ1M4aW9yQzE5K2gxeWgxOEhJNWd4eTljUGFBTXkwSmczRVlwUUZqbTkxQWpJWFBOQgp3Y2h0R1NLa1BpaTdjempoV2VpdXdJbDE5eElrM2lsY1dtNTBIU2FaVWpzMXVCUmpMajNsOFBFOEJITlNhYXBDCmE5NkFjclpnVmtVOFFpNllPaUxDOVpoMExiQkNWaDZIV2pRMGJUVnV2aHVlY0x5WlJpTlovUEVmdmhIa0Q3ZFIKK0RyVnBoNXVHMXdmL3ZqQXBhQmMvTWZ2TGswMlJ4dTNZR0VNbGd2TndPMStoOXJ3anRJbkNWdVIyZ3hIYjJtZgpvb0RMRHZTMDk0dXBMNkR6bXZqdkZSU1pjWElLaklQZ2xaN0VGdlprWWYrcjUybW5aeVE0NVVPcUlwcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=server: https://192.168.74.128:6443name: default-clustercontexts:
- context:
    cluster: default-cluster
    namespace: default
    user: default-auth
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: default-auth
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem

# master3's config
[root@master3 ~]# cat /etc/kubernetes/kubelet.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXdNekE0TkRrd00xb1hEVE14TURVd01UQTRORGt3TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnRpCkpkNW41YlhIMURISm9hTWpxQ3Z6bytjNHZHQmVsYkxjaFhrL3p2eUpaNk5vQU1PdG9PRWQwMC9YWmlYSktJUEEKWm5xbUJzZktvV2FIdEpUWDViSzlyZVdEaXJPM3Q2QUtMRG5tMzFiRHZuZjF6c0Z3dnlCU3RhUkFOUTRzZE5tbApCQWg1SFlpdmJzbjc2S0piVk1jdlIvSHJjcmw5U0IyWEpCMkFRUHRuOHloay94MUhONGxma01KVU5CTnk4bjV0ClMzUmRjZ1lTVEhJQ0FGLzFxdkl6c01ONjNEdHBIai9LNjJBNTAwN3RXUEF4MVpPdGFwL1pxM1UyeTk3S1gxYnEKdk1SanFyM0lXZW9ZMWdsWnJvdXRjcVpoelh2VmVKRE9DY1pVNFEyNmlFMzJyYWRFT3Y0bTVvZEUzSDdEYjBRWgpYMVNUNzY4SGRYTWtZWEJxOWJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNZ0cyeVFzb0dyYkpDRE91ZXVWdVMyV2hlRHUKVnBkQlFIdFFxSkFGSlNSQ1M4aW9yQzE5K2gxeWgxOEhJNWd4eTljUGFBTXkwSmczRVlwUUZqbTkxQWpJWFBOQgp3Y2h0R1NLa1BpaTdjempoV2VpdXdJbDE5eElrM2lsY1dtNTBIU2FaVWpzMXVCUmpMajNsOFBFOEJITlNhYXBDCmE5NkFjclpnVmtVOFFpNllPaUxDOVpoMExiQkNWaDZIV2pRMGJUVnV2aHVlY0x5WlJpTlovUEVmdmhIa0Q3ZFIKK0RyVnBoNXVHMXdmL3ZqQXBhQmMvTWZ2TGswMlJ4dTNZR0VNbGd2TndPMStoOXJ3anRJbkNWdVIyZ3hIYjJtZgpvb0RMRHZTMDk0dXBMNkR6bXZqdkZSU1pjWElLaklQZ2xaN0VGdlprWWYrcjUybW5aeVE0NVVPcUlwcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.74.128:6443
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    namespace: default
    user: default-auth
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: default-auth
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
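
补充:kubelet-client-current.pem是kubelet自动轮换的客户端证书(证书和私钥在同一文件中),可以用openssl确认其签发主体与有效期(示意命令,路径以实际环境为准):

# 查看kubelet客户端证书的subject和有效期,确认证书轮换正常
openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -subject -dates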

5.1.4、scheduler.conf

[root@master1 kubernetes]# cat scheduler.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXdNekE0TkRrd00xb1hEVE14TURVd01UQTRORGt3TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnRpCkpkNW41YlhIMURISm9hTWpxQ3Z6bytjNHZHQmVsYkxjaFhrL3p2eUpaNk5vQU1PdG9PRWQwMC9YWmlYSktJUEEKWm5xbUJzZktvV2FIdEpUWDViSzlyZVdEaXJPM3Q2QUtMRG5tMzFiRHZuZjF6c0Z3dnlCU3RhUkFOUTRzZE5tbApCQWg1SFlpdmJzbjc2S0piVk1jdlIvSHJjcmw5U0IyWEpCMkFRUHRuOHloay94MUhONGxma01KVU5CTnk4bjV0ClMzUmRjZ1lTVEhJQ0FGLzFxdkl6c01ONjNEdHBIai9LNjJBNTAwN3RXUEF4MVpPdGFwL1pxM1UyeTk3S1gxYnEKdk1SanFyM0lXZW9ZMWdsWnJvdXRjcVpoelh2VmVKRE9DY1pVNFEyNmlFMzJyYWRFT3Y0bTVvZEUzSDdEYjBRWgpYMVNUNzY4SGRYTWtZWEJxOWJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNZ0cyeVFzb0dyYkpDRE91ZXVWdVMyV2hlRHUKVnBkQlFIdFFxSkFGSlNSQ1M4aW9yQzE5K2gxeWgxOEhJNWd4eTljUGFBTXkwSmczRVlwUUZqbTkxQWpJWFBOQgp3Y2h0R1NLa1BpaTdjempoV2VpdXdJbDE5eElrM2lsY1dtNTBIU2FaVWpzMXVCUmpMajNsOFBFOEJITlNhYXBDCmE5NkFjclpnVmtVOFFpNllPaUxDOVpoMExiQkNWaDZIV2pRMGJUVnV2aHVlY0x5WlJpTlovUEVmdmhIa0Q3ZFIKK0RyVnBoNXVHMXdmL3ZqQXBhQmMvTWZ2TGswMlJ4dTNZR0VNbGd2TndPMStoOXJ3anRJbkNWdVIyZ3hIYjJtZgpvb0RMRHZTMDk0dXBMNkR6bXZqdkZSU1pjWElLaklQZ2xaN0VGdlprWWYrcjUybW5aeVE0NVVPcUlwcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.74.128:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-scheduler
  name: system:kube-scheduler@kubernetes
current-context: system:kube-scheduler@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-scheduler
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMzakNDQWNhZ0F3SUJBZ0lJRnA0V1FFTFp0Ymt3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBMU1ETXdPRFE1TUROYUZ3MHlNakExTURNd09EUTVNRFZhTUNBeApIakFjQmdOVkJBTVRGWE41YzNSbGJUcHJkV0psTFhOamFHVmtkV3hsY2pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUx1ZmJmeG8vQUlORWMrL3ZTNFY5RDdFejdRUU5PazM4Y21tbFBrMitWVTYKQXpxWFZKT2E2VDQ1Tmpib0hIeFZOWEZVNStvRmlmTU11M3N3WEthdllMYSs4RW1lK1gxU05zaGg2RlF4L2FOUApOVlpLRE9XMzJFZnhMSkpKdml2OEZuenE5MkdjTTNOWFd5MjlCdkp0UHBIRmx3SjFFSzc0QXh5NmQ5dm9GN2VsCml0WUNNUk92L3pWV0szNjhlb0xSaUNOd1A0NWtnbW5MeHBvU1VyUmgrWHhHeEdjcTJCdVg0ZTZSTzd5REVtdUsKNjhpUFprRjRlRE5aUWpieEhnUzRKNTE2aGFqR1RKWExNMUovbVEvaFo0TEU2L2JXOWlKZCtkVEEzeGgyOG9SagpNREZISUwzUk9wcHJnRFZodGxGY2VGUmhpamJlRmpDcXNXWEthOXNGZ01NQ0F3RUFBYU1uTUNVd0RnWURWUjBQCkFRSC9CQVFEQWdXZ01CTUdBMVVkSlFRTU1Bb0dDQ3NHQVFVRkJ3TUNNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUIKQVFBdVBWYmhIblU2MmpkV1l6TmdQM1JjZEtBblNhZWszMjUwQWZqM3dBOWJhL0drdzlZS3Nsb2M5eWhrekYyegptK0lsVWlMczdXM0MxTlFzSDNmUytCS3llWS9NMGtQVWRYVWw2YVVwOGZ2cW1iRDdRZi80Ty94eDhWK2oweG9MCmhZaWpmNGFCc2dYTFN4YlBMZ0FDWDJTYXNHaXgxYkZRSlBtTFUrem5PVWpQUnJzQWdlMlJtY2ZOS0VwUEMwaEoKR1F2ZkdaTDY1TkgvamNDSHpHM3prQlBxeCtQTUZOc2RuK3hnYndUU0haTlFYWk00OE0rWnR0eG5uYm1sL1Rxcwp4Slc2OWJMdU80cVVaTGtiemZVN29oaFhFejBhcHRid2R3QUpXUVdtQy9heEpvbmVHQ2lEb1A4c3hoSnpoUmtWCkVoQlQyYWxBOVdpdFFqNDFyMitMdlhwOQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdTU5dC9HajhBZzBSejcrOUxoWDBQc1RQdEJBMDZUZnh5YWFVK1RiNVZUb0RPcGRVCms1cnBQamsyTnVnY2ZGVTFjVlRuNmdXSjh3eTdlekJjcHE5Z3RyN3dTWjc1ZlZJMnlHSG9WREg5bzA4MVZrb00KNWJmWVIvRXNra20rSy93V2ZPcjNZWnd6YzFkYkxiMEc4bTAra2NXWEFuVVFydmdESExwMzIrZ1h0NldLMWdJeApFNi8vTlZZcmZyeDZndEdJSTNBL2ptU0NhY3ZHbWhKU3RHSDVmRWJFWnlyWUc1Zmg3cEU3dklNU2E0cnJ5STltClFYaDRNMWxDTnZFZUJMZ25uWHFGcU1aTWxjc3pVbitaRCtGbmdzVHI5dGIySWwzNTFNRGZHSGJ5aEdNd01VY2cKdmRFNm1tdUFOV0cyVVZ4NFZHR0tOdDRXTUtxeFpjcHIyd1dBd3dJREFRQUJBb0lCQUh5KzlldmJDYU43ZVJvKwpDOVIyZUZ5N2tyWFFDTDMvbWwxT3lzSWdVUXJmZFlJaFYvU0VEUXg0RVpuVUhneDB3d0hGU0NVSzVidVovWlZjCmhGMjNRWUIvMTFlN3dYb1hqYUVScDkxREY3YmJWVVU0R3ZjcGt6M1NGcVoxTFdJbFMvWm1hM0NVNElpUnptZk0KeEsrdS91a0JEUFJ2VFZab1EvbDM2WFZuRFUzbU9NOTBVQ1QwaHA3akVNVitRa2k1K2Vnam5GOExodEpRcmVDTQpTNHZQUE91UGNxTjdSQm9IRkJVSG0zMFBheXlaN3FHZWdBZlVjZFAzM3U5bXd0WUl2VmxzVDNvVkJXTjNub0E5CkFjZGU4QXFmY0dnUlB1YVBTWlI0TW5xK1Bhb2RnSGVwUTR5NEc3Y01oSi9SeUFTRndwWkdYTFhHdnJCcVpKS3MKdDBGOGV0RUNnWUVBeWZ3R250RmM0OWlUSXIvU2NEV1hnS0Z4QVpFSTdBR3h2Yi91bW9IZVp4R0VjbkxVYXNKTApFeHl0aEJtY1VDSEZncVZmWW5mejY2bzVJVGExY1JJcjcxbVFuUnRPdWgxUzAwRCtCQzF2c0ovN2krMHc3Nm9LCmtsbUpSUE5ud2tGUjBvaHpvL1JQWCtvL1Z0Tm1nYUdTNDhJdmh2SzRaRENudnRORHdRbXFTOVVDZ1lFQTdjd3gKZGJMbURuUFJRL3lTcE9Scy8yWklCUGxtQ0wrQWg2RDJZQThiTVBmeVNIRnhGek40TFUxUWprS2RrK2tKbit0UApIcE5WekJiemlDalg1R0UvaDhRRC9RZmdnUDdLcjJId1RZd2NuNW1IRXl2WDdDQ2NLUUhmL05RYnhVQmh4OEdiCjFPekllMUU5NndMV2p1RDhpNzhjU3ZEdm9WZmlaTllJNGUrN1hqY0NnWUFMSmNhend6aE9OdWkvOVRoSEN4NG0KY2tLTFpKYktkN2w0a0h3NXVNc3Vndy85UlFzbUxUejVmQTZ6aUxwUXpkeFp2b2pLSlhhbjNnZ3pKaExUZjc0LwpBb0Z4dWswWkJuOUl1NENKZUh4K2tnWFBEak15TnY5SVhucXQvSVVRZW94cWd5OW1zQmdsWWdkRzRubjQwNU1JCjBQSFFqOXJQWk1RTlN4bWxNTVJlVlFLQmdCdVJwdEpNY1Z1UGxkMVo5TzVsQlRYKzk2Nkw4NFprSFZTY0ZyUkEKVEJpN1JqMmIyVTZsU3ZPRm1TZEZGZHZHRXJXVnBGQ1pLRU5IRGVqbFExSlk2L0tqaVFyVzFQSmZsOFFKaU1DVQowK1MwK2ZJQkRVRjA3bVhhcjhzeUZCNGtQckhZQW1jSEpKOFhaaVJPNmUwYXJHelBOVXFDOEdVMk9Tc1RuV2dFClVTYTFBb0dCQUlXZDZFSzZ4ZC8vdmF3dFhvUDU2anF4WTdQY3dGVnJPUEV3ekF6UGtWYU1wMElvNnR3TkJQVmoKZ0FDQjMzcUwxaWxWQ1NaZ2JBOFhRVWNqcHdTT2o2Wm8wNmRBUWpqM3I3b2ZJeHJKSWhtMndhSEIvbGFNVkdyWgpEQ1hHdlpCNm9HRGIyZCtVQUhxWkNDUzRMRCtwQmdwbG9sZ0hYbVN2MW00RFJiR0FZcW5PCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
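
补充:可以直接解码scheduler.conf中内嵌的客户端证书,确认其CN确实是system:kube-scheduler(示意命令,假设文件位于/etc/kubernetes目录下):

# base64解码client-certificate-data后交给openssl查看subject和有效期
grep client-certificate-data /etc/kubernetes/scheduler.conf | awk '{print $2}' | base64 -d | openssl x509 -noout -subject -enddate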

5.2、manifests

注:此时只有master节点上有这些manifest(静态Pod清单)
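
这些清单由kubelet以静态Pod方式直接拉起,目录由kubelet配置中的staticPodPath决定(以下为示意验证命令):

# 确认kubelet的静态Pod清单目录,默认即/etc/kubernetes/manifests
grep staticPodPath /var/lib/kubelet/config.yaml
ls /etc/kubernetes/manifests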

5.2.1、etcd.yaml

[root@master1 manifests]# cat etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.74.128:2379
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://192.168.74.128:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://192.168.74.128:2380
    - --initial-cluster=master1=https://192.168.74.128:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://127.0.0.1:2379,https://192.168.74.128:2379
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://192.168.74.128:2380
    - --name=master1
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: registry.aliyuncs.com/google_containers/etcd:3.4.3-0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2381
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
status: {}
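
补充:可以参照manifest中的证书路径检查etcd健康状态,2381是上面--listen-metrics-urls暴露的HTTP健康检查口(示意命令,假设本机装有etcdctl):

ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  endpoint health
# 或直接走metrics口,无需证书
curl http://127.0.0.1:2381/health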

5.2.2、kube-apiserver.yaml

[root@master1 manifests]# cat kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.74.128:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.74.128
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.74.128
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
status: {}
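
补充:/healthz即manifest中livenessProbe探测的端点,1.18默认RBAC规则允许匿名访问该路径,可以直接curl验证(示意,若默认规则被改动过则可能返回403):

curl -k https://192.168.74.128:6443/healthz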

5.2.3、kube-controller-manager.yaml

[root@master1 manifests]# cat kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=10.244.0.0/16
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --node-cidr-mask-size=24
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --use-service-account-credentials=true
    image: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
status: {}

5.2.4、kube-scheduler.yaml

[root@master1 manifests]# cat kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10259
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
status: {}
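
补充:controller-manager与scheduler的健康检查端口分别为10257和10259,与上面两个manifest中的livenessProbe一致,可以在master上直接探测(示意,/healthz默认在两个组件的authorization-always-allow-paths中):

curl -k https://127.0.0.1:10257/healthz
curl -k https://127.0.0.1:10259/healthz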

5.3、pki-apiserver

[root@master1 pki]# cat apiserver.crt
-----BEGIN CERTIFICATE-----
MIIDVzCCAj+gAwIBAgIIKUS+2WsvNC8wDQYJKoZIhvcNAQELBQAwFTETMBEGA1UE
AxMKa3ViZXJuZXRlczAeFw0yMTA1MDMwODQ5MDNaFw0yMjA1MDMwODQ5MDNaMBkx
FzAVBgNVBAMTDmt1YmUtYXBpc2VydmVyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEA6yhoybfD8bD6+jTDpo+3m/ZtXvQXDCTETC0GOgdJPUyT6EtbPJRV
Hs+t9aWDkKbl4K3lhsxmGsf6yDZot4ImZIgSpU2y6zU2t/lCyR0sM+2umCtdo/FA
o53zFYe/UaepFFKA72sUvkCm4DnRzWMYzWofjHV7AkFOFZzbEyM9PpiGnUESY37o
FLHhqe7c0pei1hypiLZDSHcloXObvO+kiRy4TmD8kdqW/3iR67edhYaUbpD/oZg1
g9Qihir6QCs3iiNokrRbZEhsUeZmcqvAViZDye8FNwpzRe4e9M0ibOf3AD3ikpWw
8Nrs+BiTmeWYyxig/Od7kGwsqYA6t5d46wIDAQABo4GmMIGjMA4GA1UdDwEB/wQE
AwIFoDATBgNVHSUEDDAKBggrBgEFBQcDATB8BgNVHREEdTBzggdtYXN0ZXIxggpr
dWJlcm5ldGVzghJrdWJlcm5ldGVzLmRlZmF1bHSCFmt1YmVybmV0ZXMuZGVmYXVs
dC5zdmOCJGt1YmVybmV0ZXMuZGVmYXVsdC5zdmMuY2x1c3Rlci5sb2NhbIcECmAA
AYcEwKhKgDANBgkqhkiG9w0BAQsFAAOCAQEAfaMG4LqEIHhi+Bs0qJHXd+yA1UIU
kM9AjfgbRUcJytF5LSZakH+b+S5+J9ihjrvi5jaGWmEKjagOUkkWtTq+uDXDZk55
kNuAobt16o/jrOobpTCH4Bd9U/7R99Ui4H608LopNfEn3GChA658nEBxYELvCKEq
9YM/i05qhHbLJnocjd4pErkWbS7JE0H5JxGvA5avLHgwSqbHkAD8xn+nST4/tTFr
4S9GZd/8kB4QRG0TFFX8zAuZsihd3QpIpDuwgw3PwImHjD/Mpxw58eR71i68oqPd
w2Vsc07Ir+YlSr5ZREdzWmp0xTRJ4DyykWdU0gM9MycfH48Lm5K78XQxfQ==
-----END CERTIFICATE-----
[root@master1 pki]# cat apiserver-etcd-client.crt
-----BEGIN CERTIFICATE-----
MIIC+TCCAeGgAwIBAgIIfOypnw+V1s0wDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UE
AxMHZXRjZC1jYTAeFw0yMTA1MDMwODQ5MDRaFw0yMjA1MDMwODQ5MDVaMD4xFzAV
BgNVBAoTDnN5c3RlbTptYXN0ZXJzMSMwIQYDVQQDExprdWJlLWFwaXNlcnZlci1l
dGNkLWNsaWVudDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALRhqUvX
x93vtwxuaW5zEEKLTMreUuKmvFggLDyeRN2jSbI2RWX1UT230ZnJOt+C3hD6u65B
0cIYNyFRxoTIF7jtbc6YPvlHFS1JMh1OjZE42tLuXomdFLXjtuc9fRLtOLXhBofS
hUZTaxEZzqthspFPzetaVmLEIMLOh989TJJI0HP+Go07T09XsaBFLyabyZl1QhfY
Itjbr7NaEBzdRZ8GbETfi5nXgtsfguD1CslDqZTfMNP3scwco2kqVmvuJ/Fli6SA
5OnB7HdfL9jNW5mswXCQ+9N4jsTqOoHRg80iO7mAFDnR/gjnUob6WxeNipcSKjIN
8IaIuIM5HSPiibcCAwEAAaMnMCUwDgYDVR0PAQH/BAQDAgWgMBMGA1UdJQQMMAoG
CCsGAQUFBwMCMA0GCSqGSIb3DQEBCwUAA4IBAQAFIgXU0b+vtIXeBPaBVwMb5b//
aLqAMnxbxHxjIee6NDxQHLK13aPUXuGdku3TI6cTFWOu2OYFxJbJ73J6xjtAJ2r4
MU8yiOLddap4eFsHhFQUuXDcOyuiMau2E86IrTkXVQR3vk21k4bJNT51zjOrNg9I
/MSIWA5CRaQop6WDXmnmbiZEqhB+OH6FL+7yn2lGXw7CYNMe9XaQjr37arFSXvlR
zBjaEU+jBZMrQvdatH+LFcLn4Bvrhtao+cfdCP1dl6iJYlBAEnT+7Gmvl1O5BQQn
qVP/zJxIl4VzzmlqLAGHz8UH/RR329jQHjS5ySK94LUFXRvGpZ4CvuanFBuJ
-----END CERTIFICATE-----
[root@master1 pki]# cat apiserver-etcd-client.key
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAtGGpS9fH3e+3DG5pbnMQQotMyt5S4qa8WCAsPJ5E3aNJsjZF
ZfVRPbfRmck634LeEPq7rkHRwhg3IVHGhMgXuO1tzpg++UcVLUkyHU6NkTja0u5e
iZ0UteO25z19Eu04teEGh9KFRlNrERnOq2GykU/N61pWYsQgws6H3z1MkkjQc/4a
jTtPT1exoEUvJpvJmXVCF9gi2Nuvs1oQHN1FnwZsRN+LmdeC2x+C4PUKyUOplN8w
0/exzByjaSpWa+4n8WWLpIDk6cHsd18v2M1bmazBcJD703iOxOo6gdGDzSI7uYAU
OdH+COdShvpbF42KlxIqMg3whoi4gzkdI+KJtwIDAQABAoIBAHxeQZ3bPyDUYL8f
eW3/w5w980qEk11WXNHeDOIWtaCjLvLC3IJ56/PDw65mwkLNNlM6rSBunTNYAtrk
SR3P4BtPCMDC09iHnCBHMVhnitAwBSAd3ey/80GdqcQx7wSXrtwoNJp9GgrtBQsb
YhVkHPx3q6Cz/o/GblgikifnWd4ZUGis14TvG1rEhxKEZneDVKuUIpuhZhCHt33s
LAiHhrA6nPkd7w4LmYIWPQW491oQ9Fc+jRzEP9GhmcbYQKeMMg32etO23k0vVibX
cQnnSL6uQmNIZcuzBi1LFerpw3Xz1xephNPuR1pEjm6Ph3bBq0b3G5NAqa4pHL8m
Rof7dxECgYEAwqISJ69dmvaUuhtt+CmO8AJBBEzDBrFfkpI9h2JfcJPWBl4ts5ko
e1OlxEpG9uPwso3NudzP3RH/iDt7pUanjBTtHRi7XYfR2U77ogr/OdbJKlGrsdwy
x9UTsYx6LoOA82Xkc1+Bx6UHl9FCdj/AvdbEdYcrxE9CDwdpJXM1HtsCgYEA7UFD
hkLGnyiYK7nZjgK3gRdAIGEmQWNb19e5DHVDECLOg7nI+CBCZA84wVecrbrsBWH+
dfAt6ZC2tjebrnYYOOJwVkkLnzj6feIGqdzxBwsJ7ZzEQ7MIcYLWqgK/WV5NNlvJ
EAj9lHfFhzoe4Xb4feJct1buYdcyS5JA0HA0UVUCgYEAkzzcEx186H/lXyzk8jku
Iq7x1HjliKiiLlVnKoXmwVl1LXgNhrI0h6dt3aJ7MMabDdhsa1B6BzlYYAzvqsZa
dYRXJA3ToBvhSk2P2rQLBAxSPitugayc1cOBlG06+PkOkhLg0c7MdOWJavYpGx97
haF1GZvaJjX3OTtX9bbD1sUCgYAgrAkhdxalGlECTICiJsugclQ5YUeEX6tpKOLp
zUgj87ceurnrOX4LC3GUZn1EC2avQxRop1+bN3uB0lyVBNxHER/JMhvwnEcaiMLE
J5Hll2aRmzIH5KK4Bv2KwgAZzXuyjac9lw9cn7XK7n0MLXcA1uhPsx/2x0y8zXIx
ghIiVQKBgALUXZc68Nc2ubVa86nr78g2wnEgxiMeIM7c7tMALyYRtvnmBoHBhIba
VETlxkAdkzhmtDi1gUFrkJ8nLDrmmKA/5A6kObouTOHBbk5HGEJEkn9PnWj7pCRb
rOk4a1nc4Y0IiAjM+WdyzOR5Mj+ENmFm2sKQ6LtKKXj7iVuO/F2k
-----END RSA PRIVATE KEY-----
[root@master1 pki]# cat apiserver.key
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA6yhoybfD8bD6+jTDpo+3m/ZtXvQXDCTETC0GOgdJPUyT6Etb
PJRVHs+t9aWDkKbl4K3lhsxmGsf6yDZot4ImZIgSpU2y6zU2t/lCyR0sM+2umCtd
o/FAo53zFYe/UaepFFKA72sUvkCm4DnRzWMYzWofjHV7AkFOFZzbEyM9PpiGnUES
Y37oFLHhqe7c0pei1hypiLZDSHcloXObvO+kiRy4TmD8kdqW/3iR67edhYaUbpD/
oZg1g9Qihir6QCs3iiNokrRbZEhsUeZmcqvAViZDye8FNwpzRe4e9M0ibOf3AD3i
kpWw8Nrs+BiTmeWYyxig/Od7kGwsqYA6t5d46wIDAQABAoIBACNEozqlofCMr4d5
BGLlqQ7uDYcxKoe6t+oI0qc/Un+sDX7IVn2mbYG6egeedDXsogtpaUQnQaUAmx8N
8fSbw3BObCV4mr3l9DfxXU/WXTvIiOfvkRK2axBe7wcqncn8UEJpAUdnEuxZu+1j
HpEkLKMaKHMjZ3h2HOTm6oBbR6MsaVboR93Ux9otiGPO2ndb3PMtbg7NBVuD13dV
w+gMaqFlN4aCjN6e5gEIHow3KOA8VTEwud4Uv8T3+aG884rkbnm+D9QkG/e0Hp5L
NOxgn/iyYuPdAS4vGzDICwbYwBOFWmvyLqc4Hc+LAcYTp5IsJl3xUNEG8xKYcyKX
qIx7ufkCgYEA7KKmrMTsnPEaFhk1RVFU/0pEjSDiFsV5JrxYxhJPEgf+AEI8Nazd
Ku5yY1mNGZ0PwZONUpDFbXJKot7Xafj2YWs+yqQKjQe7BPSmuN83W9zr3f94YLxy
VfOwoDpZfU6AMU5zsQrZ/DmE1+coBxLWtwav+VwlQudk/UpPe6nZY3UCgYEA/mbO
NQQNrb926T+JApJqZvAH5emjDFTpp5y+p1zY6rLsOBFMdc+PhTpoM51zi0tuX7H+
udU91PfjmZovMmoo8zwgxFFgfEwkOWImWzSOnG/KVcriBsTrlIirQVedhyjWn3O7
565dURNOpq8GH6mTVaqvKniTpkwO+sj9+u2lvt8CgYBXIVCjvuKsqu4DAwclXdwh
H/R7zobRAacpRyKc0/L/Xaf96mWHEf5hp2jBAiE9NCKwESdxJlM7iGDI9ap1n7EA
j9+P97TW1uja20ZkPfSBQ6gplr55SAoFcfQwGywGQphbD1rz7l3zTC6I3NlVOW+L
9s9mzrH9n3wE846upwyfXQKBgQDPJf76hFZvB9xXiPiTM42YTBLiTyAIxouLg8Jq
nNu0IATgkpVjyKLgpPJ8NNUEs2MoYNM9ljlG1KJrTHTp5C97/5XexTR/gbBtWVJK
Kb2F/DERMqZhRK9evvpTtnf6unIoXCDBQeWSQtpkN1gRKA9kThtbxdrUKlJ4Onk0
fZXcmQKBgQCu2uxIW2QcrrT9eaWxs3sCdnliBCHo99SGRHP3ASe+8gzE4JpypI6r
EHWdcb4z4ewc51AP8h4msKXuXx0OBWqZSp0SMv47c2asBTHXYwVP01KcJaHlecGq
rCm2A5xWgkqieYsLoORJ1czKpw2UTyZ0YODCUDiPXQCVJ0Hpg4IxkA==
-----END RSA PRIVATE KEY-----
[root@master1 pki]# cat apiserver-kubelet-client.crt
-----BEGIN CERTIFICATE-----
MIIC/zCCAeegAwIBAgIIO4G9X1sn9vswDQYJKoZIhvcNAQELBQAwFTETMBEGA1UE
AxMKa3ViZXJuZXRlczAeFw0yMTA1MDMwODQ5MDNaFw0yMjA1MDMwODQ5MDRaMEEx
FzAVBgNVBAoTDnN5c3RlbTptYXN0ZXJzMSYwJAYDVQQDEx1rdWJlLWFwaXNlcnZl
ci1rdWJlbGV0LWNsaWVudDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
ALf7qf4RayVh5Tz81yjLF3KqNn7bPJWVN3DVTbIWRSLo8mDbQvDm7kCIaoXbVxds
owiGHT67EQ9xzaDA6USMOfGplfenowOyjO1/8C8gbJgzGKOR/rKcvbUE5qGh1bLf
gr/RMNyfT6hXvkLop/PDVgpYyg7OnJKwNzpez+XKnc8c4v7NJo7vKf3habnmCKRL
yEVuQsU1lH71fl6BQlHyBxqJbAV3Hq3g8NyQZbzzPXdSevJB2QVE83IRATddEECH
we1ITul/2/fi2DixFoXdaZxUd9QJ7Z3xIrs88dQ8FGKD5OYNJTo+osf0Z9srZrhs
2DELV7AGWShGCl+zl0dqcV8CAwEAAaMnMCUwDgYDVR0PAQH/BAQDAgWgMBMGA1Ud
JQQMMAoGCCsGAQUFBwMCMA0GCSqGSIb3DQEBCwUAA4IBAQAoaC4zxVScmaazOT+P
BYmxnkgyXnfghje+4X9OZkSqwJBT4qX4hC3ppI1FQbgb9vbF0AabT7tZmn8pdYKR
eowfLvagwgoQm5jJQ1WvB2CIWPozPYPkulNG29e3DBvMEHkyqlVu3xBxHRtBHDIO
JkzrsT5T7+y190JAWKleV4pZ8HpplTe47cC7E8wiHvHvd8a7tAEUd4KJ+oN/b4Ei
r+78ZIlB8/WjXt79wAlhRtx4BcVtJt6a0hPMnqcdsiX0SsrXw9MtPzLuYvqLUOVp
kq6uz8f2qqPIcWLBXg0/1OWp8voZiHGi2zGOqWKrF9ne48dvClXYHpMtW26iSiev
05a/
-----END CERTIFICATE-----
[root@master1 pki]# cat apiserver-kubelet-client.key
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAt/up/hFrJWHlPPzXKMsXcqo2fts8lZU3cNVNshZFIujyYNtC
8ObuQIhqhdtXF2yjCIYdPrsRD3HNoMDpRIw58amV96ejA7KM7X/wLyBsmDMYo5H+
spy9tQTmoaHVst+Cv9Ew3J9PqFe+Quin88NWCljKDs6ckrA3Ol7P5cqdzxzi/s0m
ju8p/eFpueYIpEvIRW5CxTWUfvV+XoFCUfIHGolsBXcereDw3JBlvPM9d1J68kHZ
BUTzchEBN10QQIfB7UhO6X/b9+LYOLEWhd1pnFR31AntnfEiuzzx1DwUYoPk5g0l
Oj6ix/Rn2ytmuGzYMQtXsAZZKEYKX7OXR2pxXwIDAQABAoIBAQCEfFQwYaivdaxW
25ewh3buGkY92W/qI1aWCPP3DvRgLDEFsD6nLRRaIiHbHFS9yHwqUjFTD/A8F+5E
GUahFv1O2ZjlirDno7a5+8wgk4+/lePjPemUAyzU4p+Vuu0g7rS/nks6Q/pftjeL
BPCUp5AYyVFPklbLhttuTAIXbm1vSxwZ/HSKn5fhnWvdwN1Jd6iGU5En7XQ5yteS
+szs+DJykzIotMAt9oybvCmd3pW/od0V+4lvuGD79092o+UdQ7vczpqHx3nX0OlV
ByNhFy8pbv2yw0/e86NAvzXcgykN7YWwgy3KOzY4w+SA64RbCXyN085duqKUPE8v
mGS5z/F5AoGBANC2iGHF4xrBnQJUX8ENEyJM1eMEbJRNzCQaeJo2MOVKvk1yqpVL
B3UhnIbRUekSR1IrD72mfmaMxTr+i0e0tN/2n0mRuP//2dkyWS4oFYu7pQwR1g0s
Xo36tLKxLEiV3XEFnUAMA6NHWFgazW76y79eusnP2XJGaD02XhXHxEeNAoGBAOGq
x9KS9ro+IbCPzgecqZIk/TVD89ZjpbVCAiiZv9gdfxMH0qrBBVqVuEkqSE9J/lkb
KgREBzknwXqarniTrIOmTzzSq2hhoADoTL3le7Rz3ON+47nPBhAWbVv91qPf1s23
t0DaXJsjRkQByW0ehIn8iCeVFWVNPuMuHuf7tFubAoGACw3P1VXUvGMKvMfZNnFJ
1SQ6o8ZlNcmVCUh5oLlEB7DYuWNcU4HgyDxafO1zKCP2sQxkzgeWZDoKbCB1IfwZ
JE98ijn0kWJsmEtJW991nKv4htYe/x2deGmRznEBxmphiw3gETdRrgEmVaw9uyX/
Sohq3itq+dluxecuPnsREzUCgYA5r79u69SYXWOdT9V6CqkqS7xSjnFZn5VvlVUZ
7dulsjyWr8xBjCADPPyj72QWqLKVMqV1+7HhAXGrFrl85zsVWEEvKidZAoO1V6yu
amhKA8g2e2xZRjulhyYjeusQbxro8Yqt0GQV4FmI7u//repxn5VqkOisQafOyS5r
XOOI+wKBgDJq1Yuu+tXTR+jKx8MOMO4Mde7EKiAgPpAhKX+5Ot2B+M1vIJ8sSQLI
Qk2rXZh5OJpz3gviQAR7cDVoPhKOj5dyUbqFbJ73I6bzSn5W5w0j3oxA+n4KHXfv
Po8oG4LOiWVhV6l4JTxyOD1HK09ty/bSxfrchMF8mTT12W+90XJ5
-----END RSA PRIVATE KEY-----

5.4、pki-ca

[root@master1 pki]# cat ca.crt ca.key
-----BEGIN CERTIFICATE-----
MIICyDCCAbCgAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
cm5ldGVzMB4XDTIxMDUwMzA4NDkwM1oXDTMxMDUwMTA4NDkwM1owFTETMBEGA1UE
AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANti
Jd5n5bXH1DHJoaMjqCvzo+c4vGBelbLchXk/zvyJZ6NoAMOtoOEd00/XZiXJKIPA
ZnqmBsfKoWaHtJTX5bK9reWDirO3t6AKLDnm31bDvnf1zsFwvyBStaRANQ4sdNml
BAh5HYivbsn76KJbVMcvR/Hrcrl9SB2XJB2AQPtn8yhk/x1HN4lfkMJUNBNy8n5t
S3RdcgYSTHICAF/1qvIzsMN63DtpHj/K62A5007tWPAx1ZOtap/Zq3U2y97KX1bq
vMRjqr3IWeoY1glZroutcqZhzXvVeJDOCcZU4Q26iE32radEOv4m5odE3H7Db0QZ
X1ST768HdXMkYXBq9bMCAwEAAaMjMCEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB
/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAMgG2yQsoGrbJCDOueuVuS2WheDu
VpdBQHtQqJAFJSRCS8iorC19+h1yh18HI5gxy9cPaAMy0Jg3EYpQFjm91AjIXPNB
wchtGSKkPii7czjhWeiuwIl19xIk3ilcWm50HSaZUjs1uBRjLj3l8PE8BHNSaapC
a96AcrZgVkU8Qi6YOiLC9Zh0LbBCVh6HWjQ0bTVuvhuecLyZRiNZ/PEfvhHkD7dR
+DrVph5uG1wf/vjApaBc/MfvLk02Rxu3YGEMlgvNwO1+h9rwjtInCVuR2gxHb2mf
ooDLDvS094upL6DzmvjvFRSZcXIKjIPglZ7EFvZkYf+r52mnZyQ45UOqIps=
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEA22Il3mfltcfUMcmhoyOoK/Oj5zi8YF6VstyFeT/O/Ilno2gA
w62g4R3TT9dmJckog8BmeqYGx8qhZoe0lNflsr2t5YOKs7e3oAosOebfVsO+d/XO
wXC/IFK1pEA1Dix02aUECHkdiK9uyfvooltUxy9H8etyuX1IHZckHYBA+2fzKGT/
HUc3iV+QwlQ0E3Lyfm1LdF1yBhJMcgIAX/Wq8jOww3rcO2keP8rrYDnTTu1Y8DHV
k61qn9mrdTbL3spfVuq8xGOqvchZ6hjWCVmui61ypmHNe9V4kM4JxlThDbqITfat
p0Q6/ibmh0TcfsNvRBlfVJPvrwd1cyRhcGr1swIDAQABAoIBABVDXghAabNEuvxY
XqJBQnuAEdLHXPq6MCg113n5BUbUyoa7/db5bS5khaanae8foB2k+EnK7b1PlnUp
kgcbJdg9Ki2kojzpAZMxaTfzeJIgRsW5vWBiXSP04EYbMwk8pdayd8Gae5JT7pkF
IXcbAwyLOJ3qBCSWT/cOPyHc3G+BZcNsxiI+Y45/1wND9m7ZhAK1Hi2c3Fvil9zK
N0UqMB25lGA1JfrrhiJR70BLwQ6PuqrdbkriEg3B2mzrcTWAVaRw6AMEN6qp1VtI
eS86XqppfQ/AYr3x5JejXm/NJusWlOYNyL8/1ZMYhMsSv8xVBipZCUQzJY6bP0Bk
2BchjoECgYEA9/4vnRmsRsWHUC8IQBOlux352Zs6rnbNttHjYzCNx6gHXJngqKGe
2GWnKhR/+UwVfGESPgDYGJicR8KFkvPhRmf1l85Id2tV3xZtgWqs7NO21emUDGdM
gk14WvoOuR+RaJyHfqgHKqKgEte4uSv6U5xL5YQzCy3V8/ljm6aDWJ0CgYEA4nd8
kw+swxtn1B4jlRhOto/NCpB4IBBHA0XCPSEpIDrzf53vEhEGMz0eiHXXdaVPbd13
kMpGSkwUb1rCEEFBorxlfSVUziC7/YW7RXprtpBMuBfUGiKI3BXgE+jTGrOUz952
+E41IaZnjosh8lpUSsD7wiWkNQndw2yw9GENbo8CgYBpY3lCjyV6UflmJwafjHny
4hNK2b//YnebyOiUP48RGSQ/wxkJMN37Yn++z0VvYVkEKZCCDwPGuBw6Fr2DLOdA
b2+cWsrLDS9KBhL1W6svXe2mTIRhHQkTmu6Z4wicvYCi71pZhfi9sqzKNSjIcJsK
KzLJz/uNNaZl70bYX9QTtQKBgQCGjIkV8pUpIiow62smlNeHPa6LnUPRgPo/5n09
xmrhvESZSKMWb8joPmLannDRc9LaKl90Rck3MTZe5mQwNiUh457EmJ5nDSnDuWWH
JPHD+L2sDnQ0xtnbMJ/+FDEARzuduMWkRwroIC6ckOstSx+Tfk7VjXmfDWqVRglo
WBUb3wKBgE8Fvn970GoqdiEJF+SIhEOAPAk5UMx5/Dd5kzSoZggL21umDYVa0xqX
MXYvN2yF0SGvLxjqewcsacEhMiD9RuXyqjjs8MvjPdWiNZavuailf4QvcHwp8MJB
UffsCwXBOGOhLDX56wTKzhstNu0Rd8BXZCrcaFNh4vJ3/gNvKNrt
-----END RSA PRIVATE KEY-----
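
补充:可以比较证书与私钥的公钥模数,确认ca.crt与ca.key是同一对(示意命令,在/etc/kubernetes/pki下执行):

# 两条命令输出的md5一致,说明证书和私钥匹配
openssl x509 -noout -modulus -in ca.crt | md5sum
openssl rsa  -noout -modulus -in ca.key | md5sum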

5.5、pki-front-proxy

[root@master1 pki]# cat front-proxy-ca.crt
-----BEGIN CERTIFICATE-----
MIIC0DCCAbigAwIBAgIBADANBgkqhkiG9w0BAQsFADAZMRcwFQYDVQQDEw5mcm9udC1wcm94eS1jYTAeFw0yMTA1MDMwODQ5MDRaFw0zMTA1MDEwODQ5MDRaMBkxFzAVBgNVBAMTDmZyb250LXByb3h5LWNhMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA9+uyxi365UT3pKawVI7g0GKYUYce/jdY3ELiLSYRnHCVmEbKSIVeTdB7VbXQFIMB5p+aZWTgfWvdMOW9dI88HKNtszCsu97OOoO0sKB2Ipdso5utuj4aDvB7j6SUztu/jJUAavnH1aKVRNgn+QmAuSMcujzvyaAAQP8fnMFCSRw/jfrKYlV9E4PgawWzmmEnzuGxbvHiBsQjrtalYzAbnRGbk1pX5tuIc5qJvYdel1ReIt9IJ2BELPDNARK9QMSqFLIhBU1iaPtRSz72bxXaT7D21M3oYifhjNmzokTx7DXDlA4b5pS5byMZAF4AYBPmNkRppPGYY1a4cWyrUqRpNQIDAQABoyMwITAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAF1IlamjA0+o+eb5Hi45p3qbSNMvbfYAwZg350IKcYVpdndR7wMoGFhWZJ8wOWJTNE94hgZ2qsVBjs2Gbst2cp9neK28m/2uDBJgz6OTzDZ8aSQ8zvQoVWoiYKxxNOP5rQOmjwKadg4kTICqs4RkgQxTS3LrhuqmHmXwVTvlqNb9InyJctN+V7ItDEvzYvLBxuYW6f2UKHtlsV/Qk5drBslA5yveZ+QXlNlgx0rzM4BNgeVQfHgTpjRHSLLhpveO0xMVtebln7i4tXNuUGjTgOd9OWwxUja4iTin3MCb8otQDH/xEwmavDkBCu+bAqmPNaVjfVd5+h0DpRsIfla9uvA==
-----END CERTIFICATE-----
[root@master1 pki]# cat front-proxy-ca.key
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA9+uyxi365UT3pKawVI7g0GKYUYce/jdY3ELiLSYRnHCVmEbKSIVeTdB7VbXQFIMB5p+aZWTgfWvdMOW9dI88HKNtszCsu97OOoO0sKB2Ipdso5utuj4aDvB7j6SUztu/jJUAavnH1aKVRNgn+QmAuSMcujzvyaAAQP8fnMFCSRw/jfrKYlV9E4PgawWzmmEnzuGxbvHiBsQjrtalYzAbnRGbk1pX5tuIc5qJvYdel1ReIt9IJ2BELPDNARK9QMSqFLIhBU1iaPtRSz72bxXaT7D21M3oYifhjNmzokTx7DXDlA4b5pS5byMZAF4AYBPmNkRppPGYY1a4cWyrUqRpNQIDAQABAoIBAF68R0Upfs0rTIIzXAAD1O5sLo5A1twHpEIOoMTl3ibsco2Mx3Fs3TtY5jg7UHb2FLze0i3anVnv5MbxkzK+JRdAcAPgHrFvk1iSyXIQ7vOK722ZaIpZfrWkuWKLXn2pRQngSheWuQDurqFvA99K/VBBlZGpBWwDYvVzR84rnzu1+ht6Duhhc/V4BP5KmRiIKs5SEtLU/w7nEWbhCrGY9SZ4fmgCxlahloRhCD71PLQxMunraA3L47CqshjPm4ylt1J9uAm71hwrby1Q5VXgpU8rz5n5k4VVLHzgQSx+pOWq51c7o7UU6sY4ZpmpNtwEcGsC3XzMywBq9VmCAQMJ6iUCgYEA+XJt5yyMChu4LuUg89Tj6iHt1JKqBxvWjQvd7FcSQtT1yYkx0er9ShPkPSv75yX34dXuWOHPp3FocWmAA9w9HVB8XmhBpfXoDB0wI1o7n/uGGevBasZxEFl4pfc8D9qoPq/qqAYQgzphzkyKLxsYxubnl5yNo9GcHO6l+jo9iOcCgYEA/m8BJKZH6tjXqL+/E1af2b8ZPqDuYSOZkTgh7wu7UMSdwkWEnRJ4fPnunkQlW7m3bu2s1+lUHwgyY8EucEeD+4jRPN5zPV1GiDiYsqXmBcmUtUpaqqjNTFGpGbkPzLXzprSdWwd3u0zwgtALDdhav6JA1iqCkDQSUXRzb2KbbYMCgYAaKfBxIPEHVmT5NjtAmAHX2vsxIrkGydq1LJt4YKGftOqa2vMIy5cJoBB+ghCH7CmV3HSFihnXvENyMdiljwIyAvEojdLk72gJbT5RVvOOEjm8mkfNRUcyqc/HyKjaGNswyA7a1NgCi6saklikHDl7E1kTQ+5vUlsHhdiO6HDv3QKBgQCtBqAoZEwUEVLXl05BwG8EjUiFprt1o9gTQbER91BzJMKEEvKUPrNhijYTuxQMxMdR0J/yVOK4F8Lsw7ro8Dl5HRnt4vlLidslWBe/pcI/vU4720y9Mf4rIH122Ls9457Gh51bAkESRshorUJXMALGv3iILHCN0FuEuUSnQs+gMQKBgQDfWbrsFmDaTvtrF/CekocQOtsOH8tSHIuTrx6KIislSJ/w+gwv/xugheUl5l9p9Pz6beulUINf7S3e5sI6ASa5nJvOMfSAJuzldzDWy8CfllyjXi+qZEmIDELHYjuJs0H2d3dVPM1F5vR8m7F0/99agzmmpHjZi8gOPswHH3Tjmw==
-----END RSA PRIVATE KEY-----

5.6、pki-sa

[root@master1 pki]# cat sa.key
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAu8m0vfJe70JTRZW288FXGUA3DIxPcvEvCll3l1NZ1CabfNki
lW3A0CqGUX2YlPLz/GgFP6PgNNcJR0y9aa6m4QwLFPjpv65LghVtFgsLckgl3DZX
MaEtY6iIENEnrqFRzkGdT3XDYdSZfXORs6HTT+NEcWsFrJaPFrk2/0iLvlFKSV4I
basTLEXj/KDRLToMuxNZMeX+IJyhi9YI7CrgpDAxfJB+usBQVPl6Y4ZU/GmMT3bi
Z7q0NXjSt+9+C4Dt2bLYEpuuxbDVDUEvC+4DOaVP55IVHFk70Y0JwE+xbNnvQkDA
s3zvTo/Ued4u9tIGC9akvNjoL0FBDXyJ7v6xowIDAQABAoIBAAv92G3cwVUz/g9O
fS1Zpk81e45wk046uo9FoU5ngy/5+yngz8WNCagBXyxrAchZL11p4xPqShH1vWDx
NJNAFOYAF+ER+BNGdQnshlfHAsccdlZ2neDMcxKPG4k/YfJT2N578Ci302823UpW
i/JVniHW2HMJq4YW4zJHR4zLvCi9+iLKAWYe3fnjF0OxB0CKpzROQcrV2jYMyLw/
3rxaLJ622S4aNlydkfgGI5tXaZ0abBuI9D45HoxkGVq6UvW3ikTA0aK7fbmWm1H5
pr0M1nO9Ebnq2XnzFChZd92XYZhl8+osw9sSgj17x6m5fjPcxO/5bfdw4xjl9KmM
kpCHxHkCgYEA5GKQ01RPnR1Q8ebSH0eAZUvNTb1Aw6hbz4+UImUlDYTuVKYeGdKh
K+fzHiP50F/6W9l/RwQsR4wLheMQjjAVC67fd3XAC+aDtpUyid0P+C5v3crqg3U6
VYUNa+mnKl7KW9bzkFQcoArjdP5sKdcw/pTglncx2gH0ghZL+t5n+VUCgYEA0n5/
JFSdlLhVtWrBfEbezN+tauBjltf8YhoSXySLDnbyl3kTfWK2UiU0I4NyhezfLiOQ
+dNqaKeNXoU+/okdtT/8RuMisFMTIINgJG/LqUHsvxDQOTZJpsb8RQPJjGPpQ4Wv
jMAgTkk9GCBKkbvMx0/pZaHI6T4uAP/M55KeHxcCgYEAxKfa7R38L92+hY2sASMg
fBj5f6cmzVN7Ow73D2bosOt2DY28/Z9RCO2BesKfqb37ZnuyDQSa3EDK607KQqVE
efrqkYLjC1xCrkVqbyvbRGk4ClNf/DJFOL6JABMBzoow1UQSFoVW4Lh/g45QtPaH
SbAIc4fPdVmZoSpx4mMARMECgYEAlR2tvjP/Sirn9NQC66JdFa/jb1I02th5X5nu
p94AcKfNJYdNSkcSt9DJRdtJ1xw94rapboHZ4PfJi0tDnBfQpuUEN8eSfGztoNvQ
0R8tnOMp7xTfHZiaxn4ymkWbk0v4JLBg84nrmOoDUMMXcHQlFpFC24+n/6vf9S9B
nk9cmtMCgYEAvcMkHKyTF8etT8WkLMYQHCFF6J+1CWWZ4iaVUcuu/kZY2bx5fOa1
4ENzIcEYBaUSrQuIlKN1RtMzl4WlPr5OWNE8W61mpdaTOP2acPyejwGAv4+naejb
6MQWWmQhmzPVsceiKVz9zCsU/+tgikYB1Hgzi3es8t7l2HHY7IHKsjU=
-----END RSA PRIVATE KEY-----
[root@master1 pki]# cat sa.pub
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAu8m0vfJe70JTRZW288FX
GUA3DIxPcvEvCll3l1NZ1CabfNkilW3A0CqGUX2YlPLz/GgFP6PgNNcJR0y9aa6m
4QwLFPjpv65LghVtFgsLckgl3DZXMaEtY6iIENEnrqFRzkGdT3XDYdSZfXORs6HT
T+NEcWsFrJaPFrk2/0iLvlFKSV4IbasTLEXj/KDRLToMuxNZMeX+IJyhi9YI7Crg
pDAxfJB+usBQVPl6Y4ZU/GmMT3biZ7q0NXjSt+9+C4Dt2bLYEpuuxbDVDUEvC+4D
OaVP55IVHFk70Y0JwE+xbNnvQkDAs3zvTo/Ued4u9tIGC9akvNjoL0FBDXyJ7v6x
owIDAQAB
-----END PUBLIC KEY-----
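
补充:sa.key/sa.pub用于ServiceAccount token的签名与校验,可以从私钥导出公钥与sa.pub比对确认两者匹配(示意命令):

# 输出为空且打印match,说明导出的公钥与sa.pub完全一致
openssl rsa -in sa.key -pubout 2>/dev/null | diff - sa.pub && echo "match"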

5.7、pki-etcd

注意:/etc/kubernetes/pki/etcd目录下的ca.key和ca.crt不同于/etc/kubernetes/pki中的ca.crt和ca.key
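可以用openssl比较两个CA的subject来验证这一点(示意命令):

# 一个subject是CN=kubernetes,另一个是CN=etcd-ca
openssl x509 -noout -subject -in /etc/kubernetes/pki/ca.crt
openssl x509 -noout -subject -in /etc/kubernetes/pki/etcd/ca.crt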
[root@master1 etcd]# cat ca.crt
-----BEGIN CERTIFICATE-----
MIICwjCCAaqgAwIBAgIBADANBgkqhkiG9w0BAQsFADASMRAwDgYDVQQDEwdldGNk
LWNhMB4XDTIxMDUwMzA4NDkwNFoXDTMxMDUwMTA4NDkwNFowEjEQMA4GA1UEAxMH
ZXRjZC1jYTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMorsnk0NePN
eBU4KEHAXicxWGjDvfP97YoXqwZPGKMnZnCgSY4srcehaatca5bUjoXQGRABtd7G
4QjS9ny2IdkZ3BX0PsWPCfTJb51GM5C9tkXqHG8O/bMvvEd88GPVpOKa3zS+JuSU
h7JQUY7znuF+7HSDwkv3uCNXIzTlQB5MGyKrAD5suoX0Y893t4c+TMDtAFaFvoyF
C/vQAFmS6LWgYoBDQWabeu2ZqoqWp1bZGFGjitUFQiTAOgtiFKyZNJtcVVglNten
DxvpPib0R97nBTjRxCFwEtJGPbps++E4UhYVtj9b6jqKyq9jH/h13pJpopPs8NY5
XY/wiAK3YSkCAwEAAaMjMCEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMB
Af8wDQYJKoZIhvcNAQELBQADggEBAJVzeFLomtXB62a3/JBZzjtlTFcAg/2xTxgR
XsXBxoG8A51bqFcsGMdTjmllCAG3FcbxLzs+EdQ5QdIIHDvqGkCZ4JEUU+YyWLXb
j2YZ7kO85uBbw0gY50C/vVx3tbDtt76bR/Q0cqHySlzh/JdNOm9ZY37sY+/u9OJE
XQxYMw9nOGSWHrW1XtCErVXWq3d2QH0JzgCvj9aRt7nPOtzFBUX+fvkemsJ+8l9D
MaF3zpJGdevk5H5a2rCr4oM/UFezwL9HmH+ibl70wDy11idIQkdAYu+8dBbeXGZP
tyHqhBSJge9Oi6bgvNS6fQvOEAflgmRmPa+rMJFS5tyCRJFOCyc=
-----END CERTIFICATE-----
[root@master1 etcd]# cat ca.key
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAyiuyeTQ14814FTgoQcBeJzFYaMO98/3tiherBk8YoydmcKBJ
jiytx6Fpq1xrltSOhdAZEAG13sbhCNL2fLYh2RncFfQ+xY8J9MlvnUYzkL22Reoc
bw79sy+8R3zwY9Wk4prfNL4m5JSHslBRjvOe4X7sdIPCS/e4I1cjNOVAHkwbIqsA
Pmy6hfRjz3e3hz5MwO0AVoW+jIUL+9AAWZLotaBigENBZpt67ZmqipanVtkYUaOK
1QVCJMA6C2IUrJk0m1xVWCU216cPG+k+JvRH3ucFONHEIXAS0kY9umz74ThSFhW2
P1vqOorKr2Mf+HXekmmik+zw1jldj/CIArdhKQIDAQABAoIBAAJ7uOx+NK9Apdn0
36G3IDDxDTn0NZAarWFF2ybvr8jJQhveDCk/6T6LgAXH09Z9c+a24KfurXI4FSmL
ldWAUzgcdjSa1G6OzDuCgel3pEiB3AxNzN2cXIdn7bMfGMDRLf5OkrFOKKIkJOqO
zAGqgmgYrATeXXObblqYxmju6/OzTAPbTV6Wn/DWNouoNQu/jWuIVvm70MTZcXKQ
U2UF/ZnGsB0PKov6sXz2sKMb1yC4xXfOdG9uhWLgTmd+7ETRdFCc+Nuu2BcxmwBK
OCDVTqfEKmv/Qx3tDzd8ILzU+gLKw1vYbbBTPZYW3i8aVhBhp6BnxOFx3//PJZS0
L4ZAmFUCgYEA88h6fI59Fm6UnexymAkwpWbct7bTt4Vae1Sh5Gc98crOPr9HPzRA
KK7GbZBDeIbBVO3FEiVBF1ZmnBJ/wMn2GKqlqBfVO/PQEhGr4nigN1cQpxwhqJ2C
dK9XCUqLxP9VVIALOBT4O8vku42iw2JObEoSmqq7Lf6I2V9U+xgZcOMCgYEA1E1e
PHy86KKdWIglp52qD/NsBxcEO5tK6GwaFm3qjMZ5xG42VIQCcnlIZMsbdJjKbQZl
3isrQtlCoYiFivvZIQHLhATwVI61iP+s2PYPiVoZufdDT5wKgu6rtyclUatCqhLE
/wn5fk6z9vjhYcO4I7bh6VBO4ISkTs+wJg8mf4MCgYASzqSkd1mvIVjV1igBErRu
DkF46uHqhp80ZJMYy947iSngLWGRvrY0bUdhrH+IDN1db/qEK9uZsVC5ObQha3NQ
89lT3oLU3TpwKmzYS/YQTuc5/TGbkIs/9UcBsH6X9BrhKf+zk+qSsmgzD/o+mJb0
Q8KrrABEzB5CptgnhvRvgQKBgES8xA0rifJ8bBt1AVQSzTQa6VgmUJ2H+ynjjlLC
xdVMkbJSyM52a2Bq+lCAHmSS7796+dKEAZ7EPzmTvUExp6xzK1SUUMff6NDxjyI0
EPW0sW2vrCCDcjfQVNKZHxEhNRVhvFyi+x+1FbmZ/UctGlqd5OkoslEpQRWvUuYP
s7RHAoGAUlDNQaVNYZZC+heKVqEOnMHDNl1/RdHCeGoMekZZklI9eIdpaGsQkf38
Zbzl//1wgrQm7gW+ayRGWw4WJvnRTp2xY0aIdSjhnrUoWjCGsOMWhDCM4qiQuZdT
I/+xBNP/ghdho6FiN1pFD71NnAOkDpPNZ4XkuNAOhhKEUC+WLt0=
-----END RSA PRIVATE KEY-----
[root@master1 etcd]# cat healthcheck-client.crt
-----BEGIN CERTIFICATE-----
MIIC+zCCAeOgAwIBAgIIXmaLbxJ/6EQwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UE
AxMHZXRjZC1jYTAeFw0yMTA1MDMwODQ5MDRaFw0yMjA1MDMwODQ5MDVaMEAxFzAV
BgNVBAoTDnN5c3RlbTptYXN0ZXJzMSUwIwYDVQQDExxrdWJlLWV0Y2QtaGVhbHRo
Y2hlY2stY2xpZW50MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAtPxz
xbho/LLxIv3s7LhO/knKOlc/UEt9J/0FlrWhzQRMTUwOXGHgKYXZfYx9Vc0XxUsn
X4YHTNNuh5H6ps/QrVBkzbMRytGzcCmhr2Lymgim1QdSrD3s2A5Jhnhyv6ISXkf7
I4Vx52Gh21698r9KJkzXQoeHCFABdqjFZmdN1HETnGQJx4tSFWI/1gy1d/af3XTv
8OIFHtgO21LwE3PO51uvrhBmkH6EcKn6mikd+qqvERoOz7IrZk54NVivA7ykPJMV
bNuf/wxKJWv8MHRfJ0DAWkVNJxcqcS19gVoaenZMVh71wMK3VdQji7HJFayCe8ig
0MYspJj16P9xNpiPPwIDAQABoycwJTAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww
CgYIKwYBBQUHAwIwDQYJKoZIhvcNAQELBQADggEBABxcPcvwyE/mKtHt/sgt2veK
GwBy3rvqSIv7yXeh6A4j8cZ5AwNJn483Va6qkuJnc7+eFfydPZgPmCWJ5JDy0LAA
W8/Gy/Q9MJLW/QZ8TyeZ5Ny5jbwXGwF3a2H1hCmGDHm0bKOIg1kClTVFoxIn8RlM
0gkWGjxZ50/8613qwOIz2Lr1VxQr2TcjQCwN+j9ZP/7jeA304k8OnHH9uJnfHJ/3
duRkIGsgzTHiJM6s7dVpG56ay3Tr8vFO0j/XzlgwK+m6qqNayuQOrgXa3eZB0YOu
hsQpi0XBxs1/GIDCKYlVrmP3U2sWtSX/+CqZ8abv+/AIwQYWqFngM8IRLNDN2bo=
-----END CERTIFICATE-----
[root@master1 etcd]# cat healthcheck-client.key
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAtPxzxbho/LLxIv3s7LhO/knKOlc/UEt9J/0FlrWhzQRMTUwO
XGHgKYXZfYx9Vc0XxUsnX4YHTNNuh5H6ps/QrVBkzbMRytGzcCmhr2Lymgim1QdS
rD3s2A5Jhnhyv6ISXkf7I4Vx52Gh21698r9KJkzXQoeHCFABdqjFZmdN1HETnGQJ
x4tSFWI/1gy1d/af3XTv8OIFHtgO21LwE3PO51uvrhBmkH6EcKn6mikd+qqvERoO
z7IrZk54NVivA7ykPJMVbNuf/wxKJWv8MHRfJ0DAWkVNJxcqcS19gVoaenZMVh71
wMK3VdQji7HJFayCe8ig0MYspJj16P9xNpiPPwIDAQABAoIBAAEFk9m/6sfScs4R
xO6pM7j3za56o57ebjx1jzyElf9EUPH2xfX7j3psiQfObT64w7OXcwd1CEGEyBD3
4ARlE/aGh6spoaYVfP/bHFCTLG92MQru2aajSt0FZ6DcuTkfvx7NJTvUGwqFYJaO
eGAQeGiy8lwry7VeTkPPPB4R4zyZzGX5UoxI3qd+ffQ7kDCPocDgWxio6TZyjaLZ
Sj/felD9mth28IgToRJmVgH7ZZog81gxTv1pSofCcRtjT3zNH3n2kVR4zMFxXpfh
DwKdCnzt1Jv0Hs/PFvJPxK9MDw49S6lhN7tgUbhoON7wqvC0CSQretelwDjZWVRI
J1rtSgECgYEAz//rSUCAwYD2ynk/eqFRlhgloTCOquYipGCdojGbh5uj7AKrc8T8
Y+9fe/rbFVDOtYh6GZJghl7p5KTU0Td6NvwMs+gT4e7m3/hiz8jsW19G+W8AxYVG
y5OSm9fiyReZrw7DMVuz3fRcidivr+RBOcNvuUwI1PFjASJUNF/JGL8CgYEA3sCk
rArnUg7IjcTrYX+W+5TNQDXxMSbho6DQOFIeuHEibdOsN0fwCPXwREiJ0wYwwAHy
KrWDgFLUEDkMQVpd6QqG3rlGM4RMIr0wsBc8h0LpfbpUmw3RO00yaIkf5T6j8psp
LokAQKl2t1adKjvQaGeodqbse0NrzObaiZVTqYECgYBxb+NEGfeekNUHa8Tg/mXe
c+Dh3feQ4N33w/F0aZWnCY0GxBX5l28GmZ/7n74oC+AQRRRCKgCWh+ELn5GpYJY4
spHC9EkTqRUlBPPu2md9FaNBmfZTwvHvSNZmRAEdJs/cFzMBEkAwRnrJevGl/dhM
xneCGSOf7t3N2okN30dvRQKBgQC5248KpYZg50jbUUT8ctLtUzj2rIt0cXavaoyR
kaNkTbFmZck5zuIu99Xjg4rL8kxWyMjgbdctCO88If1hwh69RTVHPNugPHCyQ50O
MDUmvuPHLeNOBHdhvYWjx1Y/lsaAtInl9BWr3jnZu4EjLgk0M9lSNvD14ElgC/an
+Vp3AQKBgQCHsT/bi7f8efrRhwVXE3Vs1zULTesr9QRWz5EOd/l3AhawYt4ntrKG
7XYORKtvIHsN8/1Xxd8euG0fec0kKmnFLT/LRShG1AfcH87hw6mLfvVFuBhmpaPb
zr71f6PJ2oMFTXNutrVY1dg6Su2fQjF01eXYwTfrVtL3ShI6EmMN9g==
-----END RSA PRIVATE KEY-----
[root@master1 etcd]# cat peer.crt
-----BEGIN CERTIFICATE-----
MIIDFDCCAfygAwIBAgIICCVcX9SOfGUwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UE
AxMHZXRjZC1jYTAeFw0yMTA1MDMwODQ5MDRaFw0yMjA1MDMwODQ5MDRaMBIxEDAO
BgNVBAMTB21hc3RlcjEwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDV
LT+aoq/aHNPtcg5bRo1yewrTOSP8Y/q24E9bl+eNYx4wI/PmyQhPxoi3o61R598p
WOTDt6fkOLhSR6o+4ZDhDege+rHfeZ4afwCXkOO5hAanN7YedqJljiahuv5fPPGd
be1tRwRLXK8KcHtYI06wP97QFjoALshixKCJK53vjH9+KcTDqJDNt20FTSuUZB5z
QCvK2Hy8+CtwaxmJqDbjq1CX1q75HnvCENHQ0StZKJRhzu0S6fT2xxwo58YMcVUM
tSkVoW8if++BE2h8GB/UcuQFisc6f3Ps/fvrLrOUhWww32xeGFFGDtttyZ6qt5g8
XVImUUhH7iU2aI0rr2TjAgMBAAGjbjBsMA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUE
FjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwOwYDVR0RBDQwMoIHbWFzdGVyMYIJbG9j
YWxob3N0hwTAqEqAhwR/AAABhxAAAAAAAAAAAAAAAAAAAAABMA0GCSqGSIb3DQEB
CwUAA4IBAQCVo26/7HNrnQ3kEb8vr6ME4nHkswLJrIwqe0bemNePqBpZFdeQIImS
M/kWPDt9rgdhgAuABUz8UbIxxz6zOwfpMEAjMqu/EVcnxD4a26Bpma7S7BUaiVXH
GORLMJn6y54dI+Fs5gvDIKk69ueuHmd8nX/z2cMRgNToTVCGaRHqInaV75vZXaB+
g4hLl3HrCnU+toqcmJ0ENy/k6+HDZ6jl1FH8mrj7Xu8uwvHE39tkUodPG+y+AWBa
bEcoME1aMcld/I55jMq4gJuEzH2rUdCJw7u9+Lv4Jx3OlhZeoNXSH/uNjMxR/cv5
VMw8Ekta5MOQ9MtgaoTS8p26th0bFTsC
-----END CERTIFICATE-----
[root@master1 etcd]# cat peer.key
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEA1S0/mqKv2hzT7XIOW0aNcnsK0zkj/GP6tuBPW5fnjWMeMCPz
5skIT8aIt6OtUeffKVjkw7en5Di4UkeqPuGQ4Q3oHvqx33meGn8Al5DjuYQGpze2
HnaiZY4mobr+XzzxnW3tbUcES1yvCnB7WCNOsD/e0BY6AC7IYsSgiSud74x/finE
w6iQzbdtBU0rlGQec0Aryth8vPgrcGsZiag246tQl9au+R57whDR0NErWSiUYc7t
Eun09sccKOfGDHFVDLUpFaFvIn/vgRNofBgf1HLkBYrHOn9z7P376y6zlIVsMN9s
XhhRRg7bbcmeqreYPF1SJlFIR+4lNmiNK69k4wIDAQABAoIBAEuZ0na6v3awxo/s
5R6FtOAmtr4WA6ccpet5PWuUQbAouKoF9hegr+vq0s2dpHfprYDyX57xYP9VBjlX
5Q6L3F+UGP/zlGVWsjVfWQxne/ts0Rc4cMP4+rrdYOH2eQO5j05vj8Yza1h2tDUV
kwi87MkgvZo6Z7Ns4+/zH6PF7irnmP1hwkgwPZQ8aGys/SjDMGYo/r6SS872I4go
/I80AuE8OIxvkSt0M2McvtJxoy1BdMY5FTheJkTVQg1lQJsFVicLv11Qql02I38u
eI/+jjW/VDkJlp2QxVaAh0ZKyEJQNxBHbmEiZuomPovtZGvAPZoU1RkSw6UN5R0O
FpcIF9ECgYEA4e+q0D2+OJFsbvxqBpoDPBYs6Onjboz7sUBVAHVe2NvpB4Nw5nsd
tJwx5kGDeo0ABlACVzslmZlGUICnEk4KGiwDkuy8DKmHhzBV0M8Mi9VuhHXN+4aV
2b8Y8+wIP3XQR0WWX+gHfQsvwjmuYefkaCzJ/hIxTOavlsyRD2+40/8CgYEA8Yrw
5K9NY9bv8Fe8YATLXCtoBOprFURXeVjxzuIZGcU9sd1jwsaB+CBLxD8tuwf7gVF2
/IbF5FTOjGePnDxYGujHi4ev1eJfvPVoin9YNw1XUCi7R49DS3SQwBtTJQBeSldP
fvZPeqz4KO4vzwWFkHqFnQSYj66BZATehbfsnx0CgYABJB+9u4IZcQqWKOo0LFT1
2brSVlQSu92NkKCdRvp6p+muYwiP8XE990f9PLl4RfwJDCBm5mKTOwXy5CNz4TcF
2NEPzehJPBX2JdVZH6KVljdfreSjb5OULPXoTXnhMCwkIALZayeWhxbvqTDrR6uM
pyVCBj9/fu7GGTRmWo8ZawKBgQC8vjBs0lsr+Am4Cibl9Pkfxb9bj/4rOSMNbKZP
Xjf0/j6uXOwWiF2JEVuDN0c5zgwGyiyrOXkraeWYq1f54uGJ7Xn4GwgYnvLmyfFt
wAKjyiX/OkTVryoLrUNrCi8XS8liWAWDlV8X4k9sVGtBXvQ2qLb9sliwddEf4fos
DUO2NQKBgDI424TXg6VmGOVBcYyxc2t+Q155u675aC40CDIU/Olev8oIgcv9VSie
01BZNDLPoY+N5DdQxEf7Z6BtBeOrwax0KIu0YPsunef6q4aqUxRrkpZhwLoZOnFX
WcVxMBYbCgi305LRq8xYffkp2CbkRY8SPRJ2+xR4JMxQSZSgJICA
-----END RSA PRIVATE KEY-----
[root@master1 etcd]# cat server.crt
-----BEGIN CERTIFICATE-----
MIIDFDCCAfygAwIBAgIISEsLZ8pQBn8wDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UE
AxMHZXRjZC1jYTAeFw0yMTA1MDMwODQ5MDRaFw0yMjA1MDMwODQ5MDRaMBIxEDAO
BgNVBAMTB21hc3RlcjEwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDI
cZwVY1owRrx3Gw43HcxSj8eULupWx1U99+eS54BcFoyY57rwCyL3jbPXDmv1dili
RnLPeAJmbRgtY2q2/2BHAp3c8Uz2y+OK2pSdo5giWm/Xio+++S3I7hanhg68HzDx
o/oizCSZ40DPyRE9pMkgsIRmxLQFnLENuby5w8/cTyM3XdfsZl7mPnRvlnEqvSKS
U0EH722VV+nzagath1xcW+q6YW34g8PRkP1x/U5h1vUch+ZcmarR5oP8NBYaZ+P5
vUp54m8MYUMzIae2BA8LaHrTR3YGDaJiQvUq1tWkmc3Lu8Yn53ciYIKjZPmt4gLa
D4z/g9h6Glhf4bC7ufudAgMBAAGjbjBsMA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUE
FjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwOwYDVR0RBDQwMoIHbWFzdGVyMYIJbG9j
YWxob3N0hwTAqEqAhwR/AAABhxAAAAAAAAAAAAAAAAAAAAABMA0GCSqGSIb3DQEB
CwUAA4IBAQBV723E89RJpLH3QLNgiNU3AEXW+qRJ+1wMOdF33X9ExEhFNPqofjSx
h+timy22AJ9xFKSKIXL/ZEhSff4u3yg0xpEtpmx/lPquN4w27d16H2qPk72Tbl9/
gwoKyl0qDYGkhOsnf4L3oFVg3c8xynn2VTZl4p5VnRJwPETJoKVME7YRMz2xNvvs
gwkqm/u3ktH0hrdS3jUtzkgZLBD7iSFHrMUV/jetLKliTBIP5v1ZStQPprgz55Db
mE107Q2npubxuHF0cPLcPWd5K0igMSgzujZIf9Pe1bO9blxjhbDQHmfvzEpHygcj
zXMi2XoIsawXlcaMMQNbCqLbeIeKDzON
-----END CERTIFICATE-----
[root@master1 etcd]# cat server.key
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAyHGcFWNaMEa8dxsONx3MUo/HlC7qVsdVPffnkueAXBaMmOe6
8Asi942z1w5r9XYpYkZyz3gCZm0YLWNqtv9gRwKd3PFM9svjitqUnaOYIlpv14qP
vvktyO4Wp4YOvB8w8aP6IswkmeNAz8kRPaTJILCEZsS0BZyxDbm8ucPP3E8jN13X
7GZe5j50b5ZxKr0iklNBB+9tlVfp82oGrYdcXFvqumFt+IPD0ZD9cf1OYdb1HIfm
XJmq0eaD/DQWGmfj+b1KeeJvDGFDMyGntgQPC2h600d2Bg2iYkL1KtbVpJnNy7vG
J+d3ImCCo2T5reIC2g+M/4PYehpYX+Gwu7n7nQIDAQABAoIBAAEph3ooRVGaV2Vp
Zr+zEIg6BTI6w2kVZs0hLtqPNRNTniUU0uSpa957l9tbXgziToMfXXMOgxUM9OLu
fKPq/yfqP/gT/hpAPGWFtu7jD/LDC3r4drToxPcxSjhWcqdsluAPz1d8T4oE409R
HyR4XCIwY9Qkt9aAfhZSSWHaXM4uNKju2fMIr8dEf6F7iazqU+ziYwpLC5QzMWd5
BCIvT95eSNzaBx5kunFFRUSEGh1e2lrWLEP6jD+xULUI6hWKSLQBV9EPwVzwax9f
TAS3/U0VAcpDSat3LXBheCSgd6btXv1BOTlGNAV9vEWjisKdSF1GsUpo3ex5OxxU
EHgE7wECgYEA7J5hbe24MbJPF4yJsjC8lSkqgQGk85l9JCeu8LuG7DXEuqIr6IlW
rwEveTvY42viYTf5LqikKaQ5dlRKdBi2PL7UDTnf2A20ZOgUMoF7f376GsCsuep+
MOKcdB7Bft9lOdf6Lo2P+66yPrDDF+ylQGdIY/2kW7F+O5TYMjh0LzECgYEA2Nyv
BQT9H8KRyglWEjVIvnJhY/IF2M7fFNeS7hAEcban39WWjTwUQ/j+y8QKZJRK/otG
kIbyqBcQfUBrkuEb7nIfQQUDprIFwMrFHESyUNkGClU/KLyseeFlej7fQZ7KMSwz
y9CfXgQDTyX1Jl2A0OSXntVNvz+/bJWyub2csC0CgYEArdNYPdqiMxgL1H/w9A+r
qmR4jhc4J6C9dx8T/FO3NbX2VSkn2odyP9Q+HPDjT4cE4mitTSKknta/Q/d+TrWM
wylpPGIk2GKRAIQhuky2/h24/IhJG7dxhtYjG4cwnNTeV1UbvLFQchOPbFCMsfmu
GJcHbjV6VcYZtwmMnbAtYjECgYA7gKHNIMdLNZnG87TYHiKtjrjGMZwFFw4Cq/u2
slJl2RZKxlIewoNU+zb+NfYcDsxc914PPdfK4zk1BL3/eSCu1kVZE8Uisen+MiTP
UtISeNm9cBJ6XPp+Hqg3WJTtbmJQB67Wl5GCvFskFmgjdLhpmK85d5Fzjkw5wQFf
EXWyqQKBgHHgW02OikysXXwtW9rb+xAzP7gfI2jFSsqV5DeLnjhFd952JIn6WJa5
Ercujh9UAR4Gy7lBWWYsp/2hR1OX5hk4WsPmH7/iGG0zdvYKDEnm8Nr/4wPd7YU0
TYQ0ayrqvptmyZzo5687Gie9TW0NLi3pRzcZ8o3mPcYvluGgBq8z
-----END RSA PRIVATE KEY-----

高可用kubeadm安装集群推荐博客:https://www.cnblogs.com/lfl17718347843/p/13417304.html

六、高可用安装

6.1、准备工作

主机初始化参考1.1和1.2,并按照2.1和2.2安装docker、kubelet、kubeadm、kubectl。

另外增加如下配置:

1、开机加载lvs模块
[root@master1 ~]# vi /etc/sysconfig/modules/ipvs.modules
#!/bin/sh
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
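注:上面只是写入了模块加载脚本,还需要赋予执行权限并执行一次使其立即生效(示意命令):

chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack   # 确认相关模块已载入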
[root@master1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.101 master1 www.mt.com
192.168.56.102 master2
192.168.56.103 master3

2、安装keepalived和haproxy-所有master都安装
yum install keepalived haproxy -y
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
    router_id k8s
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER              # 除了第一个节点为MASTER,其他节点为BACKUP
    interface enp0s8          # 注意替换为本机实际网卡名
    virtual_router_id 51
    priority 250              # 不同的节点设置不同的优先级
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        192.168.56.211        # VIP地址,注意替换
    }
    track_script {
        check_haproxy
    }
}
EOF

systemctl enable keepalived; systemctl start keepalived; systemctl status keepalived

尝试停掉k8s-master-01的keepalived服务,查看vip是否能漂移到其他的master;再重新启动k8s-master-01的keepalived服务,查看vip是否能正常漂移回来,证明配置没有问题(参考下面的示意命令)。
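
VIP漂移验证的示意命令(enp0s8为示例网卡名,按实际环境替换):

ip addr show enp0s8 | grep 192.168.56.211   # 确认VIP当前在本机
systemctl stop keepalived                   # 停掉后到其他master上观察VIP是否漂移过去
systemctl start keepalived                  # 恢复后观察VIP是否漂回本机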
3、安装haproxy
cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server      master1   192.168.56.101:6443 check
    server      master2   192.168.56.102:6443 check
    server      master3   192.168.56.103:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
EOF
systemctl start haproxy;systemctl enable haproxy;systemctl status haproxy  #三个节点配置文件一样
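
补充:可以确认16443已开始监听,并通过stats页面观察三个后端apiserver的状态(示意命令,密码为上面配置中的示例值):

ss -lntp | grep -E '16443|1080'
curl -u admin:awesomePassword 'http://127.0.0.1:1080/admin?stats' | head -5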

6.2、初始化

1、master1上进行初始化
[root@master1 ~]# kubeadm config  print init-defaults > kubeadm-config.yaml #然后修改kubeadm-config.yaml
[root@master1 ~]# cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.56.101
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1
---
apiServer:
  certSANs:
  - master1
  - master2
  - master3
  - reg.mt.com
  - 192.168.56.101
  - 192.168.56.102
  - 192.168.56.103
  - 192.168.56.211
  - 127.0.0.1
  - www.mt.com
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
controlPlaneEndpoint: "192.168.56.211:16443"
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"

[root@master1 ~]# kubeadm reset   #用于init失败后清理现场

[root@master1 ~]# kubeadm config images pull --config ./kubeadm-config.yaml  #镜像可以提前下载

[root@master1 ~]# kubeadm init --config kubeadm-config.yaml
W0507 18:29:39.557236   29338 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1 master2 master3 reg.mt.com www.mt.com] and IPs [10.96.0.1 192.168.56.101 192.168.56.211 192.168.56.101 192.168.56.102 192.168.56.103 192.168.56.211 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master1 localhost] and IPs [192.168.56.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master1 localhost] and IPs [192.168.56.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0507 18:29:42.986846   29338 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0507 18:29:42.991544   29338 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0507 18:29:42.992043   29338 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.055860 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.56.211:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:62d2014ce2211235ac0bcd8195d76f9ac6e15ce27bbb229f8776dea9f2e5d364 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join 192.168.56.211:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:62d2014ce2211235ac0bcd8195d76f9ac6e15ce27bbb229f8776dea9f2e5d364
安装网络
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl get cs验证cs状态,kubectl get pods --all-namespaces查看集群pod是否被正常调度。

验证:
[root@master1 ~]# kubectl  get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@master1 ~]# kubectl  get pods -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-7ff77c879f-h6ptq          1/1     Running   0          10m
kube-system   coredns-7ff77c879f-pw97s          1/1     Running   0          10m
kube-system   etcd-master1                      1/1     Running   0          11m
kube-system   kube-apiserver-master1            1/1     Running   0          11m
kube-system   kube-controller-manager-master1   1/1     Running   0          11m
kube-system   kube-flannel-ds-m8s9l             1/1     Running   0          80s
kube-system   kube-proxy-wfrvj                  1/1     Running   0          10m
kube-system   kube-scheduler-master1            1/1     Running   0          11m
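
补充:kube-proxy配置了ipvs模式,可以用ipvsadm确认转发规则已经生成(示意命令,ipvsadm需要单独安装):

yum install -y ipvsadm
ipvsadm -Ln | head   # 能看到10.96.0.1等Service VIP的转发条目即说明ipvs模式生效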

6.3、join

# 1、证书拷贝
[root@master1 pki]# ssh root@master2  mkdir /etc/kubernetes/pki/etcd -p
[root@master1 ~]# scp /etc/kubernetes/admin.conf root@master2:/etc/kubernetes/
admin.conf                                                                                                                                             100% 5455     5.6MB/s   00:00
[root@master1 ~]# scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@master2:/etc/kubernetes/pki
ca.crt                                                                                                                                                 100% 1025   868.7KB/s   00:00
ca.key                                                                                                                                                 100% 1679     2.1MB/s   00:00
sa.key                                                                                                                                                 100% 1675     1.3MB/s   00:00
sa.pub                                                                                                                                                 100%  451   518.4KB/s   00:00
front-proxy-ca.crt                                                                                                                                     100% 1038     1.7MB/s   00:00
front-proxy-ca.key                                                                                                                                     100% 1679     3.1MB/s   00:00
[root@master1 ~]# scp /etc/kubernetes/pki/etcd/ca.* root@master2:/etc/kubernetes/pki/etcd/
ca.crt                                                                                                                                                 100% 1017     1.1MB/s   00:00
ca.key                                                                                                                                                 100% 1675     2.8MB/s   00:00
# 2、master2-join操作
[root@master1 ~]# kubeadm token create --print-join-command  #在master1上执行,获取加入集群命令
W0508 10:05:18.251587   12237 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 192.168.56.211:16443 --token mb0joe.3dh8xri1xxdf2c89     --discovery-token-ca-cert-hash sha256:62d2014ce2211235ac0bcd8195d76f9ac6e15ce27bbb229f8776dea9f2e5d364
需要带上参数--control-plane,表示把master控制节点加入集群。

[root@master2 ~]# kubeadm join 192.168.56.211:16443 --token mb0joe.3dh8xri1xxdf2c89 --discovery-token-ca-cert-hash sha256:62d2014ce2211235ac0bcd8195d76f9ac6e15ce27bbb229f8776dea9f2e5d364 --control-plane
#注意:该命令缺少参数--apiserver-advertise-address。我本机环境为双网卡,一个nat一个桥接,k8s内部通信为192.168.56网段
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1 master2 master3 reg.mt.com www.mt.com] and IPs [10.96.0.1 10.0.2.15 192.168.56.211 192.168.56.101 192.168.56.102 192.168.56.103 192.168.56.211 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master2 localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master2 localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0508 10:07:34.128720   30961 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0508 10:07:34.132698   30961 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0508 10:07:34.133621   30961 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2021-05-08T10:07:48.851+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://10.0.2.15:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node master2 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

# 3. Join master3
[root@master3 ~]# kubeadm join 192.168.56.211:16443 --token mb0joe.3dh8xri1xxdf2c89     --discovery-token-ca-cert-hash sha256:62d2014ce2211235ac0bcd8195d76f9ac6e15ce27bbb229f8776dea9f2e5d3 --control-plane --apiserver-advertise-address 192.168.56.103
...
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0508 10:10:19.785375   30434 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[check-etcd] Checking that the etcd cluster is healthy
error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://10.0.2.15:2379 with maintenance client: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
The join fails at this point because the etcd cluster is reported unhealthy. Check the kubeadm config on master1:

[root@master1 ~]# kubectl describe configmaps kubeadm-config -n kube-system  # check on master1
Name:         kubeadm-config
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
ClusterConfiguration:
----
apiServer:
  certSANs:
  - master1
  - master2
  - master3
  - reg.mt.com
  - 192.168.56.101
  - 192.168.56.102
  - 192.168.56.103
  - 192.168.56.211
  - 127.0.0.1
  - www.mt.com
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.56.211:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}

ClusterStatus:
----
apiEndpoints:
  master1:
    advertiseAddress: 192.168.56.101   # entries exist for master1 and master2, but none for master3
    bindPort: 6443
  master2:
    advertiseAddress: 10.0.2.15        # master2 registered the wrong (NAT) interface
    bindPort: 6443
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterStatus

Events:  <none>

[root@master1 ~]# kubectl exec -it etcd-master1  -n kube-system /bin/sh
# export ETCDCTL_API=3
# alias etcdctl='etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key'
# etcdctl member list
181ffe2d394f445b, started, master1, https://192.168.56.101:2380, https://192.168.56.101:2379, false
dfa99d8cfc0ee00f, started, master2, https://10.0.2.15:2380, https://10.0.2.15:2379, false

The problem: master2's etcd member should have registered with IP 192.168.56.102. master1-3 all have multiple NICs, and master2 registered the other (NAT) interface.
Fix:
1. On master1, remove the node: kubectl delete node master2
2. On master2, add "--apiserver-advertise-address 192.168.56.102" to the init/join parameters and re-run it, deleting whatever leftover files and directories the preflight errors point at (a sketch follows this list)
3. Verify that the etcd cluster is healthy
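
A sketch of step 2 on master2, assuming the shared certificates from 6.3 are still in place; the token and hash are the ones printed by kubeadm token create above. If the stale member still shows up in etcdctl member list, it may also need an explicit etcdctl member remove <ID> first:

[root@master2 ~]# kubeadm reset -f   # wipe the failed join, including /etc/kubernetes/manifests and /var/lib/etcd leftovers
[root@master2 ~]# kubeadm join 192.168.56.211:16443 --token mb0joe.3dh8xri1xxdf2c89 \
    --discovery-token-ca-cert-hash sha256:62d2014ce2211235ac0bcd8195d76f9ac6e15ce27bbb229f8776dea9f2e5d3 \
    --control-plane --apiserver-advertise-address 192.168.56.102   # force etcd/apiserver onto the 192.168.56 NIC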
# export ETCDCTL_API=3
# alias etcdctl='etcdctl --endpoints=https://192.168.56.101:2379,https://192.168.56.102:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key'
# etcdctl member list
181ffe2d394f445b, started, master1, https://192.168.56.101:2380, https://192.168.56.101:2379, false
cc36adc51c559f97, started, master2, https://192.168.56.102:2380, https://192.168.56.102:2379, false
# etcdctl endpoint health
https://192.168.56.101:2379 is healthy: successfully committed proposal: took = 9.486454ms
https://192.168.56.102:2379 is healthy: successfully committed proposal: took = 9.691437ms
# etcdctl endpoint status
https://192.168.56.101:2379, 181ffe2d394f445b, 3.4.3, 2.6 MB, true, false, 15, 18882, 18882,
https://192.168.56.102:2379, cc36adc51c559f97, 3.4.3, 2.6 MB, false, false, 15, 18882, 18882,

Note: master3 is handled the same way as master2; only one master's output is listed here.

  • Scaling the cluster down later (see the sketch after this list):

    • on a master: kubectl drain $nodename --delete-local-data --force --ignore-daemonsets; kubectl delete node $nodename
    • on the node being removed: kubeadm reset
  • Scaling the cluster up later:
    • kubeadm token create --print-join-command
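
A concrete run of the scale-down/scale-up flow above, assuming the node being removed is a hypothetical node01:

# On a master: evict workloads, then remove the node object
kubectl drain node01 --delete-local-data --force --ignore-daemonsets
kubectl delete node node01

# On node01 itself: wipe the kubeadm-generated state so the machine can re-join later
kubeadm reset -f

# Scaling up again later: print a fresh join command on a master and run it on the new node
kubeadm token create --print-join-command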

6.4、Verification

# 1. Create and test a pod
[root@master1 ~]# kubectl  create deployment nginx --image=nginx
[root@master1 ~]# kubectl scale deployment nginx --replicas=3
[root@master1 ~]# kubectl  get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
nginx-f89759699-5p4dr   1/1     Running   0          24m   10.244.2.2   master3   <none>           <none>
nginx-f89759699-czvpd   1/1     Running   0          20m   10.244.0.8   master1   <none>           <none>
nginx-f89759699-dc5p9   1/1     Running   0          23m   10.244.1.4   master2   <none>           <none>
Problem: pinging the nginx pods on master2/master3 from master1 fails. The cause is that flannel was never told which NIC to use for inter-node traffic:

[root@master1 ~]# ps -ef |grep flannel
root      6715  6685  0 14:38 ?        00:00:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
[root@master1 ~]# kubectl  edit ds/kube-flannel-ds -n kube-system
...
  containers:
  - args:
    - --ip-masq
    - --kube-subnet-mgr
    - --iface=enp0s8
    command:
    - /opt/bin/flanneld
...
After adding --iface=enp0s8 the pods on other nodes can be pinged.

# 2. Test a Service
[root@master1 ~]# kubectl  expose deployment  nginx --port=80 --type=NodePort
[root@master1 ~]# kubectl  get svc/nginx
NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.106.130.76   <none>        80:31254/TCP   38
[root@master1 ~]# curl  10.106.130.76:80 -Is
HTTP/1.1 200 OK
Server: nginx/1.19.10
Date: Sat, 08 May 2021 07:18:40 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
Connection: keep-alive
ETag: "6075b537-264"
Accept-Ranges: bytes
[root@master1 ~]# ipvsadm -Ln -t 10.106.130.76:80  # the same on every other node
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.106.130.76:80 rr
  -> 10.244.0.8:80                Masq    1      0          1
  -> 10.244.1.4:80                Masq    1      0          1
  -> 10.244.2.2:80                Masq    1      0          1

# 3. Test DNS
[root@master1 ~]# kubectl  get svc/kube-dns -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)           AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,9153/TCP   21h
[root@master1 ~]# echo "nameserver 10.96.0.10" >> /etc/resolv.conf
[root@master1 ~]# ping nginx.default.svc.cluster.local  -c1
PING nginx.default.svc.cluster.local (10.106.130.76) 56(84) bytes of data.
64 bytes from nginx.default.svc.cluster.local (10.106.130.76): icmp_seq=1 ttl=64 time=0.019 ms

--- nginx.default.svc.cluster.local ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms
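
Instead of pointing the node's /etc/resolv.conf at the cluster DNS, resolution can also be verified from inside a pod. A sketch with a throwaway busybox pod (the busybox:1.28 tag is an assumption; nslookup in newer busybox images is known to be flaky):

# Run a one-shot pod, resolve the nginx service through CoreDNS, then clean up
kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it -- nslookup nginx.default.svc.cluster.local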


七、Troubleshooting notes

Problem 1: after a pod behind a Service is deleted, the IPVS backend IP does not change


> Before the fix: for the kube-dns svc, IPVS still points at 10.244.0.5 (an IP that is also wrong)
[root@master1 ~]# kubectl  get svc/kube-dns -n kube-system
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   4d17h
[root@master1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.56.101:6443          Masq    1      1          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.5:53                Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.0.5:9153              Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.244.0.5:53                Masq    1      0          0
[root@master1 ~]# kubectl  get pods -n kube-system -o wide   |grep dns
coredns-7ff77c879f-nnbm7          1/1     Running   0          5m2s    10.244.1.9       master2   <none>           <none>
[root@master1 ~]# kubectl delete pod/coredns-7ff77c879f-nnbm7 -n kube-system
pod "coredns-7ff77c879f-nnbm7" deleted
[root@master1 ~]# kubectl  get pods -n kube-system -o wide   |grep dns
coredns-7ff77c879f-d976l          1/1     Running   0          19s     10.244.2.10      master3   <none>           <none>
[root@master1 ~]# ipvsadm -Ln  # the backend IP still has not changed
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.56.101:6443          Masq    1      1          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.5:53                Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.0.5:9153              Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.244.0.5:53                Masq    1      0          0

The cluster was installed with kubeadm; deleting the kube-proxy pods did not help either.

[root@master1 ~]# kubectl logs kube-proxy-fhxsj -n kube-system  # kube-proxy errors:
...
E0513 02:10:16.362685       1 proxier.go:1950] Failed to list IPVS destinations, error: parseIP Error ip=[10 244 0 5 0 0 0 0 0 0 0 0 0 0 0 0]
E0513 02:10:16.362698       1 proxier.go:1192] Failed to sync endpoint for service: 10.96.0.10:9153/TCP, err: parseIP Error ip=[10 244 0 5 0 0 0 0 0 0 0 0 0 0 0 0]

[root@master1 ~]# kubectl  logs kube-apiserver-master1 -n kube-system  # apiserver errors:
E0513 01:12:19.425042       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I0513 01:12:20.278922       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0513 01:12:20.278948       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0513 01:12:20.292422       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0513 01:12:35.872070       1 controller.go:606] quota admission added evaluator for: endpoints
I0513 01:12:49.649488       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0513 01:56:44.648374       1 log.go:172] http: TLS handshake error from 10.0.2.15:23596: tls: first record does not look like a TLS handshake
I0513 01:57:55.036379       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io

Impact: existing pods keep working as long as they are not deleted, and newly created pods are fine too. The problem only shows up after a pod behind a Service is deleted: IPVS is not updated, so the Service becomes unreachable.
Temporary workaround: delete the Service (its IPVS rules are cleaned up with it) and recreate it.
Root cause: the IPVS interface Kubernetes uses here is relatively new and needs kernel support, so the kernel must be upgraded.

1) Current environment
[root@master1 ~]# uname -r  # kernel and ip_vs versions of this test environment
3.10.0-1160.el7.x86_64
[root@master1 ~]# modinfo ip_vs
filename:       /lib/modules/3.10.0-1160.el7.x86_64/kernel/net/netfilter/ipvs/ip_vs.ko.xz
license:        GPL
retpoline:      Y
rhelversion:    7.9
srcversion:     7C6456F1C909656E6093A8F
depends:        nf_conntrack,libcrc32c
intree:         Y
vermagic:       3.10.0-1160.el7.x86_64 SMP mod_unload modversions
signer:         CentOS Linux kernel signing key
sig_key:        E1:FD:B0:E2:A7:E8:61:A1:D1:CA:80:A2:3D:CF:0D:BA:3A:A4:AD:F5
sig_hashalgo:   sha256
parm:           conn_tab_bits:Set connections' hash size (int)

2) Update the kernel
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
#Note: if a repo file has enabled=1 and gpgcheck=1 but the gpgkey file does not exist, the kernel packages will not appear in 'yum repolist'; try setting gpgcheck=0
yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum install kernel-lt  --enablerepo=elrepo-kernel
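
To see which kernel versions ELRepo currently offers before choosing between kernel-lt and kernel-ml (the two branches are explained below), a quick check against just that repo:

# List installable kernels from the elrepo-kernel repo only (repo id comes from the elrepo-release RPM)
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available | grep ^kernel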
[root@master1 ~]# cat /boot/grub2/grub.cfg | grep CentOS
menuentry 'CentOS Linux (5.4.118-1.el7.elrepo.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-1160.el7.x86_64-advanced-f5f7e6ec-62d1-4002-a6d2-f3864485cf02' {
menuentry 'CentOS Linux (3.10.0-1160.el7.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-1160.el7.x86_64-advanced-f5f7e6ec-62d1-4002-a6d2-f3864485cf02' {
menuentry 'CentOS Linux (0-rescue-7042e3de62955144a939ec93d90c2cd7) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-0-rescue-7042e3de62955144a939ec93d90c2cd7-advanced-f5f7e6ec-62d1-4002-a6d2-f3864485cf02' {
[root@master1 ~]#  grub2-set-default  'CentOS Linux (5.4.118-1.el7.elrepo.x86_64) 7 (Core)'

ELRepo offers two kinds of kernel packages, kernel-lt and kernel-ml. The kernel-ml packages are built from the sources of the mainline stable branch of the Linux Kernel Archives; the kernel config is based on the default RHEL-7 config, with added features enabled as needed. They are deliberately named kernel-ml so they do not conflict with the RHEL-7 kernel and can therefore be installed and updated alongside it. The kernel-lt packages are built from the Linux Kernel Archives sources in the same way; the difference is that kernel-lt tracks a long-term support branch while kernel-ml tracks the mainline stable branch.

3) Verify
reboot
uname -r      # confirms the kernel has been updated
ipvsadm -Ln   # the IPVS output has changed as well
Perform the same update on the other nodes.

Problem 2: how to change the kube-proxy mode (the default is iptables)

[root@master1 ~]# kubectl  get cm/kube-proxy -n kube-system -o yaml |grep -i mode  # edit the kube-proxy ConfigMap, then delete the kube-proxy pods so they are recreated (sketch below)
    detectLocalMode: ""
    mode: ipvs
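
Put together, a sketch of the whole switch; kube-proxy only reads its ConfigMap at startup, so the pods must be recreated (the k8s-app=kube-proxy label is what kubeadm puts on these pods):

kubectl edit cm/kube-proxy -n kube-system                    # set: mode: "ipvs"
kubectl delete pods -n kube-system -l k8s-app=kube-proxy     # the DaemonSet recreates them
kubectl logs -n kube-system -l k8s-app=kube-proxy | grep -i proxier   # expect something like "Using ipvs Proxier"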

Problem 3: kubelet keeps logging "Container runtime network not ready" and the master stays "NotReady" -- hit when installing in an intranet environment (no yum access)

[root@iZvy205vkg85k96l33i7lrZ kubernetes]# journalctl -xe -u kubelet -l
...
May 13 18:33:32 iZvy205vkg85k96l33i7lrZ kubelet[20719]: : [failed to find plugin "flannel" in path [/opt/cni/bin] failed to find plugin "por
May 13 18:33:32 iZvy205vkg85k96l33i7lrZ kubelet[20719]: W0513 18:33:32.703337   20719 cni.go:237] Unable to update cni config: no valid netw
May 13 18:33:33 iZvy205vkg85k96l33i7lrZ kubelet[20719]: E0513 18:33:33.977912   20719 kubelet.go:2187] Container runtime network not ready:
原因为 "kubernetes-cni-0.8.7-0.x86_64" 没有安装[root@master1 ~]# yum deplist kubeadm kubelet kubectl
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile* base: mirrors.163.com* epel: mirrors.bfsu.edu.cn* extras: mirrors.163.com* updates: mirrors.163.com
软件包:kubeadm.x86_64 1.21.0-0依赖:cri-tools >= 1.13.0provider: cri-tools.x86_64 1.13.0-0依赖:kubectl >= 1.13.0provider: kubectl.x86_64 1.21.0-0依赖:kubelet >= 1.13.0provider: kubelet.x86_64 1.21.0-0依赖:kubernetes-cni >= 0.8.6provider: kubernetes-cni.x86_64 0.8.7-0
软件包:kubectl.x86_64 1.21.0-0此软件包无依赖关系
软件包:kubelet.x86_64 1.21.0-0依赖:conntrackprovider: conntrack-tools.x86_64 1.4.4-7.el7依赖:ebtablesprovider: ebtables.x86_64 2.0.10-16.el7依赖:ethtoolprovider: ethtool.x86_64 2:4.8-10.el7依赖:iprouteprovider: iproute.x86_64 4.11.0-30.el7依赖:iptables >= 1.4.21provider: iptables.x86_64 1.4.21-35.el7provider: iptables.i686 1.4.21-35.el7依赖:kubernetes-cni >= 0.8.7provider: kubernetes-cni.x86_64 0.8.7-0依赖:libc.so.6(GLIBC_2.2.5)(64bit)provider: glibc.x86_64 2.17-324.el7_9依赖:libdl.so.2()(64bit)provider: glibc.x86_64 2.17-324.el7_9依赖:libdl.so.2(GLIBC_2.2.5)(64bit)provider: glibc.x86_64 2.17-324.el7_9依赖:libpthread.so.0()(64bit)provider: glibc.x86_64 2.17-324.el7_9依赖:libpthread.so.0(GLIBC_2.2.5)(64bit)provider: glibc.x86_64 2.17-324.el7_9依赖:libpthread.so.0(GLIBC_2.3.2)(64bit)provider: glibc.x86_64 2.17-324.el7_9依赖:rtld(GNU_HASH)provider: glibc.x86_64 2.17-324.el7_9provider: glibc.i686 2.17-324.el7_9依赖:socatprovider: socat.x86_64 1.7.3.2-2.el7依赖:util-linuxprovider: util-linux.x86_64 2.23.2-65.el7_9.1provider: util-linux.i686 2.23.2-65.el7_9.1
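
For the air-gapped scenario this problem came from, the packages and all their dependencies (including kubernetes-cni) can be fetched on an internet-connected CentOS 7 machine and carried over. A sketch, assuming the kubernetes yum repo from 2.2 is configured there:

yum install -y yum-utils    # provides yumdownloader
yumdownloader --resolve --destdir=/tmp/k8s-rpms kubeadm-1.18.0 kubelet-1.18.0 kubectl-1.18.0
# Copy /tmp/k8s-rpms to the offline host, then install everything locally:
yum localinstall -y /tmp/k8s-rpms/*.rpm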

Problem 4: "OCI runtime create failed: systemd cgroup flag passed, but systemd support for managing cgroups is not available: unknown"

Remove the "exec-opts": ["native.cgroupdriver=systemd"] entry from /etc/docker/daemon.json; a sketch follows.
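
A sketch of that edit, assuming the entry sits on its own line and is not the last key in the file (JSON has no comment syntax, so the line is deleted rather than commented out):

sed -i '/native.cgroupdriver=systemd/d' /etc/docker/daemon.json
systemctl restart docker
docker info | grep -i 'cgroup driver'   # should now report cgroupfs, matching the kubelet default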

Problem 5: image pull and push failures

[root@master1 test]# docker push www.mt.com:9500/google_containers/flannel:v0.14.0-rc1
The push refers to repository [www.mt.com:9500/google_containers/flannel]
e4c4fde0a196: Layer already exists
8a984b390686: Layer already exists
71b519fcb2d2: Layer already exists
ae1d52bdc861: Layer already exists
2e16188127c8: Layer already exists
815dff9e0b57: Layer already exists
777b2c648970: Layer already exists
v0.14.0-rc1: digest: sha256:124e34e6e2423ba8c839cd34806f125d4daed6d4d98e034a01975f0fd4229b2f size: 1785
Signing and pushing trust metadata
Error: error contacting notary server: tls: oversized record received with length 20527

Error 2:
[root@node01 ~]# docker push www.mt.com:9500/google_containers/flannel:v0.14.0-rc1
Error: error contacting notary server: tls: first record does not look like a TLS handshake

Possible causes (a check sketch follows this list):

  • whether DOCKER_CONTENT_TRUST is enabled (content trust makes docker push contact a notary server)
  • the HTTP-PROXY / HTTPS-PROXY values shown by docker info
  • settings in ~/.docker/config.json
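
A sketch for checking the three causes in order:

env | grep DOCKER_CONTENT_TRUST           # if set to 1, docker push contacts a notary server
export DOCKER_CONTENT_TRUST=0             # disable content trust for the current shell
docker info 2>/dev/null | grep -i proxy   # HTTP/HTTPS proxy picked up by the daemon
cat ~/.docker/config.json                 # per-user client config (proxies, auth) that can interfere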
