Table of contents

  • Overview
    • How high availability works
    • K8S multi-master architecture diagram
    • Test environment
  • Deploying high availability
    • Package preparation [optional]
    • HA architecture explained
    • Configure haproxy
      • Notes [must read]
      • Install haproxy
      • Edit the configuration file
    • Configure etcd
      • Install etcd
      • Edit the configuration file
    • master and worker configuration [cluster setup]
      • Environment setup [on both masters and the worker]
      • Install docker-ce [on both masters and the worker]
      • Install kubelet [on both masters and the worker]
    • kubeadm initialization [on one master only]
      • Prepare the config file
      • Initialize the cluster
        • Image preparation [required for offline environments]
        • Initialize the cluster and set up the environment
      • Worker joins the cluster
      • Master joins the cluster
        • Notes
        • Image preparation
        • Prepare the pki certificates
        • Initialize the cluster and set up the environment
      • Install calico [all master and worker nodes need the images]
        • Troubleshooting calico installation errors
      • kubectl subcommand tab completion
  • Verifying high availability
    • Data consistency and sync verification
    • High availability verification
  • Client access to the HA cluster through haproxy

Overview

How high availability works

  • High availability for a K8S cluster means that when the primary master node goes down, the kubelet on each node can still reach the apiserver and the other components on another master node and keep working.

    • For example:
      Set up an additional master node, then install nginx on every node. nginx load-balances the requests that each node sends to the master's kube-apiserver and proxies them to the two k8s-master nodes. This gives the master layer high availability: when either master node goes down, nginx forwards the requests to the other master node. For kube-scheduler and kube-controller-manager, high availability is achieved by setting the leader-elect parameter in the configuration on both masters.
  • The Kubernetes control-plane services include kube-scheduler and kube-controller-manager. kube-scheduler and kube-controller-manager use an active-standby high-availability scheme: at any moment only one instance is allowed to do the actual work. Kubernetes implements a simple leader-election mechanism that relies on etcd to elect the leader for scheduler and controller-manager. If scheduler and controller-manager are started with the leader-elect parameter, they first try to acquire the leader identity on startup, and only after becoming the leader do they execute their business logic. Each creates a kube-scheduler or kube-controller-manager endpoint in etcd; the endpoint records the current leader and the time of the last update. The leader periodically refreshes the endpoint to keep its leadership. Every standby instance periodically checks the endpoint, and if it has not been updated within the expected window, it tries to take over as leader. The scheduler and controller-manager instances never talk to each other; etcd's strong consistency guarantees that the leader is globally unique even under distributed, highly concurrent conditions. (A quick way to look at this on a running cluster is shown below.)
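  • A quick way to see this on a running kubeadm cluster (a hedged look, assuming the default kubeadm layout; on recent versions the leader lock is usually stored as a Lease object rather than an Endpoints annotation):

# the flag is already set in the static pod manifests generated by kubeadm
grep leader-elect /etc/kubernetes/manifests/kube-scheduler.yaml
grep leader-elect /etc/kubernetes/manifests/kube-controller-manager.yaml
# the current leader is recorded in kube-system (a Lease object on 1.20+, an Endpoints annotation on older versions)
kubectl -n kube-system get lease kube-scheduler kube-controller-manager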

K8S multi-master architecture diagram

  • Starting from the single-master setup configured earlier, add another master node and then build a load-balancing layer: install nginx on the load-balancer machine and configure layer-4 forwarding (layer 7 would require certificates for verification, so layer 4 is simpler to deploy). nginx load-balances the requests that the nodes send to the master's kube-apiserver and forwards them to the two k8s-master nodes. This makes the master layer highly available: if either master node goes down, nginx forwards the requests to the other master node.

Test environment

  • I prepared the following brand-new virtual machines for testing. They are all on an internal network with no internet access, so the setup below is done in an offline environment; their configuration is listed below.
    In addition, prepare one more brand-new VM with internet access, used later to download the various images and packages.

OS          Hostname      IP               Purpose
centos 7.4  worker-165    192.168.59.165   worker node
centos 7.4  haproxy-164   192.168.59.164   haproxy node
centos 7.4  master1-163   192.168.59.163   master1 node
centos 7.4  master2-162   192.168.59.162   master2 node
centos 7.4  etcd-161      192.168.59.161   etcd1 node
centos 7.4  etcd-160      192.168.59.160   etcd2 node
centos 7.2  ccx           192.168.198.134  internet-connected server for downloading packages and images [fresh system]

Deploying high availability

Package preparation [optional]

  • These are the installation packages for every command used below. If you are in an offline environment and do not want to hunt the downloads down yourself, you can grab the bundle I prepared [usage is explained below]. Not downloading it is also fine and does not affect anything, since every section below explains how to prepare the packages.
    集群高可用所需安装包.rar

HA architecture explained

  • worker:
    runs the pods

  • etcd:
    data is replicated between the etcd members, and the masters read their data from etcd,
    so etcd is the shared data store

  • master
    control node, used to manage the cluster
    the 2 masters are functionally identical and hold the same data

  • haproxy
    this is the load balancer; it connects to the masters as its real servers (rs)
    in other words, users connect to haproxy, and haproxy forwards the request to a master through its real servers
    if master1 has a problem, requests are automatically forwarded to master2

  • keepalive
    this handles the case where haproxy itself fails: first set up 2 haproxy instances,
    then keepalive provides a VIP; when we access that VIP, keepalive forwards the request to an haproxy instance, and if haproxy1 fails, traffic is automatically switched to haproxy2. [The role is the same as haproxy's, just applied to haproxy itself.] A minimal config sketch is shown at the end of this list.

  • The diagram above shows the complete flow, but we will not configure keepalive here; once the steps below are done, high availability is in place. Interested readers can look up how to configure keepalive themselves, it is not hard.

  • Once more: I am building this in an offline (internal network only) environment.
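  • For reference only, a minimal keepalived sketch for the haproxy pair could look roughly like this (not configured in this article; the interface name and VIP below are placeholders, adjust them to your environment):

# /etc/keepalived/keepalived.conf on the primary haproxy node
# (the second haproxy node would use state BACKUP and a lower priority)
vrrp_instance VI_1 {
    state MASTER
    interface ens32              # placeholder: the NIC that should carry the VIP
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.59.200           # placeholder VIP that clients would use instead of a single haproxy IP
    }
}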

Configure haproxy

Notes [must read]

  • haproxy can also serve the role keepalive would play here; in other words, clients can connect to haproxy directly and not worry about one of the master nodes dying.
    That is: from any host outside the cluster, connect to haproxy using a kubeconfig, and haproxy automatically forwards the request to a master. So as long as you connect to haproxy, the cluster stays usable [it does not matter if either of the 2 masters dies].
    The connection test for this is linked at the bottom, in "Client access to the HA cluster through haproxy".

Install haproxy

  • With internet access, just run:
yum -y install haproxy
  • Without internet access, use the method below:
[root@ccx yum.repos.d]# yum -y install haproxy --downloadonly --downloaddir=/root/haproxy
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile* base: mirror.lzu.edu.cn* extras: mirrors.aliyun.com* updates: mirrors.aliyun.com
Resolving Dependencies
--> Running transaction check
---> Package haproxy.x86_64 0:1.5.18-9.el7_9.1 will be installed
--> Processing Dependency: libcrypto.so.10(OPENSSL_1.0.2)(64bit) for package: haproxy-1.5.
18-9.el7_9.1.x86_64--> Running transaction check
---> Package openssl-libs.x86_64 1:1.0.1e-42.el7.9 will be updated
--> Processing Dependency: openssl-libs(x86-64) = 1:1.0.1e-42.el7.9 for package: 1:openssl
-1.0.1e-42.el7.9.x86_64---> Package openssl-libs.x86_64 1:1.0.2k-22.el7_9 will be an update
--> Running transaction check
---> Package openssl.x86_64 1:1.0.1e-42.el7.9 will be updated
---> Package openssl.x86_64 1:1.0.2k-22.el7_9 will be an update
--> Finished Dependency ResolutionDependencies Resolved==========================================================================================Package               Arch            Version                     Repository        Size
==========================================================================================
Installing:haproxy               x86_64          1.5.18-9.el7_9.1            updates          835 k
Updating for dependencies:openssl               x86_64          1:1.0.2k-22.el7_9           updates          494 kopenssl-libs          x86_64          1:1.0.2k-22.el7_9           updates          1.2 MTransaction Summary
==========================================================================================
Install  1 Package
Upgrade             ( 2 Dependent packages)Total download size: 2.5 M
Background downloading packages, then exiting:
No Presto metadata available for updates
warning: /root/haproxy/haproxy-1.5.18-9.el7_9.1.x86_64.rpm.54894.tmp: Header V3 RSA/SHA256Signature, key ID f4a80eb5: NOKEYPublic key for haproxy-1.5.18-9.el7_9.1.x86_64.rpm.54894.tmp is not installed
(1/3): haproxy-1.5.18-9.el7_9.1.x86_64.rpm                         | 835 kB  00:00:06
(2/3): openssl-1.0.2k-22.el7_9.x86_64.rpm                          | 494 kB  00:00:06
(3/3): openssl-libs-1.0.2k-22.el7_9.x86_64.rpm                     | 1.2 MB  00:00:00
------------------------------------------------------------------------------------------
Total                                                     353 kB/s | 2.5 MB  00:00:07
exiting because "Download Only" specified
[root@ccx yum.repos.d]# cd /root/haproxy/
[root@ccx haproxy]# ls
haproxy-1.5.18-9.el7_9.1.x86_64.rpm  openssl-libs-1.0.2k-22.el7_9.x86_64.rpm
openssl-1.0.2k-22.el7_9.x86_64.rpm
[root@ccx haproxy]#
  • Copy the 3 packages above to the offline environment and install them
[root@haproxy-164 haproxy]# ls
haproxy-1.5.18-9.el7_9.1.x86_64.rpm  openssl-libs-1.0.2k-22.el7_9.x86_64.rpm
openssl-1.0.2k-22.el7_9.x86_64.rpm
[root@haproxy-164 haproxy]# rpm -ivhU * --nodeps --force
准备中...                          ################################# [100%]
正在升级/安装...1:openssl-libs-1:1.0.2k-22.el7_9   ################################# [ 20%]2:haproxy-1.5.18-9.el7_9.1         ################################# [ 40%]3:openssl-1:1.0.2k-22.el7_9        ################################# [ 60%]
正在清理/删除...4:openssl-1:1.0.2k-8.el7           ################################# [ 80%]5:openssl-libs-1:1.0.2k-8.el7      ################################# [100%]
[root@haproxy-164 haproxy]#

Edit the configuration file

  • Append the following 5 lines to the end of /etc/haproxy/haproxy.cfg
[root@haproxy-164 haproxy]# tail -n 5 /etc/haproxy/haproxy.cfg
listen k8s-lb *:6443
    mode tcp
    balance roundrobin
    server s1 192.168.59.163:6443 weight 1
    server s2 192.168.59.162:6443 weight 1
[root@haproxy-164 haproxy]#
  • k8s-lb is a custom name; 192.168.59.163 and 192.168.59.162 are the IPs of the 2 master nodes
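  • Optionally, before restarting you can ask haproxy itself to validate the file (a small extra check, not part of the original steps):
haproxy -c -f /etc/haproxy/haproxy.cfg    # -c only checks the configuration and exits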
  • Then restart the service and enable it at boot
[root@haproxy-164 haproxy]# systemctl enable haproxy.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
[root@haproxy-164 haproxy]#
[root@haproxy-164 haproxy]# systemctl is-active haproxy.service
active
[root@haproxy-164 haproxy]#
  • At this point you can see that port 6443 is being listened on
[root@haproxy-164 haproxy]# netstat -ntlp | grep 6443
tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      2504/haproxy
[root@haproxy-164 haproxy]#

Configure etcd

  • The steps below need to be performed on both etcd nodes

Install etcd

  • With internet access, just run:
yum -y install etcd
  • Without internet access, download this package:
[root@ccx haproxy]# mkdir /root/etcd
[root@ccx haproxy]# yum -y install etcd --downloadonly --downloaddir=/root/etcd
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile* base: mirror.lzu.edu.cn* extras: mirrors.aliyun.com* updates: mirrors.aliyun.com
Resolving Dependencies
--> Running transaction check
---> Package etcd.x86_64 0:3.3.11-2.el7.centos will be installed
--> Finished Dependency ResolutionDependencies Resolved==========================================================================================Package         Arch              Version                        Repository         Size
==========================================================================================
Installing:etcd            x86_64            3.3.11-2.el7.centos            extras             10 MTransaction Summary
==========================================================================================
Install  1 PackageTotal download size: 10 M
Installed size: 45 M
Background downloading packages, then exiting:
warning: /root/etcd/etcd-3.3.11-2.el7.centos.x86_64.rpm.55125.tmp: Header V3 RSA/SHA256 Si
gnature, key ID f4a80eb5: NOKEYPublic key for etcd-3.3.11-2.el7.centos.x86_64.rpm.55125.tmp is not installed
etcd-3.3.11-2.el7.centos.x86_64.rpm                                |  10 MB  00:07:24
exiting because "Download Only" specified
[root@ccx haproxy]# cd /root/etcd/
[root@ccx etcd]# ls
etcd-3.3.11-2.el7.centos.x86_64.rpm
[root@ccx etcd]#
  • Then copy it to the offline environment and install it
    both etcd nodes need it installed
[root@etcd-161 etcd]# ls
etcd-3.3.11-2.el7.centos.x86_64.rpm
[root@etcd-161 etcd]#
[root@etcd-161 etcd]# rpm -ivhU * --nodeps --force
准备中...                          ################################# [100%]
正在升级/安装...1:etcd-3.3.11-2.el7.centos         ################################# [100%]
[root@etcd-161 etcd]#
[root@etcd-161 etcd]# scp etcd-3.3.11-2.el7.centos.x86_64.rpm  192.168.59.160:~
The authenticity of host '192.168.59.160 (192.168.59.160)' can't be established.
ECDSA key fingerprint is SHA256:zRtVBoNePoRXh9aA8eppKwwduS9Rjjr/kT5a7zijzjE.
ECDSA key fingerprint is MD5:b8:53:cc:da:86:2a:97:dc:bd:64:6b:b1:d0:f3:02:ce.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.59.160' (ECDSA) to the list of known hosts.
root@192.168.59.160's password:
Permission denied, please try again.
root@192.168.59.160's password:
etcd-3.3.11-2.el7.centos.x86_64.rpm                    100%   10MB  38.8MB/s   00:00
[root@etcd-161 etcd]# # the other etcd node
[root@etcd-160 ~]# mkdir etcd
[root@etcd-160 ~]# mv etcd-3.3.11-2.el7.centos.x86_64.rpm etcd
[root@etcd-160 ~]# cd etcd
[root@etcd-160 etcd]# ls
etcd-3.3.11-2.el7.centos.x86_64.rpm
[root@etcd-160 etcd]#
[root@etcd-160 etcd]# rpm -ivhU * --nodeps --force
准备中...                          ################################# [100%]
正在升级/安装...1:etcd-3.3.11-2.el7.centos         ################################# [100%]
[root@etcd-160 etcd]#

Edit the configuration file

  • Both nodes need to be edited; pay attention to the hostnames [the IPs must be changed accordingly]
    If anything is unclear, see my earlier post on installing etcd, which covers this in detail, so I will not repeat it here:
    k8s的核心组件etcd的安装使用、快照说明及etcd命令详解【含单节点,多节点和新节点加入说明】
  • Edit the configuration file
    remember to change the IPs and the ETCD_NAME line
[root@etcd-161 ~]# ip a | grep 59
    inet 192.168.59.161/24 brd 192.168.59.255 scope global ens32
[root@etcd-161 ~]#
[root@etcd-161 ~]# cat /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/cluster.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.59.161:2380,http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.59.161:2379,http://localhost:2379"
ETCD_NAME="etcd-161"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.59.161:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379,http://192.168.59.161:2379"
ETCD_INITIAL_CLUSTER="etcd-161=http://192.168.59.161:2380,etcd-160=http://192.168.59.160:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@etcd-161 ~]# # the other etcd node
[root@etcd-160 etcd]# ip a | grep 59
    inet 192.168.59.160/24 brd 192.168.59.255 scope global ens32
[root@etcd-160 etcd]#
[root@etcd-160 etcd]# cat /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/cluster.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.59.160:2380,http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.59.160:2379,http://localhost:2379"
ETCD_NAME="etcd-160"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.59.160:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379,http://192.168.59.160:2379"
ETCD_INITIAL_CLUSTER="etcd-161=http://192.168.59.161:2380,etcd-160=http://192.168.59.160:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@etcd-160 etcd]#
  • Then start the etcd service
[root@etcd-161 ~]# systemctl start etcd
[root@etcd-161 ~]# systemctl is-active etcd
active
[root@etcd-161 ~]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@etcd-161 ~]#
[root@etcd-160 etcd]# systemctl start etcd
[root@etcd-160 etcd]# systemctl is-active etcd
active
[root@etcd-160 etcd]#  systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@etcd-160 etcd]#
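  • A quick way to confirm the two members see each other (an optional check; etcd 3.3 from the CentOS repo defaults to the v2 etcdctl API):
etcdctl --endpoints="http://192.168.59.161:2379,http://192.168.59.160:2379" cluster-health
etcdctl --endpoints="http://192.168.59.161:2379,http://192.168.59.160:2379" member list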

master and worker configuration [cluster setup]

Environment setup [on both masters and the worker]

  • If you are not familiar with cluster setup, have a look at this article first
    【kubernetes】k8s集群的搭建安装详细说明【创建集群、加入集群、踢出集群、重置集群…】【含离线搭建方法】

  • Name resolution
    Configure identical /etc/hosts entries on the masters and the node; every host needs entries for all the others.

[root@master1-163 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.59.163 master1-163
192.168.59.162 master2-162
192.168.59.165 worker-165
[root@master1-163 ~]#
[root@master1-163 ~]# scp /etc/hosts 192.168.59.162:/etc/hosts
The authenticity of host '192.168.59.162 (192.168.59.162)' can't be established.
ECDSA key fingerprint is SHA256:zRtVBoNePoRXh9aA8eppKwwduS9Rjjr/kT5a7zijzjE.
ECDSA key fingerprint is MD5:b8:53:cc:da:86:2a:97:dc:bd:64:6b:b1:d0:f3:02:ce.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.59.162' (ECDSA) to the list of known hosts.
root@192.168.59.162's password:
hosts                                                  100%  238   297.8KB/s   00:00
[root@master1-163 ~]#
[root@master1-163 ~]# scp /etc/hosts 192.168.59.165:/etc/hosts
The authenticity of host '192.168.59.165 (192.168.59.165)' can't be established.
ECDSA key fingerprint is SHA256:zRtVBoNePoRXh9aA8eppKwwduS9Rjjr/kT5a7zijzjE.
ECDSA key fingerprint is MD5:b8:53:cc:da:86:2a:97:dc:bd:64:6b:b1:d0:f3:02:ce.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.59.165' (ECDSA) to the list of known hosts.
root@192.168.59.165's password:
hosts                                                  100%  238   294.3KB/s   00:00
[root@master1-163 ~]#
  • Turn off swap
    must be done on every node
[root@master1-163 ~]# swapoff -a ; sed -i '/swap/d' /etc/fstab
[root@master1-163 ~]#
[root@master1-163 ~]# swapon -s
[root@worker-165 ~]# swapon -s
文件名                          类型            大小    已用    权限
/dev/sda2                               partition       10485756        0       -1
[root@worker-165 ~]#
[root@worker-165 ~]# swapoff -a ; sed -i '/swap/d' /etc/fstab
[root@worker-165 ~]# swapon -s
[root@worker-165 ~]#
[root@master2-162 ~]# swapon -s
文件名                          类型            大小    已用    权限
/dev/sda2                               partition       10485756        0       -1
[root@master2-162 ~]# swapoff -a ; sed -i '/swap/d' /etc/fstab
[root@master2-162 ~]#
[root@master2-162 ~]# swapon -s
[root@master2-162 ~]#
  • Turn off the firewall
    run on both masters and the node
[root@master1-163 ~]# systemctl stop firewalld.service ; systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master1-163 ~]#
  • Disable selinux
    run on both masters and the node
[root@master2-162 docker-ce]# cat /etc/sysconfig/selinux | grep dis
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
[root@master2-162 docker-ce]#
[root@master2-162 docker-ce]# getenforce
Disabled
[root@master2-162 docker-ce]#
  • Configure the registry mirror (accelerator)
    run on both masters and the node
[root@master2-162 docker-ce]# cat > /etc/docker/daemon.json <<EOF
> {
> "registry-mirrors": ["https://frz7i079.mirror.aliyuncs.com"]
> }
> EOF
[root@master2-162 docker-ce]# systemctl restart docker
[root@master2-162 docker-ce]#
  • Set kernel parameters
    run on both masters and the node
[root@worker-165 docker-ce]# cat <<EOF > /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> EOF
[root@worker-165 docker-ce]#
[root@worker-165 docker-ce]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
[root@worker-165 docker-ce]#
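  • If sysctl -p complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is not loaded yet; it can be loaded (and made persistent) like this (an extra step that was not needed in the run above):
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl -p /etc/sysctl.d/k8s.conf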

Install docker-ce [on both masters and the worker]

  • A yum repo needs to be configured first
[root@ccx etcd]# wget ftp://ftp.rhce.cc/k8s/* -P /etc/yum.repos.d/
--2021-11-26 16:47:07--  ftp://ftp.rhce.cc/k8s/*=> ‘/etc/yum.repos.d/.listing’
Resolving ftp.rhce.cc (ftp.rhce.cc)... 101.37.152.41
...
  • With internet access, just run:
yum -y install docker-ce
  • Without internet access, download these packages:
[root@ccx etcd]# yum -y install docker-ce --downloadonly --downloaddir=/root/docker-ce
Loaded plugins: fastestmirror, langpacks
docker-ce-stable                                                   | 3.5 kB  00:00:00
epel                                                               | 4.7 kB  00:00:00
kubernetes/signature                                               |  844 B  00:00:00
Retrieving key from https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Importing GPG key 0x307EA071:Userid     : "Rapture Automatic Signing Key (cloud-rapture-signing-key-2021-03-01-08_01_0....大量输出
(30/30): docker-ce-cli-20.10.11-3.el7.x86_64.rpm                   |  29 MB  00:00:24
------------------------------------------------------------------------------------------
Total                                                     2.4 MB/s | 110 MB  00:00:44
exiting because "Download Only" specified
[root@ccx etcd]# cd /root/docker-ce/
[root@ccx docker-ce]# ls
containerd.io-1.4.12-3.1.el7.x86_64.rpm
container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm
cryptsetup-2.0.3-6.el7.x86_64.rpm
cryptsetup-libs-2.0.3-6.el7.x86_64.rpm
cryptsetup-python-2.0.3-6.el7.x86_64.rpm
docker-ce-20.10.11-3.el7.x86_64.rpm
docker-ce-cli-20.10.11-3.el7.x86_64.rpm
docker-ce-rootless-extras-20.10.11-3.el7.x86_64.rpm
docker-scan-plugin-0.9.0-3.el7.x86_64.rpm
fuse3-libs-3.6.1-4.el7.x86_64.rpm
fuse-overlayfs-0.7.2-6.el7_8.x86_64.rpm
libgudev1-219-78.el7_9.3.x86_64.rpm
libseccomp-2.3.1-4.el7.x86_64.rpm
libselinux-2.5-15.el7.x86_64.rpm
libselinux-python-2.5-15.el7.x86_64.rpm
libselinux-utils-2.5-15.el7.x86_64.rpm
libsemanage-2.5-14.el7.x86_64.rpm
libsemanage-python-2.5-14.el7.x86_64.rpm
libsepol-2.5-10.el7.x86_64.rpm
lz4-1.8.3-1.el7.x86_64.rpm
policycoreutils-2.5-34.el7.x86_64.rpm
policycoreutils-python-2.5-34.el7.x86_64.rpm
selinux-policy-3.13.1-268.el7_9.2.noarch.rpm
selinux-policy-targeted-3.13.1-268.el7_9.2.noarch.rpm
setools-libs-3.3.8-4.el7.x86_64.rpm
slirp4netns-0.4.3-4.el7_8.x86_64.rpm
systemd-219-78.el7_9.3.x86_64.rpm
systemd-libs-219-78.el7_9.3.x86_64.rpm
systemd-python-219-78.el7_9.3.x86_64.rpm
systemd-sysv-219-78.el7_9.3.x86_64.rpm
[root@ccx docker-ce]#
  • Copy to the offline environment and install
    both masters and the worker need these installed
[root@master1-163 docker-ce]# ls | wc -l
30
[root@master1-163 docker-ce]# du -sh
110M    .
[root@master1-163 docker-ce]#
[root@master1-163 docker-ce]# rpm -ivhU * --nodeps --force
警告:containerd.io-1.4.12-3.1.el7.x86_64.rpm: 头V4 RSA/SHA512 Signature, 密钥 ID 621e9f35: NOKEY
准备中...                          ################################# [100%]
正在升级/安装...1:libsepol-2.5-10.el7              ################################# [  2%]2:libselinux-2.5-15.el7            ################################# [  5%]3:libsemanage-2.5-14.el7           ################################# [  7%]4:lz4-1.8.3-1.el7                  ################################# [  9%]5:systemd-libs-219-78.el7_9.3      ################################# [ 12%]6:libseccomp-2.3.1-4.el7           ################################# [ 14%]7:cryptsetup-libs-2.0.3-6.el7      ################################# [ 16%]8:systemd-219-78.el7_9.3           ################################# [ 19%]9:libselinux-utils-2.5-15.el7      ################################# [ 21%]10:policycoreutils-2.5-34.el7       ################################# [ 23%]11:selinux-policy-3.13.1-268.el7_9.2################################# [ 26%]12:docker-scan-plugin-0:0.9.0-3.el7 ################################# [ 28%]13:docker-ce-cli-1:20.10.11-3.el7   ################################# [ 30%]14:selinux-policy-targeted-3.13.1-26################################# [ 33%]15:slirp4netns-0.4.3-4.el7_8        ################################# [ 35%]16:libsemanage-python-2.5-14.el7    ################################# [ 37%]17:libselinux-python-2.5-15.el7     ################################# [ 40%]18:setools-libs-3.3.8-4.el7         ################################# [ 42%]19:policycoreutils-python-2.5-34.el7################################# [ 44%]20:container-selinux-2:2.119.2-1.911################################# [ 47%]
setsebool:  SELinux is disabled.21:containerd.io-1.4.12-3.1.el7     ################################# [ 49%]22:fuse3-libs-3.6.1-4.el7           ################################# [ 51%]23:fuse-overlayfs-0.7.2-6.el7_8     ################################# [ 53%]24:docker-ce-rootless-extras-0:20.10################################# [ 56%]25:docker-ce-3:20.10.11-3.el7       ################################# [ 58%]26:systemd-python-219-78.el7_9.3    ################################# [ 60%]27:systemd-sysv-219-78.el7_9.3      ################################# [ 63%]28:cryptsetup-2.0.3-6.el7           ################################# [ 65%]29:cryptsetup-python-2.0.3-6.el7    ################################# [ 67%]30:libgudev1-219-78.el7_9.3         ################################# [ 70%]
正在清理/删除...31:selinux-policy-targeted-3.13.1-16################################# [ 72%]32:selinux-policy-3.13.1-166.el7    ################################# [ 74%]33:systemd-sysv-219-42.el7          ################################# [ 77%]34:policycoreutils-2.5-17.1.el7     ################################# [ 79%]35:systemd-219-42.el7               ################################# [ 81%]36:libselinux-utils-2.5-11.el7      ################################# [ 84%]37:libsemanage-2.5-8.el7            ################################# [ 86%]38:systemd-libs-219-42.el7          ################################# [ 88%]39:libselinux-python-2.5-11.el7     ################################# [ 91%]40:libselinux-2.5-11.el7            ################################# [ 93%]41:libsepol-2.5-6.el7               ################################# [ 95%]42:cryptsetup-libs-1.7.4-3.el7      ################################# [ 98%]43:libseccomp-2.3.1-3.el7           ################################# [100%]
[root@master1-163 docker-ce]#
[root@master1-163 docker-ce]# scp * master2-162:/root/docker-ce
[root@master1-163 docker-ce]# scp * worker-165:/root/docker-ce
# the other master
[root@master2-162 docker-ce]# ls | wc -l
30
[root@master2-162 docker-ce]# du -sh
110M    .
[root@master2-162 docker-ce]#
[root@master2-162 docker-ce]# rpm -ivhU * --nodeps --force
# the worker node
[root@worker-165 docker-ce]# ls | wc -l
30
[root@worker-165 docker-ce]# du -sh
110M    .
[root@worker-165 docker-ce]#
[root@worker-165 docker-ce]# rpm -ivhU * --nodeps --force
  • Start the docker service on all nodes
[root@worker-165 docker-ce]# systemctl enable docker --now
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@worker-165 docker-ce]#
[root@worker-165 docker-ce]# systemctl is-active docker
active
[root@worker-165 docker-ce]#
[root@master1-163 docker-ce]# systemctl enable docker --now
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@master1-163 docker-ce]# systemctl is-active docker
active
[root@master1-163 docker-ce]#
[root@master2-162 docker-ce]# systemctl enable docker --now
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@master2-162 docker-ce]# systemctl is-active docker
active
[root@master2-162 docker-ce]#

Install kubelet [on both masters and the worker]

  • With internet access, just run the command below
    I am installing version 1.21.1
yum install -y kubelet-1.21.1-0 kubeadm-1.21.1-0 kubectl-1.21.1-0 --disableexcludes=kubernetes
  • Without internet access, download the packages:
[root@ccx kubelet]# yum install -y kubelet-1.21.1-0 kubeadm-1.21.1-0 kubectl-1.21.1-0 --di
sableexcludes=kubernetes  --downloadonly --downloaddir=/root/kubelet
...
(11/11): libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm             |  18 kB  00:00:05
------------------------------------------------------------------------------------------
Total                                                     4.8 MB/s |  64 MB  00:00:13
exiting because "Download Only" specified
[root@ccx docker-ce]# cd /root/kubelet/
[root@ccx kubelet]# ls
3944a45bec4c99d3489993e3642b63972b62ed0a4ccb04cc7655ce0467fddfef-kubectl-1.21.1-0.x86_64.rpm
67ffa375b03cea72703fe446ff00963919e8fce913fbc4bb86f06d1475a6bdf9-cri-tools-1.19.0-0.x86_64.rpm
c47efa28c5935ed2ffad234e2b402d937dde16ab072f2f6013c71d39ab526f40-kubelet-1.21.1-0.x86_64.rpm
conntrack-tools-1.4.4-7.el7.x86_64.rpm
db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm
e0511a4d8d070fa4c7bcd2a04217c80774ba11d44e4e0096614288189894f1c5-kubeadm-1.21.1-0.x86_64.rpm
libnetfilter_conntrack-1.0.6-1.el7_3.x86_64.rpm
libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm
libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm
libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
socat-1.7.3.2-2.el7.x86_64.rpm
  • Then copy the packages to the offline environment and install them, then enable and start the kubelet service
    repeat these steps on all 3 nodes; I only show one of them below. (It is normal for kubelet to keep restarting and reporting failures at this point; it stays like that until kubeadm init or kubeadm join has been run on the node.)
[root@master1-163 kubelet]# systemctl restart kubelet.service ;systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@master1-163 kubelet]#
[root@master1-163 kubelet]# systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since 五 2021-11-26 18:03:50 CST; 7s ago
     Docs: https://kubernetes.io/docs/
  Process: 2532 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
 Main PID: 2532 (code=exited, status=1/FAILURE)

11月 26 18:03:50 master1-163 systemd[1]: Unit kubelet.service entered failed state.
11月 26 18:03:50 master1-163 systemd[1]: kubelet.service failed.
[root@master1-163 kubelet]# systemctl is-active kubelet.service
active
[root@master1-163 kubelet]#
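  • If you want to watch what kubelet is doing while it waits (purely optional):
journalctl -u kubelet -f    # the restarts stop once kubeadm init / kubeadm join has run on the node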

kubeadm initialization [on one master only]

  • I am doing this on the master node master1-163

Prepare the config file

  • On a previously installed master node, simply export the existing configuration
[root@master ~]# kubeadm config view
Command "view" is deprecated, This command is deprecated and will be removed in a future release, please use 'kubectl get cm -o yaml -n kube-system kubeadm-config' to get the kubeadm config directly.
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.21.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
[root@master ~]# kubeadm config view > config.yaml
Command "view" is deprecated, This command is deprecated and will be removed in a future release, please use 'kubectl get cm -o yaml -n kube-system kubeadm-config' to get the kubeadm config directly.
[root@master ~]# scp config.yaml 192.168.59.163:~
The authenticity of host '192.168.59.163 (192.168.59.163)' can't be established.
ECDSA key fingerprint is SHA256:zRtVBoNePoRXh9aA8eppKwwduS9Rjjr/kT5a7zijzjE.
ECDSA key fingerprint is MD5:b8:53:cc:da:86:2a:97:dc:bd:64:6b:b1:d0:f3:02:ce.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.59.163' (ECDSA) to the list of known hosts.
root@192.168.59.163's password:
config.yaml                                            100%  491   355.0KB/s   00:00
[root@master ~]#
  • Now the config file is on my master1
[root@master1-163 kubelet]# ls /root/ | grep config
config.yaml
[root@master1-163 kubelet]#
  • Modify the config file
    since we are switching from the local (stacked) etcd to the external etcd cluster, and the control plane will now sit behind haproxy, a few settings need to be added and changed

# before the changes
[root@master1-163 ~]# cat config.yaml
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.21.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
[root@master1-163 ~]#

# after the changes
[root@master1-163 ~]# cat config.yaml
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2           # API version
certificatesDir: /etc/kubernetes/pki
controlPlaneEndpoint: "192.168.59.164:6443"  # the haproxy IP
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:
    endpoints:
    - "http://192.168.59.161:2379"           # etcd1's IP
    - "http://192.168.59.160:2379"           # etcd2's IP
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.21.1                   # change this to match the version installed above
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
[root@master1-163 ~]#
# apiVersion / kubernetesVersion: double-check what your existing master actually runs and
# use the same values as that master [check with kubeadm config view]
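  • Optionally, the file can be sanity-checked without touching the node by doing a dry run (not part of the original flow):
kubeadm init --config=config.yaml --dry-run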

Initialize the cluster

Image preparation [required for offline environments]

  • Set yourself up an environment with internet access and download all the images required for v1.21.1
    If you do not want the hassle, you can grab the bundle I already downloaded [it contains both v1.21.1 and v1.21.0]; loading it gives you all the images listed below. (A sketch of how such a bundle can be produced on the internet-connected machine is given at the end of this subsection.)
    kubeadm_images.rar (contains all images needed to initialize v1.21.0 and v1.21.1; about 1.2G unpacked)
  • Load the images into the environment
    load them on every master and node machine. With internet access no preparation is needed, because the nodes download the images they need automatically; in an offline environment the images must be loaded onto every node manually.
[root@master1-163 ~]# du -sh kubeadm_images.tar
1.2G    kubeadm_images.tar
[root@master1-163 ~]# docker images
REPOSITORY   TAG       IMAGE ID   CREATED   SIZE
[root@master1-163 ~]#
[root@master1-163 ~]# docker load -i kubeadm_images.tar
417cb9b79ade: Loading layer  3.062MB/3.062MB
b50131762317: Loading layer   1.71MB/1.71MB
1e6ed7621dee: Loading layer  122.1MB/122.1MB
Loaded image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.1
28699c71935f: Loading layer  3.062MB/3.062MB
c300f5fa3d7b: Loading layer   1.71MB/1.71MB
aa42f0ff58e4: Loading layer  116.3MB/116.3MB
Loaded image: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
5714c0da2d88: Loading layer  47.11MB/47.11MB
Loaded image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
915e8870f7d1: Loading layer  684.5kB/684.5kB
Loaded image: registry.aliyuncs.com/google_containers/pause:3.4.1
225df95e717c: Loading layer  336.4kB/336.4kB
69ae2fbf419f: Loading layer  42.24MB/42.24MB
Loaded image: registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
d72a74c56330: Loading layer  3.031MB/3.031MB
d61c79b29299: Loading layer   2.13MB/2.13MB
1a4e46412eb0: Loading layer  225.3MB/225.3MB
bfa5849f3d09: Loading layer   2.19MB/2.19MB
bb63b9467928: Loading layer  21.98MB/21.98MB
Loaded image: registry.aliyuncs.com/google_containers/etcd:3.4.13-0
a55f783b98c2: Loading layer  122.1MB/122.1MB
Loaded image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
077075ef2723: Loading layer  47.11MB/47.11MB
Loaded image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.1
13fb781d48d3: Loading layer  53.89MB/53.89MB
8c9959c71363: Loading layer  22.25MB/22.25MB
8cf972e62246: Loading layer  4.894MB/4.894MB
a388391a5fc4: Loading layer  4.608kB/4.608kB
93c6e9a2ab1e: Loading layer  8.192kB/8.192kB
21d84a192aca: Loading layer  8.704kB/8.704kB
84ce83eaa4cd: Loading layer  43.14MB/43.14MB
Loaded image: registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0
105a75f15167: Loading layer  62.35MB/62.35MB
3cf1454ee1bf: Loading layer  22.33MB/22.33MB
1e16c975f0a9: Loading layer  4.884MB/4.884MB
9fb8c8a7b75b: Loading layer  4.608kB/4.608kB
72682df7cfdd: Loading layer  8.192kB/8.192kB
6fb6238e9c25: Loading layer  8.704kB/8.704kB
016157fde486: Loading layer  43.14MB/43.14MB
Loaded image: registry.aliyuncs.com/google_containers/kube-proxy:v1.21.1
954115f32d73: Loading layer  91.22MB/91.22MB
Loaded image: registry.cn-hangzhou.aliyuncs.com/kube-iamges/dashboard:v2.0.0-beta8
89ac18ee460b: Loading layer  238.6kB/238.6kB
878c5d3194b0: Loading layer  39.87MB/39.87MB
1dc71700363a: Loading layer  2.048kB/2.048kB
Loaded image: registry.cn-hangzhou.aliyuncs.com/kube-iamges/metrics-scraper:v1.0.1
2c68eaf5f19f: Loading layer  116.3MB/116.3MB
Loaded image: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.1
[root@master1-163 ~]#
[root@master1-163 ~]#
[root@master1-163 ~]# docker images
REPOSITORY                                                        TAG            IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.21.1        771ffcf9ca63   6 months ago    126MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.21.1        a4183b88f6e6   6 months ago    50.6MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.21.1        e16544fd47b0   6 months ago    120MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.21.1        4359e752b596   6 months ago    131MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.21.0        4d217480042e   7 months ago    126MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.21.0        38ddd85fe90e   7 months ago    122MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.21.0        62ad3129eca8   7 months ago    50.6MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.21.0        09708983cc37   7 months ago    120MB
registry.aliyuncs.com/google_containers/pause                     3.4.1          0f8457a4c2ec   10 months ago   683kB
registry.aliyuncs.com/google_containers/coredns/coredns           v1.8.0         296a6d5035e2   13 months ago   42.5MB
registry.aliyuncs.com/google_containers/etcd                      3.4.13-0       0369cf4303ff   15 months ago   253MB
registry.cn-hangzhou.aliyuncs.com/kube-iamges/dashboard           v2.0.0-beta8   eb51a3597525   24 months ago   90.8MB
registry.cn-hangzhou.aliyuncs.com/kube-iamges/metrics-scraper     v1.0.1         709901356c11   2 years ago     40.1MB
[root@master1-163 ~]#
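  • For reference, an image bundle like the one above can be produced on the internet-connected machine roughly as follows (a sketch, not the exact commands used for the original bundle; adjust the version to match yours):
# list and pull everything kubeadm needs for this version from the aliyun mirror
kubeadm config images list --kubernetes-version v1.21.1 --image-repository registry.aliyuncs.com/google_containers
kubeadm config images pull --kubernetes-version v1.21.1 --image-repository registry.aliyuncs.com/google_containers
# export all local images into a single tar that can be copied to the offline nodes
docker save -o kubeadm_images.tar $(docker images --format '{{.Repository}}:{{.Tag}}')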

Initialize the cluster and set up the environment

  • With internet access you can just run the command below and the images will be downloaded automatically, but the coredns image must be loaded in advance [download that one beforehand, otherwise it errors out].
    In an offline environment, load all the images above first, then run the command.
kubeadm init --config=config.yaml
  • The run looks like the following [offline and online should both end with the same output]
    Although this is still a cluster initialization, the final output differs from the plain single-master init done earlier; compare them yourself. Most importantly, the join command at the end now carries an extra --control-plane parameter, which joins a node as a control-plane member [used when adding more masters].
[root@master1-163 ~]# kubeadm init --config=config.yaml
[init] Using Kubernetes version: v1.21.1
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1-163] and IPs [10.96.0.1 192.168.59.163 192.168.59.164]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.542204 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1-163 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master1-163 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: nej47e.xlge9gc2usn6sky7
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.59.164:6443 --token nej47e.xlge9gc2usn6sky7 \
    --discovery-token-ca-cert-hash sha256:2e4d2ca7c162f57de5edc80b89b75ad9493874746cc7c596af7f29ba91eccffe \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.59.164:6443 --token nej47e.xlge9gc2usn6sky7 \
    --discovery-token-ca-cert-hash sha256:2e4d2ca7c162f57de5edc80b89b75ad9493874746cc7c596af7f29ba91eccffe
[root@master1-163 ~]#
  • Set up the kubeconfig environment [the init output above actually tells you how]
    you can just copy the 3 commands from the output above
[root@master1-163 ~]# mkdir -p $HOME/.kube
[root@master1-163 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master1-163 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master1-163 ~]#
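  • At this point the control plane on master1 is up. A quick check (the node stays NotReady until calico is installed later):
kubectl get nodes
kubectl get pods -n kube-system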

Worker joins the cluster

  • Command: simply the output of running kubeadm token create --print-join-command on a master.
    You can also use the command printed at the end of the init output above [one value differs between the two, it does not matter].
  • I am using the command from the end of the init output here
    Note: the worker must NOT use --control-plane
[root@worker-165 kubelet]# kubeadm join 192.168.59.164:6443 --token nej47e.xlge9gc2usn6sky7 --discovery-token-ca-cert-hash sha256:2e4d2ca7c162f57de5edc80b89b75ad9493874746cc7c596af7f29ba91eccffe
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@worker-165 kubelet]#
  • After it joins, go back to the master and you can see the new node
[root@master1-163 ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE    VERSION
master1-163   NotReady   control-plane,master   37m    v1.21.1
worker-165    NotReady   <none>                 4m6s   v1.21.1
[root@master1-163 ~]#

Master joins the cluster

Notes

  • Note: above I only ran the kubeadm setup on master1; here master2 joins the cluster, and the join is done the same way as for the worker.
  • To join as a master, however, you must append --control-plane to the command.
    Running it directly like that fails, as shown below,
    because a master node needs the pki certificates, and they do not exist on this machine yet. So the certificates must be prepared before the node can join as a master.
[root@master2-162 kubelet]# kubeadm join 192.168.59.164:6443 --token nej47e.xlge9gc2usn6sky7 --discovery-token-ca-cert-hash sha256:2e4d2ca7c162f57de5edc80b89b75ad9493874746cc7c596af7f29ba91eccffe  --control-plane
... some output removed
To see the stack trace of this error execute with --v=5 or higher
[root@master2-162 kubelet]#
[root@master2-162 kubelet]# ls /etc/kubernetes/pki
ls: cannot access /etc/kubernetes/pki: No such file or directory
[root@master2-162 kubelet]#

Image preparation

Since this machine will become a master node, it also needs all the images a master requires; see the image preparation step under "Initialize the cluster" above. Because my environment is offline, I loaded the images directly; the result is shown below.

[root@master2-162 ~]# docker images
REPOSITORY                                                        TAG            IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.21.1        771ffcf9ca63   6 months ago    126MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.21.1        4359e752b596   6 months ago    131MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.21.1        a4183b88f6e6   6 months ago    50.6MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.21.1        e16544fd47b0   6 months ago    120MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.21.0        4d217480042e   7 months ago    126MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.21.0        38ddd85fe90e   7 months ago    122MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.21.0        62ad3129eca8   7 months ago    50.6MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.21.0        09708983cc37   7 months ago    120MB
registry.aliyuncs.com/google_containers/pause                     3.4.1          0f8457a4c2ec   10 months ago   683kB
registry.aliyuncs.com/google_containers/coredns/coredns           v1.8.0         296a6d5035e2   13 months ago   42.5MB
registry.aliyuncs.com/google_containers/etcd                      3.4.13-0       0369cf4303ff   15 months ago   253MB
registry.cn-hangzhou.aliyuncs.com/kube-iamges/dashboard           v2.0.0-beta8   eb51a3597525   24 months ago   90.8MB
registry.cn-hangzhou.aliyuncs.com/kube-iamges/metrics-scraper     v1.0.1         709901356c11   2 years ago     40.1MB
[root@master2-162 ~]#

Prepare the pki certificates

  • This method also works after a master node has been reset and you want it to rejoin the cluster as a master.
  • The certificates must be copied over in this packaged form; copying the files one by one with scp will not do.
    On master1 [the master that initialized the cluster above], create a file with the following contents
# the files to pack
[root@master1-163 ~]# cat cert.txt
/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/ca.key
/etc/kubernetes/pki/sa.key
/etc/kubernetes/pki/sa.pub
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-ca.key
[root@master1-163 ~]# # create the tarball
[root@master1-163 ~]# tar czf cert.tar.gz -T cert.txt
tar: Removing leading `/' from member names
[root@master1-163 ~]# # list the tarball contents
[root@master1-163 ~]# tar tf cert.tar.gz
etc/kubernetes/pki/ca.crt
etc/kubernetes/pki/ca.key
etc/kubernetes/pki/sa.key
etc/kubernetes/pki/sa.pub
etc/kubernetes/pki/front-proxy-ca.crt
etc/kubernetes/pki/front-proxy-ca.key
[root@master1-163 ~]# # copy the tarball to master2
[root@master1-163 ~]# scp cert.tar.gz master2-162:~
root@master2-162's password:
cert.tar.gz                                            100% 5363     4.5MB/s   00:00
[root@master1-163 ~]#
  • On master2, unpack the pki files
    because of how they were packed (relative to /), they must be extracted into /.
[root@master2-162 ~]# tar tf cert.tar.gz
etc/kubernetes/pki/ca.crt
etc/kubernetes/pki/ca.key
etc/kubernetes/pki/sa.key
etc/kubernetes/pki/sa.pub
etc/kubernetes/pki/front-proxy-ca.crt
etc/kubernetes/pki/front-proxy-ca.key
[root@master2-162 ~]#
[root@master2-162 ~]# tar zxvf cert.tar.gz -C /
etc/kubernetes/pki/ca.crt
etc/kubernetes/pki/ca.key
etc/kubernetes/pki/sa.key
etc/kubernetes/pki/sa.pub
etc/kubernetes/pki/front-proxy-ca.crt
etc/kubernetes/pki/front-proxy-ca.key
[root@master2-162 ~]#
[root@master2-162 ~]# ls /etc/kubernetes/pki/
ca.crt  ca.key  front-proxy-ca.crt  front-proxy-ca.key  sa.key  sa.pub
[root@master2-162 ~]#

Initialize the cluster and set up the environment

  • The join command is the same one used for the worker, with --control-plane appended
    the full process is below
[root@master2-162 ~]# kubeadm join 192.168.59.164:6443 --token nej47e.xlge9gc2usn6sky7 --discovery-token-ca-cert-hash sha256:2e4d2ca7c162f57de5edc80b89b75ad9493874746cc7c596af7f29ba91eccffe  --control-plane
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master2-162] and IPs [10.96.0.1 192.168.59.162 192.168.59.164]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Skipping etcd check in external mode
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[control-plane-join] using external etcd - no local stacked instance added
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node master2-162 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master2-162 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.

To start administering your cluster from this node, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@master2-162 ~]#
  • Set up the kubeconfig environment [the join output above actually tells you how]
    you can just copy the 3 commands from the output above
[root@master2-162 ~]#  mkdir -p $HOME/.kube
[root@master2-162 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master2-162 ~]#  sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master2-162 ~]#
  • Now go back to master1 and you can see both masters
[root@master1-163 ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE     VERSION
master1-163   NotReady   control-plane,master   3h29m   v1.21.1
master2-162   NotReady   control-plane,master   149m    v1.21.1
worker-165    NotReady   <none>                 176m    v1.21.1
[root@master1-163 ~]#

Install calico [all master and worker nodes need the images]

  • Since the nodes above are all NotReady, calico needs to be installed for the 2 master nodes
    for preparing the yaml file and the images, see this post, I will not repeat it here:
    【kubernetes】k8s集群的搭建安装详细说明【创建集群、加入集群、踢出集群、重置集群…】【含离线搭建方法】
  • As shown below, I have already loaded all the calico images and the yaml file on the 2 masters and the worker node [apart from master1, the other nodes do not need the yaml file, only the images]
[root@master1-163 ~]# docker images | grep ca
calico/node                                                       v3.19.1        c4d75af7e098   6 months ago    168MB
calico/pod2daemon-flexvol                                         v3.19.1        5660150975fb   6 months ago    21.7MB
calico/cni                                                        v3.19.1        5749e8b276f9   6 months ago    146MB
calico/kube-controllers                                           v3.19.1        5d3d5ddc8605   6 months ago    60.6MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.21.1        771ffcf9ca63   6 months ago    126MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.21.0        62ad3129eca8   7 months ago    50.6MB
[root@master1-163 ~]# ls | grep calico.ya
calico.yaml
calico.yaml.0
[root@master1-163 ~]#
  • Deploy it [only needed on master1]
    kubectl apply -f calico.yaml
[root@master1-163 ~]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
[root@master1-163 ~]#
[root@master1-163 ~]#
  • After that the nodes are Ready
[root@master1-163 ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE     VERSION
master1-163   Ready    control-plane,master   5h55m   v1.21.1
master2-162   Ready    control-plane,master   4h56m   v1.21.1
worker-165    Ready    <none>                 65m     v1.21.1
[root@master1-163 ~]#
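  • You can also confirm that the calico pods themselves came up (a quick optional check):
kubectl get pods -n kube-system -o wide | grep calico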

Troubleshooting calico installation errors

  • Go straight to this article and check whether your error is covered:
    coredns状态为pending和部署calico报错Init:0/3或Init:RunContainerError

kubectl subcommand tab completion

  • Do this on all master nodes
[root@master2-162 ~]# kubectl --help | grep bash
completion    Output shell completion code for the specified shell (bash or zsh)
[root@master2-162 ~]# vi /etc/profile
[root@master2-162 ~]# head -n2 /etc/profile
# /etc/profile
source <(kubectl completion bash)
[root@master2-162 ~]# source /etc/profile
[root@master2-162 ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
master1-163   Ready    control-plane,master   21h   v1.21.1
master2-162   Ready    control-plane,master   20h   v1.21.1
worker-165    Ready    <none>                 16h   v1.21.1
[root@master2-162 ~]#
  • At this point, master high availability is fully configured

Verifying high availability

Data consistency and sync verification

  • What we need to verify now is whether the data on the 2 masters is consistent and synchronized automatically

  • First, run some common commands against the cluster on both masters and check that they all succeed,
    and check that the node information is identical

[root@master1-163 ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
master1-163   Ready    control-plane,master   21h   v1.21.1
master2-162   Ready    control-plane,master   20h   v1.21.1
worker-165    Ready    <none>                 16h   v1.21.1
[root@master1-163 ~]#
[root@master2-162 ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
master1-163   Ready    control-plane,master   21h   v1.21.1
master2-162   Ready    control-plane,master   20h   v1.21.1
worker-165    Ready    <none>                 16h   v1.21.1
[root@master2-162 ~]#
  • Data sync test
    for example: create a ns on master1, then go to master2; if master2 can see the ns created on master1, the HA setup is working
# create it on master1
[root@master1-163 ~]# kubectl create ns ns1
namespace/ns1 created
[root@master1-163 ~]#
[root@master1-163 ~]# kubectl get ns | grep ns1
ns1               Active   8s
[root@master1-163 ~]# # master2 can see it, good
[root@master2-162 ~]# kubectl get ns | grep ns1
ns1               Active   33s
[root@master2-162 ~]# # now verify the other direction: create on master2, check on master1
[root@master2-162 ~]#
[root@master2-162 ~]# kubectl create ns ns2
namespace/ns2 created
[root@master2-162 ~]#
[root@master2-162 ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   21h
kube-node-lease   Active   21h
kube-public       Active   21h
kube-system       Active   21h
ns1               Active   90s
ns2               Active   5s
[root@master2-162 ~]# # the other master can see it too, ok
[root@master1-163 ~]# kubectl get ns | grep ns2
ns2               Active   35s
[root@master1-163 ~]#

High availability verification

  • Since this is high availability, the whole point is that when one machine fails, the other keeps working
    Let's simulate that scenario: power off master1 and check whether master2 still works normally; if it does, everything is fine.
# power off master1
[root@master1-163 ~]# poweroff

  • Test on master2
    if you hit the error below, do not panic; just wait a moment and run the command again
[root@master2-162 ~]# kubectl get nodes
Unable to connect to the server: net/http: TLS handshake timeout
[root@master2-162 ~]#
[root@master2-162 ~]#
[root@master2-162 ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE   VERSION
master1-163   NotReady   control-plane,master   21h   v1.21.1
master2-162   Ready      control-plane,master   20h   v1.21.1
worker-165    Ready      <none>                 17h   v1.21.1
[root@master2-162 ~]#
[root@master2-162 ~]# ping master1-163
PING master1-163 (192.168.59.163) 56(84) bytes of data.
^C
--- master1-163 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3000ms

[root@master2-162 ~]#
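  • Note that the kubeconfig generated by kubeadm already points at the haproxy address (192.168.59.164:6443), so the test above already goes through the load balancer. You can also probe that endpoint directly (assuming the default kubeadm RBAC, which exposes /version and /healthz to anonymous requests):
curl -k https://192.168.59.164:6443/version
curl -k https://192.168.59.164:6443/healthz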

Client access to the HA cluster through haproxy

  • This article is already long enough, so that part is covered in the following post:

    【kubernetes】k8s使用客户端连接haproxy访问高可用集群流程详细说明【使用kubeconfig连接haproxy】【kubeconfig配置全部流程】
