Deploying a highly available Kubernetes cluster with kubeadm

  • 1. Device list
  • 2. Add the Docker yum repository on each node
  • 3. Install the Docker service on each node and enable it at boot
  • 4. Configure the Docker daemon on each node and switch to the cgroup driver required by k8s
  • 5. Restart the Docker service on each node
  • 6. Configure the hosts file on each node
  • 7. Set the hostname on each node
  • 8. Disable the firewall on each node
  • 9. Disable SELinux on each node
  • 10. Disable the swap partition on each node
  • 11. Reboot each node
  • 12. Synchronize time on each node
  • 13. Tune kernel parameters on each node so bridged IPv4 traffic is passed to the iptables chains
  • 14. Configure the Kubernetes yum repository on each node
  • 15. Install ipset and related packages on each node
  • 16. Enable the ipvs kernel modules on each node
  • 17. Install the haproxy and keepalived services on all master nodes
  • 18. Edit the keepalived configuration on master1
  • 19. Edit the keepalived configuration on master2
  • 20. Edit the keepalived configuration on master3
  • 21. The haproxy configuration is identical on all three master nodes
  • 22. Write the health-check script on every master node
  • 23. Make the script executable on the master nodes
  • 24. Start keepalived and haproxy on the master nodes and enable them at boot
  • 25. Check the VIP address
  • 26. Install kubeadm, kubelet and kubectl on every node # kubeadm, kubectl and kubelet must match the Kubernetes version; enable kubelet at boot but do not start it manually, otherwise it reports errors. The cluster starts the kubelet service automatically after initialization!
  • 27. Generate the default configuration file
  • 28. Edit the init configuration file
  • 29. Pull the required images
  • 30. Initialize the cluster
  • 31. Create the following directory on the other two master nodes
  • 32. Copy the certificates from the first master node to the other master nodes
  • 33. Copy admin.conf from the first master node to the worker nodes
  • 34. Run the following command on the master nodes to join the cluster
  • 35. Run the following command on the worker nodes to join the cluster
  • 36. Run the following commands on all master nodes (optional on the worker nodes)
  • 37. Check the status of all nodes
  • 38. Install the network plugin
  • 39. Check node status
  • 40. Download the etcdctl command-line client
  • 41. Unpack it and add it to the PATH
  • 42. Verify that etcdctl works; output like the one shown means it is set up correctly
  • 43. Check the health of the etcd HA cluster
  • 44. List the members of the etcd HA cluster
  • 45. Find the leader of the etcd HA cluster
  • 46. Deploy the Kubernetes dashboard
    • 1.1. Download the recommended.yaml file
    • 1.2. Edit the recommended.yaml file
    • 1.3. Create certificates
    • 1.4. Install the dashboard (if you see Error from server (AlreadyExists): error when creating "./recommended.yaml": namespaces "kubernetes-dashboard" already exists, ignore it; it is harmless.)
    • 1.5. Check the installation
    • 1.6. Create a dashboard administrator
    • 1.7. Apply the dashboard-admin.yaml file
    • 1.8. Grant the user permissions
    • 1.9. View and copy the user token
    • 2.0. Paste the token and access the dashboard at ip:30000

1. Device list

Device   IP
master1 10.10.220.10
master2 10.10.220.11
master3 10.10.220.12
slave1 10.10.220.20
slave2 10.10.220.21
vip 10.10.220.29

2. Add the Docker yum repository on each node

[root@k8s-master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-master01 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master01 ~]# yum makecache fast

3. Install the Docker service on each node and enable it at boot

yum -y install docker-ce && systemctl start docker && systemctl enable docker

4. Configure the Docker daemon on each node and switch to the cgroup driver required by k8s

vi /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}

5. Restart the Docker service on each node

systemctl restart docker &&  systemctl status docker
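
To confirm the cgroup driver change took effect after the restart (an optional sanity check, not part of the original steps):

docker info | grep -i "cgroup driver"    # should report: Cgroup Driver: systemd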

6. Configure the hosts file on each node

vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.220.10 master1
10.10.220.11 master2
10.10.220.12 master3
10.10.220.20 slave1
10.10.220.21 slave2

7. Set the hostname on each node

hostnamectl set-hostname <hostname>
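
For example, matching the device list and hosts file above, run the appropriate command on each machine:

hostnamectl set-hostname master1    # on 10.10.220.10
hostnamectl set-hostname master2    # on 10.10.220.11
hostnamectl set-hostname master3    # on 10.10.220.12
hostnamectl set-hostname slave1     # on 10.10.220.20
hostnamectl set-hostname slave2     # on 10.10.220.21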

8. Disable the firewall on each node

systemctl stop firewalld && systemctl disable firewalld

9. Disable SELinux on each node

[root@master1 ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled        # change this to disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
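
Editing the file only takes effect after a reboot; to also switch SELinux off for the current boot, the usual companion commands are (a convenience addition, not shown in the original):

setenforce 0                                                          # stop enforcing immediately
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config  # same change as the manual edit above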

10. Disable the swap partition on each node

[root@master1 ~]# vi /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed Dec 30 15:01:07 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=7321cb15-9220-4cc2-be0c-a4875f6d8bbc /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0         # comment out this line
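
The fstab change only applies from the next boot; to turn swap off on the running system right away (a common companion step, not shown in the original):

swapoff -a    # disable all active swap devices now
free -m       # the Swap line should show 0 afterwards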

11. Reboot each node

reboot

12. Synchronize time on each node

timedatectl set-timezone Asia/Shanghai && chronyc -a makestep
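
chronyc assumes the chronyd service is installed and running; on a minimal install it may need to be set up first (an assumption about the base system, not stated in the original):

yum -y install chrony && systemctl enable --now chronyd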

13. Tune kernel parameters on each node so bridged IPv4 traffic is passed to the iptables chains

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
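
The net.bridge.bridge-nf-call-* keys only exist once the br_netfilter kernel module is loaded, so if sysctl complains about unknown keys, load the module first and re-run the sysctl command (a commonly needed step the original does not mention):

modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf    # load it again automatically after reboots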

14. Configure the Kubernetes yum repository on each node

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

15. Install ipset and related packages on each node

yum -y install ipvsadm ipset sysstat conntrack libseccomp

16. Enable the ipvs kernel modules on each node

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/sh
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

17. Install the haproxy and keepalived services on all master nodes

yum -y install haproxy keepalived

18. Edit the keepalived configuration on master1

vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
    # add the following two lines
    script_user root
    enable_script_security
}

vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"    # path of the health-check script
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER                 # MASTER
    interface eth0               # name of this node's network interface
    virtual_router_id 51
    priority 100                 # priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.10.220.29             # virtual IP
    }
    track_script {
        check_haproxy            # the vrrp_script defined above
    }
}

19. Edit the keepalived configuration on master2

vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
    # add the following two lines
    script_user root
    enable_script_security
}

vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"    # path of the health-check script
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP                 # BACKUP
    interface eth0               # name of this node's network interface
    virtual_router_id 51
    priority 99                  # priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.10.220.29             # virtual IP
    }
    track_script {
        check_haproxy            # the vrrp_script defined above
    }
}

20. Edit the keepalived configuration on master3

vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
    # add the following two lines
    script_user root
    enable_script_security
}

vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"    # path of the health-check script
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP                 # BACKUP
    interface eth0               # name of this node's network interface
    virtual_router_id 51
    priority 99                  # priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.10.220.29             # virtual IP
    }
    track_script {
        check_haproxy            # the vrrp_script defined above
    }
}

21. The haproxy configuration (/etc/haproxy/haproxy.cfg) is identical on all three master nodes

#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend  kubernetes-apiserver
    mode                        tcp
    bind                        *:16443
    option                      tcplog
    default_backend             kubernetes-apiserver

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
listen stats
    bind            *:1080
    stats auth      admin:admin
    stats refresh   5s
    stats realm     HAProxy\ Statistics
    stats uri       /admin?stats

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server  master1 10.10.220.10:6443 check
    server  master2 10.10.220.11:6443 check
    server  master3 10.10.220.12:6443 check
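
Before restarting haproxy, the configuration syntax can be checked (optional, not part of the original write-up):

haproxy -c -f /etc/haproxy/haproxy.cfg    # should report that the configuration file is valid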

22. Write the health-check script on every master node

vi /etc/keepalived/check_haproxy.sh
#!/bin/sh
# restart haproxy if it has died; if the restart fails, give up and raise an alert
A=$(ps -C haproxy --no-header | wc -l)
if [ "$A" -eq 0 ]; then
    systemctl start haproxy
    sleep 3    # give haproxy a moment to come up
    if [ "$(ps -C haproxy --no-header | wc -l)" -eq 0 ]; then
        killall -9 haproxy
        echo "HAPROXY down" | mail -s "haproxy"    # add a recipient address here if mail alerts are wanted
        sleep 3600
    fi
fi

23. Make the script executable on the master nodes

chmod +x /etc/keepalived/check_haproxy.sh

24. Start keepalived and haproxy on the master nodes and enable them at boot

systemctl start keepalived && systemctl enable keepalived
systemctl start haproxy && systemctl enable haproxy
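
A quick way to confirm that failover works (optional, not part of the original steps): stop keepalived on the node currently holding the VIP and watch the address move to another master.

# on master1, the current VIP holder
systemctl stop keepalived
# on master2, the VIP should appear within a few seconds
ip a | grep 10.10.220.29
# bring master1 back afterwards
systemctl start keepalived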

25. Check the VIP address

[root@master1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 0c:da:41:1d:2a:17 brd ff:ff:ff:ff:ff:ff
    inet 10.10.220.10/24 brd 10.10.220.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.10.220.29/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::eda:41ff:fe1d:2a17/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:08:75:6c:9a brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

26. Install kubeadm, kubelet and kubectl on every node # kubeadm, kubectl and kubelet must match the Kubernetes version; enable kubelet at boot but do not start it manually, otherwise it reports errors. The cluster starts the kubelet service automatically after initialization!

yum -y install kubeadm-1.23.4 kubelet-1.23.4 kubectl-1.23.4
systemctl enable kubelet && systemctl daemon-reload

27. Generate the default configuration file

kubeadm config print init-defaults > kubeadm-config.yaml

28. Edit the init configuration file

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.10.220.10     # IP of this node
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1                      # hostname of this node
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "10.10.220.29:16443"    # virtual IP and haproxy port
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers    # adjust the image repository to your environment
kind: ClusterConfiguration
kubernetesVersion: v1.23.4     # k8s version
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs    # the original used the SupportIPVSProxyMode feature gate, which has since been removed; mode: ipvs is the supported way to enable IPVS
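
An optional sanity check before pulling (not in the original): list the images that this configuration resolves to.

kubeadm config images list --config kubeadm-config.yaml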

29. Pull the required images

[root@master1 ~]# kubeadm config images pull --config kubeadm-config.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.4
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6

30. Initialize the cluster

[root@master1 ~]# kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.23.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1] and IPs [10.96.0.1 10.10.220.10 10.10.220.29]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master1] and IPs [10.10.220.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master1] and IPs [10.10.220.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.019473 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 10.10.220.29:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:22ade41b08d214f20b37e0612612ff3a3a3b6927928679f776488bb0ec25d427 \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.220.29:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:22ade41b08d214f20b37e0612612ff3a3a3b6927928679f776488bb0ec25d427
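
As an alternative to manually copying certificates in steps 31 and 32, kubeadm can distribute the control-plane certificates itself. This is not what the original walkthrough used, but it is a standard kubeadm option:

# initialize with automatic certificate upload (kubeadm stores them encrypted in a Secret for two hours)
kubeadm init --config kubeadm-config.yaml --upload-certs
# additional masters then join with the printed --certificate-key, e.g.:
# kubeadm join 10.10.220.29:16443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
#     --control-plane --certificate-key <key>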

31. Create the following directory on the other two master nodes

mkdir -p /etc/kubernetes/pki/etcd

32. Copy the certificates from the first master node to the other master nodes

scp /etc/kubernetes/pki/ca.* root@10.10.220.11:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* root@10.10.220.11:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* root@10.10.220.11:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* root@10.10.220.11:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf root@10.10.220.11:/etc/kubernetes/
scp /etc/kubernetes/pki/ca.* root@10.10.220.12:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* root@10.10.220.12:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* root@10.10.220.12:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* root@10.10.220.12:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf root@10.10.220.12:/etc/kubernetes/
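
The same copies can be done with a small loop (equivalent to the commands above; the IPs come from the device list):

for host in 10.10.220.11 10.10.220.12; do
  scp /etc/kubernetes/pki/ca.* /etc/kubernetes/pki/sa.* /etc/kubernetes/pki/front-proxy-ca.* root@${host}:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/etcd/ca.* root@${host}:/etc/kubernetes/pki/etcd/
  scp /etc/kubernetes/admin.conf root@${host}:/etc/kubernetes/
done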

33. Copy admin.conf from the first master node to the worker nodes

scp /etc/kubernetes/admin.conf root@10.10.220.20:/etc/kubernetes/
scp /etc/kubernetes/admin.conf root@10.10.220.21:/etc/kubernetes/

34. Run the following command on the master nodes to join the cluster

kubeadm join 10.10.220.29:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:22ade41b08d214f20b37e0612612ff3a3a3b6927928679f776488bb0ec25d427 \
    --control-plane

Output after joining:

[root@master3 ~]# kubeadm join 10.10.220.29:16443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:22ade41b08d214f20b37e0612612ff3a3a3b6927928679f776488bb0ec25d427 \
>     --control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master3] and IPs [10.10.220.12 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master3] and IPs [10.10.220.12 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master3] and IPs [10.96.0.1 10.10.220.12 10.10.220.29]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node master3 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master3 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

35. Run the following command on the worker nodes to join the cluster

[root@slave2 ~]# kubeadm join 10.10.220.29:16443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:22ade41b08d214f20b37e0612612ff3a3a3b6927928679f776488bb0ec25d427
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

36. Run the following commands on all master nodes (optional on the worker nodes)

As the root user:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile

As a non-root user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

37. Check the status of all nodes

[root@master1 ~]# kubectl get nodes
NAME      STATUS     ROLES    AGE     VERSION
master1   NotReady   master   4m54s   v1.18.2
master2   NotReady   master   2m27s   v1.18.2
master3   NotReady   master   93s     v1.18.2
node1     NotReady   <none>   76s     v1.18.2

38. Install the network plugin

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
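
This flannel manifest defaults to the 10.244.0.0/16 pod network, which matches the podSubnet set in kubeadm-config.yaml. To watch the flannel DaemonSet pods come up (optional check):

kubectl get pods -A -o wide | grep flannel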

39. Check node status

[root@master1 ~]# kubectl get nodes
NAME      STATUS   ROLES                  AGE     VERSION
master1   Ready    control-plane,master   41m     v1.23.4
master2   Ready    control-plane,master   12m     v1.23.4
master3   Ready    control-plane,master   10m     v1.23.4
slave1    Ready    <none>                 8m41s   v1.23.4
slave2    Ready    <none>                 7m46s   v1.23.4

40. Download the etcdctl command-line client

wget https://github.com/etcd-io/etcd/releases/download/v3.4.14/etcd-v3.4.14-linux-amd64.tar.gz

41. Unpack it and add it to the PATH

tar -zxf etcd-v3.4.14-linux-amd64.tar.gz
mv etcd-v3.4.14-linux-amd64/etcdctl /usr/local/bin
chmod +x /usr/local/bin/etcdctl

42. Verify that etcdctl works; output like the following means it is set up correctly

[root@master1 ~]# etcdctl
NAME:
        etcdctl - A simple command line client for etcd3.

USAGE:
        etcdctl [flags]

VERSION:
        3.4.14

API VERSION:
        3.4


COMMANDS:
        alarm disarm            Disarms all alarms
        alarm list              Lists all alarms
        auth disable            Disables authentication
        auth enable             Enables authentication
        check datascale         Check the memory usage of holding data for different workloads on a given server endpoint.
        check perf              Check the performance of the etcd cluster
        compaction              Compacts the event history in etcd
        defrag                  Defragments the storage of the etcd members with given endpoints
        del                     Removes the specified key or range of keys [key, range_end)
        elect                   Observes and participates in leader election
        endpoint hashkv         Prints the KV history hash for each endpoint in --endpoints
        endpoint health         Checks the healthiness of endpoints specified in `--endpoints` flag
        endpoint status         Prints out the status of endpoints specified in `--endpoints` flag
        get                     Gets the key or a range of keys
        help                    Help about any command
        lease grant             Creates leases
        lease keep-alive        Keeps leases alive (renew)
        lease list              List all active leases
        lease revoke            Revokes leases
        lease timetolive        Get lease information
        lock                    Acquires a named lock
        make-mirror             Makes a mirror at the destination etcd cluster
        member add              Adds a member into the cluster
        member list             Lists all members in the cluster
        member promote          Promotes a non-voting member in the cluster
        member remove           Removes a member from the cluster
        member update           Updates a member in the cluster
        migrate                 Migrates keys in a v2 store to a mvcc store
        move-leader             Transfers leadership to another etcd cluster member.
        put                     Puts the given key into the store
        role add                Adds a new role
        role delete             Deletes a role
        role get                Gets detailed information of a role
        role grant-permission   Grants a key to a role
        role list               Lists all roles
        role revoke-permission  Revokes a key from a role
        snapshot restore        Restores an etcd member snapshot to an etcd directory
        snapshot save           Stores an etcd node backend snapshot to a given file
        snapshot status         Gets backend snapshot status of a given file
        txn                     Txn processes all the requests in one transaction
        user add                Adds a new user
        user delete             Deletes a user
        user get                Gets detailed information of a user
        user grant-role         Grants a role to a user
        user list               Lists all users
        user passwd             Changes password of user
        user revoke-role        Revokes a role from a user
        version                 Prints the version of etcdctl
        watch                   Watches events stream on keys or prefixes

OPTIONS:
      --cacert=""                           verify certificates of TLS-enabled secure servers using this CA bundle
      --cert=""                             identify secure client using this TLS certificate file
      --command-timeout=5s                  timeout for short running command (excluding dial timeout)
      --debug[=false]                       enable client-side debug logging
      --dial-timeout=2s                     dial timeout for client connections
  -d, --discovery-srv=""                    domain name to query for SRV records describing cluster endpoints
      --discovery-srv-name=""               service name to query when using DNS discovery
      --endpoints=[127.0.0.1:2379]          gRPC endpoints
  -h, --help[=false]                        help for etcdctl
      --hex[=false]                         print byte strings as hex encoded strings
      --insecure-discovery[=true]           accept insecure SRV records describing cluster endpoints
      --insecure-skip-tls-verify[=false]    skip server certificate verification (CAUTION: this option should be enabled only for testing purposes)
      --insecure-transport[=true]           disable transport security for client connections
      --keepalive-time=2s                   keepalive time for client connections
      --keepalive-timeout=6s                keepalive timeout for client connections
      --key=""                              identify secure client using this TLS key file
      --password=""                         password for authentication (if this option is used, --user option shouldn't include password)
      --user=""                             username[:password] for authentication (prompt if password is not supplied)
  -w, --write-out="simple"                  set the output format (fields, json, protobuf, simple, table)

43. Check the health of the etcd HA cluster (use your own master IPs for --endpoints; in this deployment they would be 10.10.220.10, 10.10.220.11 and 10.10.220.12)

[root@master1 ~]# ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=192.168.200.3:2379,192.168.200.4:2379,192.168.200.5:2379 endpoint health
+--------------------+--------+-------------+-------+
|      ENDPOINT      | HEALTH |    TOOK     | ERROR |
+--------------------+--------+-------------+-------+
| 192.168.200.3:2379 |   true | 60.655523ms |       |
| 192.168.200.4:2379 |   true |  60.79081ms |       |
| 192.168.200.5:2379 |   true | 63.585221ms |       |
+--------------------+--------+-------------+-------+

44. List the members of the etcd HA cluster

[root@master1 ~]# ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=192.168.200.3:2379,192.168.200.4:2379,192.168.200.5:2379 member list
+------------------+---------+---------+----------------------------+----------------------------+------------+
|        ID        | STATUS  |  NAME   |         PEER ADDRS         |        CLIENT ADDRS        | IS LEARNER |
+------------------+---------+---------+----------------------------+----------------------------+------------+
| 4a8537d90d14a19b | started | master1 | https://192.168.200.3:2380 | https://192.168.200.3:2379 |      false |
| 4f48f36de1949337 | started | master2 | https://192.168.200.4:2380 | https://192.168.200.4:2379 |      false |
| 88fb5c8676da6ea1 | started | master3 | https://192.168.200.5:2380 | https://192.168.200.5:2379 |      false |
+------------------+---------+---------+----------------------------+----------------------------+------------+

45. Find the leader of the etcd HA cluster

[root@master1 ~]# ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=192.168.200.3:2379,192.168.200.4:2379,192.168.200.5:2379 endpoint status
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|      ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.200.3:2379 | 4a8537d90d14a19b |   3.4.3 |  2.8 MB |      true |      false |         7 |       2833 |               2833 |        |
| 192.168.200.4:2379 | 4f48f36de1949337 |   3.4.3 |  2.7 MB |     false |      false |         7 |       2833 |               2833 |        |
| 192.168.200.5:2379 | 88fb5c8676da6ea1 |   3.4.3 |  2.7 MB |     false |      false |         7 |       2833 |               2833 |        |
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

46. Deploy the Kubernetes dashboard

1.1. Download the recommended.yaml file

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

1.2. Edit the recommended.yaml file

---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort        # add this line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000   # add this line
  selector:
    k8s-app: kubernetes-dashboard
---

1.3. Create certificates

mkdir dashboard-certs
cd dashboard-certs/

# create the namespace
kubectl create namespace kubernetes-dashboard

# create the key file
openssl genrsa -out dashboard.key 2048

# certificate signing request
openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'

# self-signed certificate
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
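
For the dashboard to actually serve this certificate, it normally also has to be stored as the kubernetes-dashboard-certs secret; the original text does not show that step, so the command below is an assumed follow-up based on the standard dashboard setup:

kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.crt --from-file=dashboard.key -n kubernetes-dashboard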

1.4. Install the dashboard (if you see Error from server (AlreadyExists): error when creating "./recommended.yaml": namespaces "kubernetes-dashboard" already exists, ignore it; it is harmless.)

kubectl apply -f recommended.yaml

1.5. Check the installation

[root@master1 ~]# kubectl get pods -A -o wide
NAMESPACE              NAME                                         READY   STATUS    RESTARTS      AGE     IP             NODE      NOMINATED NODE   READINESS GATES
kube-system            coredns-6d8c4cb4d-hx88z                      1/1     Running   0             50m     10.244.4.3     slave2    <none>           <none>
kube-system            coredns-6d8c4cb4d-zngd4                      1/1     Running   0             71m     10.244.4.2     slave2    <none>           <none>
kube-system            etcd-master1                                 1/1     Running   1 (30h ago)   2d22h   10.10.220.10   master1   <none>           <none>
kube-system            etcd-master2                                 1/1     Running   1 (30h ago)   2d22h   10.10.220.11   master2   <none>           <none>
kube-system            etcd-master3                                 1/1     Running   1 (30h ago)   2d22h   10.10.220.12   master3   <none>           <none>
kube-system            kube-apiserver-master1                       1/1     Running   1 (30h ago)   2d22h   10.10.220.10   master1   <none>           <none>
kube-system            kube-apiserver-master2                       1/1     Running   1 (30h ago)   2d22h   10.10.220.11   master2   <none>           <none>
kube-system            kube-apiserver-master3                       1/1     Running   1 (30h ago)   2d22h   10.10.220.12   master3   <none>           <none>
kube-system            kube-controller-manager-master1              1/1     Running   4 (30h ago)   2d22h   10.10.220.10   master1   <none>           <none>
kube-system            kube-controller-manager-master2              1/1     Running   2 (20h ago)   2d22h   10.10.220.11   master2   <none>           <none>
kube-system            kube-controller-manager-master3              1/1     Running   2 (30h ago)   2d22h   10.10.220.12   master3   <none>           <none>
kube-system            kube-flannel-ds-2l7f5                        1/1     Running   0             44m     10.10.220.11   master2   <none>           <none>
kube-system            kube-flannel-ds-6ljmp                        1/1     Running   0             44m     10.10.220.20   slave1    <none>           <none>
kube-system            kube-flannel-ds-6vdgh                        1/1     Running   0             44m     10.10.220.12   master3   <none>           <none>
kube-system            kube-flannel-ds-7jbwh                        1/1     Running   0             44m     10.10.220.10   master1   <none>           <none>
kube-system            kube-flannel-ds-krg2b                        1/1     Running   0             44m     10.10.220.21   slave2    <none>           <none>
kube-system            kube-proxy-6zggm                             1/1     Running   0             74m     10.10.220.21   slave2    <none>           <none>
kube-system            kube-proxy-bzgwm                             1/1     Running   0             75m     10.10.220.20   slave1    <none>           <none>
kube-system            kube-proxy-g6rqq                             1/1     Running   0             75m     10.10.220.12   master3   <none>           <none>
kube-system            kube-proxy-gpts8                             1/1     Running   0             74m     10.10.220.11   master2   <none>           <none>
kube-system            kube-proxy-hk9xh                             1/1     Running   0             75m     10.10.220.10   master1   <none>           <none>
kube-system            kube-scheduler-master1                       1/1     Running   3 (30h ago)   2d22h   10.10.220.10   master1   <none>           <none>
kube-system            kube-scheduler-master2                       1/1     Running   1 (30h ago)   2d22h   10.10.220.11   master2   <none>           <none>
kube-system            kube-scheduler-master3                       1/1     Running   3             2d22h   10.10.220.12   master3   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-799d786dbf-rl9v7   1/1     Running   0             71s     10.244.3.3     slave1    <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-7577b7545-vbslg         1/1     Running   0             71s     10.244.3.2     slave1    <none>           <none>

1.6. Create a dashboard administrator

vim dashboard-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard

1.7. Apply the dashboard-admin.yaml file

kubectl apply -f dashboard-admin.yaml

1.8. Grant the user permissions

vim dashboard-admin-bind-cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard

kubectl apply -f dashboard-admin-bind-cluster-role.yaml

1.9. View and copy the user token

[root@master1 ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name:         dashboard-admin-token-f5ljr
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 95741919-e296-498e-8e10-233c4a34b07a

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Iko5TVI0VVQ2TndBSlBLc2Rxby1CWGxSNHlxYXREWGdVOENUTFVKUmFGakEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZjVsanIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOTU3NDE5MTktZTI5Ni00OThlLThlMTAtMjMzYzRhMzRiMDdhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.tP3EFBeOIJg7_cgJ-M9SDqTtPcfmoJU0nTTGyb8Sxag6Zq4K-g1lCiDqIFbVgrd-4nM7cOTMBfMwyKgdf_Xz573omNNrPDIJCTYkNx2qFN0qfj5qp8Txia3JV8FKRdrmqsap11ItbGD9a7uniIrauc6JKPgksK_WvoXZbKglEUla98ZU9PDm5YXXq8STyUQ6egi35vn5EYCPa-qkUdecE-0N06ZbTFetIYsHEnpswSu8LZZP_Zw7LEfnX9IVdl1147i4OpF4ET9zBDfcJTSr-YE7ILuv1FDYvvo1KAtKawUbGu9dJxsObLeTh5fHx_JWyqg9cX0LB3Gd1ZFm5z5s4g

2.0. Paste the token and access the dashboard at https://<node IP>:30000

Known issue:

namespaces is forbidden: User \"system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope"
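
This error means the dashboard's built-in kubernetes-dashboard service account has no cluster-wide read permissions. It goes away once you log in with the dashboard-admin token created above; alternatively, the default service account can be given read-only access with a binding like the following (a sketch, not from the original text; the binding name is arbitrary):

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-view    # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                         # built-in read-only ClusterRole
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
EOF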
