Table of Contents

I. Environment Preparation

II. Install Docker

III. Configure the Environment

IV. Install keepalived and haproxy on All Master Nodes

V. Deploy the Cluster

VI. Deploy the Kubernetes Dashboard


This article uses a stacked deployment in which etcd, the master components, and the HA layer share the same nodes; running the etcd cluster on separate machines is also perfectly fine.

High availability here means HA for the master-node components and the etcd store. The server IPs and roles used throughout the article are listed below:

I. Environment Preparation

CentOS Linux release 7.7.1908 (Core)  3.10.0-1062.el7.x86_64

kubeadm-1.22.3-0.x86_64
kubelet-1.22.3-0.x86_64
kubectl-1.22.3-0.x86_64
kubernetes-cni-0.8.7-0.x86_64

Hostname        IP               VIP
k8s-master01    192.168.30.106   192.168.30.115
k8s-master02    192.168.30.107
k8s-master03    192.168.30.108
k8s-node01      192.168.30.109
k8s-node02      192.168.30.110

II. Install Docker

1. Add the Docker yum repository on every node

cd /etc/yum.repos.d/
wget https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce

2. Configure the Docker daemon on every node and switch its cgroup driver to systemd (the driver Kubernetes expects)
Create /etc/docker/daemon.json yourself if it does not exist.

cat >/etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"},
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"]
}
EOF

3. Enable and restart the Docker service

systemctl enable docker
systemctl restart docker

4. Configure the hosts file on every node. In a real production environment you would plan internal DNS so that every machine can resolve the hostnames, in which case the hosts file is unnecessary.

cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.30.106 k8s-master01
192.168.30.107 k8s-master02
192.168.30.108 k8s-master03
192.168.30.109 k8s-node01
192.168.30.110 k8s-node02
EOF
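
If you prefer not to edit every node by hand, the same hosts file can be pushed out from k8s-master01 — a minimal sketch, assuming passwordless SSH as root to the other nodes (adjust the IP and hostname lists to your environment):

for ip in 192.168.30.107 192.168.30.108 192.168.30.109 192.168.30.110; do
  scp /etc/hosts root@${ip}:/etc/hosts          # push the hosts file to every other node
done
# quick sanity check: every hostname should resolve to its internal IP
for h in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
  getent hosts ${h}
done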

III. Configure the Environment

1. Turn off the firewall and install prerequisite packages on every node

yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git lrzsz
systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save

2. Disable SELinux on every node

setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

3. Disable the swap partition on every node

swapoff -a && sed -i  '/ swap / s/^\(.*\)$/#\1/g'  /etc/fstab

4. Synchronize the time on every node. If the machines have direct internet access, the commands below are enough.

If the machines cannot reach the internet, set up one internal time server and have every other server synchronize from it.

yum -y install chrony
systemctl start chronyd.service
systemctl enable chronyd.service
timedatectl set-timezone Asia/Shanghai
chronyc -a makestep
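
A quick way to confirm that chrony is actually synchronized before moving on (standard chrony/systemd commands, nothing specific to this setup):

chronyc sources -v     # the source currently in use is marked with ^*
chronyc tracking       # "Leap status : Normal" means the clock is in sync
timedatectl            # shows the timezone set above and the NTP sync state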

5. Tune the kernel parameters on every node

cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_nonlocal_bind = 1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# never use swap unless the system is out of memory
vm.swappiness=0
# do not check whether enough physical memory is available
vm.overcommit_memory=1
# do not panic on OOM, let the OOM killer handle it
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
sysctl -p /etc/sysctl.d/k8s.conf

6. Configure the Kubernetes yum repository on every node

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

7. Load the ipvs kernel modules on every node

cat >/etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/sh
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
#modprobe -- nf_conntrack_ipv4   # on kernels 4.x and later the module is nf_conntrack, not nf_conntrack_ipv4
modprobe -- nf_conntrack
EOF
 chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
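
The /etc/sysconfig/modules/*.modules convention may not be executed automatically on every boot on systemd-based systems. As an alternative sketch, systemd-modules-load can load the same modules at boot (the file name ipvs.conf is arbitrary):

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
systemctl restart systemd-modules-load.service
lsmod | grep -e ip_vs -e nf_conntrack     # confirm the modules are loaded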

8. Configure rsyslogd and systemd-journald

mkdir -p /var/log/journal
mkdir -p /etc/systemd/journald.conf.d/
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# persist logs to disk
Storage=persistent
# compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# maximum disk space used: 10G
SystemMaxUse=10G
# maximum size of a single log file: 200M
SystemMaxFileSize=200M
# keep logs for 2 weeks
MaxRetentionSec=2week
# do not forward logs to syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald

9. Upgrade the system kernel

The stock 3.10.x kernel that ships with CentOS 7.x has bugs that make Docker and Kubernetes unstable.

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# After installation, check that the corresponding kernel menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if it does not, install again!

# Upgrade the kernel

yum --enablerepo=elrepo-kernel install -y kernel-lt

# Set the new kernel as the default boot entry. Be careful here: put whatever kernel version you actually installed inside the quotes.

grub2-set-default 'CentOS Linux (5.4.159-1.el7.elrepo.x86_64) 7 (Core)'
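
To avoid guessing the exact menuentry string, you can list the entries GRUB actually knows about and confirm the default afterwards (standard grub2 tooling on CentOS 7):

awk -F\' '$1=="menuentry " {print $2}' /boot/grub2/grub.cfg   # list the installed kernel entries
grub2-editenv list                                            # confirm which entry will boot next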

10. Reboot the servers (reboot)

IV. Install keepalived and haproxy on All Master Nodes

1. Install the packages on every master node

yum -y install haproxy keepalived

2. Edit the configuration file on k8s-master01

The first master is the keepalived MASTER; the other two masters are BACKUP.

# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    # add the following two lines
    script_user root
    enable_script_security
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"    # path to the health-check script
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state MASTER                 # MASTER
    interface ens192             # local NIC name
    virtual_router_id 51
    priority 100                 # priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.30.115           # the virtual IP
    }
    track_script {
        check_haproxy            # the vrrp_script defined above
    }
}

3. Edit the configuration file on k8s-master02

Copy /etc/keepalived/keepalived.conf to k8s-master02 and k8s-master03 with scp; only the following two lines need to change:

state MASTER             # change to BACKUP on master02 and master03

priority 100             # change to 99 on master02 and 98 on master03

# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    # add the following two lines
    script_user root
    enable_script_security
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"    # path to the health-check script
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP                 # BACKUP
    interface ens192             # local NIC name
    virtual_router_id 51
    priority 99                  # priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.30.115           # the virtual IP
    }
    track_script {
        check_haproxy            # the vrrp_script defined above
    }
}

4. Edit the configuration file on k8s-master03

# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    # add the following two lines
    script_user root
    enable_script_security
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"    # path to the health-check script
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP                 # BACKUP
    interface ens192             # local NIC name
    virtual_router_id 51
    priority 98                  # priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.30.115           # the virtual IP
    }
    track_script {
        check_haproxy            # the vrrp_script defined above
    }
}

5. Configure haproxy.cfg on all three masters; the file is identical on all three.

vim /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode                        tcp
    bind                        *:16443
    option                      tcplog
    default_backend             kubernetes-apiserver

#---------------------------------------------------------------------
# haproxy statistics page
#---------------------------------------------------------------------
listen stats
    bind            *:1080
    stats auth      admin:111111
    stats refresh   5s
    stats realm     HAProxy\ Statistics
    stats uri       /admin?stats

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server  k8s-master01 192.168.30.106:6443 check
    server  k8s-master02 192.168.30.107:6443 check
    server  k8s-master03 192.168.30.108:6443 check

6. Create the haproxy health-check script for keepalived; it is the same on all three masters

vim /etc/keepalived/check_haproxy.sh
#!/bin/sh
# HAPROXY down
pid=`ps -C haproxy --no-header | wc -l`
if [ $pid -eq 0 ]
then
    systemctl start haproxy
    if [ `ps -C haproxy --no-header | wc -l` -eq 0 ]
    then
        killall -9 haproxy
        # decide how you want to handle this event, e.g. send an email or an SMS
        echo "HAPROXY down" >> /tmp/haproxy_check.log
        sleep 10
    fi
fi

7. Make the check script executable

chmod 755 /etc/keepalived/check_haproxy.sh

8. Start and enable haproxy and keepalived

systemctl start keepalived && systemctl enable keepalived
systemctl start haproxy && systemctl enable haproxy
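
A quick check that both services came up as expected (plain systemd and iproute2 commands):

systemctl is-active haproxy keepalived      # both should print "active"
ss -lntp | grep 16443                       # haproxy should be listening on the apiserver frontend port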

9. Check the VIP

k8s-master01 was configured as the keepalived MASTER, so the VIP is only visible on that machine.

 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:aa:ad:f2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.30.106/24 brd 192.168.30.255 scope global ens192
       valid_lft forever preferred_lft forever
    inet 192.168.30.115/32 scope global ens192    # the VIP
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feaa:adf2/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:66:67:ee:0b brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
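
Optionally, you can verify failover before building the cluster on top of the VIP — a sketch, run by hand on the masters:

# on k8s-master01: stop keepalived and the VIP should disappear here
systemctl stop keepalived
# on k8s-master02 (the next-highest priority): the VIP should show up within a few seconds
ip a show ens192 | grep 192.168.30.115
# restore the original state on k8s-master01
systemctl start keepalived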

V. Deploy the Cluster

# The kubeadm, kubectl, and kubelet versions must match the Kubernetes version. Enable kubelet at boot but do NOT start it manually, otherwise it will log errors; kubelet is started automatically once the cluster has been initialized!

1. Install the Kubernetes packages; they are required on every node

# install the latest version directly
yum install -y kubeadm kubectl kubelet
systemctl enable kubelet && systemctl daemon-reload

2. Generate the default configuration file; log in to k8s-master01

kubeadm config print init-defaults > kubeadm-config.yaml

3. Edit the configuration file

vim kubeadm-config.yaml
--------------------------------------------------------------------
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.30.106     # local IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01                   # local hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.30.115:16443"    # VIP and haproxy port
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
# change the image repository to whatever suits your environment
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.22.3     # Kubernetes version
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
#featureGates:   # featureGates such as SupportIPVSProxyMode are no longer supported from v1.20 on; keep them commented out
#  SupportIPVSProxyMode: true
mode: ipvs
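
Before actually initializing, the config can be exercised with kubeadm's --dry-run option, which prints what would be done without changing the node — optional, but cheap insurance against YAML typos:

kubeadm init --config kubeadm-config.yaml --dry-run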

4. Pull the required images

kubeadm config images pull --config kubeadm-config.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.22.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.22.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.22.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.22.3
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.5
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.0-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.4

5. Initialize the cluster

# kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.22.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.30.106 192.168.30.115]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.30.106 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.30.106 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.041373 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.30.115:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:dc58e29a96a7bd4c9c9682f93089a0fef39ee18975b41e9d54512d1989e5a07d \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.30.115:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:dc58e29a96a7bd4c9c9682f93089a0fef39ee18975b41e9d54512d1989e5a07d

6. Create the etcd pki directory on the other two master nodes

mkdir -p /etc/kubernetes/pki/etcd

7. Copy the certificates from the first master to the other two masters


scp /etc/kubernetes/pki/ca.* root@192.168.30.107:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.* root@192.168.30.108:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* root@192.168.30.107:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* root@192.168.30.108:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* root@192.168.30.107:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* root@192.168.30.108:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* 192.168.30.107:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.* 192.168.30.108:/etc/kubernetes/pki/etcd/
# admin.conf is needed on both the master and the worker nodes
scp  /etc/kubernetes/admin.conf 192.168.30.107:/etc/kubernetes/
scp  /etc/kubernetes/admin.conf 192.168.30.108:/etc/kubernetes/
scp  /etc/kubernetes/admin.conf 192.168.30.109:/etc/kubernetes/
scp  /etc/kubernetes/admin.conf 192.168.30.110:/etc/kubernetes/
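
The same copies expressed as a loop, which is easier to keep in a runbook — a sketch assuming passwordless SSH as root and that step 6 (creating /etc/kubernetes/pki/etcd) has already been done on the targets:

for ip in 192.168.30.107 192.168.30.108; do
  scp /etc/kubernetes/pki/ca.*             root@${ip}:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/sa.*             root@${ip}:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/front-proxy-ca.* root@${ip}:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/etcd/ca.*        root@${ip}:/etc/kubernetes/pki/etcd/
done
for ip in 192.168.30.107 192.168.30.108 192.168.30.109 192.168.30.110; do
  scp /etc/kubernetes/admin.conf root@${ip}:/etc/kubernetes/
done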

8. Join the other two master nodes to the cluster

kubeadm join 192.168.30.115:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:47f35fcf0584b3ca586d041057753f2be84dd389004f42a3f92b6c4eb5a42eb1 \
    --control-plane

9. Join the two worker nodes to the cluster

kubeadm join 192.168.30.115:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:47f35fcf0584b3ca586d041057753f2be84dd389004f42a3f92b6c4eb5a42eb1

If the cluster needs more capacity later on, additional worker nodes can be added as follows:

# run on a master node
kubeadm token create --print-join-command
# it prints a join command like the one below; run that command on the prepared node
kubeadm join 192.168.30.115:16443 --token eqxv7y.mu56z0sqg1tzsixp --discovery-token-ca-cert-hash sha256:57070746c239d77ec07c7c85d9c9b3cfe18f5873a652f4938a8a535cbd45812c

10. Run the following on every master node (optional on the worker nodes)

# as root

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile

# as a non-root user

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

11. Check the status of all nodes from k8s-master01

# kubectl get nodes
NAME           STATUS   ROLES                  AGE    VERSION
k8s-master01   Ready    control-plane,master   171m   v1.22.3
k8s-master02   Ready    control-plane,master   167m   v1.22.3
k8s-master03   Ready    control-plane,master   167m   v1.22.3
k8s-node01     Ready    <none>                 166m   v1.22.3
k8s-node02     Ready    <none>                 166m   v1.22.3

12. Install the network plugin; run this on k8s-master01

# without a proxy the file may fail to download, so the full manifest is pasted below

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
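
After the manifest is applied, a couple of standard kubectl checks confirm that flannel is running on every node and that each node received a pod CIDR from the 10.244.0.0/16 range:

kubectl -n kube-system get pods -l app=flannel -o wide
kubectl get nodes -o custom-columns=NAME:.metadata.name,CIDR:.spec.podCIDR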

13. Check the pod status again

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY   STATUS    RESTARTS       AGE
kube-system   coredns-7f6cbbb7b8-shbl8               1/1     Running   0              177m
kube-system   coredns-7f6cbbb7b8-w6cnn               1/1     Running   0              177m
kube-system   etcd-k8s-master01                      1/1     Running   2              177m
kube-system   etcd-k8s-master02                      1/1     Running   1              174m
kube-system   etcd-k8s-master03                      1/1     Running   0              173m
kube-system   kube-apiserver-k8s-master01            1/1     Running   2              177m
kube-system   kube-apiserver-k8s-master02            1/1     Running   1              174m
kube-system   kube-apiserver-k8s-master03            1/1     Running   2 (173m ago)   174m
kube-system   kube-controller-manager-k8s-master01   1/1     Running   4 (173m ago)   177m
kube-system   kube-controller-manager-k8s-master02   1/1     Running   1              174m
kube-system   kube-controller-manager-k8s-master03   1/1     Running   1              174m
kube-system   kube-flannel-ds-49bv5                  1/1     Running   0              172m
kube-system   kube-flannel-ds-68wq2                  1/1     Running   0              172m
kube-system   kube-flannel-ds-bc686                  1/1     Running   0              172m
kube-system   kube-flannel-ds-cgwrl                  1/1     Running   0              172m
kube-system   kube-flannel-ds-sxn2h                  1/1     Running   0              172m
kube-system   kube-proxy-7dpmx                       1/1     Running   0              173m
kube-system   kube-proxy-7n7pl                       1/1     Running   0              173m
kube-system   kube-proxy-d9z59                       1/1     Running   0              177m
kube-system   kube-proxy-j8fgg                       1/1     Running   0              174m
kube-system   kube-proxy-k7qsm                       1/1     Running   0              174m
kube-system   kube-scheduler-k8s-master01            1/1     Running   4 (173m ago)   177m
kube-system   kube-scheduler-k8s-master02            1/1     Running   1              174m
kube-system   kube-scheduler-k8s-master03            1/1     Running   1              174m

# Everything is healthy only when every pod is Running. If you see states like the ones below, check the logs (tail -f /var/log/messages) and troubleshoot; the usual culprits are wrong ipvs settings, kernel parameters, and so on. A few generic troubleshooting commands follow the sample output below.

kube-system   kube-flannel-ds-28jks                  0/1     Error               1          28s
kube-system   kube-flannel-ds-4w9lz                  0/1     Error               1          28s
kube-system   kube-flannel-ds-8rflb                  0/1     Error               1          28s
kube-system   kube-flannel-ds-wfcgq                  0/1     Error               1          28s
kube-system   kube-flannel-ds-zgn46                  0/1     Error               1          28s
kube-system   kube-proxy-b8lxm                       0/1     CrashLoopBackOff    4          2m15s
kube-system   kube-proxy-bmf9q                       0/1     CrashLoopBackOff    7          14m
kube-system   kube-proxy-bng8p                       0/1     CrashLoopBackOff    6          7m31s
kube-system   kube-proxy-dpkh4                       0/1     CrashLoopBackOff    6          10m
kube-system   kube-proxy-xl45p                       0/1     CrashLoopBackOff    4          2m30s
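
A sketch of commands that usually reveal the cause — substitute the name of the failing pod and run journalctl on the node where it is scheduled:

kubectl -n kube-system describe pod kube-flannel-ds-28jks     # events often show image-pull or CNI errors
kubectl -n kube-system logs kube-proxy-b8lxm --previous       # logs of the last crashed container
journalctl -u kubelet -f                                      # the kubelet's view on the affected node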

14. Download and install the etcdctl client; installing it on k8s-master01 only is enough

wget https://github.com/etcd-io/etcd/releases/download/v3.4.14/etcd-v3.4.14-linux-amd64.tar.gz
tar -zxf etcd-v3.4.14-linux-amd64.tar.gz
mv etcd-v3.4.14-linux-amd64/etcdctl /usr/local/bin
chmod +x /usr/local/bin/etcdctl

15. Verify that etcdctl works; if it prints output like the following, everything is fine

# etcdctl
NAME:
        etcdctl - A simple command line client for etcd3.

USAGE:
        etcdctl [flags]

VERSION:
        3.4.14

API VERSION:
        3.4

COMMANDS:
        alarm disarm            Disarms all alarms
        alarm list              Lists all alarms
        auth disable            Disables authentication
        auth enable             Enables authentication
        check datascale         Check the memory usage of holding data for different workloads on a given server endpoint.
        check perf              Check the performance of the etcd cluster
        compaction              Compacts the event history in etcd
        defrag                  Defragments the storage of the etcd members with given endpoints
        del                     Removes the specified key or range of keys [key, range_end)
        elect                   Observes and participates in leader election
        endpoint hashkv         Prints the KV history hash for each endpoint in --endpoints
        endpoint health         Checks the healthiness of endpoints specified in `--endpoints` flag
        endpoint status         Prints out the status of endpoints specified in `--endpoints` flag
        get                     Gets the key or a range of keys
        help                    Help about any command
        lease grant             Creates leases
        lease keep-alive        Keeps leases alive (renew)

16. Check the various states of the etcd cluster

# check etcd cluster health

ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=192.168.30.106:2379,192.168.30.107:2379,192.168.30.108:2379 endpoint health
+---------------------+--------+-------------+-------+
|      ENDPOINT       | HEALTH |    TOOK     | ERROR |
+---------------------+--------+-------------+-------+
| 192.168.30.106:2379 |   true | 35.474753ms |       |
| 192.168.30.107:2379 |   true | 39.358382ms |       |
| 192.168.30.108:2379 |   true | 47.269479ms |       |
+---------------------+--------+-------------+-------+

# list the etcd cluster members

ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=192.168.30.106:2379,192.168.30.107:2379,192.168.30.108:2379 member list
+------------------+---------+--------------+-----------------------------+-----------------------------+------------+
|        ID        | STATUS  |     NAME     |         PEER ADDRS          |        CLIENT ADDRS         | IS LEARNER |
+------------------+---------+--------------+-----------------------------+-----------------------------+------------+
| 33aecdbe33accd41 | started | k8s-master01 | https://192.168.30.106:2380 | https://192.168.30.106:2379 |      false |
| 6bdbdbf3772b7e2c | started | k8s-master02 | https://192.168.30.107:2380 | https://192.168.30.107:2379 |      false |
| ce323eca1d06e307 | started | k8s-master03 | https://192.168.30.108:2380 | https://192.168.30.108:2379 |      false |
+------------------+---------+--------------+-----------------------------+-----------------------------+------------+

# check which etcd member is the leader

ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=192.168.30.106:2379,192.168.30.107:2379,192.168.30.108:2379 endpoint status
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|      ENDPOINT       |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.30.106:2379 | 33aecdbe33accd41 |   3.5.0 |  3.3 MB |      true |      false |         3 |      31405 |              31405 |        |
| 192.168.30.107:2379 | 6bdbdbf3772b7e2c |   3.5.0 |  3.2 MB |     false |      false |         3 |      31405 |              31405 |        |
| 192.168.30.108:2379 | ce323eca1d06e307 |   3.5.0 |  3.3 MB |     false |      false |         3 |      31405 |              31405 |        |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
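
To avoid repeating the long certificate flags, etcdctl also reads them from environment variables; the following is equivalent to the commands above:

export ETCDCTL_API=3
export ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt
export ETCDCTL_CERT=/etc/kubernetes/pki/etcd/peer.crt
export ETCDCTL_KEY=/etc/kubernetes/pki/etcd/peer.key
export ETCDCTL_ENDPOINTS=192.168.30.106:2379,192.168.30.107:2379,192.168.30.108:2379
etcdctl endpoint status --write-out=table     # same output as above, without the flags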

VI. Deploy the Kubernetes Dashboard

Note: since the cluster here is already running API version 1.22.3, the dashboard must be v2.3 or later; v2.0 no longer works.

1. Download the recommended.yaml file (without a proxy it will probably fail to download)

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml

# The downloaded file is pasted below. Two changes were made: the Service type was set to NodePort and a fixed nodePort was added; see the comments.

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort   # NodePort mode
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000  # use port 30000
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.4.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.7
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

# install

kubectl apply -f recommended.yaml

2. Check the result

# kubectl get pods --all-namespaces
ingress-nginx          ingress-nginx-admission-create--1-d5xsw     0/1     Completed   0                4h22m
ingress-nginx          ingress-nginx-admission-patch--1-t975b      0/1     Completed   1                4h22m
ingress-nginx          ingress-nginx-controller-55888bbc94-82vsl   1/1     Running     0                4h22m
ingress-nginx          ingress-nginx-controller-55888bbc94-z2f97   1/1     Running     0                4h22m
kube-system            coredns-7f6cbbb7b8-8n9vq                    1/1     Running     0                3h44m
kube-system            coredns-7f6cbbb7b8-jqxc2                    1/1     Running     0                3h44m
kube-system            etcd-k8s-master01                           1/1     Running     8 (3h49m ago)    3d23h
kube-system            etcd-k8s-master02                           1/1     Running     2                3d23h
kube-system            etcd-k8s-master03                           1/1     Running     1                3d23h
kube-system            kube-apiserver-k8s-master01                 1/1     Running     8 (3h49m ago)    3d23h
kube-system            kube-apiserver-k8s-master02                 1/1     Running     2                3d23h
kube-system            kube-apiserver-k8s-master03                 1/1     Running     4 (3d23h ago)    3d23h
kube-system            kube-controller-manager-k8s-master01        1/1     Running     11 (3h49m ago)   3d23h
kube-system            kube-controller-manager-k8s-master02        1/1     Running     2                3d23h
kube-system            kube-controller-manager-k8s-master03        1/1     Running     2                3d23h
kube-system            kube-flannel-ds-hqhc7                       1/1     Running     1 (2d23h ago)    3d22h
kube-system            kube-flannel-ds-k2kgk                       1/1     Running     8 (2d23h ago)    3d23h
kube-system            kube-flannel-ds-s7sxm                       1/1     Running     6 (3h49m ago)    3d23h
kube-system            kube-flannel-ds-t7l8t                       1/1     Running     0                3d23h
kube-system            kube-flannel-ds-vthj9                       1/1     Running     0                3d23h
kube-system            kube-proxy-6wk8x                            1/1     Running     1 (2d23h ago)    3d1h
kube-system            kube-proxy-gxjrr                            1/1     Running     8 (2d23h ago)    3d1h
kube-system            kube-proxy-q9w7m                            1/1     Running     0                3d1h
kube-system            kube-proxy-trq2p                            1/1     Running     0                3d1h
kube-system            kube-proxy-w8lfw                            1/1     Running     4 (3h49m ago)    3d1h
kube-system            kube-scheduler-k8s-master01                 1/1     Running     11 (3h49m ago)   3d23h
kube-system            kube-scheduler-k8s-master02                 1/1     Running     2                3d23h
kube-system            kube-scheduler-k8s-master03                 1/1     Running     2                3d23h
kubernetes-dashboard   dashboard-metrics-scraper-c45b7869d-ckhcl   1/1     Running     0                9m28s
kubernetes-dashboard   kubernetes-dashboard-576cb95f94-jnv7f       1/1     Running     0                9m28s

3. Check the dashboard service

kubectl get service -n kubernetes-dashboard  -o wide
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE    SELECTOR
dashboard-metrics-scraper   ClusterIP   10.96.145.101   <none>        8000/TCP        132m   k8s-app=dashboard-metrics-scraper
kubernetes-dashboard        NodePort    10.110.14.77    <none>        443:30000/TCP   132m   k8s-app=kubernetes-dashboard
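
A quick reachability check from any machine that can reach the VIP — the dashboard uses a self-signed certificate, hence -k; an HTTP 200 means the NodePort is serving:

curl -k -s -o /dev/null -w '%{http_code}\n' https://192.168.30.115:30000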

4. Create a dashboard administrator

vim dashboard-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard

# apply it

kubectl apply -f dashboard-admin.yaml

5. Grant the administrator permissions

vim dashboard-admin-bind-cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard

# apply it

kubectl apply -f dashboard-admin-bind-cluster-role.yaml

6. Get the administrator token

# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name:         dashboard-admin-token-tzp2d
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 37a23381-007c-4bab-a07b-42767a56d859

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1099 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InRFRF9MWlhDLVZ2MkJjT2tXUXQ4QlRhWVowOTVTRTBkZ2tDcF9xaE5qOFEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tdHpwMmQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzdhMjMzODEtMDA3Yy00YmFiLWEwN2ItNDI3NjdhNTZkODU5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.TIWkVlu7SrwK9GetIC9eE32sgzuta0Zy52Ta3KkPmlQaINgqZx38I3nrFJ1u_641tENNu_60T3PjCbZweiqmpPTiyazL9Lw8uSQ5sbX3hauSzC5xOA1CX4AH1KEUnBYwWhuI-1VpXeXX-nVn7PoDElNoHBdXZ2l3NNLx2KmmaFoXHiVXAiIzTvSGY4DxJ9y6g2Tyz7GFOlOfOgpKYbVZlKufqrXEiO5SoUE_WndJSlt65UydQZ_zwmhA_6zWSxTDj2jF1o76eYXjpMLT0ioM51k-OzgljnRKZU7Jy67XJzj5VdJuDUdTZ0KADhF2XAkh-Vre0tjMk0867VHq0K_Big

7. Open a browser and go to https://192.168.30.115:30000

# choose the Token login option and paste the token obtained above

At this point the highly available Kubernetes cluster is fully deployed. As a test, k8s-master01 was shut down and the service stayed available; only the dashboard page flickered briefly.

# If, after the cluster has been running for a while, you want to add another node:

1. First run this on a master node

#  kubeadm token create --print-join-command --ttl 0
kubeadm join 192.168.30.115:16443 --token lppxf9.7t5yfk8ruq69hpi0 --discovery-token-ca-cert-hash sha256:dc58e29a96a7bd4c9c9682f93089a0fef39ee18975b41e9d54512d1989e5a07d

2. Then run the printed command on the new node

kubeadm join 192.168.30.115:16443 --token lppxf9.7t5yfk8ruq69hpi0 --discovery-token-ca-cert-hash sha256:dc58e29a96a7bd4c9c9682f93089a0fef39ee18975b41e9d54512d1989e5a07d
