Contents:

  • 1. Equipment list
  • 2. Initial environment preparation
    • 2.1 Hostname resolution
    • 2.2 Disable the firewall and SELinux
    • 2.3 Disable NetworkManager
    • 2.4 Synchronize time
    • 2.5 Upgrade the system (optional; I skipped it)
    • 2.6 Upgrade the kernel
    • 2.7 Configure installation repositories
      • 2.7.1 Configure the k8s repository
      • 2.7.2 Configure the docker repository
    • 2.8 Install ipvs modules
    • 2.9 Tune kernel parameters
    • 2.10 Disable the swap partition
    • 2.11 Passwordless SSH from master01 to the other nodes
    • 2.12 Configure limits
  • 3. Install base components
    • 3.1 Install containerd
    • 3.2 Install the k8s components
    • 3.3 Install docker
    • 3.4 Configure kubelet to use Aliyun's pause image
  • 4. Install high-availability components
    • 4.1 Install the HAProxy and keepalived services
    • 4.2 Edit the haproxy configuration file
    • 4.3 Edit the keepalived configuration files
      • 4.3.1 Edit the master01 configuration
      • 4.3.2 Edit the master02 configuration
    • 4.4 Create the health-check script
    • 4.5 Start the haproxy and keepalived services
  • 5. Deploy the k8s cluster
    • 5.1 Generate and edit the init configuration file
    • 5.2 Pull the required images
    • 5.3 Initialize the primary master node (master01)
    • 5.4 Configure environment variables
    • 5.5 Install the network plugin
    • 5.6 Check the primary master and system component status
    • 5.7 Join nodes to the cluster
      • 5.7.1 Join master nodes to the cluster
      • 5.7.2 Join node (worker) nodes to the cluster
      • 5.7.3 View cluster nodes from a master
      • 5.7.4 If you forget the token
  • 6. Deploy Metrics
  • 7. Deploy Dashboard

1. Equipment list

Due to limited resources, this lab uses only three virtual machines: two master nodes and one node (worker).
The high-availability components are likewise deployed on the two master nodes.
If you have the capacity, you can expand the environment to five machines with more masters and more nodes, or even seven with the HA components split onto their own hosts; the deployment procedure is unaffected.

Device        IP
k8s-master01  192.168.1.3
k8s-master02  192.168.1.4
k8s-node01    192.168.1.5
vip           192.168.1.100

2. Initial environment preparation

(Perform these steps on all nodes:)

2.1 Hostname resolution

[root@k8s-master01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.3 k8s-master01
192.168.1.4 k8s-master02
192.168.1.5 k8s-node01

2.2 Disable the firewall and SELinux

[root@k8s-master01 ~]# systemctl stop firewalld && systemctl disable firewalld
[root@k8s-master01 ~]# setenforce 0
setenforce: SELinux is disabled
[root@k8s-master01 ~]# cat /etc/sysconfig/selinux
SELINUX=disabled
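
The setenforce output above shows SELinux was already disabled on this host. If it is not on yours, the setting can also be made persistent across reboots with something like this (assuming the stock CentOS config file layout):

[root@k8s-master01 ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config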

2.3 Disable NetworkManager

[root@k8s-master01 ~]# systemctl disable --now NetworkManager
[root@k8s-master01 ~]# systemctl restart network
[root@k8s-master01 ~]# ping www.baidu.com
# confirm the network is reachable

2.4 Synchronize time

[root@k8s-master01 ~]# ntpdate ntp.aliyun.com
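
ntpdate performs a one-shot synchronization. To keep the clocks aligned afterwards, one option (not part of the original steps) is a periodic cron entry:

[root@k8s-master01 ~]# (crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate ntp.aliyun.com >/dev/null 2>&1") | crontab -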

2.5 Upgrade the system (optional; I skipped it). The commands are as follows:

yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 -y
yum update -y --exclude=kernel* && reboot

2.6 Upgrade the kernel

[root@k8s-master01 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@k8s-master01 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Retrieving http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Retrieving http://elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:elrepo-release-7.0-4.el7.elrepo  ################################# [100%]
[root@k8s-master01 ~]# yum --enablerepo=elrepo-kernel install kernel-ml kernel-ml-devel -y
[root@k8s-master01 ~]# grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg && grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)" && reboot
[root@k8s-master01 ~]# uname -r
5.8.8-1.el7.elrepo.x86_64

2.7 Configure installation repositories

2.7.1 Configure the k8s repository

[root@k8s-master01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF

2.7.2 Configure the docker repository

[root@k8s-master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-master01 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
[root@k8s-master01 ~]# yum makecache fast

2.8 Install ipvs modules

[root@k8s-master01 ~]# yum install ipvsadm ipset sysstat conntrack libseccomp -y
[root@k8s-master01 ~]# vim /etc/modules-load.d/ipvs.conf
[root@k8s-master01 ~]# cat /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
[root@k8s-master01 ~]# systemctl enable --now systemd-modules-load.service
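
To verify that systemd-modules-load actually loaded the modules, a quick check like this should list the ip_vs and nf_conntrack entries:

[root@k8s-master01 ~]# lsmod | grep -e ip_vs -e nf_conntrack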

2.9 Tune kernel parameters

[root@k8s-master01 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
> net.ipv4.ip_forward = 1
> net.bridge.bridge-nf-call-iptables = 1
> fs.may_detach_mounts = 1
> vm.overcommit_memory=1
> vm.panic_on_oom=0
> fs.inotify.max_user_watches=89100
> fs.file-max=52706963
> fs.nr_open=52706963
> net.netfilter.nf_conntrack_max=2310720
>
> net.ipv4.tcp_keepalive_time = 600
> net.ipv4.tcp_keepalive_probes = 3
> net.ipv4.tcp_keepalive_intvl = 15
> net.ipv4.tcp_max_tw_buckets = 36000
> net.ipv4.tcp_tw_reuse = 1
> net.ipv4.tcp_max_orphans = 327680
> net.ipv4.tcp_orphan_retries = 3
> net.ipv4.tcp_syncookies = 1
> net.ipv4.tcp_max_syn_backlog = 16384
> net.ipv4.ip_conntrack_max = 65536
> net.ipv4.tcp_timestamps = 0
> net.core.somaxconn = 16384
> EOF
[root@k8s-master01 ~]# sysctl --system

2.10 Disable the swap partition

[root@k8s-master01 ~]# swapoff -a
[root@k8s-master01 ~]# vim /etc/fstab
[root@k8s-master01 ~]# cat /etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
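
To confirm that no swap is active after these changes, the Swap line below should report all zeros:

[root@k8s-master01 ~]# free -m | grep Swap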

2.11 Passwordless SSH from master01 to the other nodes

[root@k8s-master01 ~]# ssh-keygen -t rsa
[root@k8s-master01 ~]# for i in k8s-master01 k8s-master02 k8s-node01;do ssh-copy-id -i .ssh/id_rsa.pub $i;done

2.12 Configure limits

[root@k8s-master01 ~]# ulimit -SHn 65535
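
ulimit -SHn only affects the current shell session. To persist the limit across logins, entries along these lines (a common practice, not shown in the original) can be appended to /etc/security/limits.conf:

[root@k8s-master01 ~]# cat >> /etc/security/limits.conf <<EOF
* soft nofile 65535
* hard nofile 65535
EOF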

3. Install base components

(Install on all nodes:)

3.1 Install containerd

[root@k8s-master01 ~]# wget https://download.docker.com/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm
[root@k8s-master01 ~]# yum -y install containerd.io-1.2.13-3.2.el7.x86_64.rpm

3.2 Install the k8s components

[root@k8s-master01 ~]# yum -y install kubeadm kubelet kubectl --disableexcludes=kubernetes
Installed:
  kubeadm.x86_64 0:1.19.1-0   kubectl.x86_64 0:1.19.1-0   kubelet.x86_64 0:1.19.1-0
Installed as dependencies:
  cri-tools.x86_64 0:1.13.0-0   kubernetes-cni.x86_64 0:0.8.7-0   socat.x86_64 0:1.7.3.2-2.el7
[root@k8s-master01 ~]# systemctl enable --now kubelet

3.3 Install docker

[root@k8s-master01 ~]# yum -y install docker-ce
Installed: docker-ce.x86_64 3:19.03.12-3.el7
[root@k8s-master01 ~]# systemctl start docker && systemctl enable docker
[root@k8s-master01 ~]# vim /etc/docker/daemon.json
[root@k8s-master01 ~]# cat /etc/docker/daemon.json
{"exec-opts": ["native.cgroupdriver=systemd"],"registry-mirrors":["https://655dds7u.mirror.aliyuncs.com"]
}
[root@k8s-master01 ~]# systemctl restart docker

3.4 Configure kubelet to use Aliyun's pause image

[root@k8s-master01 ~]# cat >/etc/sysconfig/kubelet<<EOF
> KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
> EOF
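
The cgroup driver here is set to systemd to match the native.cgroupdriver=systemd setting in daemon.json above. After changing the extra args, reload and restart kubelet so it picks up the new flags; until kubeadm init runs, kubelet restarting in a loop is expected at this stage:

[root@k8s-master01 ~]# systemctl daemon-reload && systemctl restart kubelet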

4. Install high-availability components

(Install on all master nodes:)

4.1 Install the HAProxy and keepalived services

[root@k8s-master01 ~]# yum install keepalived haproxy -y
Installed: haproxy.x86_64 0:1.5.18-9.el7          keepalived.x86_64 0:1.3.5-16.el7

4.2 Edit the haproxy configuration file

(The configuration is identical on all master nodes:)

[root@k8s-master01 ~]# cd /etc/haproxy/
[root@k8s-master01 haproxy]# cp haproxy.cfg haproxy.cfg.bak
[root@k8s-master01 haproxy]# vim haproxy.cfg
[root@k8s-master01 haproxy]# cat haproxy.cfg
global
    maxconn  2000
    ulimit-n  16384
    log  127.0.0.1 local0 err
    stats timeout 30s

defaults
    log global
    mode  http
    option  httplog
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    timeout http-request 15s
    timeout http-keep-alive 15s

frontend monitor-in
    bind *:33305
    mode http
    option httplog
    monitor-uri /monitor

listen stats
    bind    *:8006
    mode    http
    stats   enable
    stats   hide-version
    stats   uri       /stats
    stats   refresh   30s
    stats   realm     Haproxy\ Statistics
    stats   auth      admin:admin

frontend k8s-master
    bind 0.0.0.0:16443
    bind 127.0.0.1:16443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    default_backend k8s-master

backend k8s-master
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server master01  192.168.1.3:6443  check
    server master02  192.168.1.4:6443  check

4.3 Edit the keepalived configuration files

4.3.1 Edit the master01 configuration

[root@k8s-master01 ~]# cd /etc/keepalived/
[root@k8s-master01 keepalived]# cp keepalived.conf keepalived.conf.bak
[root@k8s-master01 keepalived]# vim keepalived.conf
[root@k8s-master01 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    mcast_src_ip 192.168.1.3
    virtual_router_id 51
    priority 150
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
        chk_apiserver
    }
}

4.3.2 Edit the master02 configuration

[root@k8s-master02 ~]# cd /etc/keepalived/
[root@k8s-master02 keepalived]# cp keepalived.conf keepalived.conf.bak
[root@k8s-master02 keepalived]# vim keepalived.conf
[root@k8s-master02 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip 192.168.1.4
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
        chk_apiserver
    }
}

4.4 Create the health-check script

(On all master nodes:)

[root@k8s-master01 ~]# vim /etc/keepalived/check_apiserver.sh
[root@k8s-master01 ~]# cat /etc/keepalived/check_apiserver.sh
#!/bin/bash
err=0
for k in $(seq 1 5)
do
    check_code=$(pgrep kube-apiserver)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 5
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
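
keepalived can only run the health check if the script is executable, so mark it as such on all master nodes (a step implied but not shown above):

[root@k8s-master01 ~]# chmod +x /etc/keepalived/check_apiserver.sh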

4.5 Start the haproxy and keepalived services

[root@k8s-master01 ~]# systemctl start haproxy
[root@k8s-master01 ~]# systemctl enable haproxy
[root@k8s-master01 ~]# systemctl start keepalived
[root@k8s-master01 ~]# systemctl enable keepalived
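
To confirm that the VIP came up on master01 (state MASTER, priority 150), check that 192.168.1.100 is bound to the interface:

[root@k8s-master01 ~]# ip addr show eth0 | grep 192.168.1.100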

5. Deploy the k8s cluster

5.1 Generate and edit the init configuration file

(On all master nodes:)

[root@k8s-master01 ~]# kubeadm config print init-defaults > init.default.yaml
[root@k8s-master01 ~]# vim init.default.yaml
[root@k8s-master01 ~]# cat init.default.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.3
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.1.100
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.1.100:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
networking:
  dnsDomain: cluster.local
  podSubnet: 172.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}

5.2 Pull the required images

(On all master nodes:)

[root@k8s-master01 ~]# kubeadm config images pull --config /root/init.default.yaml
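
Optionally, verify that the control-plane images were pulled from the Aliyun mirror set in imageRepository:

[root@k8s-master01 ~]# docker images | grep registry.cn-hangzhou.aliyuncs.com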

5.3 Initialize the primary master node (master01)

[root@k8s-master01 ~]# kubeadm init --config /root/init.default.yaml --upload-certs

Partial output of the initialization:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.1.100:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:f0e18d595009a909d60378598bbf80895cf393b4ebf851fa75c73c88de5644cb \
    --control-plane --certificate-key d092072bb3f05ad103537a3d371b429edc84f38d55cfe57e6f62d80e79b7455d

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.100:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:f0e18d595009a909d60378598bbf80895cf393b4ebf851fa75c73c88de5644cb

Create the config directory as instructed:

[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# chown $(id -u):$(id -g) $HOME/.kube/config

5.4 Configure environment variables

(On all master nodes:)

[root@k8s-master01 ~]# cat <<EOF >> /root/.bashrc
> export KUBECONFIG=/etc/kubernetes/admin.conf
> EOF
[root@k8s-master01 ~]# source /root/.bashrc

5.5 Install the network plugin

[root@k8s-master01 ~]# curl https://docs.projectcalico.org/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  182k  100  182k    0     0  37949      0  0:00:04  0:00:04 --:--:-- 42388
[root@k8s-master01 ~]# vim calico.yaml
# set CALICO_IPV4POOL_CIDR to match the podSubnet from the init configuration:
- name: CALICO_IPV4POOL_CIDR
  value: "172.168.0.0/16"
[root@k8s-master01 ~]# kubectl apply -f calico.yaml

5.6 Check the primary master and system component status

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   26m   v1.19.1
[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-c9784d67d-52ksp   1/1     Running   0          3m23s   172.168.32.130   k8s-master01   <none>           <none>
calico-node-947h8                         1/1     Running   0          3m23s   192.168.1.3      k8s-master01   <none>           <none>
coredns-6c76c8bb89-qqc5h                  1/1     Running   0          26m     172.168.32.131   k8s-master01   <none>           <none>
coredns-6c76c8bb89-wwtgh                  1/1     Running   0          26m     172.168.32.129   k8s-master01   <none>           <none>
etcd-k8s-master01                         1/1     Running   0          26m     192.168.1.3      k8s-master01   <none>           <none>
kube-apiserver-k8s-master01               1/1     Running   0          26m     192.168.1.3      k8s-master01   <none>           <none>
kube-controller-manager-k8s-master01      1/1     Running   0          26m     192.168.1.3      k8s-master01   <none>           <none>
kube-proxy-f4fgp                          1/1     Running   0          26m     192.168.1.3      k8s-master01   <none>           <none>
kube-scheduler-k8s-master01               1/1     Running   0          26m     192.168.1.3      k8s-master01   <none>           <none>

5.7 Join nodes to the cluster

5.7.1 Join master nodes to the cluster

[root@k8s-master02 ~]# kubeadm join 192.168.1.100:16443 --token abcdef.0123456789abcdef     --discovery-token-ca-cert-hash sha256:f0e18d595009a909d60378598bbf80895cf393b4ebf851fa75c73c88de5644cb     --control-plane --certificate-key d092072bb3f05ad103537a3d371b429edc84f38d55cfe57e6f62d80e79b7455d

5.7.2 Join node (worker) nodes to the cluster

[root@k8s-node01 ~]# kubeadm join 192.168.1.100:16443 --token abcdef.0123456789abcdef     --discovery-token-ca-cert-hash sha256:f0e18d595009a909d60378598bbf80895cf393b4ebf851fa75c73c88de5644cb

5.7.3 View cluster nodes from a master

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   Ready      master   34m     v1.19.1
k8s-master02   Ready      master   4m42s   v1.19.1
k8s-node01     NotReady   <none>   60s     v1.19.1

Now check all pods:

[root@k8s-master01 ~]#  kubectl get pods -n kube-system -o wide
NAME                                      READY   STATUS    RESTARTS   AGE    IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-c9784d67d-52ksp   1/1     Running   1          123m   172.168.32.134   k8s-master01   <none>           <none>
calico-node-947h8                         1/1     Running   1          123m   192.168.1.3      k8s-master01   <none>           <none>
calico-node-9hcfm                         1/1     Running   0          117m   192.168.1.4      k8s-master02   <none>           <none>
calico-node-cfzc2                         1/1     Running   0          113m   192.168.1.5      k8s-node01     <none>           <none>
coredns-6c76c8bb89-qqc5h                  1/1     Running   1          147m   172.168.32.133   k8s-master01   <none>           <none>
coredns-6c76c8bb89-wwtgh                  1/1     Running   1          147m   172.168.32.132   k8s-master01   <none>           <none>
etcd-k8s-master01                         1/1     Running   1          147m   192.168.1.3      k8s-master01   <none>           <none>
etcd-k8s-master02                         1/1     Running   1          117m   192.168.1.4      k8s-master02   <none>           <none>
kube-apiserver-k8s-master01               1/1     Running   2          147m   192.168.1.3      k8s-master01   <none>           <none>
kube-apiserver-k8s-master02               1/1     Running   1          117m   192.168.1.4      k8s-master02   <none>           <none>
kube-controller-manager-k8s-master01      1/1     Running   3          147m   192.168.1.3      k8s-master01   <none>           <none>
kube-controller-manager-k8s-master02      1/1     Running   1          117m   192.168.1.4      k8s-master02   <none>           <none>
kube-proxy-f4fgp                          1/1     Running   1          147m   192.168.1.3      k8s-master01   <none>           <none>
kube-proxy-lgdzm                          1/1     Running   0          117m   192.168.1.4      k8s-master02   <none>           <none>
kube-proxy-v8jm5                          1/1     Running   0          113m   192.168.1.5      k8s-node01     <none>           <none>
kube-scheduler-k8s-master01               1/1     Running   3          147m   192.168.1.3      k8s-master01   <none>           <none>
kube-scheduler-k8s-master02               1/1     Running   1          117m   192.168.1.4      k8s-master02   <none>           <none>
metrics-server-769bd9c6f4-nndxf           1/1     Running   21         83m    172.168.85.193   k8s-node01     <none>           <none>

5.7.4 If you forget the token

[root@k8s-master01 ~]# kubeadm token create --print-join-command
Note: the command generated here lets node (worker) nodes join.
[root@k8s-master01 ~]# kubeadm init phase upload-certs  --upload-certs
Note: append the certificate key generated here to the join command from the previous step to let master nodes join the cluster.
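
Putting the two together, a control-plane join command has this shape (the angle-bracket placeholders stand for the values printed by the two commands above):

kubeadm join 192.168.1.100:16443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>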

6. Deploy Metrics

(On master01:)

[root@k8s-master01 ~]# git clone https://github.com/dotbalo/k8s-ha-install.git
[root@k8s-master01 ~]# cd k8s-ha-install/metrics-server-3.6.1/
[root@k8s-master01 metrics-server-3.6.1]# kubectl create -f .
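
Once the metrics-server pod is Running, resource metrics should be queryable, for example:

[root@k8s-master01 ~]# kubectl top nodes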

7. Deploy Dashboard

[root@k8s-master01 ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml

Check the Dashboard

[root@k8s-master01 ~]# kubectl get po -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7b59f7d4df-4zhb5   1/1     Running   0          92s
kubernetes-dashboard-665f4c5ff-cbtst         1/1     Running   0          92s
[root@k8s-master01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.98.122.54   <none>        8000/TCP   2m16s
kubernetes-dashboard        ClusterIP   10.97.164.10   <none>        443/TCP    2m17s
[root@k8s-master01 ~]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
service/kubernetes-dashboard edited
Note: in the editor, change the "type" field (third line from the bottom) to NodePort:
  type: NodePort
[root@k8s-master01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.98.122.54   <none>        8000/TCP        5m23s
kubernetes-dashboard        NodePort    10.97.164.10   <none>        443:30424/TCP   5m24s
Note: remember port 30424; it will be needed shortly.

Create an administrator user

[root@k8s-master01 ~]# vim admin.yaml
[root@k8s-master01 ~]# cat admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
[root@k8s-master01 ~]# kubectl create -f admin.yaml
serviceaccount/admin-user created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

View the token

[root@k8s-master01 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-cdj9z
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 69946572-41f4-4a72-9e84-025f6d9e0d67

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImItbnNRd2NqaEJTUkNHX25WYlQwUVdReDU1ODFZWHgzbkV0bHhrdlB0eGsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWNkajl6Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2OTk0NjU3Mi00MWY0LTRhNzItOWU4NC0wMjVmNmQ5ZTBkNjciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.hB8Ij0nHvH_iumG89ejqcZLaLGupSc-SuLyIpEKcdrYJhgXcT0EvNIC7LamQJdqr12b8p2LTYuiqVzz04yOYmXB0_MxmrXVpbySjWoqfKQHMon_ew9EAN-1oHgBHkzAbD9jiLaJyPduHajwaBQz6LtIC7QPn9cwRMYZvUXVqys3q7mU2EEFt3I4TS6CReg7oZ-WezTF-7ggRFjpe5E0pZrcovICIJTiG2e8y5d0Tflyx3oWhsEqDywSfkw3CJICJmKG_-TgxmMJuFcHY_1EzZEmvpsrnBIWTii6-6-3vAmC2irUs4TRBzldIt4gzcF4Dw2iXn1r8Tn_u77Wh9uReMA
Note: this token is needed to log in to the web page.

Log in from a browser to test:
https://192.168.1.100:30424


If you can see this page, the deployment was successful.
