Deploying a Highly Available Kubernetes Cluster with kubeadm
1. Environment preparation
Notes:
- Disable swap
- Disable SELinux
- Disable iptables
- Tune kernel parameters and resource limits
root@kubeadm-master1:~# sysctl -p
net.ipv4.ip_forward = 1                 # enable IP forwarding
net.bridge.bridge-nf-call-iptables = 1  # bridged IPv4 traffic is matched by the host's iptables FORWARD rules
net.bridge.bridge-nf-call-ip6tables = 1 # the same for IPv6
root@kubeadm-master1:~# free -m
              total        used        free      shared  buff/cache   available
Mem:           3921         239        2678          10        1003        3444
Swap:             0           0           0

# Removing the swap file on Ubuntu 18.04
# First disable the swap space:
sudo swapoff -v /swapfile
# Remove the swap entry from /etc/fstab
# Finally delete the swapfile itself:
sudo rm /swapfile
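The manual fstab edit above can be scripted. A minimal sketch, assuming a standard fstab layout (the function name is illustrative; on a real host you would also run `swapoff -a` first):

```shell
#!/usr/bin/env bash
# disable_swap_in_fstab FILE — comment out any swap entries in an fstab-style
# file so swap stays disabled after a reboot. A .bak copy is kept.
disable_swap_in_fstab() {
  sed -i.bak -E 's@^([^#].*[[:space:]]swap[[:space:]].*)@#\1@' "$1"
}

# On a real host:
# swapoff -a
# disable_swap_in_fstab /etc/fstab
```

kubeadm's preflight checks fail when swap is enabled (unless `--ignore-preflight-errors=swap` is passed, as later in this article), so persisting the change matters.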
2. Install Docker on the masters and nodes
root@kubeadm-master1:~# cat install_ubuntu18.04_docker.sh
#!/bin/bash
sudo apt-get remove docker docker-engine docker.io
sudo apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce=5:19.03.15~3-0~ubuntu-bionic docker-ce-cli=5:19.03.15~3-0~ubuntu-bionic
3. Install the Harbor image registry
root@harbor1:/usr/local/src# chmod a+x docker-compose-Linux-x86_64
root@harbor1:/usr/local/src# cp docker-compose-Linux-x86_64 /usr/bin/docker-compose
root@harbor1:/usr/local/src# ls
docker-compose-Linux-x86_64 harbor-offline-installer-v2.2.3.tgz
root@harbor1:/usr/local/src# tar xvf harbor-offline-installer-v2.2.3.tgz
root@harbor1:/usr/local/src# ln -sv /usr/local/src/harbor /usr/local/
'/usr/local/harbor' -> '/usr/local/src/harbor'
root@harbor1:/usr/local/src# cd /usr/local/harbor
root@harbor1:/usr/local/harbor# ls
common.sh harbor.v2.2.3.tar.gz harbor.yml.tmpl install.sh LICENSE prepare
root@harbor1:/usr/local/harbor# cp harbor.yml.tmpl harbor.yml
# Add a new 200 GB disk and mount it as the data directory
root@harbor1:~# fdisk -l
Disk /dev/sda: 60 GiB, 64424509440 bytes, 125829120 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x43a5cfde

Device     Boot   Start       End   Sectors  Size Id Type
/dev/sda1 * 2048 1953791 1951744 953M 83 Linux
/dev/sda2        1953792 125827071 123873280 59.1G 83 Linux

Disk /dev/sdb: 200 GiB, 214748364800 bytes, 419430400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@harbor1:~# fdisk /dev/sdb

Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x8594bd70.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-419430399, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-419430399, default 419430399):

Created a new partition 1 of type 'Linux' and of size 200 GiB.

Command (m for help): p
Disk /dev/sdb: 200 GiB, 214748364800 bytes, 419430400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8594bd70

Device     Boot Start       End   Sectors  Size Id Type
/dev/sdb1        2048 419430399 419428352  200G 83 Linux

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): L
(... fdisk prints the full partition type table here; 8e is "Linux LVM" ...)
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'.

Command (m for help): p
Disk /dev/sdb: 200 GiB, 214748364800 bytes, 419430400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8594bd70

Device     Boot Start       End   Sectors  Size Id Type
/dev/sdb1        2048 419430399 419428352  200G 8e Linux LVM

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

root@harbor1:~# fdisk -l
Disk /dev/sda: 60 GiB, 64424509440 bytes, 125829120 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x43a5cfde

Device     Boot   Start       End   Sectors  Size Id Type
/dev/sda1 * 2048 1953791 1951744 953M 83 Linux
/dev/sda2        1953792 125827071 123873280 59.1G 83 Linux

Disk /dev/sdb: 200 GiB, 214748364800 bytes, 419430400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8594bd70

Device     Boot Start       End   Sectors  Size Id Type
/dev/sdb1        2048 419430399 419428352  200G 8e Linux LVM

root@harbor1:~# mkfs.xfs /dev/sdb1
root@harbor1:~# mkdir /data/harbor -p
root@harbor1:~# vim /etc/fstab
# Append the following line at the end:
/dev/sdb1 /data/harbor/ xfs defaults 0 0
root@harbor1:~# mount -a
root@harbor1:~# df -Th
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs tmpfs 393M 13M 380M 4% /run
/dev/sda2 ext4 58G 3.7G 52G 7% /
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/sda1 ext4 922M 149M 710M 18% /boot
tmpfs tmpfs 393M 0 393M 0% /run/user/0
/dev/sdb1 xfs 200G 237M 200G 1% /data/harbor
# Self-signed certificate; do not create it under the symlinked directory
root@harbor1:/usr/local# mkdir certs
root@harbor1:/usr/local# cd certs/
root@harbor1:/usr/local/certs# openssl genrsa -out harbor-ca.key
Generating RSA private key, 2048 bit long modulus (2 primes)
..........................................+++++
...................+++++
e is 65537 (0x010001)
root@harbor1:/usr/local/certs# ll
total 12
drwxr-xr-x 2 root root 4096 Jul 23 22:24 ./
drwxr-xr-x 3 root root 4096 Jul 23 22:23 ../
-rw------- 1 root root 1675 Jul 23 22:24 harbor-ca.key
root@harbor1:/usr/local/certs# touch /root/.rnd
root@harbor1:/usr/local/certs# openssl req -x509 -new -nodes -key harbor-ca.key -subj "/CN=harbor.yzil.cn" -days 7120 -out harbor-ca.crt   # CN= must match the Harbor hostname
root@harbor1:/usr/local/certs# ll
total 16
drwxr-xr-x 2 root root 4096 Jul 23 22:28 ./
drwxr-xr-x 3 root root 4096 Jul 23 22:23 ../
-rw-r--r-- 1 root root 1127 Jul 23 22:28 harbor-ca.crt
-rw------- 1 root root 1675 Jul 23 22:24 harbor-ca.key
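Since a CN mismatch only surfaces later as a TLS error on `docker login`, it is worth sanity-checking the certificate right away. The sketch below generates a throwaway key and certificate the same way as above (in a temp directory, so it is self-contained); on the real host you would point `openssl x509` at /usr/local/certs/harbor-ca.crt instead:

```shell
#!/usr/bin/env bash
set -e
# Create a throwaway self-signed cert with the same subject as the Harbor cert,
# then print its subject and expiry so they can be eyeballed.
d=$(mktemp -d)
openssl genrsa -out "$d/harbor-ca.key" 2048 2>/dev/null
openssl req -x509 -new -nodes -key "$d/harbor-ca.key" \
  -subj "/CN=harbor.yzil.cn" -days 7120 -out "$d/harbor-ca.crt"
openssl x509 -noout -subject -enddate -in "$d/harbor-ca.crt"
```

The subject line must show `CN = harbor.yzil.cn`, matching the hostname in harbor.yml.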
# Edit harbor.yml
root@harbor1:/usr/local/harbor# vim harbor.yml
hostname: harbor.yzil.cn
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80

# https related config
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /usr/local/certs/harbor-ca.crt
  private_key: /usr/local/certs/harbor-ca.key

harbor_admin_password: 123456

data_volume: /data/harbor

root@harbor1:/usr/local/harbor# ./install.sh --with-trivy
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating redis ... done
Creating harbor-db ... done
Creating registry ... done
Creating harbor-portal ... done
Creating registryctl ... done
Creating trivy-adapter ... done
Creating harbor-core ... done
Creating harbor-jobservice ... done
Creating nginx ... done
✔ ----Harbor has been installed and started successfully.----

root@harbor1:~# vim /etc/hosts
10.0.0.16 harbor.yzil.cn   # add this resolution in /etc/hosts on the client hosts as well
Open https://harbor.yzil.cn in a browser.
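The same /etc/hosts entry has to be repeated on every node later, so an idempotent helper avoids duplicate lines. A sketch (the function name is illustrative; the test file stands in for /etc/hosts):

```shell
#!/usr/bin/env bash
# add_hosts_entry FILE IP NAME — append "IP NAME" to FILE unless NAME is
# already present, so the helper can be re-run safely.
add_hosts_entry() {
  local file=$1 ip=$2 name=$3
  grep -qF "$name" "$file" || echo "$ip $name" >> "$file"
}

# On a cluster node this would be:
# add_hosts_entry /etc/hosts 10.0.0.16 harbor.yzil.cn
```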
Pushing and pulling images over Harbor HTTPS
# Do this on every node that will push or pull images; the directory name must match the Harbor hostname
root@kubeadm-master1:~# mkdir -p /etc/docker/certs.d/harbor.yzil.cn
# Copy the public certificate over from the Harbor host
root@harbor1:~# scp /usr/local/certs/harbor-ca.crt 10.0.0.10:/etc/docker/certs.d/harbor.yzil.cn
root@kubeadm-master1:~# ll /etc/docker/certs.d/harbor.yzil.cn/harbor-ca.crt
-rw-r--r-- 1 root root 1127 Jul 24 09:43 /etc/docker/certs.d/harbor.yzil.cn/harbor-ca.crt
root@kubeadm-master1:~# vim /etc/hosts
10.0.0.16 harbor.yzil.cn
root@kubeadm-master1:~# docker login harbor.yzil.cn
Username: admin
Password:    # the password is 123456
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
root@kubeadm-master1:~# docker pull alpine
Using default tag: latest
latest: Pulling from library/alpine
5843afab3874: Pull complete
Digest: sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0
Status: Downloaded newer image for alpine:latest
docker.io/library/alpine:latest
root@kubeadm-master1:~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine latest d4ff818577bc 5 weeks ago 5.6MB
root@kubeadm-master1:~# docker tag alpine harbor.yzil.cn/yzil/alpine
root@kubeadm-master1:~# docker push harbor.yzil.cn/yzil/alpine
The push refers to repository [harbor.yzil.cn/yzil/alpine]
72e830a4dff5: Pushed
latest: digest: sha256:1775bebec23e1f3ce486989bfc9ff3c4e951690df84aa9f926497d82f2ffca9d size: 528

# Once every node is set up, verify on node3 that the previously pushed image can be pulled
root@kubeadm-node3:~# docker pull harbor.yzil.cn/yzil/alpine
Using default tag: latest
latest: Pulling from yzil/alpine
5843afab3874: Pull complete
Digest: sha256:1775bebec23e1f3ce486989bfc9ff3c4e951690df84aa9f926497d82f2ffca9d
Status: Downloaded newer image for harbor.yzil.cn/yzil/alpine:latest
harbor.yzil.cn/yzil/alpine:latest
root@kubeadm-node3:~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
harbor.yzil.cn/yzil/alpine latest d4ff818577bc 5 weeks ago 5.6MB
Key-based SSH login between the hosts
root@harbor1:~# ssh-keygen
root@harbor1:~# ssh-copy-id 127.0.0.1
root@harbor1:~# rsync -av .ssh 10.0.0.11:/root/
4. Install haproxy + keepalived for high availability
# Adjust kernel parameters
root@hake1:~# vim /etc/sysctl.conf
root@hake1:~# sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
root@hake1:~# apt install -y haproxy keepalived

# keepalived
root@hake1:~# find / -name 'keepalived.conf*'
/usr/share/doc/keepalived/samples/keepalived.conf.vrrp
root@hake1:~# cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf
root@hake1:~# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.100 dev eth0 label eth0:0
    }
}
root@hake1:~# systemctl restart keepalived
root@hake1:~# systemctl enable keepalived
root@hake1:~# hostname -I
10.0.0.20 10.0.0.100
root@hake1:~# ping 10.0.0.100
PING 10.0.0.100 (10.0.0.100) 56(84) bytes of data.
64 bytes from 10.0.0.100: icmp_seq=1 ttl=64 time=0.014 ms
64 bytes from 10.0.0.100: icmp_seq=2 ttl=64 time=0.025 ms
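The sample config above only tracks interface state; if haproxy itself dies while the interface stays up, the VIP does not move. A common hardening step (not part of the transcript above; the script command and weight are assumptions to adapt) is a `vrrp_script` that health-checks haproxy:

```
vrrp_script chk_haproxy {
    script "/usr/bin/killall -0 haproxy"   # exit 0 while an haproxy process exists
    interval 2
    weight -30                             # drop priority when the check fails
}

vrrp_instance VI_1 {
    ...
    track_script {
        chk_haproxy
    }
}
```

With the priority reduced below the backup's, keepalived fails the VIP over to the peer.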
#haproxy
root@hake1:~# vim /etc/haproxy/haproxy.cfg
# Append at the end:
listen k8s-6443
  bind 10.0.0.100:6443
  mode tcp
  server 10.0.0.10 10.0.0.10:30002 check inter 3s fall 3 rise 5
  server 10.0.0.11 10.0.0.11:30002 check inter 3s fall 3 rise 5
  server 10.0.0.12 10.0.0.12:30002 check inter 3s fall 3 rise 5
root@hake1:~# systemctl restart haproxy
root@hake1:~# systemctl enable haproxy
5. Install kubeadm and related components
Install kubeadm, kubelet and kubectl on the master and node hosts; the Harbor and load-balancer hosts do not need them.
# Aliyun mirror list: https://developer.aliyun.com/mirror/
Debian / Ubuntu
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl

# Tsinghua mirror: https://mirror.tuna.tsinghua.edu.cn/help/kubernetes/
Create /etc/apt/sources.list.d/kubernetes.list with the content:
deb https://mirrors.tuna.tsinghua.edu.cn/kubernetes/apt kubernetes-xenial main

# Run on every host
root@kubeadm-master1:~# apt-get update && apt-get install -y apt-transport-https
root@kubeadm-master1:~# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2537  100  2537    0     0  10107      0 --:--:-- --:--:-- --:--:-- 10107
OK
root@kubeadm-master1:~# vim /etc/apt/sources.list
root@kubeadm-master1:~# cat /etc/apt/sources.list
# Source-package mirrors are commented out by default to speed up apt update; uncomment them if needed
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse

# Pre-release sources; enabling them is not recommended
# deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-proposed main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-proposed main restricted universe multiverse
deb [arch=amd64] https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/ubuntu bionic stable
# deb-src [arch=amd64] https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/ubuntu bionic stable

# Add the following line:
deb https://mirrors.tuna.tsinghua.edu.cn/kubernetes/apt kubernetes-xenial main

# Or:
root@kubeadm-master1:~# vim /etc/apt/sources.list.d/kubernetes.list
root@kubeadm-master1:~# cat /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
root@kubeadm-master1:~# scp /etc/apt/sources.list.d/kubernetes.list 10.0.0.11:/etc/apt/sources.list.d
root@kubeadm-master1:~# apt update
root@kubeadm-master1:~# apt-cache madison kubeadm
# Every master needs kubeadm, kubelet and kubectl
# Nodes can skip kubectl
root@kubeadm-master1:~# apt install kubeadm=1.20.5-00 kubelet=1.20.5-00 kubectl=1.20.5-00
root@kubeadm-node1:~# apt install kubeadm=1.20.5-00 kubelet=1.20.5-00
root@kubeadm-master1:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Sat 2021-07-24 11:48:56 CST
     Docs: https://kubernetes.io/docs/home/
  Process: 10435 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS ...
 Main PID: 10435 (code=exited, status=255)
Jul 24 11:48:56 kubeadm-master1.yzl.cn systemd[1]: kubelet.service: Failed with result 'exit-c
# kubelet restarting in a loop like this is expected until kubeadm init runs
root@kubeadm-master1:~#
# kubeadm command completion
root@kubeadm-master1:~# mkdir /data/scripts -p
root@kubeadm-master1:~# kubeadm completion bash > /data/scripts/kubeadm_completion.sh
root@kubeadm-master1:~# source /data/scripts/kubeadm_completion.sh
root@kubeadm-master1:~# vim /etc/profile
# Append as the last line:
source /data/scripts/kubeadm_completion.sh
6. Initialize the highly available masters
6.1 Download the images on the master nodes
# Image preparation
# By default kubeadm pulls images from Google (k8s.gcr.io):
root@kubeadm-master1:~# kubeadm config images list
I0724 13:04:41.046144 23100 version.go:254] remote version is much newer: v1.21.3; falling back to: stable-1.20
k8s.gcr.io/kube-apiserver:v1.20.9
k8s.gcr.io/kube-controller-manager:v1.20.9
k8s.gcr.io/kube-scheduler:v1.20.9
k8s.gcr.io/kube-proxy:v1.20.9
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

# Pull from the Aliyun mirror instead:
root@kubeadm-master1:~# cat images-down.sh
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
root@kubeadm-master1:~# bash images-down.sh
root@kubeadm-master3:~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.20.5 5384b1650507 4 months ago 118MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver v1.20.5 d7e24aeb3b10 4 months ago 122MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager v1.20.5 6f0c3da8c99e 4 months ago 116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler v1.20.5 8d13f1db8bfb 4 months ago 47.3MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd 3.4.13-0 0369cf4303ff 11 months ago 253MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns 1.7.0 bfe3a36ebd25 13 months ago 45.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 17 months ago 683kB
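The one-line-per-image script above can also be generated from a list, which makes version bumps a single edit. A sketch that prints the commands instead of running them (pipe the output to `sh` to actually pull):

```shell
#!/usr/bin/env bash
# Same images and versions as images-down.sh, generated from a list.
REG=registry.cn-hangzhou.aliyuncs.com/google_containers
IMAGES=(
  kube-apiserver:v1.20.5
  kube-controller-manager:v1.20.5
  kube-scheduler:v1.20.5
  kube-proxy:v1.20.5
  pause:3.2
  etcd:3.4.13-0
  coredns:1.7.0
)
for img in "${IMAGES[@]}"; do
  echo "docker pull $REG/$img"   # review the commands, then: ... | sh
done
```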
6.2 Initialize the HA masters from the command line
# The command
# When planning --pod-network-cidr and --service-cidr, make sure the ranges do not conflict with any network already used by the hosts
kubeadm init --apiserver-advertise-address=10.0.0.10 --control-plane-endpoint=10.0.0.100 --apiserver-bind-port=6443 --kubernetes-version=v1.20.5 --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --service-dns-domain=zilong.local --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap
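The "no conflicting ranges" rule can be checked mechanically before running init. A minimal bash sketch (the function names are illustrative, not part of kubeadm; IPv4 only, no input validation):

```shell
#!/usr/bin/env bash
# ip_to_int A.B.C.D — dotted quad to a 32-bit integer.
ip_to_int() {
  local a b c d
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# cidrs_overlap X.X.X.X/P Y.Y.Y.Y/Q — exit 0 if the two ranges overlap.
# Two CIDRs overlap iff they agree on the shorter (smaller) prefix.
cidrs_overlap() {
  local ip1=${1%/*} p1=${1#*/} ip2=${2%/*} p2=${2#*/}
  local p=$(( p1 < p2 ? p1 : p2 ))
  local mask=$(( (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
  local n1=$(( $(ip_to_int "$ip1") & mask ))
  local n2=$(( $(ip_to_int "$ip2") & mask ))
  [ "$n1" -eq "$n2" ]
}

# The pod and service CIDRs chosen above:
cidrs_overlap 10.100.0.0/16 10.200.0.0/16 && echo "overlap" || echo "no overlap"
```

Running the same check against the host network (here 10.0.0.0/24) before init avoids a painful re-initialization later.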
6.3 The initialization run
root@kubeadm-master1:~# kubeadm init --apiserver-advertise-address=10.0.0.10 --control-plane-endpoint=10.0.0.100 --apiserver-bind-port=6443 --kubernetes-version=v1.20.5 --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --service-dns-domain=zilong.local --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap
# Initialization succeeded; save the output below, it is needed when adding nodes later
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

# These lines are used to add master nodes:
  kubeadm join 10.0.0.100:6443 --token uj9thr.h0y9jnv9ivkdw3k6 \
    --discovery-token-ca-cert-hash sha256:4141c7f3fb14a2cac70625db98fabe0b6236ca242810e3f86c47dd8dc142db15 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

# These lines are used to add worker nodes:
kubeadm join 10.0.0.100:6443 --token uj9thr.h0y9jnv9ivkdw3k6 \
    --discovery-token-ca-cert-hash sha256:4141c7f3fb14a2cac70625db98fabe0b6236ca242810e3f86c47dd8dc142db15
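If the join command is lost, the `--discovery-token-ca-cert-hash` value can be recomputed from the cluster CA. This is the standard openssl pipeline from the kubeadm documentation (a fresh token, if needed, comes from `kubeadm token create --print-join-command`):

```shell
#!/usr/bin/env bash
# ca_cert_hash CERT — print the sha256 discovery hash of a CA certificate's
# public key, in the format kubeadm join expects.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | awk '{print "sha256:" $NF}'
}

# On a control-plane node:
# ca_cert_hash /etc/kubernetes/pki/ca.crt
```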
6.4 Configure the kubeconfig file and the network plugin
6.4.1 The kubeconfig file
The kubeconfig file contains the kube-apiserver address and the credentials needed to authenticate to it.
root@kubeadm-master1:~# kubectl get node
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@kubeadm-master1:~# mkdir -p $HOME/.kube
root@kubeadm-master1:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@kubeadm-master1:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
root@kubeadm-master1:~# kubectl get node
NAME STATUS ROLES AGE VERSION
kubeadm-master1.yzl.cn NotReady control-plane,master 19m v1.20.5
Open https://kubernetes.io/docs/concepts/cluster-administration/addons/ in a browser
and pick Flannel:
For Kubernetes v1.17+ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
root@kubeadm-master1:~# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Pull the image from this source, then push it to Harbor
root@kubeadm-master1:~# docker pull quay.io/coreos/flannel:v0.14.0
root@kubeadm-master1:~# docker tag quay.io/coreos/flannel:v0.14.0 harbor.yzil.cn/yzil/flannel:v0.14.0
root@kubeadm-master1:~# docker push harbor.yzil.cn/yzil/flannel:v0.14.0
root@kubeadm-master1:~# mkdir yyy
root@kubeadm-master1:~# mv kube-flannel.yml yyy/
root@kubeadm-master1:~# cd yyy/
root@kubeadm-master1:~/yyy# vim kube-flannel.yml
# Change the network below (10.244.0.0/16 in the original file) to the
# --pod-network-cidr used at init time:
  net-conf.json: |
    {
      "Network": "10.100.0.0/16",
...
# Change both flannel images from quay.io/coreos/flannel:v0.14.0 to the local
# Harbor copy (harbor.yzil.cn/yzil/flannel:v0.14.0) so nodes pull faster:
      - name: install-cni
        image: harbor.yzil.cn/yzil/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: harbor.yzil.cn/yzil/flannel:v0.14.0
root@kubeadm-master1:~/yyy# kubectl apply -f kube-flannel.yml
root@kubeadm-master1:~/yyy# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-54d67798b7-f64mq 1/1 Running 0 6m46s
kube-system coredns-54d67798b7-s4m85 1/1 Running 0 6m46s
kube-system etcd-kubeadm-master1.yzl.cn 1/1 Running 0 6m53s
kube-system kube-apiserver-kubeadm-master1.yzl.cn 1/1 Running 0 6m53s
kube-system kube-controller-manager-kubeadm-master1.yzl.cn 1/1 Running 0 6m52s
kube-system kube-flannel-ds-4tn2p 1/1 Running 0 4m29s
kube-system kube-proxy-9s44x 1/1 Running 0 6m46s
kube-system kube-scheduler-kubeadm-master1.yzl.cn 1/1 Running 0 6m53s
6.4.2 Generate certificates on the current master for adding new control-plane nodes
# Save this; the certificate key below is used when joining additional masters
root@kubeadm-master1:~# kubeadm init phase upload-certs --upload-certs
I0724 19:28:03.346470 54829 version.go:254] remote version is much newer: v1.21.3; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
40469bbc0b2a852cf257cef9538ddd3fabc0c4177a75db29330c49e35ad0728b
6.5 Add nodes to the k8s cluster
# Run on node1
# (this is the worker join command from the end of master1's init output)
root@kubeadm-node1:~# kubeadm join 10.0.0.100:6443 --token uj9thr.h0y9jnv9ivkdw3k6 \
>    --discovery-token-ca-cert-hash sha256:4141c7f3fb14a2cac70625db98fabe0b6236ca242810e3f86c47dd8dc142db15
root@kubeadm-master1:~# kubectl get node
NAME STATUS ROLES AGE VERSION
kubeadm-master1.yzl.cn Ready control-plane,master 13m v1.20.5
kubeadm-node1.yzl.cn Ready <none> 59s v1.20.5
root@kubeadm-node2:~# kubeadm join 10.0.0.100:6443 --token uj9thr.h0y9jnv9ivkdw3k6 --discovery-token-ca-cert-hash sha256:4141c7f3fb14a2cac70625db98fabe0b6236ca242810e3f86c47dd8dc142db15
root@kubeadm-node3:~# kubeadm join 10.0.0.100:6443 --token uj9thr.h0y9jnv9ivkdw3k6 --discovery-token-ca-cert-hash sha256:4141c7f3fb14a2cac70625db98fabe0b6236ca242810e3f86c47dd8dc142db15
root@kubeadm-master1:~# kubectl get node
NAME STATUS ROLES AGE VERSION
kubeadm-master1.yzl.cn Ready control-plane,master 21m v1.20.5
kubeadm-node1.yzl.cn Ready <none> 9m3s v1.20.5
kubeadm-node2.yzl.cn Ready <none> 6m26s v1.20.5
kubeadm-node3.yzl.cn Ready <none> 6m24s v1.20.5
# Run on the new masters
# (the control-plane join command from the init output, plus --certificate-key with the key generated in 6.4.2)
root@kubeadm-master2:~# kubeadm join 10.0.0.100:6443 --token uj9thr.h0y9jnv9ivkdw3k6 --discovery-token-ca-cert-hash sha256:4141c7f3fb14a2cac70625db98fabe0b6236ca242810e3f86c47dd8dc142db15 --control-plane --certificate-key 40469bbc0b2a852cf257cef9538ddd3fabc0c4177a75db29330c49e35ad0728b
root@kubeadm-master3:~# kubeadm join 10.0.0.100:6443 --token uj9thr.h0y9jnv9ivkdw3k6 --discovery-token-ca-cert-hash sha256:4141c7f3fb14a2cac70625db98fabe0b6236ca242810e3f86c47dd8dc142db15 --control-plane --certificate-key 40469bbc0b2a852cf257cef9538ddd3fabc0c4177a75db29330c49e35ad0728b
6.6 Create containers in k8s and test the pod network
root@kubeadm-master1:~# kubectl get node
NAME STATUS ROLES AGE VERSION
kubeadm-master1.yzl.cn Ready control-plane,master 42m v1.20.5
kubeadm-master2.yzl.cn Ready control-plane,master 15m v1.20.5
kubeadm-master3.yzl.cn Ready control-plane,master 7m53s v1.20.5
kubeadm-node1.yzl.cn Ready <none> 29m v1.20.5
kubeadm-node2.yzl.cn Ready <none> 27m v1.20.5
kubeadm-node3.yzl.cn Ready <none> 27m v1.20.5
root@kubeadm-master1:~# kubectl run net-test1 --image=alpine sleep 500000
pod/net-test1 created
root@kubeadm-master1:~# kubectl run net-test2 --image=alpine sleep 500000
pod/net-test2 created
root@kubeadm-master1:~# kubectl run net-test3 --image=alpine sleep 500000
pod/net-test3 created
root@kubeadm-master1:~# kubectl get pod
NAME READY STATUS RESTARTS AGE
net-test1 1/1 Running 0 22s
net-test2 1/1 Running 0 14s
net-test3 0/1 ContainerCreating 0 10s
root@kubeadm-master1:~# kubectl get pod
NAME READY STATUS RESTARTS AGE
net-test1 1/1 Running 0 68s
net-test2 1/1 Running 0 60s
net-test3 1/1 Running 0 56s
root@kubeadm-master1:~# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
net-test1 1/1 Running 0 88s 10.100.3.2 kubeadm-node3.yzl.cn <none> <none>
net-test2 1/1 Running 0 80s 10.100.2.2 kubeadm-node2.yzl.cn <none> <none>
net-test3 1/1 Running 0 76s 10.100.1.2 kubeadm-node1.yzl.cn <none> <none>
root@kubeadm-master1:~# kubectl exec -it net-test1 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 16:5F:D9:18:2F:39
          inet addr:10.100.3.2  Bcast:10.100.3.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:11 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:938 (938.0 B)  TX bytes:42 (42.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # ping 10.100.2.2
PING 10.100.2.2 (10.100.2.2): 56 data bytes
64 bytes from 10.100.2.2: seq=0 ttl=62 time=2.089 ms
64 bytes from 10.100.2.2: seq=1 ttl=62 time=0.575 ms
/ # ping www.baidu.com
PING www.baidu.com (39.156.66.14): 56 data bytes
64 bytes from 39.156.66.14: seq=0 ttl=127 time=26.530 ms
64 bytes from 39.156.66.14: seq=1 ttl=127 time=53.099 ms
7. Deploy the dashboard
GitHub - kubernetes/dashboard: General-purpose web UI for Kubernetes clusters
7.1 Deploy dashboard v2.3.1
root@kubeadm-master1:~# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
root@kubeadm-master1:~# mv recommended.yaml dashboard-v2.3.1.yaml
# Expose the dashboard Service as a NodePort:
root@kubeadm-master1:~# vim dashboard-v2.3.1.yaml
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30002
  selector:
    k8s-app: kubernetes-dashboard
root@kubeadm-master1:~# kubectl apply -f dashboard-v2.3.1.yaml
root@kubeadm-master1:~# kubectl get service -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.200.0.1 <none> 443/TCP 93m
kube-system kube-dns ClusterIP 10.200.0.10 <none> 53/UDP,53/TCP,9153/TCP 93m
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.200.230.131 <none> 8000/TCP 65s
kubernetes-dashboard kubernetes-dashboard NodePort 10.200.171.71 <none> 443:30002/TCP 66s

# Every node listens on 30002
root@kubeadm-master2:~# ss -ntl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 64 0.0.0.0:2049 0.0.0.0:*
LISTEN 0 128 0.0.0.0:30002 0.0.0.0:*
root@kubeadm-master1:~# cat admin-user.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

root@kubeadm-master1:~# kubectl apply -f admin-user.yml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
root@kubeadm-master1:~# kubectl get secret -A | grep admin
kubernetes-dashboard admin-user-token-r6ghv kubernetes.io/service-account-token 3 3m2s
root@kubeadm-master1:~# kubectl describe secret admin-user-token-r6ghv -n kubernetes-dashboard
Name: admin-user-token-r6ghv
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
             kubernetes.io/service-account.uid: 4a52aad7-f9e0-4675-b647-da22b017646f

Type: kubernetes.io/service-account-token

Data
====
ca.crt: 1066 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6InlmbEZuYXp5M3d2WlE5NWVmdXdqNEk2V1lFX3FqS0VhWGhJUGc4ZWJmTlkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXI2Z2h2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI0YTUyYWFkNy1mOWUwLTQ2NzUtYjY0Ny1kYTIyYjAxNzY0NmYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.JnGk0ryoT5CDtbRDKUMBlcHYdGkmeaLJqyq8r2WGRXaXfu34DhzAS3VyDeQ4xLQi60Mn-DUYWiW5wW6j2wvy20q9_z7w0g7nEkb1cZ03E8g5ABHOQwCGDjlHicRodOankcZgsKwhq_tHLRRAH87c67osE_xS0Rx33vV_31M0kNuCq_LOoqxSEKQ29c2V2fHvbGdJjEcc-V-v5MzDEf3_9bYGSntUfsqJ-8Mrb5QwEL58qjBRyrfV0dP7c3rQYo8USW4TOMxJRb3c_z7MxJ4tuRkuUDEFZ6d24UzYKpwQdsX1PXxzeafNYauTCxwDENK9LhJEBa4UeJpJiR0q_ZnlxQ
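The token above is a JWT, so its claims (service account name, namespace, uid) can be inspected locally before pasting it into the login page. A small sketch that base64url-decodes the payload segment (the helper name is illustrative; piping the output through `jq .` would pretty-print it):

```shell
#!/usr/bin/env bash
# jwt_payload TOKEN — decode the middle (payload) segment of a JWT.
jwt_payload() {
  local seg pad
  seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')  # base64url -> base64
  pad=$(( (4 - ${#seg} % 4) % 4 ))                      # restore '=' padding
  while [ "$pad" -gt 0 ]; do seg="${seg}="; pad=$((pad - 1)); done
  printf '%s' "$seg" | base64 -d
}

# With the dashboard token from the describe output:
# jwt_payload "eyJhbGciOiJSUzI1NiIs..."
```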
7.3 Access from a browser
# The dashboard is reachable on any node:
https://10.0.0.10:30002
# or through the VIP:
https://10.0.0.100:6443