目录

第1章 安装前准备

1.1 节点规划

1.2 配置NTP

1.3 bind安装DNS服务

1.4 修改主机DNS

1.5 安装runtime环境及依赖

1.5.1 安装docker运行时

1.5.2 安装containerd运行时

1.6 安装harbor仓库

1.7 配置高可用

第2章 k8s安装集群master

2.1 下载二进制安装文件

2.2 生成证书

2.2.1 生成etcd证书

2.2.2 生成k8s证书

2.3 安装etcd

2.4 安装apiserver

2.5 安装controller

2.6 安装scheduler

第3章 k8s安装集群node

3.1 TLS Bootstrapping配置

3.2 node证书配置

3.3 安装kubelet

3.4 安装kube-proxy

第4章 k8s安装集群插件

4.1 安装calico

4.2 安装coredns

4.3 安装metrics-server

4.4 安装ingress-nginx

4.5 配置storageClass动态存储

4.6 安装kubesphere集群管理


第1章 安装前准备

1.1 节点规划

操作系统:AlmaLinux 8(RHEL 8 系列)

m1,node1 : 192.168.44.201 主机名称:k8s-<ip>.host.com 内存4g CPU2核心

m2,node2 : 192.168.44.202 主机名称:k8s-<ip>.host.com 内存4g CPU2核心

m3,node3 : 192.168.44.203 主机名称:k8s-<ip>.host.com 内存4g CPU2核心

192.168.44.201

apiserver,controller,scheduler,kubelet,kube-proxy,flannel

192.168.44.202

apiserver,controller,scheduler,kubelet,kube-proxy,flannel

192.168.44.203

apiserver,controller,scheduler,kubelet,kube-proxy,flannel

192.168.44.200

vip

10.0.0.0/16

service网络

10.244.0.0/16

pod网络

#配置免密登录201
cd ~
cat >host_list.txt <<EOF
192.168.44.201 1
192.168.44.202 1
192.168.44.203 1
EOF
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
yum install -y sshpass
cd ~
for i in $(seq $(cat host_list.txt |wc -l));\
do \
echo "start config host:$(cat host_list.txt |awk "NR==${i}"'{print $1}')";\
sshpass -p$(cat host_list.txt |awk "NR==${i}"'{print $2}') ssh-copy-id -i /root/.ssh/id_rsa.pub -o StrictHostKeyChecking=no root@$(cat host_list.txt |awk "NR==${i}"'{print $1}');\
done
#验证201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} ip a;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#修改主机名称所有master和node节点201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} hostnamectl set-hostname k8s-${i//./-}.host.com;\
ssh root@${i} hostname;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#准备epel源所有master和node节点201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} yum install -y https://mirrors.aliyun.com/epel/epel-release-latest-8.noarch.rpm;\
ssh root@${i} sed -i 's|^#baseurl=https://download.example/pub|baseurl=https://mirrors.aliyun.com|' /etc/yum.repos.d/epel*;\
ssh root@${i} sed -i 's|^metalink|#metalink|' /etc/yum.repos.d/epel*;\
done
#安装相关小软件所有master和node节点201,202,203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} yum install -y wget tar net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils vim less;\
ssh root@${i} yum install rsync -y;\
ssh root@${i} yum install lrzsz lvm2 jq psmisc yum-utils git net-tools nmap unzip tree vim dos2unix nc telnet wget lsof bash-completion -y;\
done
#修改host文件所有master和node节点201.202.203
#单机编辑201
vim /etc/hosts
192.168.44.201   k8s-192-168-44-201.host.com
192.168.44.202   k8s-192-168-44-202.host.com
192.168.44.203   k8s-192-168-44-203.host.com
192.168.44.200   vip
#分发201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP /etc/hosts root@${i}:/etc/;\
ssh root@${i} cat /etc/hosts;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#关闭防火墙和selinux 所有master和node节点201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} systemctl stop firewalld;\
ssh root@${i} systemctl disable firewalld;\
ssh root@${i} sed -ri 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux;\
ssh root@${i} setenforce 0;\
ssh root@${i} getenforce;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#关闭swap 所有master和node节点201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} swapoff -a;\
ssh root@${i} sed -ri "'s@(.*)(swap)(.*)@#\1\2\3@g'" /etc/fstab;\
ssh root@${i} grep "swap" /etc/fstab;\
ssh root@${i} free -h;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#配置limit 所有master和node节点201.202.203
#单机编辑201
vim /etc/security/limits.conf
#末尾添加如下内容
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
#分发201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP /etc/security/limits.conf root@${i}:/etc/security/;\
ssh root@${i} cat /etc/security/limits.conf;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#下载安装文档201
cd ~
git clone https://github.com/dotbalo/k8s-ha-install.git

#升级内核(7的系统升级,8的不用)推荐4.18及以上
#所有master和node节点201.202.203
uname -a
Linux Alma 4.18.0-305.el8.x86_64 #1 SMP Wed May 19 18:55:28 EDT 2021 x86_64 x86_64 x86_64 GNU/Linux
yum update -y --exclude=kernel* && reboot
#安装内核
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
yum localinstall kernel-ml* -y
#查看默认内核
grubby --default-kernel
#查看所有内核
grubby --info=ALL
#修改默认内核
grubby --set-default /boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
#重启检查内核
reboot
grubby --default-kernel
#安装ipvsadm工具所有master和node节点201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} yum install ipvsadm ipset sysstat conntrack libseccomp -y;\
done
#加载ipvs模块所有master和node节点201.202.203
#4.18以上内核将nf_conntrack_ipv4改为nf_conntrack
#单机编辑201
vim /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
ip_vs_ovf
ip_vs_pe_sip
#分发201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP /etc/modules-load.d/ipvs.conf root@${i}:/etc/modules-load.d/;\
ssh root@${i} cat /etc/modules-load.d/ipvs.conf;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#编辑服务所有master和node节点201.202.203
#单机编辑201
vim /usr/lib/systemd/system/systemd-modules-load.service
[Install]
WantedBy=multi-user.target
#分发201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP /usr/lib/systemd/system/systemd-modules-load.service root@${i}:/usr/lib/systemd/system/;\
ssh root@${i} cat /usr/lib/systemd/system/systemd-modules-load.service;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#启动服务所有master和node节点201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} systemctl daemon-reload;\
ssh root@${i} systemctl enable --now systemd-modules-load.service;\
ssh root@${i} systemctl status systemd-modules-load.service;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#检查模块加载情况所有master和node节点201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} lsmod | grep ip_vs;\
ssh root@${i} ipvsadm -Ln;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#加载br_netfilter模块所有master和node节点201.202.203
#单机编辑201
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
#分发201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP /etc/modules-load.d/k8s.conf root@${i}:/etc/modules-load.d/;\
ssh root@${i} cat /etc/modules-load.d/k8s.conf;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#重新启动服务所有master和node节点201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} systemctl restart systemd-modules-load.service;\
ssh root@${i} systemctl status systemd-modules-load.service;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#检查模块加载情况所有master和node节点201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} lsmod | grep br_netfilter;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#配置内核参数所有master和node节点201.202.203
#单机编辑201
cat >  /etc/sysctl.d/k8s.conf <<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
#分发201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP /etc/sysctl.d/k8s.conf root@${i}:/etc/sysctl.d/;\
ssh root@${i} cat /etc/sysctl.d/k8s.conf;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#生效内核参数所有master和node节点201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} sysctl --system;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#重启验证内核所有master和node节点201.202.203
reboot
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} lsmod | grep ip_vs;\
ssh root@${i} ipvsadm -Ln;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done

1.2 配置NTP

#服务端201
yum install chrony  -y
vim /etc/chrony.conf
allow 192.168.44.0/24
systemctl enable chronyd.service
systemctl restart chronyd.service
chronyc sources

#客户端所有master和node节点202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} yum install chrony -y;\
done
#单机编辑201
cp /etc/chrony.conf /tmp
vim /tmp/chrony.conf
server 192.168.44.201 iburst
#分发201
cd ~
for i in $(cat host_list.txt |awk '{print $1}'|grep -v "192.168.44.201");\
do \
rsync -avzP /tmp/chrony.conf root@${i}:/etc/;\
ssh root@${i} cat /etc/chrony.conf;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#批量重启服务201
cd ~
for i in $(cat host_list.txt |awk '{print $1}'|grep -v "192.168.44.201");\
do \
ssh root@${i} systemctl enable chronyd.service;\
ssh root@${i} systemctl restart chronyd.service;\
ssh root@${i} chronyc sources;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
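#可选的验证写法(仅为示例,沿用上面的host_list.txt):用chronyc tracking确认202/203的参考时间源是否为192.168.44.201,并查看同步状态
cd ~
for i in $(cat host_list.txt |awk '{print $1}'|grep -v "192.168.44.201");\
do \
ssh root@${i} chronyc tracking;\
ssh root@${i} chronyc sources -v;\
done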

1.3 bind安装DNS服务

#201
~]# yum install -y bind
主配置文件
注意语法(分号空格)这里的IP是内网IP
~]# vim /etc/named.conf    //确保以下配置正确
listen-on port 53 { 192.168.44.201; }; # 监听端口53 下面一行本来有ipv6地址 需要删除
directory   "/var/named";
allow-query     { any; }; #允许内机器都可以查
forwarders      { 192.168.44.2; }; # 虚拟机这里填网关地址,阿里云机器可以填223.5.5.5
recursion yes; # 采用递归方法查询IP
dnssec-enable no;
dnssec-validation no;
~]# named-checkconf # 检查配置  没有信息即为正确  在 201 配置域名文件
增加两个zone配置,od.com为业务域,host.com为主机域
~]# vim /etc/named.rfc1912.zones
zone "host.com" IN { type  master;        file  "host.com.zone";        allow-update { 192.168.44.201; };
};  zone "od.com" IN {type  master;         file  "od.com.zone";         allow-update { 192.168.44.201; };
};在 201 配置主机域文件
第四行时间需要修改 (年月日01)每次修改配置文件都需要前滚一个序列号
~]# vim /var/named/host.com.zone
$ORIGIN host.com.
$TTL 600        ; 10 minutes
@       IN SOA  dns.host.com. dnsadmin.host.com. (
                2021071301   ; serial
                10800        ; refresh (3 hours)
                900          ; retry (15 minutes)
                604800       ; expire (1 week)
                86400        ; minimum (1 day)
                )
        NS      dns.host.com.
$TTL 60 ; 1 minute
dns                     A    192.168.44.201
k8s-192-168-44-201           A    192.168.44.201
k8s-192-168-44-202           A    192.168.44.202
k8s-192-168-44-203           A    192.168.44.203

在 201 配置业务域文件
~]# vim /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600        ; 10 minutes
@       IN SOA  dns.od.com. dnsadmin.od.com. (
                2021071301   ; serial
                10800        ; refresh (3 hours)
                900          ; retry (15 minutes)
                604800       ; expire (1 week)
                86400        ; minimum (1 day)
                )
        NS      dns.od.com.
$TTL 60 ; 1 minute
dns        A    192.168.44.201

在 201 启动bind服务,并测试
~]# named-checkconf  # 检查配置文件
~]# systemctl start named ; systemctl enable named
~]# dig -t A k8s-192-168-44-203.host.com  @192.168.44.201 +short #检查是否可以解析
192.168.44.203
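#也可以对两个zone文件做语法检查(示例);以后每次前滚序列号后,可用rndc reload重载配置
~]# named-checkzone host.com /var/named/host.com.zone
~]# named-checkzone od.com /var/named/od.com.zone
~]# rndc reload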

1.4 修改主机DNS

#所有master和node节点201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} sed -ri '"s#(DNS1=)(.*)#\1192.168.44.201#g"' /etc/sysconfig/network-scripts/ifcfg-eth0;\
ssh root@${i} systemctl restart NetworkManager.service;\
ssh root@${i} ping www.baidu.com -c 1;\
ssh root@${i} ping k8s-192-168-44-203.host.com -c 1;\
ssh root@${i} cat /etc/resolv.conf;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
==============================================================
# Generated by NetworkManager
search host.com #添加后解析主机A记录 可以不加域名 例如
dig -t A k8s-192-168-44-201  @192.168.44.201 +short
nameserver 192.168.44.201

1.5 安装runtime环境及依赖

1.5.1 安装docker运行时

虽然k8s已经不再把docker作为默认的容器运行时,但这里仍然需要安装:后期的CI/CD功能要使用宿主机的docker引擎来构建镜像,因此所有master和node节点201.202.203都需要安装。

#所有master和node节点201.202.203
#依赖关系:https://git.k8s.io/kubernetes/build/dependencies.yaml
#官方网站:https://docs.docker.com/engine/install/centos/
#清理环境
yum remove docker \
  docker-client \
  docker-client-latest \
  docker-common \
  docker-latest \
  docker-latest-logrotate \
  docker-logrotate \
  docker-engine
#官方源(推荐)
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} curl  https://download.docker.com/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo;\
ssh root@${i} ls -l /etc/yum.repos.d;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
=====================================================================
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
#安装指定版本docker这里是20.10.14版本
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} yum install -y yum-utils;\
ssh root@${i} yum list docker-ce --showduplicates | sort -r;\
ssh root@${i} yum list docker-ce-cli --showduplicates | sort -r;\
ssh root@${i} yum list containerd.io --showduplicates | sort -r;\
ssh root@${i} yum install docker-ce-20.10.14 docker-ce-cli-20.10.14 containerd.io -y;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#阿里源(备用)
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} curl  http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo;\
ssh root@${i} ls -l /etc/yum.repos.d;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
==================================================================
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
#创建文件夹所有master和node节点201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} mkdir /etc/docker -p;\
ssh root@${i} mkdir /data/docker -p;\
ssh root@${i} tree /data;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#修改docker配置文件,bip 172.xx.xx.1 的 xx.xx 替换为主机IP的后两位
#单机编辑201
vim /etc/docker/daemon.json
{"graph": "/data/docker","storage-driver": "overlay2","insecure-registries": ["registry.access.redhat.com","quay.io","192.168.44.201:8088"],"registry-mirrors": ["https://uoggbpok.mirror.aliyuncs.com"],"bip": "172.16.0.1/16","exec-opts": ["native.cgroupdriver=systemd"],"live-restore": true,"log-driver": "json-file","log-opts": {"max-size":"100m", "max-file":"3"}
}
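#注意:下面的分发会把同一份daemon.json原样同步到所有节点,各节点bip相同。如果要按上面注释为每台主机生成不同的bip(示例思路,假设取 172.<IP第3段>.<IP第4段>.1/24,掩码为假设值),可以用下面这种写法代替后面的分发步骤:
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ip3=$(echo ${i} |awk -F. '{print $3}');\
ip4=$(echo ${i} |awk -F. '{print $4}');\
sed "s#\"bip\": \"[^\"]*\"#\"bip\": \"172.${ip3}.${ip4}.1/24\"#" /etc/docker/daemon.json > /tmp/daemon.json.${i};\
rsync -avzP /tmp/daemon.json.${i} root@${i}:/etc/docker/daemon.json;\
ssh root@${i} grep bip /etc/docker/daemon.json;\
done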
#分发201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP /etc/docker/daemon.json root@${i}:/etc/docker/;\
ssh root@${i} cat /etc/docker/daemon.json;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#所有master和node节点201.202.203
#启动docker,验证版本信息
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} systemctl daemon-reload;\
ssh root@${i} systemctl enable docker;\
ssh root@${i} systemctl start docker;\
ssh root@${i} docker version;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done

1.5.2 安装containerd运行时

注意:默认kubelet使用的是docker运行时,如果要指定其他运行时,需要修改kubelet启动配置文件的参数

KUBELET_RUNTIME_ARGS="--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
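#该参数要到第3章配置kubelet时才会真正用到;下面只是一个示意性的写法(文件名、变量引用方式均为假设),说明这类参数通常如何通过systemd环境文件挂到kubelet上,届时还需在kubelet.service的ExecStart中追加 $KUBELET_RUNTIME_ARGS
mkdir -p /etc/systemd/system/kubelet.service.d
cat >/etc/systemd/system/kubelet.service.d/10-runtime.conf <<EOF
[Service]
Environment="KUBELET_RUNTIME_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
EOF
systemctl daemon-reload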

#所有master和node节点201.202.203
#官方文档:https://kubernetes.io/zh/docs/setup/production-environment/container-runtimes/#containerd
#设置模块所有master和node节点201.202.203
#单机编辑201
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
#分发201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP /etc/modules-load.d/containerd.conf root@${i}:/etc/modules-load.d/;\
ssh root@${i} cat /etc/modules-load.d/containerd.conf;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#批量加载模块201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} sudo modprobe overlay;\
ssh root@${i} sudo modprobe br_netfilter;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#检查模块所有master和node节点201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} lsmod |egrep "(overlay|br_netfilter)";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#设置内核参数所有master和node节点201.202.203
#单机编辑201
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
#分发201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP /etc/sysctl.d/99-kubernetes-cri.conf root@${i}:/etc/sysctl.d/;\
ssh root@${i} cat /etc/sysctl.d/99-kubernetes-cri.conf;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#应用 sysctl 参数而无需重新启动所有master和node节点201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} sudo sysctl --system;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#安装 containerd 所有master和node节点201.202.203
从官方Docker仓库安装 containerd.io 软件包。可以在 安装 Docker 引擎 中找到有关为各自的 Linux 发行版设置 Docker 存储库和安装 containerd.io 软件包的说明。
依赖关系:https://git.k8s.io/kubernetes/build/dependencies.yaml
#添加源
#官方源(推荐)
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} curl  https://download.docker.com/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo;\
ssh root@${i} ls -l /etc/yum.repos.d;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
=====================================================================
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
#安装指定版本docker这里是20.10.14版本
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} yum install -y yum-utils;\
ssh root@${i} yum list docker-ce --showduplicates | sort -r;\
ssh root@${i} yum list docker-ce-cli --showduplicates | sort -r;\
ssh root@${i} yum list containerd.io --showduplicates | sort -r;\
ssh root@${i} yum install docker-ce-20.10.14 docker-ce-cli-20.10.14 containerd.io -y;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#阿里源(备用)
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} curl  http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo;\
ssh root@${i} ls -l /etc/yum.repos.d;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
==================================================================
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

#配置containerd 所有master和node节点201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} sudo mkdir -p /etc/containerd;\
ssh root@${i} "containerd config default | sudo tee /etc/containerd/config.toml";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#编辑配置文件,使用 systemd cgroup 驱动程序201.202.203
#单机编辑201
vim /etc/containerd/config.toml
#修改数据目录
root = "/data/containerd"
state = "/run/containerd"
#修改systemd cgroup
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  runtime_engine = ""
  runtime_root = ""
  privileged_without_host_devices = false
  base_runtime_spec = ""
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true #启用systemd
#修改镜像加速
[plugins."io.containerd.grpc.v1.cri".registry]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://uoggbpok.mirror.aliyuncs.com","https://registry-1.docker.io"]
#修改pause镜像
[plugins."io.containerd.grpc.v1.cri"]
  disable_tcp_service = true
  stream_server_address = "127.0.0.1"
  stream_server_port = "0"
  stream_idle_timeout = "4h0m0s"
  enable_selinux = false
  selinux_category_range = 1024
  sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
#添加insecure私有仓库
[plugins."io.containerd.grpc.v1.cri".registry]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://uoggbpok.mirror.aliyuncs.com","https://registry-1.docker.io"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.myharbor.com"]
      endpoint = ["https://registry.myharbor.com"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."192.168.44.201"]
      endpoint = ["https://192.168.44.201"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
      endpoint = ["https://quay.io"]
  [plugins."io.containerd.grpc.v1.cri".registry.configs]
    [plugins."io.containerd.grpc.v1.cri".registry.configs."registry.myharbor.com".tls]
      insecure_skip_verify = true
    [plugins."io.containerd.grpc.v1.cri".registry.configs."registry.myharbor.com".auth]
      username = "admin"
      password = "888888"
    [plugins."io.containerd.grpc.v1.cri".registry.configs."192.168.44.201".tls]
      insecure_skip_verify = true
    [plugins."io.containerd.grpc.v1.cri".registry.configs."192.168.44.201".auth]
      username = "admin"
      password = "888888"
    [plugins."io.containerd.grpc.v1.cri".registry.configs."quay.io".tls]
      insecure_skip_verify = true
#分发201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP /etc/containerd/config.toml root@${i}:/etc/containerd/;\
ssh root@${i} cat /etc/containerd/config.toml;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#启动服务所有master和node节点201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} systemctl daemon-reload;\
ssh root@${i} systemctl enable --now containerd;\
ssh root@${i} systemctl status containerd;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#配置crictl客户端工具所有master和node节点201.202.203
官方下载地址:https://github.com/kubernetes-sigs/cri-tools/releases
依赖关系:https://git.k8s.io/kubernetes/build/dependencies.yaml
#下载安装对应k8s版本的crictl工具所有master和node节点201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} mkdir /opt/tool -p;\
ssh root@${i} cd /opt/tool/;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#转发文件到其余节点201
#单机下载201
cd /opt/tool/
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.22.0/crictl-v1.22.0-linux-amd64.tar.gz
tar xf crictl-v1.22.0-linux-amd64.tar.gz
\mv crictl /usr/local/bin/
#分发201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP /usr/local/bin/crictl root@${i}:/usr/local/bin/;\
ssh root@${i} ls -l /usr/local/bin;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#编辑配置文件master和node节点201.202.203
#单机编辑201
vim /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
#分发201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP /etc/crictl.yaml root@${i}:/etc/;\
ssh root@${i} cat /etc/crictl.yaml;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#命令补全所有master和node节点201.202.203
#单机执行201
source <(crictl completion bash) && echo 'source <(crictl completion bash)' >> ~/.bashrc
#分发201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP ~/.bashrc root@${i}:~/;\
ssh root@${i} cat ~/.bashrc;\
ssh root@${i} systemctl restart containerd.service;\
ssh root@${i} crictl --version;\
ssh root@${i} crictl images;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
=======================================================================
crictl version v1.22.0
#简单入门使用方法见文档:
https://kubernetes.io/zh/docs/tasks/debug-application-cluster/crictl/
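#可以用crictl做一次简单的冒烟测试(示例,镜像地址沿用上文config.toml中配置的pause镜像):
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
crictl images
crictl info |grep -i cgroup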

1.6 安装harbor仓库

#前提准备201
yum install -y docker-compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
mkdir /opt/tool
cd /opt/tool
#官方文档:https://goharbor.io/docs/2.0.0/install-config/configure-https/
wget https://github.com/goharbor/harbor/releases/download/v2.0.1/harbor-offline-installer-v2.0.1.tgz
mkdir -p /opt/harbor_data
tar xf harbor-offline-installer-v2.0.1.tgz
cd harbor
mkdir cert -p
cd cert
#生成 CA 证书私钥
openssl genrsa -out ca.key 4096
#生成 CA 证书(替换CN为registry.myharbor.com)
openssl req -x509 -new -nodes -sha512 -days 36500 \
  -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=registry.myharbor.com" \
  -key ca.key \
  -out ca.crt
#生成域名私钥
openssl genrsa -out registry.myharbor.com.key 4096
#生成域名请求csr(替换CN为registry.myharbor.com)
openssl req -sha512 -new \
  -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=registry.myharbor.com" \
  -key registry.myharbor.com.key \
  -out registry.myharbor.com.csr
#生成 v3 扩展文件
cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=registry.myharbor.com
DNS.2=registry.myharbor
DNS.3=hostname
IP.1=192.168.44.201
EOF
#生成域名证书
openssl x509 -req -sha512 -days 36500 \
  -extfile v3.ext \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -in registry.myharbor.com.csr \
  -out registry.myharbor.com.crt
#查看效果
tree ./
./
├── ca.crt
├── ca.key
├── ca.srl
├── registry.myharbor.com.crt
├── registry.myharbor.com.csr
├── registry.myharbor.com.key
└── v3.ext

0 directories, 7 files
#转换xxx.crt为xxx.cert, 供 Docker 使用
openssl x509 -inform PEM \
  -in registry.myharbor.com.crt \
  -out registry.myharbor.com.cert
#查看
tree ./
./
├── ca.crt
├── ca.key
├── ca.srl
├── registry.myharbor.com.cert
├── registry.myharbor.com.crt
├── registry.myharbor.com.csr
├── registry.myharbor.com.key
└── v3.ext

0 directories, 8 files
#复制证书到harbor主机的docker文件夹
mkdir /etc/docker/certs.d/registry.myharbor.com/ -p
\cp registry.myharbor.com.cert /etc/docker/certs.d/registry.myharbor.com/
\cp registry.myharbor.com.key /etc/docker/certs.d/registry.myharbor.com/
\cp ca.crt /etc/docker/certs.d/registry.myharbor.com/
如果您将默认nginx端口 443 映射到不同的端口,请创建文件夹/etc/docker/certs.d/yourdomain.com:port或/etc/docker/certs.d/harbor_IP:port.
#检查
tree /etc/docker/certs.d/
/etc/docker/certs.d/
└── registry.myharbor.com
    ├── ca.crt
    ├── registry.myharbor.com.cert
    └── registry.myharbor.com.key

1 directory, 3 files
#重启Docker 引擎。
systemctl restart docker

#部署或重新配置 Harbor
cd /opt/tool/harbor/
cp harbor.yml.tmpl  harbor.yml
修改配置文件主要修改参数如下:
vim harbor.yml
hostname: registry.myharbor.com    //修改主机名称
关于http配置(见下)
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 8088                       //修改端口
关于https配置(见下)
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /opt/tool/harbor/cert/registry.myharbor.com.crt
  private_key: /opt/tool/harbor/cert/registry.myharbor.com.key
其余配置
harbor_admin_password: 888888         //修改admin密码
data_volume: /opt/harbor_data    //修改持久化数据目录
第五步:执行./install.sh
#如果前面已经安装了http只要启用https的话操作如下
./prepare
docker-compose down -v
docker-compose up -d
第六步:浏览器访问https://192.168.44.201或者https://registry.myharbor.com/
账号admin密码888888

注意:需要使用该仓库域名的机器配置hosts解析,这里为所有k8s的node节点

#所有master和node节点201.202.203
#单机编辑201
vim /etc/hosts
192.168.44.201 registry.myharbor.com
#分发201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP /etc/hosts root@${i}:/etc/;\
ssh root@${i} cat /etc/hosts;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
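#hosts分发完成后,可以在201上用docker简单验证仓库的推拉是否正常(示例,library为harbor默认自带的项目):
docker login registry.myharbor.com -u admin -p 888888
docker pull busybox
docker tag busybox registry.myharbor.com/library/busybox:test
docker push registry.myharbor.com/library/busybox:test
docker logout registry.myharbor.com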

1.7 配置高可用

#201 && 202 && 203
配置Nginx反向代理192.168.44.201:6443和192.168.44.202:6443和192.168.44.203:6443
cat >/etc/yum.repos.d/nginx.repo <<EOF
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/\$releasever/\$basearch/
gpgcheck=1
priority=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
EOF
cd /etc/yum.repos.d/
ll
mv epel.repo epel.repo1
mv almalinux.repo almalinux.repo1
yum install nginx -y
mv epel.repo1 epel.repo
mv almalinux.repo1 almalinux.repo
vim /etc/nginx/nginx.conf
# 末尾加上以下内容,stream 只能加在 main 中(4层代理)
# 此处只是简单配置下nginx,实际生产中,建议进行更合理的配置
stream {
    log_format proxy '$time_local|$remote_addr|$upstream_addr|$protocol|$status|'
                     '$session_time|$upstream_connect_time|$bytes_sent|$bytes_received|'
                     '$upstream_bytes_sent|$upstream_bytes_received' ;
    upstream kube-apiserver {
        server 192.168.44.201:6443     max_fails=3 fail_timeout=30s;
        server 192.168.44.202:6443     max_fails=3 fail_timeout=30s;
        server 192.168.44.203:6443     max_fails=3 fail_timeout=30s;
    }
    server {
        listen 192.168.44.200:7443;
        proxy_connect_timeout 1s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
        access_log /var/log/nginx/proxy.log proxy;
    }
}
nginx -t
systemctl enable nginx
systemctl start nginx

#高可用配置201
yum install -y keepalived
mkdir /server/scripts/ -p
cat >/server/scripts/check_web.sh <<EOF
#!/bin/bash
num=\`netstat -tunpl |awk '{print \$4}' |egrep -c ":80\$"\`
if [ \$num -eq 0 ]
then
    exit 1
else
    exit 0
fi
EOF
chmod +x /server/scripts/check_web.sh
bash -x /server/scripts/check_web.sh

cat >/etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
    router_id lb01
}
vrrp_script check_web {
    script "/server/scripts/check_web.sh"
    interval 3
    weight -60
}
vrrp_instance oldboy {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.44.200/24
    }
    track_script {
        check_web
    }
}
EOF
systemctl enable keepalived.service
systemctl start keepalived.service
ip a

#高可用202
yum install -y keepalived
mkdir /server/scripts/ -p
cat >/server/scripts/check_web.sh <<EOF
#!/bin/bash
num=\`netstat -tunpl |awk '{print \$4}' |egrep -c ":80\$"\`
if [ \$num -eq 0 ]
then
    exit 1
else
    exit 0
fi
EOF
chmod +x /server/scripts/check_web.sh
bash -x /server/scripts/check_web.sh

cat >/server/scripts/check_ip.sh <<EOF
#!/bin/bash
ip a s eth0|grep "192.168.44.200" >/dev/null
if [ \$? -eq 0 ]
thenecho "主机\$(hostnamectl |awk 'NR==1''{print \$3}')里边的keepalived服务出现异常,请进行检查"|mail -s 异常告警-keepalived  2744588786@qq.com
fi
EOF
chmod +x  /server/scripts/check_ip.sh
配置发送邮件
vim /etc/mail.rc
set from=xxxx@163.com smtp=smtp.163.com
set smtp-auth-user=xxxx@163.com smtp-auth-password=xxxx smtp-auth=login
systemctl restart postfix.service
echo "邮件发送测试"|mail -s "邮件测试" xxxx@qq.comcat >/etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
    router_id lb02
}
vrrp_script check_web {
    script "/server/scripts/check_web.sh"
    interval 3
    weight -60
}
vrrp_instance oldboy {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.44.200/24
    }
    track_script {
        check_web
    }
}
EOF
systemctl enable keepalived.service
systemctl start keepalived.service
ip a

#高可用203
yum install -y keepalived
mkdir /server/scripts/ -p
cat >/server/scripts/check_web.sh <<EOF
#!/bin/bash
num=\`netstat -tunpl |awk '{print \$4}' |egrep -c ":80\$"\`
if [ \$num -eq 0 ]
then
    exit 1
else
    exit 0
fi
EOF
chmod +x /server/scripts/check_web.sh
bash -x /server/scripts/check_web.sh

cat >/server/scripts/check_ip.sh <<EOF
#!/bin/bash
ip a s eth0|grep "192.168.44.200" >/dev/null
if [ \$? -eq 0 ]
thenecho "主机\$(hostnamectl |awk 'NR==1''{print \$3}')里边的keepalived服务出现异常,请进行检查"|mail -s 异常告警-keepalived  2744588786@qq.com
fi
EOF
chmod +x  /server/scripts/check_ip.sh
配置发送邮件
vim /etc/mail.rc
set from=xxxx@163.com smtp=smtp.163.com
set smtp-auth-user=xxxx@163.com smtp-auth-password=xxxx smtp-auth=login
systemctl restart postfix.service
echo "邮件发送测试"|mail -s "邮件测试" xxxx@qq.comcat >/etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
    router_id lb03
}
vrrp_script check_web {
    script "/server/scripts/check_web.sh"
    interval 3
    weight -60
}
vrrp_instance oldboy {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.44.200/24
    }
    track_script {
        check_web
    }
}
EOF
systemctl enable keepalived.service
systemctl start keepalived.service
ip a

#配置内核参数ip_nonlocal_bind,允许nginx绑定不存在的IP地址(VIP) 201 && 202 && 203
echo 'net.ipv4.ip_nonlocal_bind = 1' >>/etc/sysctl.conf
sysctl -p
vim /etc/nginx/nginx.conf
stream {
    log_format proxy '$time_local|$remote_addr|$upstream_addr|$protocol|$status|'
                     '$session_time|$upstream_connect_time|$bytes_sent|$bytes_received|'
                     '$upstream_bytes_sent|$upstream_bytes_received' ;
    upstream kube-apiserver {
        server 192.168.44.201:6443     max_fails=3 fail_timeout=30s;
        server 192.168.44.202:6443     max_fails=3 fail_timeout=30s;
        server 192.168.44.203:6443     max_fails=3 fail_timeout=30s;
    }
    server {
        listen 192.168.44.200:7443;
        proxy_connect_timeout 1s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
        access_log /var/log/nginx/proxy.log proxy;
    }
}
systemctl restart nginx.service
netstat -tunpl|grep nginx
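#可以做一次简单的切换演练验证keepalived+nginx高可用(示例思路,在201上执行):停掉nginx后check_web.sh检测失败、priority下降,VIP应漂移到202;恢复nginx后VIP应被201抢回
systemctl stop nginx
sleep 5
ip a s eth0 |grep 192.168.44.200
#确认202已接管VIP
ssh root@192.168.44.202 ip a s eth0 |grep 192.168.44.200
#恢复201
systemctl start nginx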

第2章 k8s安装集群master

2.1 下载二进制安装文件

  • k8s下载

https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md

#(所有master和 node节点)201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} mkdir /opt/tool -p;\
ssh root@${i} cd /opt/tool;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#(所有master和 node节点)201.202.203
#单机下载201
cd /opt/tool
wget https://storage.googleapis.com/kubernetes-release/release/v1.23.5/kubernetes-server-linux-amd64.tar.gz -O kubernetes-v1.23.5-server-linux-amd64.tar.gz
tar xf kubernetes-v1.23.5-server-linux-amd64.tar.gz
mv kubernetes /opt/kubernetes-1.23.5
#分发201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP /opt/kubernetes-1.23.5 root@${i}:/opt/;\
ssh root@${i} ln -s /opt/kubernetes-1.23.5 /opt/kubernetes;\
ssh root@${i} tree -L 1 /opt/;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#(所有master和 node节点)201.202.203
tree -L 1 /opt/
/opt/
├── containerd
├── kubernetes -> /opt/kubernetes-1.23.5
├── kubernetes-1.23.5
└── tool

4 directories, 0 files

#命令补全(所有master和 node节点)201.202.203
#单机执行201
\cp /opt/kubernetes/server/bin/kubectl /usr/local/bin/
source <(kubectl completion bash) && echo 'source <(kubectl completion bash)' >> ~/.bashrc
#分发201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP /usr/local/bin/kubectl root@${i}:/usr/local/bin/;\
rsync -avzP ~/.bashrc root@${i}:~/;\
ssh root@${i} tree -L 1 /usr/local/bin/;\
ssh root@${i} cat ~/.bashrc;\
ssh root@${i} kubectl version;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
===========================================================================
kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"c285e781331a3785a7f436042c65c5641ce8a9e9", GitTreeState:"clean", BuildDate:"2022-03-16T15:58:47Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
  • 下载etcd

https://github.com/etcd-io/etcd/releases/tag/v3.5.1

依赖关系:https://git.k8s.io/kubernetes/build/dependencies.yaml

#所有etcd节点201.202.203
#批量201
cd ~
for i in 192.168.44.{201..203};\
do \
ssh root@${i} mkdir /opt/tool -p;\
ssh root@${i} cd /opt/tool
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#下载安装包所有etcd节点201.202.203
#单机下载201
wget https://github.com/etcd-io/etcd/releases/download/v3.5.1/etcd-v3.5.1-linux-amd64.tar.gz
tar xf etcd-v3.5.1-linux-amd64.tar.gz
mv etcd-v3.5.1-linux-amd64 /opt/etcd-v3.5.1
#分发201
cd ~
for i in 192.168.44.{201..203};\
do \
rsync -avzP /opt/etcd-v3.5.1 root@${i}:/opt/;\
ssh root@${i} ln -s /opt/etcd-v3.5.1 /opt/etcd;\
ssh root@${i} tree -L 1 /opt/;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done#所有etcd节点201.202.203
tree -L 1 /opt/
/opt/
├── containerd
├── etcd -> /opt/etcd-v3.5.1
├── etcd-v3.5.1
├── kubernetes -> /opt/kubernetes-1.23.5
├── kubernetes-1.23.5
└── tool

6 directories, 0 files

#拷贝命令到环境变量(所有etcd节点201.202.203)
#单机执行201
\cp /opt/etcd/etcdctl /usr/local/bin/
#分发201
cd ~
for i in 192.168.44.{201..203};\
do \
rsync -avzP /usr/local/bin/etcdctl root@${i}:/usr/local/bin/;\
ssh root@${i} tree -L 1 /usr/local/bin/;\
ssh root@${i} etcdctl version;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
====================================================================
etcdctl version
etcdctl version: 3.5.1
API version: 3.5

2.2 生成证书

#201
cd
rz
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssl-json
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo
chmod +x cfssl*
\mv cfssl* /usr/local/bin/
ll /usr/local/bin/
total 140556
-rwxr-xr-x 1 root root  12049979 Apr 29 20:53 cfssl
-rwxr-xr-x 1 root root  12021008 Apr 29 21:16 cfssl-certinfo
-rwxr-xr-x 1 root root   9663504 Apr 29 21:08 cfssl-json
-rwxr-xr-x 1 1000 users 45611187 Aug  5  2021 crictl
-rwxr-xr-x 1 root root  17981440 Apr 29 19:11 etcdctl
-rwxr-xr-x 1 root root  46596096 Apr 29 18:24 kubectl
2.2.1 生成etcd证书

#创建证书目录(所有etcd节点)201.202.203
#批量201
cd ~
for i in 192.168.44.{201..203};\
do \
ssh root@${i} mkdir -p /etc/kubernetes/pki/etcd;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#进入安装文档目录201
cd /root/k8s-ha-install/
git checkout manual-installation-v1.23.x
git branch
cd pki/
#生成etcd的ca证书201
cfssl gencert -initca etcd-ca-csr.json | cfssl-json -bare /etc/kubernetes/pki/etcd/etcd-ca
#生成etcd的证书(注意hostname的ip地址修改预留几个可能扩容的etcd地址)
cfssl gencert \
  -ca=/etc/kubernetes/pki/etcd/etcd-ca.pem \
  -ca-key=/etc/kubernetes/pki/etcd/etcd-ca-key.pem \
  -config=ca-config.json \
  -hostname=127.0.0.1,k8s-192-168-44-201.host.com,k8s-192-168-44-202.host.com,k8s-192-168-44-203.host.com,192.168.44.201,192.168.44.202,192.168.44.203,192.168.44.204,192.168.44.205,192.168.44.206,192.168.44.207 \
  -profile=kubernetes \
  etcd-csr.json | cfssl-json -bare /etc/kubernetes/pki/etcd/etcd
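#生成后可以检查证书的SAN里是否包含上面预留的主机名和IP(示例):
openssl x509 -in /etc/kubernetes/pki/etcd/etcd.pem -noout -text |grep -A1 "Subject Alternative Name"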
#查看结果201
ll /etc/kubernetes/pki/etcd
total 24
-rw-r--r-- 1 root root 1005 Dec 24 01:15 etcd-ca.csr
-rw------- 1 root root 1675 Dec 24 01:15 etcd-ca-key.pem
-rw-r--r-- 1 root root 1367 Dec 24 01:15 etcd-ca.pem
-rw-r--r-- 1 root root 1005 Dec 24 01:40 etcd.csr
-rw------- 1 root root 1675 Dec 24 01:40 etcd-key.pem
-rw-r--r-- 1 root root 1562 Dec 24 01:40 etcd.pem

#下发证书到其他etcd节点201
cd ~
for i in 192.168.44.{201..203};\
do \
rsync -avzP /etc/kubernetes/pki/etcd/etcd* root@${i}:/etc/kubernetes/pki/etcd;\
ssh root@${i} tree /etc/kubernetes/pki/etcd;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#验证下发结果
#202
ll /etc/kubernetes/pki/etcd
total 24
-rw-r--r-- 1 root root 1005 Dec 24 01:15 etcd-ca.csr
-rw------- 1 root root 1675 Dec 24 01:15 etcd-ca-key.pem
-rw-r--r-- 1 root root 1367 Dec 24 01:15 etcd-ca.pem
-rw-r--r-- 1 root root 1005 Dec 24 01:40 etcd.csr
-rw------- 1 root root 1675 Dec 24 01:40 etcd-key.pem
-rw-r--r-- 1 root root 1562 Dec 24 01:40 etcd.pem
#203
ll /etc/kubernetes/pki/etcd
total 24
-rw-r--r-- 1 root root 1005 Dec 24 01:15 etcd-ca.csr
-rw------- 1 root root 1675 Dec 24 01:15 etcd-ca-key.pem
-rw-r--r-- 1 root root 1367 Dec 24 01:15 etcd-ca.pem
-rw-r--r-- 1 root root 1005 Dec 24 01:40 etcd.csr
-rw------- 1 root root 1675 Dec 24 01:40 etcd-key.pem
-rw-r--r-- 1 root root 1562 Dec 24 01:40 etcd.pem
2.2.2 生成k8s证书

#创建证书目录(所有master节点)201.202.203
#批量201
cd ~
for i in 192.168.44.{201..203};\
do \
ssh root@${i} mkdir -p /etc/kubernetes/pki/etcd;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#进入安装文档目录201
cd /root/k8s-ha-install/
git checkout manual-installation-v1.23.x
git branch
cd pki/
#生成ca证书201
cfssl gencert -initca ca-csr.json | cfssl-json -bare /etc/kubernetes/pki/ca
#生成apiserver证书201(注意VIP地址和预留几个扩容的apiserver地址)
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -hostname=10.0.0.1,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.44.200,192.168.44.201,192.168.44.202,192.168.44.203,192.168.44.204,192.168.44.205,192.168.44.206,192.168.44.207 \
  -profile=kubernetes \
  apiserver-csr.json | cfssl-json -bare /etc/kubernetes/pki/apiserver

#生成聚合ca证书201
cfssl gencert -initca front-proxy-ca-csr.json | cfssl-json -bare /etc/kubernetes/pki/front-proxy-ca
#生成聚合证书201
cfssl gencert \
  -ca=/etc/kubernetes/pki/front-proxy-ca.pem \
  -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  front-proxy-client-csr.json | cfssl-json -bare /etc/kubernetes/pki/front-proxy-client

#生成controller证书201
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  manager-csr.json | cfssl-json -bare /etc/kubernetes/pki/controller-manager
#配置controller设置201
设置集群(注意VIP地址)
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.44.200:7443 \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
设置用户
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
  --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
设置上下文
kubectl config set-context system:kube-controller-manager@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
设置用户使用的默认上下文
kubectl config use-context system:kube-controller-manager@kubernetes \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

#生成scheduler证书201
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  scheduler-csr.json | cfssl-json -bare /etc/kubernetes/pki/scheduler
#配置scheduler设置201
设置集群(注意VIP地址)
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.44.200:7443 \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
设置用户
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/etc/kubernetes/pki/scheduler.pem \
  --client-key=/etc/kubernetes/pki/scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
设置上下文
kubectl config set-context system:kube-scheduler@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
设置用户使用的默认上下文
kubectl config use-context system:kube-scheduler@kubernetes \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

#生成admin的证书201
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssl-json -bare /etc/kubernetes/pki/admin
#配置admin的设置201
设置集群(注意VIP地址)
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.44.200:7443 \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig
设置用户
kubectl config set-credentials kubernetes-admin \
  --client-certificate=/etc/kubernetes/pki/admin.pem \
  --client-key=/etc/kubernetes/pki/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig
设置上下文
kubectl config set-context kubernetes-admin@kubernetes \
  --cluster=kubernetes \
  --user=kubernetes-admin \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig
设置默认上下文
kubectl config use-context kubernetes-admin@kubernetes \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

#创建ServiceAccount Key 201
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

#查看结果201
tree /etc/kubernetes/
/etc/kubernetes/
├── admin.kubeconfig
├── controller-manager.kubeconfig
├── pki
│   ├── admin.csr
│   ├── admin-key.pem
│   ├── admin.pem
│   ├── apiserver.csr
│   ├── apiserver-key.pem
│   ├── apiserver.pem
│   ├── ca.csr
│   ├── ca-key.pem
│   ├── ca.pem
│   ├── controller-manager.csr
│   ├── controller-manager-key.pem
│   ├── controller-manager.pem
│   ├── etcd
│   │   ├── etcd-ca.csr
│   │   ├── etcd-ca-key.pem
│   │   ├── etcd-ca.pem
│   │   ├── etcd.csr
│   │   ├── etcd-key.pem
│   │   └── etcd.pem
│   ├── front-proxy-ca.csr
│   ├── front-proxy-ca-key.pem
│   ├── front-proxy-ca.pem
│   ├── front-proxy-client.csr
│   ├── front-proxy-client-key.pem
│   ├── front-proxy-client.pem
│   ├── sa.key
│   ├── sa.pub
│   ├── scheduler.csr
│   ├── scheduler-key.pem
│   └── scheduler.pem
└── scheduler.kubeconfig

2 directories, 32 files

#下发证书到其他master节点201
cd ~
for i in 192.168.44.{201..203};\
do \
rsync -avzP /etc/kubernetes/ root@${i}:/etc/kubernetes;\
ssh root@${i} tree /etc/kubernetes/;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#查看结果
#202
tree /etc/kubernetes/
/etc/kubernetes/
├── admin.kubeconfig
├── controller-manager.kubeconfig
├── pki
│   ├── admin.csr
│   ├── admin-key.pem
│   ├── admin.pem
│   ├── apiserver.csr
│   ├── apiserver-key.pem
│   ├── apiserver.pem
│   ├── ca.csr
│   ├── ca-key.pem
│   ├── ca.pem
│   ├── controller-manager.csr
│   ├── controller-manager-key.pem
│   ├── controller-manager.pem
│   ├── etcd
│   │   ├── etcd-ca.csr
│   │   ├── etcd-ca-key.pem
│   │   ├── etcd-ca.pem
│   │   ├── etcd.csr
│   │   ├── etcd-key.pem
│   │   └── etcd.pem
│   ├── front-proxy-ca.csr
│   ├── front-proxy-ca-key.pem
│   ├── front-proxy-ca.pem
│   ├── front-proxy-client.csr
│   ├── front-proxy-client-key.pem
│   ├── front-proxy-client.pem
│   ├── sa.key
│   ├── sa.pub
│   ├── scheduler.csr
│   ├── scheduler-key.pem
│   └── scheduler.pem
└── scheduler.kubeconfig

2 directories, 32 files

#203
tree /etc/kubernetes/
/etc/kubernetes/
├── admin.kubeconfig
├── controller-manager.kubeconfig
├── pki
│   ├── admin.csr
│   ├── admin-key.pem
│   ├── admin.pem
│   ├── apiserver.csr
│   ├── apiserver-key.pem
│   ├── apiserver.pem
│   ├── ca.csr
│   ├── ca-key.pem
│   ├── ca.pem
│   ├── controller-manager.csr
│   ├── controller-manager-key.pem
│   ├── controller-manager.pem
│   ├── etcd
│   │   ├── etcd-ca.csr
│   │   ├── etcd-ca-key.pem
│   │   ├── etcd-ca.pem
│   │   ├── etcd.csr
│   │   ├── etcd-key.pem
│   │   └── etcd.pem
│   ├── front-proxy-ca.csr
│   ├── front-proxy-ca-key.pem
│   ├── front-proxy-ca.pem
│   ├── front-proxy-client.csr
│   ├── front-proxy-client-key.pem
│   ├── front-proxy-client.pem
│   ├── sa.key
│   ├── sa.pub
│   ├── scheduler.csr
│   ├── scheduler-key.pem
│   └── scheduler.pem
└── scheduler.kubeconfig

2 directories, 32 files
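#分发完成后,可以在各master节点抽查kubeconfig是否都指向VIP的7443端口(示例):
cd ~
for i in 192.168.44.{201..203};\
do \
ssh root@${i} grep server: /etc/kubernetes/*.kubeconfig;\
done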

2.3 安装etcd

准备配置文件

#注意修改ip地址
#201
mkdir /etc/etcd -p
vim /etc/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.44.201:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.44.201:2379,http://127.0.0.1:2379"#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.44.201:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.44.201:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.44.201:2380,etcd2=https://192.168.44.202:2380,etcd3=https://192.168.44.203:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"#202
mkdir /etc/etcd -p
vim /etc/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd2"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.44.202:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.44.202:2379,http://127.0.0.1:2379"#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.44.202:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.44.202:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.44.201:2380,etcd2=https://192.168.44.202:2380,etcd3=https://192.168.44.203:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"#203
mkdir /etc/etcd -p
vim /etc/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd3"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.44.203:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.44.203:2379,http://127.0.0.1:2379"#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.44.203:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.44.203:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.44.201:2380,etcd2=https://192.168.44.202:2380,etcd3=https://192.168.44.203:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

准备启动文件

#所有etcd节点201.202.203
#单机编辑201
cat >/usr/lib/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/opt/etcd/etcd \\
  --cert-file=/etc/kubernetes/pki/etcd/etcd.pem \\
  --key-file=/etc/kubernetes/pki/etcd/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/pki/etcd/etcd-ca.pem \\
  --peer-cert-file=/etc/kubernetes/pki/etcd/etcd.pem \\
  --peer-key-file=/etc/kubernetes/pki/etcd/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/etcd-ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
#分发201
cd ~
for i in 192.168.44.{201..203};\
do \
rsync -avzP /usr/lib/systemd/system/etcd.service root@${i}:/usr/lib/systemd/system/;\
ssh root@${i} cat /usr/lib/systemd/system/etcd.service;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done

#所有etcd节点启动服务201.202.203
#批量201
cd ~
for i in 192.168.44.{201..203};\
do \
ssh root@${i} mkdir -p /var/lib/etcd;\
ssh root@${i} systemctl daemon-reload;\
ssh root@${i} systemctl enable etcd.service;\
ssh root@${i} systemctl start etcd.service;\
ssh root@${i} systemctl status etcd.service;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done

#所有etcd节点查看服务是否报错201.202.203
journalctl -xeu etcd

#查看集群状态201.202.203
export ETCDCTL_API=3
etcdctl --write-out=table \
  --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
  --endpoints=https://192.168.44.201:2379,https://192.168.44.202:2379,https://192.168.44.203:2379 \
  endpoint status
==========================================
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.44.201:2379 | 67c0fca4984e9ae2 |   3.5.1 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| https://192.168.44.202:2379 | 60f469fa0ad8659d |   3.5.1 |   20 kB |      true |      false |         2 |          9 |                  9 |        |
| https://192.168.44.203:2379 | f361abc3703ec236 |   3.5.1 |   25 kB |     false |      false |         2 |          9 |                  9 |        |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
=======================================
etcdctl --write-out=table \
  --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
  --endpoints=https://192.168.44.201:2379,https://192.168.44.202:2379,https://192.168.44.203:2379 \
  endpoint health
+-----------------------------+--------+------------+-------+
|          ENDPOINT           | HEALTH |    TOOK    | ERROR |
+-----------------------------+--------+------------+-------+
| https://192.168.44.203:2379 |   true |  9.16524ms |       |
| https://192.168.44.201:2379 |   true | 8.769627ms |       |
| https://192.168.44.202:2379 |   true | 10.22853ms |       |
+-----------------------------+--------+------------+-------+
==========================================
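If you also want to see the member IDs and peer URLs, the same client certificates work with the standard member list subcommand (an extra check, not in the original text):

etcdctl --write-out=table \
  --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
  --endpoints=https://192.168.44.201:2379 \
  member list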

2.4 安装apiserver

#创建相关目录(所有master节点)201.202.203
#批量201
cd ~
for i in 192.168.44.{201..203};\
do \
ssh root@${i} mkdir -p /opt/cni/bin /etc/cni/net.d /etc/kubernetes/manifests /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done

#创建token文件(所有master节点)201.202.203
#单机编辑201
cat >/etc/kubernetes/token.csv <<EOF
02b50b05283e98dd0fd71db496ef01e8,kubelet-bootstrap,10001,"system:bootstrappers"
EOF
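The token above is just the example value from the source material. If you prefer a fresh one, a sketch like the following writes a random 32-character hex token in the same csv format; whatever value you choose must stay consistent with the bootstrap flow you rely on in section 3.1.

TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:bootstrappers\"" >/etc/kubernetes/token.csv
cat /etc/kubernetes/token.csv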
#分发201
cd ~
for i in 192.168.44.{201..203};\
do \
rsync -avzP /etc/kubernetes/token.csv root@${i}:/etc/kubernetes/;\
ssh root@${i} cat /etc/kubernetes/token.csv;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done

#准备启动文件(注意修改service的ip地址段,和etcd集群ip地址)
#注意使用相同版本的虚拟机,打开kubeadm集群查看对应参数,必要时调整
#201
cat >/usr/lib/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/server/bin/kube-apiserver \\
  --v=2 \\
  --logtostderr=true \\
  --allow-privileged=true \\
  --bind-address=0.0.0.0 \\
  --secure-port=6443 \\
  --insecure-port=0 \\
  --advertise-address=192.168.44.201 \\
  --service-cluster-ip-range=10.0.0.0/16 \\
  --service-node-port-range=30000-32767 \\
  --etcd-servers=https://192.168.44.201:2379,https://192.168.44.202:2379,https://192.168.44.203:2379 \\
  --etcd-cafile=/etc/kubernetes/pki/etcd/etcd-ca.pem \\
  --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem \\
  --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem \\
  --client-ca-file=/etc/kubernetes/pki/ca.pem \\
  --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\
  --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
  --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\
  --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\
  --service-account-key-file=/etc/kubernetes/pki/sa.pub \\
  --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
  --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
  --authorization-mode=Node,RBAC \\
  --enable-bootstrap-token-auth=true \\
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
  --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
  --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
  --requestheader-allowed-names=aggregator \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-extra-headers-prefix=X-Remote-Extra- \\
  --requestheader-username-headers=X-Remote-User \\
  --token-auth-file=/etc/kubernetes/token.csv \\
  --feature-gates="RemoveSelfLink=false,EphemeralContainers=true,ExpandCSIVolumes=true"
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

#202
cat >/usr/lib/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/server/bin/kube-apiserver \\
  --v=2 \\
  --logtostderr=true \\
  --allow-privileged=true \\
  --bind-address=0.0.0.0 \\
  --secure-port=6443 \\
  --insecure-port=0 \\
  --advertise-address=192.168.44.202 \\
  --service-cluster-ip-range=10.0.0.0/16 \\
  --service-node-port-range=30000-32767 \\
  --etcd-servers=https://192.168.44.201:2379,https://192.168.44.202:2379,https://192.168.44.203:2379 \\
  --etcd-cafile=/etc/kubernetes/pki/etcd/etcd-ca.pem \\
  --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem \\
  --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem \\
  --client-ca-file=/etc/kubernetes/pki/ca.pem \\
  --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\
  --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
  --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\
  --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\
  --service-account-key-file=/etc/kubernetes/pki/sa.pub \\
  --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
  --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
  --authorization-mode=Node,RBAC \\
  --enable-bootstrap-token-auth=true \\
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
  --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
  --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
  --requestheader-allowed-names=aggregator \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-extra-headers-prefix=X-Remote-Extra- \\
  --requestheader-username-headers=X-Remote-User \\
  --token-auth-file=/etc/kubernetes/token.csv \\
  --feature-gates="RemoveSelfLink=false,EphemeralContainers=true,ExpandCSIVolumes=true"
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

#203
cat >/usr/lib/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/server/bin/kube-apiserver \\
  --v=2 \\
  --logtostderr=true \\
  --allow-privileged=true \\
  --bind-address=0.0.0.0 \\
  --secure-port=6443 \\
  --insecure-port=0 \\
  --advertise-address=192.168.44.203 \\
  --service-cluster-ip-range=10.0.0.0/16 \\
  --service-node-port-range=30000-32767 \\
  --etcd-servers=https://192.168.44.201:2379,https://192.168.44.202:2379,https://192.168.44.203:2379 \\
  --etcd-cafile=/etc/kubernetes/pki/etcd/etcd-ca.pem \\
  --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem \\
  --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem \\
  --client-ca-file=/etc/kubernetes/pki/ca.pem \\
  --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\
  --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
  --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\
  --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\
  --service-account-key-file=/etc/kubernetes/pki/sa.pub \\
  --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
  --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
  --authorization-mode=Node,RBAC \\
  --enable-bootstrap-token-auth=true \\
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
  --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
  --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
  --requestheader-allowed-names=aggregator \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-extra-headers-prefix=X-Remote-Extra- \\
  --requestheader-username-headers=X-Remote-User \\
  --token-auth-file=/etc/kubernetes/token.csv \\
  --feature-gates="RemoveSelfLink=false,EphemeralContainers=true,ExpandCSIVolumes=true"
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

#所有master节点启动服务201.202.203
systemctl daemon-reload
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl status kube-apiserver.service
journalctl -xeu kube-apiserver.service
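A quick way to confirm each apiserver is actually serving on its secure port is to hit /healthz with the admin client certificate generated in 2.2 (run on each master; this check is an addition, not part of the original steps):

curl --cacert /etc/kubernetes/pki/ca.pem \
     --cert /etc/kubernetes/pki/admin.pem \
     --key /etc/kubernetes/pki/admin-key.pem \
     https://127.0.0.1:6443/healthz; echo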

2.5 安装controller

#准备启动文件(注意修改pod的ip地址段,本次10.244.0.0/16)
#注意使用相同版本的虚拟机,打开kubeadm集群查看对应参数,必要时调整
#所有master节点201.202.203
cat >/usr/lib/systemd/system/kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/server/bin/kube-controller-manager \\
  --v=2 \\
  --logtostderr=true \\
  --authentication-kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
  --authorization-kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
  --bind-address=127.0.0.1 \\
  --client-ca-file=/etc/kubernetes/pki/ca.pem \\
  --root-ca-file=/etc/kubernetes/pki/ca.pem \\
  --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
  --service-account-private-key-file=/etc/kubernetes/pki/sa.key \\
  --service-cluster-ip-range=10.0.0.0/16 \\
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
  --leader-elect=true \\
  --use-service-account-credentials=true \\
  --node-monitor-grace-period=40s \\
  --node-monitor-period=5s \\
  --pod-eviction-timeout=2m0s \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --allocate-node-cidrs=true \\
  --cluster-cidr=10.244.0.0/16 \\
  --cluster-name=kubernetes \\
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
  --cluster-signing-duration=876000h0m0s \\
  --feature-gates="RemoveSelfLink=false,EphemeralContainers=true,ExpandCSIVolumes=true" \\
  --node-cidr-mask-size=24
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

#启动服务201.202.203
systemctl daemon-reload
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl status kube-controller-manager.service
journalctl -xeu kube-controller-manager.service

2.6 安装scheduler

#准备启动文件
#注意使用相同版本的虚拟机,打开kubeadm集群查看对应参数,必要时调整
#所有master节点201.202.203
cat >/usr/lib/systemd/system/kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/server/bin/kube-scheduler \\
  --v=2 \\
  --logtostderr=true \\
  --authentication-kubeconfig=/etc/kubernetes/scheduler.kubeconfig \\
  --authorization-kubeconfig=/etc/kubernetes/scheduler.kubeconfig \\
  --bind-address=127.0.0.1 \\
  --leader-elect=true \\
  --feature-gates="RemoveSelfLink=false,EphemeralContainers=true,ExpandCSIVolumes=true" \\
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

#启动服务
#所有master节点201.202.203
systemctl daemon-reload
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service
systemctl status kube-scheduler.service
journalctl -xeu kube-scheduler.service
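With all three control-plane components running, a rough health check from 201 is useful before moving on. ~/.kube/config is only set up in section 3.1, so pass the admin kubeconfig explicitly; `kubectl get cs` is deprecated but should still report scheduler, controller-manager and etcd health on v1.23. These checks are an addition to the original text.

kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig get cs
# or probe the local secure ports directly (10257 = controller-manager, 10259 = scheduler)
curl -sk https://127.0.0.1:10257/healthz; echo
curl -sk https://127.0.0.1:10259/healthz; echo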

第3章 k8s安装集群node

3.1 TLS Bootstrapping配置

官方文档:https://kubernetes.io/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#bootstrap-tokens

#配置Bootstrapping配置文件201
cd /root/k8s-ha-install/
git checkout manual-installation-v1.23.x
git branch
cd bootstrap/
创建集群(注意VIP地址)
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.44.200:7443 \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
创建用户(注意token要和bootstrap.secret.yaml或者token.csv文件里边一样)
kubectl config set-credentials tls-bootstrap-token-user \
  --token=c8ad9c.2e4d610cf3e7426e \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
创建上下文
kubectl config set-context tls-bootstrap-token-user@kubernetes \
  --cluster=kubernetes \
  --user=tls-bootstrap-token-user \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
创建默认上下文
kubectl config use-context tls-bootstrap-token-user@kubernetes \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

#所有master节点201.202.203
mkdir /root/.kube -p
cp /etc/kubernetes/admin.kubeconfig /root/.kube/config

#配置token以及RBAC规则201
=============================================
方法一:令牌认证文件(推荐,因为token永不过期)
cat /etc/kubernetes/token.csv
02b50b05283e98dd0fd71db496ef01e8,kubelet-bootstrap,10001,"system:bootstrappers"
#需要api开启这个特性
--token-auth-file=/etc/kubernetes/token.csv
=============================================
方法二:启动引导令牌(推荐,因为token过期会自动被清理)
#需要api组件开启这个特性
--enable-bootstrap-token-auth=true
==============================================
cat ~/k8s-ha-install/bootstrap/bootstrap.secret.yaml
#启动令牌认证
apiVersion: v1
kind: Secret
metadata:
  # name 必须是 "bootstrap-token-<token id>" 格式的
  name: bootstrap-token-c8ad9c
  namespace: kube-system
# type 必须是 'bootstrap.kubernetes.io/token'
type: bootstrap.kubernetes.io/token
stringData:
  # 供人阅读的描述,可选。
  description: "The default bootstrap token generated by 'kubelet '."
  # 令牌 ID 和秘密信息,必需。
  token-id: c8ad9c
  token-secret: 2e4d610cf3e7426e
  # 可选的过期时间字段
  expiration: "2999-03-10T03:22:11Z"
  # 允许的用法
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  # 令牌要认证为的额外组,必须以 "system:bootstrappers:" 开头
  auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
# 允许启动引导节点创建 CSR
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: create-csrs-for-bootstrapping
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:node-bootstrapper
  apiGroup: rbac.authorization.k8s.io
---
# 批复 "system:bootstrappers" 组的所有客户端CSR
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# 批复 "system:nodes" 组的客户端 CSR 续约请求
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: auto-approve-renewals-for-nodes
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver
===========================================================
#应用资源201
kubectl create -f bootstrap.secret.yaml
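Optionally verify that the Secret and the ClusterRoleBindings defined above were created (the names are taken from bootstrap.secret.yaml; this verification is an addition):

kubectl -n kube-system get secret bootstrap-token-c8ad9c
kubectl get clusterrolebinding create-csrs-for-bootstrapping auto-approve-csrs-for-group auto-approve-renewals-for-nodes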

3.2 node证书配置

#创建证书目录(所有node节点)201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} mkdir -p /etc/kubernetes/pki/etcd;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
#创建证书目录(所有node节点)201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} mkdir -p /etc/kubernetes/pki;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done

#下发证书201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP /etc/kubernetes/ --exclude=kubelet.kubeconfig root@${i}:/etc/kubernetes;\
ssh root@${i} tree /etc/kubernetes/;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done

#检查
#202
tree /etc/kubernetes/
/etc/kubernetes/
├── admin.kubeconfig
├── bootstrap-kubelet.kubeconfig
├── controller-manager.kubeconfig
├── manifests
├── pki
│   ├── admin.csr
│   ├── admin-key.pem
│   ├── admin.pem
│   ├── apiserver.csr
│   ├── apiserver-key.pem
│   ├── apiserver.pem
│   ├── ca.csr
│   ├── ca-key.pem
│   ├── ca.pem
│   ├── controller-manager.csr
│   ├── controller-manager-key.pem
│   ├── controller-manager.pem
│   ├── etcd
│   │   ├── etcd-ca.csr
│   │   ├── etcd-ca-key.pem
│   │   ├── etcd-ca.pem
│   │   ├── etcd.csr
│   │   ├── etcd-key.pem
│   │   └── etcd.pem
│   ├── front-proxy-ca.csr
│   ├── front-proxy-ca-key.pem
│   ├── front-proxy-ca.pem
│   ├── front-proxy-client.csr
│   ├── front-proxy-client-key.pem
│   ├── front-proxy-client.pem
│   ├── sa.key
│   ├── sa.pub
│   ├── scheduler.csr
│   ├── scheduler-key.pem
│   └── scheduler.pem
├── scheduler.kubeconfig
└── token.csv

3 directories, 34 files

3.3 安装kubelet

#所有node节点创建相关目录201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} mkdir -p /opt/cni/bin /etc/cni/net.d /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done

#所有node节点配置启动文件201.202.203
#注意使用相同版本的虚拟机,打开kubeadm集群查看对应参数,必要时调整
#单机编辑201
cat >/usr/lib/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/opt/kubernetes/server/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
#分发201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP /usr/lib/systemd/system/kubelet.service root@${i}:/usr/lib/systemd/system/;\
ssh root@${i} cat /usr/lib/systemd/system/kubelet.service;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done

#所有node节点配置启动服务配置文件201.202.203
#注意使用相同版本的虚拟机,打开kubeadm集群查看对应参数,必要时调整
#单机编辑201
cat >/etc/systemd/system/kubelet.service.d/10-kubelet.conf <<EOF
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--cgroup-driver=systemd --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --feature-gates="RemoveSelfLink=false,EphemeralContainers=true,ExpandCSIVolumes=true""
Environment="KUBELET_RUNTIME_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
ExecStart=
ExecStart=/opt/kubernetes/server/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_CONFIG_ARGS \$KUBELET_SYSTEM_ARGS \$KUBELET_EXTRA_ARGS \$KUBELET_RUNTIME_ARGS
EOF
#分发201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP /etc/systemd/system/kubelet.service.d/10-kubelet.conf root@${i}:/etc/systemd/system/kubelet.service.d/;\
ssh root@${i} cat /etc/systemd/system/kubelet.service.d/10-kubelet.conf;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done

#所有node节点创建kubelet的配置文件(注意修改coredns地址为10.0.0.10)201.202.203
#官网:https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/
#注意使用相同版本的虚拟机,打开kubeadm集群查看对应参数,必要时调整
#单机编辑201
cat >/etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.0.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
memorySwap: {}
nodeStatusReportFrequency: 10s
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF
#分发201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP /etc/kubernetes/kubelet-conf.yml root@${i}:/etc/kubernetes/;\
ssh root@${i} cat /etc/kubernetes/kubelet-conf.yml;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done

#所有node节点启动服务201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} systemctl daemon-reload;\
ssh root@${i} systemctl enable kubelet.service;\
ssh root@${i} systemctl start kubelet.service;\
ssh root@${i} systemctl status kubelet.service;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done

#所有node节点查看服务日志201.202.203
journalctl -xeu kubelet
#只有如下报错表明正常
Dec 24 15:16:00 master01.host.com kubelet[133745]: W1224 15:16:00.357606  133745 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
Dec 24 15:16:01 master01.host.com kubelet[133745]: E1224 15:16:01.478030  133745 kubelet.go:2184] Container runtime network not ready: NetworkReady=false re>

#查看证书有效期201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -text |grep -A 1 "Not Before:";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done
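At this point every kubelet should have completed TLS bootstrapping. A quick check on 201 (added here, not in the original text) is that all CSRs show Approved,Issued and that the three nodes have registered; they will stay NotReady until the CNI plugin is installed in chapter 4.

kubectl get csr
kubectl get nodes -o wide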

3.4 安装kube-proxy

#创建配置201(注意vip地址)
cd /root/k8s-ha-install/
git branch
#1
kubectl -n kube-system create serviceaccount kube-proxy
#2
kubectl create clusterrolebinding system:kube-proxy \
  --clusterrole system:node-proxier \
  --serviceaccount kube-system:kube-proxy
#3
SECRET=$(kubectl -n kube-system get sa/kube-proxy \
  --output=jsonpath='{.secrets[0].name}')
#4
JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
  --output=jsonpath='{.data.token}' | base64 -d)
#5
PKI_DIR=/etc/kubernetes/pki
#6
K8S_DIR=/etc/kubernetes
#7
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.44.200:7443 \
  --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig
#8
kubectl config set-credentials kubernetes \
  --token=${JWT_TOKEN} \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
#9
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=kubernetes \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
#10
kubectl config use-context kubernetes \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

#分发配置到其他node节点201
cd /root/k8s-ha-install/
git branch
ll kube-proxy/
total 12
-rw-r--r-- 1 root root  813 Dec 24 01:06 kube-proxy.conf
-rw-r--r-- 1 root root  288 Dec 24 01:06 kube-proxy.service
-rw-r--r-- 1 root root 3677 Dec 24 01:06 kube-proxy.yml
#编辑kube-proxy.conf文件修改pod网络10.244.0.0/16
#注意使用相同版本的虚拟机,打开kubeadm集群查看对应参数,必要时调整
#单机编辑201
cat >/etc/kubernetes/kube-proxy.yml <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
bindAddressHardFail: false
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 10.244.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF

#分发文件到其他node节点201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP /etc/kubernetes/kube-proxy.yml root@${i}:/etc/kubernetes;\
rsync -avzP /etc/kubernetes/kube-proxy.kubeconfig root@${i}:/etc/kubernetes;\
ssh root@${i} tree /etc/kubernetes/;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done

#检查分发结果202.203
tree /etc/kubernetes/
/etc/kubernetes/
├── admin.kubeconfig
├── bootstrap-kubelet.kubeconfig
├── controller-manager.kubeconfig
├── kubelet-conf.yml
├── kubelet.kubeconfig
├── kube-proxy.kubeconfig
├── kube-proxy.yml
├── manifests
├── pki
│   ├── admin.csr
│   ├── admin-key.pem
│   ├── admin.pem
│   ├── apiserver.csr
│   ├── apiserver-key.pem
│   ├── apiserver.pem
│   ├── ca.csr
│   ├── ca-key.pem
│   ├── ca.pem
│   ├── controller-manager.csr
│   ├── controller-manager-key.pem
│   ├── controller-manager.pem
│   ├── etcd
│   │   ├── etcd-ca.csr
│   │   ├── etcd-ca-key.pem
│   │   ├── etcd-ca.pem
│   │   ├── etcd.csr
│   │   ├── etcd-key.pem
│   │   └── etcd.pem
│   ├── front-proxy-ca.csr
│   ├── front-proxy-ca-key.pem
│   ├── front-proxy-ca.pem
│   ├── front-proxy-client.csr
│   ├── front-proxy-client-key.pem
│   ├── front-proxy-client.pem
│   ├── sa.key
│   ├── sa.pub
│   ├── scheduler.csr
│   ├── scheduler-key.pem
│   └── scheduler.pem
├── scheduler.kubeconfig
└── token.csv

3 directories, 38 files

#所有node节点配置启动文件201.202.203
#注意使用相同版本的虚拟机,打开kubeadm集群查看对应参数,必要时调整
#单机编辑201
cat >/usr/lib/systemd/system/kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/server/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy.yml \\
  --feature-gates="RemoveSelfLink=false,EphemeralContainers=true,ExpandCSIVolumes=true" \\
  --v=2
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
#分发201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
rsync -avzP /usr/lib/systemd/system/kube-proxy.service root@${i}:/usr/lib/systemd/system/;\
ssh root@${i} cat /usr/lib/systemd/system/kube-proxy.service;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done

#所有node启动服务201.202.203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} systemctl daemon-reload;\
ssh root@${i} systemctl enable kube-proxy.service;\
ssh root@${i} systemctl start kube-proxy.service;\
ssh root@${i} systemctl status kube-proxy.service;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done

#所有node查看服务日志201.202.203
journalctl -xeu kube-proxy
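To confirm kube-proxy really came up in ipvs mode, you can query the metrics port configured above (127.0.0.1:10249) and, if ipvsadm is installed (yum install -y ipvsadm), list the virtual servers it programs. This check is an addition to the original steps.

curl -s 127.0.0.1:10249/proxyMode; echo
ipvsadm -Ln | head -n 20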

第4章 k8s安装集群插件

4.1 安装calico

https://kubernetes.io/docs/concepts/cluster-administration/addons/
#201
#这里选用calico作为网络插件
cd /root/k8s-ha-install
git checkout manual-installation-v1.23.x
git branch
cd calico/
#修改文件内容,注意ip地址201
cp calico-etcd.yaml calico-etcd.yaml.bak$(date +"%F-%H-%M-%S")
sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.44.201:2379,https://192.168.44.202:2379,https://192.168.44.203:2379"#g' calico-etcd.yaml
ETCD_CA=`cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'`
sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml
sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml
POD_SUBNET="10.244.0.0/16"
sed -i 's@- name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@value: "POD_CIDR"@value: '"${POD_SUBNET}"'@g' calico-etcd.yaml

#安装201
kubectl apply -f calico-etcd.yaml
kubectl get --all-namespaces pod
kubectl get service
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   77m

kubectl get nodes
NAME                          STATUS   ROLES    AGE   VERSION
k8s-192-168-44-201.host.com   Ready    <none>   33m   v1.23.5
k8s-192-168-44-202.host.com   Ready    <none>   33m   v1.23.5
k8s-192-168-44-203.host.com   Ready    <none>   33m   v1.23.5

#修改节点角色(所有master节点)201
cd ~
for i in k8s-192-168-44-{201..203}.host.com;\
do \
kubectl label node ${i} node-role.kubernetes.io/master=;\
kubectl label node ${i} node-role.kubernetes.io/node=;\
done

#修改节点角色(所有node节点)201
cd ~
for i in $(kubectl get nodes |awk "NR>1"'{print $1}');\
do \
kubectl label node ${i} node-role.kubernetes.io/node=;\
done

#查看效果201
kubectl get nodes
NAME                          STATUS   ROLES         AGE   VERSION
k8s-192-168-44-201.host.com   Ready    master,node   37m   v1.23.5
k8s-192-168-44-202.host.com   Ready    master,node   37m   v1.23.5
k8s-192-168-44-203.host.com   Ready    master,node   37m   v1.23.5
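Before moving on, it is worth checking that every calico-node pod is Running and that a CNI config has been written on each node (the k8s-app=calico-node label comes from the calico-etcd.yaml manifest applied above; this check is an addition):

kubectl -n kube-system get pods -l k8s-app=calico-node -o wide
ls /etc/cni/net.d/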

4.2 安装coredns

#201
cd /root/k8s-ha-install/
git checkout manual-installation-v1.23.x
git branch
cd CoreDNS/
#修改coredns的IP地址10.0.0.10
vim coredns.yaml
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.10
#应用资源清单201
kubectl apply -f coredns.yaml
#查看结果
kubectl get --all-namespaces all
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE
kube-system   pod/calico-kube-controllers-5f6d4b864b-rfgxl   1/1     Running   0          55m
kube-system   pod/calico-node-4f7h9                          1/1     Running   0          55m
kube-system   pod/calico-node-4j57l                          1/1     Running   0          55m
kube-system   pod/calico-node-r8dxd                          1/1     Running   0          55m
kube-system   pod/coredns-867d46bfc6-9zt6d                   1/1     Running   0          23s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP                  10h
kube-system   service/kube-dns     ClusterIP   10.0.0.10    <none>        53/UDP,53/TCP,9153/TCP   24s

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   3         3         3       3            3           kubernetes.io/os=linux   55m

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           55m
kube-system   deployment.apps/coredns                   1/1     1            1           25s

NAMESPACE     NAME                                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-5f6d4b864b   1         1         1       55m
kube-system   replicaset.apps/coredns-867d46bfc6                   1         1         1       24s
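A simple way to confirm the cluster DNS at 10.0.0.10 is answering is to resolve the kubernetes service from a throwaway pod; busybox:1.28 is used because nslookup in newer busybox images is known to misbehave. This test is an addition and assumes the node can pull the image.

kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default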

4.3 安装metrics-server

#201
cd /root/k8s-ha-install/metrics-server/
kubectl apply -f comp.yaml
kubectl get --all-namespaces pod
kubectl top node
kubectl top pod --all-namespaces

#镜像批量保存脚本
cat image_save.sh
#!/bin/bash
#fun: save docker image list to local disk
source /etc/init.d/functions
docker image ls|awk 'NR>1''{print $1":"$2}' >/tmp/image_list.txt
for image_name in $(cat /tmp/image_list.txt)
do
  tmp_name=${image_name-}
  tmp1_name=${tmp_name//./-}
  file_name=${tmp1_name//:/-}.tar
  echo "saving image ${image_name}...."
  docker image save ${image_name} >/tmp/${file_name}
  action "image ${image_name} has saved complete!" /bin/true
done

#镜像批量导入脚本
cat image_load.sh
#!/bin/bash
#fun: load docker image list from local disk
source /etc/init.d/functions
for image_name in $(find /tmp/ -name "*.tar")
doecho "loading image ${image_name//\/tmp\//}...."docker image load -i ${image_name}action "image ${image_name//\/tmp\//} has loaded complete!" /bin/true
done
docker image ls
echo "导入完成!本次导入了 $(find /tmp/ -name "*.tar"|wc -l) 个镜像,系统共计 $(docker image ls |awk 'NR>1'|wc -l) 个镜像。"

4.4 安装ingress-nginx

可以略过,参考第9章节ingress资源详解来安装最新版!!!!

#201
地址:https://github.com/kubernetes/ingress-nginx
mkdir /data/k8s-yaml/ingress-controller -p
cd /data/k8s-yaml/ingress-controller
=====================================================
cat ingress-controller.yaml
apiVersion: v1
kind: Namespace
metadata:name: ingress-nginxlabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx---kind: ConfigMap
apiVersion: v1
metadata:name: nginx-configurationnamespace: ingress-nginxlabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx---
kind: ConfigMap
apiVersion: v1
metadata:name: tcp-servicesnamespace: ingress-nginxlabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx---
kind: ConfigMap
apiVersion: v1
metadata:name: udp-servicesnamespace: ingress-nginxlabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx---
apiVersion: v1
kind: ServiceAccount
metadata:name: nginx-ingress-serviceaccountnamespace: ingress-nginxlabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:name: nginx-ingress-clusterrolelabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx
rules:- apiGroups:- ""resources:- configmaps- endpoints- nodes- pods- secretsverbs:- list- watch- apiGroups:- ""resources:- nodesverbs:- get- apiGroups:- ""resources:- servicesverbs:- get- list- watch- apiGroups:- ""resources:- eventsverbs:- create- patch- apiGroups:- "extensions"- "networking.k8s.io"resources:- ingressesverbs:- get- list- watch- apiGroups:- "extensions"- "networking.k8s.io"resources:- ingresses/statusverbs:- update---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:name: nginx-ingress-rolenamespace: ingress-nginxlabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx
rules:- apiGroups:- ""resources:- configmaps- pods- secrets- namespacesverbs:- get- apiGroups:- ""resources:- configmapsresourceNames:# Defaults to "<election-id>-<ingress-class>"# Here: "<ingress-controller-leader>-<nginx>"# This has to be adapted if you change either parameter# when launching the nginx-ingress-controller.- "ingress-controller-leader-nginx"verbs:- get- update- apiGroups:- ""resources:- configmapsverbs:- create- apiGroups:- ""resources:- endpointsverbs:- get---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:name: nginx-ingress-role-nisa-bindingnamespace: ingress-nginxlabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx
roleRef:apiGroup: rbac.authorization.k8s.iokind: Rolename: nginx-ingress-role
subjects:- kind: ServiceAccountname: nginx-ingress-serviceaccountnamespace: ingress-nginx---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:name: nginx-ingress-clusterrole-nisa-bindinglabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: nginx-ingress-clusterrole
subjects:- kind: ServiceAccountname: nginx-ingress-serviceaccountnamespace: ingress-nginx---apiVersion: apps/v1
kind: Deployment
metadata:name: nginx-ingress-controllernamespace: ingress-nginxlabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx
spec:replicas: 1selector:matchLabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxtemplate:metadata:labels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxannotations:prometheus.io/port: "10254"prometheus.io/scrape: "true"spec:#hostNetwork: true# wait up to five minutes for the drain of connectionsterminationGracePeriodSeconds: 300serviceAccountName: nginx-ingress-serviceaccountnodeSelector:kubernetes.io/os: linuxcontainers:- name: nginx-ingress-controllerimage: lizhenliang/nginx-ingress-controller:0.30.0args:- /nginx-ingress-controller- --configmap=$(POD_NAMESPACE)/nginx-configuration- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services- --udp-services-configmap=$(POD_NAMESPACE)/udp-services- --publish-service=$(POD_NAMESPACE)/ingress-nginx- --annotations-prefix=nginx.ingress.kubernetes.iosecurityContext:allowPrivilegeEscalation: truecapabilities:drop:- ALLadd:- NET_BIND_SERVICE# www-data -> 101runAsUser: 101env:- name: POD_NAMEvalueFrom:fieldRef:fieldPath: metadata.name- name: POD_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespaceports:- name: httpcontainerPort: 80protocol: TCP- name: httpscontainerPort: 443protocol: TCPlivenessProbe:failureThreshold: 3httpGet:path: /healthzport: 10254scheme: HTTPinitialDelaySeconds: 10periodSeconds: 10successThreshold: 1timeoutSeconds: 10readinessProbe:failureThreshold: 3httpGet:path: /healthzport: 10254scheme: HTTPperiodSeconds: 10successThreshold: 1timeoutSeconds: 10lifecycle:preStop:exec:command:- /wait-shutdown---apiVersion: v1
kind: LimitRange
metadata:name: ingress-nginxnamespace: ingress-nginxlabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx
spec:limits:- min:memory: 90Micpu: 100mtype: Container
=====================================================
cat svc.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  ports:
  - name: port-1
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30030
  - name: port-2
    port: 443
    protocol: TCP
    targetPort: 443
    nodePort: 30033
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  type: NodePort
status:loadBalancer: {}
====================================================

#应用资源201
kubectl apply -f ingress-controller.yaml
kubectl apply -f svc.yaml
kubectl get -n ingress-nginx all
NAME                                            READY   STATUS    RESTARTS   AGE
pod/nginx-ingress-controller-5759f86b89-bmx9q   1/1     Running   0          4m2s

NAME                               TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
service/nginx-ingress-controller   NodePort   10.0.158.23   <none>        80:30030/TCP,443:30033/TCP   4m

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-ingress-controller   1/1     1            1           4m2s

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-ingress-controller-5759f86b89   1         1         1       4m2s

#配置代理201.202.203
cat >/etc/nginx/conf.d/od.com.conf <<EOF
upstream default_backend_traefik {
    server 192.168.44.201:30030    max_fails=3 fail_timeout=10s;
    server 192.168.44.202:30030    max_fails=3 fail_timeout=10s;
    server 192.168.44.203:30030    max_fails=3 fail_timeout=10s;
}
server {
    listen 80;
    server_name *.od.com;
    client_max_body_size 80000m;
    client_header_timeout 3600s;
    client_body_timeout 3600s;
    fastcgi_connect_timeout 3600s;
    fastcgi_send_timeout 3600s;
    fastcgi_read_timeout 3600s;
    location / {
        proxy_pass http://default_backend_traefik;
        proxy_connect_timeout 1s;
        proxy_send_timeout 3600s;
        proxy_read_timeout 3600s;
        proxy_set_header Host       \$http_host;
        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
    }
}
EOF
nginx -t
systemctl restart nginx

4.5 配置storageClass动态存储

#服务端201
mkdir /volume -p
yum install -y nfs-utils rpcbind
echo '/volume 192.168.44.0/24(insecure,rw,sync,no_root_squash,no_all_squash)' >/etc/exports
systemctl start rpcbind.service
systemctl enable rpcbind.service
systemctl start nfs-server
systemctl enable nfs-server

#客户端所有node节点202 && 203
#批量201
cd ~
for i in $(cat host_list.txt |awk '{print $1}');\
do \
ssh root@${i} yum install -y nfs-utils;\
ssh root@${i} showmount -e 192.168.44.201;\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo "${i}";\
ssh root@${i} echo "===================================================";\
ssh root@${i} echo;\
done

#创建目录(第1个存储类就创建对应编号1的目录)201
mkdir /data/k8s-yaml/storageClass/1 -p
cd /data/k8s-yaml/storageClass/1
==================================================
cat >rbac.yaml <<EOF
apiVersion: v1
kind: Namespace
metadata:creationTimestamp: nullname: nfs-storage
spec: {}
status: {}
---
apiVersion: v1
kind: ServiceAccount
metadata:name: nfs-client-provisioner# replace with namespace where provisioner is deployednamespace: nfs-storage
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:name: nfs-client-provisioner-runner
rules:- apiGroups: [""]resources: ["nodes"]verbs: ["get", "list", "watch"]- apiGroups: [""]resources: ["persistentvolumes"]verbs: ["get", "list", "watch", "create", "delete"]- apiGroups: [""]resources: ["persistentvolumeclaims"]verbs: ["get", "list", "watch", "update"]- apiGroups: ["storage.k8s.io"]resources: ["storageclasses"]verbs: ["get", "list", "watch"]- apiGroups: [""]resources: ["events"]verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:name: run-nfs-client-provisioner
subjects:- kind: ServiceAccountname: nfs-client-provisioner# replace with namespace where provisioner is deployednamespace: nfs-storage
roleRef:kind: ClusterRolename: nfs-client-provisioner-runnerapiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:name: leader-locking-nfs-client-provisioner# replace with namespace where provisioner is deployednamespace: nfs-storage
rules:- apiGroups: [""]resources: ["endpoints"]verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:name: leader-locking-nfs-client-provisioner# replace with namespace where provisioner is deployednamespace: nfs-storage
subjects:- kind: ServiceAccountname: nfs-client-provisioner# replace with namespace where provisioner is deployednamespace: nfs-storage
roleRef:kind: Rolename: leader-locking-nfs-client-provisionerapiGroup: rbac.authorization.k8s.io
EOF
===========================================================
cat >dp.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:name: nfs-client-provisioner-1 #第1个存储类就加编号1labels:app: nfs-client-provisioner-1 #第1个存储类就加编号1# replace with namespace where provisioner is deployednamespace: nfs-storage
spec:replicas: 1strategy:type: Recreateselector:matchLabels:app: nfs-client-provisioner-1 #第1个存储类就加编号1template:metadata:labels:app: nfs-client-provisioner-1 #第1个存储类就加编号1spec:serviceAccountName: nfs-client-provisionercontainers:- name: nfs-client-provisioner-1 #第1个存储类就加编号1image: quay.io/external_storage/nfs-client-provisioner:latestimagePullPolicy: IfNotPresentvolumeMounts:- name: nfs-client-rootmountPath: /persistentvolumesenv:- name: PROVISIONER_NAMEvalue: nfs-client-provisioner-1 #第1个存储类就加编号1,与StorageClass里边一致- name: NFS_SERVERvalue: 192.168.44.201 #替换为你的nfs服务器地址- name: NFS_PATHvalue: /volume #替换为你的nfs共享目录volumes:- name: nfs-client-rootnfs:server: 192.168.44.201 #替换为你的nfs服务器地址path: /volume #替换为你的nfs共享目录
EOF
===========================================================
cat >sc.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage-1 #第1个存储类就加编号1
  annotations:
    storageclass.kubernetes.io/is-default-class: "true" #设置true表示为默认的storageclass
provisioner: nfs-client-provisioner-1 #第1个存储类就加编号1,与deployment里边env PROVISIONER_NAME一致
parameters:
  archiveOnDelete: "false"
reclaimPolicy: Delete
volumeBindingMode: Immediate
EOF
==========================================================
#所有master节点201.202.203
#如果是kubeadm安装编辑
vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --feature-gates="RemoveSelfLink=false"
#如果是二进制安装,编辑/usr/lib/systemd/system/kube-apiserver.service
#确认ExecStart中已包含 --feature-gates="RemoveSelfLink=false,..."(本文2.4节的启动文件已带该参数,无需再修改)
vim /usr/lib/systemd/system/kube-apiserver.service
systemctl daemon-reload
systemctl restart kube-apiserver.service
systemctl status kube-apiserver.service
#201
kubectl apply -f rbac.yaml
kubectl apply -f dp.yaml
kubectl apply -f sc.yaml
kubectl -n nfs-storage get pod
kubectl -n nfs-storage get storageclasses.storage.k8s.io
NAME                            PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage-1           nfs-client-provisioner-1   Delete          Immediate           false                  35s

#测试201
mkdir /data/k8s-yaml/mynginx -p
cd /data/k8s-yaml/mynginx
#编辑pvc
vim pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: nginx
  name: nginx
  labels: {}
spec:
  resources:
    requests:
      storage: 10Gi
  storageClassName: managed-nfs-storage-1 #与目标storageClass一致
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
#编辑部署
vim dp.yaml
spec:
  volumes:
    - name: host-time
      hostPath:
        path: /etc/localtime
        type: ''
    - name: volume-00ye4l
      persistentVolumeClaim:
        claimName: nginx
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: host-time
          readOnly: true
          mountPath: /etc/localtime
        - name: volume-00ye4l
          mountPath: /mnt
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: IfNotPresent

#应用资源
kubectl apply -f .
#编辑域名解析(时间串加一位)201
vim /var/named/od.com.zone
mynginx   A    192.168.44.200
mynginx1   A    192.168.44.200
systemctl restart named
#查看效果
kubectl -n nginx get pvc
NAME    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS            AGE
nginx   Bound    pvc-02e534b2-c2bf-4b3b-8543-e0cb97218c34   10Gi       RWX            managed-nfs-storage-1   6m44s

kubectl get --all-namespaces pod
ll /volume/nginx-nginx-pvc-02e534b2-c2bf-4b3b-8543-e0cb97218c34/

4.6 安装kubesphere集群管理

https://kubesphere.com.cn/

#201
mkdir /data/k8s-yaml/kubesphere -p
cd /data/k8s-yaml/kubesphere/
wget https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml
#修改配置
vim cluster-configuration.yaml
etcd:
  monitoring: true        # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
  endpointIps: 192.168.44.201  # etcd cluster EndpointIps. It can be a bunch of IPs here.
  port: 2379              # etcd port.
  tlsEnable: true
redis:
  enabled: true
openldap:
  enabled: true
basicAuth:
  enabled: false
console:
  enableMultiLogin: true
alerting:                # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
  enabled: true
auditing:                # Provide a security-relevant chronological set of records, recording the sequence of activities happening on the platform, initiated by different tenants.
  enabled: true
devops:                  # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
  enabled: true
events:                  # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
  enabled: true
ruler:
  enabled: true
logging:                 # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
  enabled: true
logsidecar:
  enabled: true
metrics_server:          # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
  enabled: false
networkpolicy:           # Network policies allow network isolation within the same cluster. Make sure the CNI network plugin used by the cluster supports NetworkPolicy (Calico, Cilium, Kube-router, Romana and Weave Net do).
  enabled: true
store:
  enabled: true
servicemesh:             # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
  enabled: true
kubeedge:                # Add edge nodes to your cluster and deploy workloads on edge nodes.
  enabled: true

#将master设置为可调度,因为这平台耗费资源厉害
kubectl taint node k8s-192-168-44-201.host.com node-role.kubernetes.io/master-
kubectl describe node k8s-192-168-44-201.host.com
#将节点打标签
kubectl label nodes k8s-192-168-44-201.host.com node-role.kubernetes.io/worker=

#应用资源
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml

#查看效果
kubectl get --all-namespaces pod
kubesphere-system      ks-installer-54c6bcf76b-whxgj                0/1     ContainerCreating   0          74s

#查看安装日志不允许出现失败的
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
localhost                  : ok=31   changed=25   unreachable=0    failed=0    skipped=15   rescued=0    ignored=0

#安装完成
Start installing monitoring
Start installing multicluster
Start installing openpitrix
Start installing network
**************************************************
Waiting for all tasks to be completed ...
task network status is successful  (1/4)
task openpitrix status is successful  (2/4)
task multicluster status is successful  (3/4)
task monitoring status is successful  (4/4)
**************************************************
Collecting installation results ...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.44.201:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2021-10-03 19:21:41
#####################################################

安装完成界面效果

监控模块报错解决

#201
kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs \
  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/etcd-ca.pem \
  --from-file=etcd-client.crt=/etc/kubernetes/pki/etcd/etcd.pem \
  --from-file=etcd-client.key=/etc/kubernetes/pki/etcd/etcd-key.pem
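After creating it, you can confirm the secret exists and carries the three expected keys; KubeSphere's etcd monitoring reads the client certificate from this secret (verification added here, not part of the original text):

kubectl -n kubesphere-monitoring-system get secret kube-etcd-client-certs
kubectl -n kubesphere-monitoring-system describe secret kube-etcd-client-certs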
