Binary installation of Kubernetes (k8s) v1.26 on openEuler 22.09
This document describes how to deploy a highly available Kubernetes cluster (for k8s v1.26) in binary mode on openEuler 22.09.
Note: all operations in this document are performed as root.
1 Deployment environment
1.1 Hardware and software configuration
1. Host inventory
This document uses five Huawei ECS instances for the deployment, as listed in the table below.
Hostname | IP address | Role | Software |
k8s-master01 | 192.168.218.100 | master node | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, nginx |
k8s-master02 | 192.168.218.101 | master node | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, nginx |
k8s-master03 | 192.168.218.102 | master node | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, nginx |
k8s-node01 | 192.168.218.103 | worker node | kubelet, kube-proxy, nfs-client, nginx |
k8s-node02 | 192.168.218.104 | worker node | kubelet, kube-proxy, nfs-client, nginx |
2. Main software list
The main software involved and the corresponding versions are listed in the table below.
Software | Version |
kernel | 5.10.0 |
openEuler | 22.09 |
etcd | v3.5.7 |
containerd | 1.6.16 |
cfssl | v1.6.3 |
cni | 1.2.0 |
crictl | v1.26.0 |
3. Networks
k8s hosts: 192.168.218.0/24
service:10.96.0.0/12
pod:172.16.0.0/12
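The first address of the service CIDR (10.96.0.1) is used later as the in-cluster apiserver address, and 10.96.0.10 is reserved for CoreDNS. As a quick sanity check, here is a small POSIX-shell sketch (the `first_ip` helper is our illustration, not part of any tool) that derives the first usable address from an aligned CIDR:

```shell
# first_ip CIDR -> prints network address + 1.
# Assumes the network address is already aligned (true for 10.96.0.0/12
# and 172.16.0.0/12 above), so only the last octet needs incrementing.
first_ip() {
    base=${1%/*}                # strip the prefix length, e.g. 10.96.0.0
    oldIFS=$IFS; IFS=.
    set -- $base                # split into octets
    IFS=$oldIFS
    echo "$1.$2.$3.$(($4 + 1))"
}

first_ip 10.96.0.0/12    # -> 10.96.0.1, the kubernetes Service address
first_ip 172.16.0.0/12   # -> 172.16.0.1
```

The service and pod CIDRs must not overlap each other or the host network 192.168.218.0/24.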
1.2 Basic system configuration of the k8s nodes
1.2.1 IP configuration
Configure the IP address of each K8S node according to Table 1 by modifying or adding the following options in the network interface configuration file. master01 is shown as an example; complete all other nodes in the same way.
[root@k8s-master01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-enp4s3
TYPE=Ethernet
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.218.100
PREFIX=24
GATEWAY=192.168.218.1
DNS1=8.8.8.8
1.2.2 Activate the network connection
[root@k8s-master01 ~]# nmcli connection reload
[root@k8s-master01 ~]# nmcli connection up enp4s3
1.2.3 Set the hostname
After setting the hostname, exit the current shell and open a new one for the change to take effect.
[root@k8s-master01 ~]# hostnamectl hostname k8s-master01
[root@k8s-master01 ~]# exit
1.2.4 Configure the yum repositories
The openEuler 22.09 DVD ISO image comes with yum repositories preconfigured, so no changes are needed here. If they are missing, refer to the following.
[root@k8s-master01 ~]# vim /etc/yum.repos.d/openEuler.repo
[OS]
name=OS
baseurl=http://repo.openeuler.org/openEuler-22.09/OS/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.09/OS/$basearch/RPM-GPG-KEY-openEuler

[everything]
name=everything
baseurl=http://repo.openeuler.org/openEuler-22.09/everything/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.09/everything/$basearch/RPM-GPG-KEY-openEuler

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-22.09/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.09/OS/$basearch/RPM-GPG-KEY-openEuler

[debuginfo]
name=debuginfo
baseurl=http://repo.openeuler.org/openEuler-22.09/debuginfo/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.09/debuginfo/$basearch/RPM-GPG-KEY-openEuler

[source]
name=source
baseurl=http://repo.openeuler.org/openEuler-22.09/source/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.09/source/RPM-GPG-KEY-openEuler

[update]
name=update
baseurl=http://repo.openeuler.org/openEuler-22.09/update/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.09/OS/$basearch/RPM-GPG-KEY-openEuler
1.2.5 Install required tools
For online deployment, run:
[root@k8s-master01 ~]# dnf -y install wget psmisc vim net-tools nfs-utils telnet device-mapper-persistent-data lvm2 git tar curl bash-completion
For offline deployment, download the packages on another openEuler 22.09 host with Internet access as follows, share them with the other hosts, and build a local yum repository from them. The rest of this document assumes the offline approach.
# Download the necessary tools
dnf -y install createrepo wget

# Download all packages with their dependencies
mkdir -p /data/openEuler2203/
cd /data/openEuler2203/
repotrack createrepo wget psmisc vim net-tools nfs-utils telnet device-mapper-persistent-data lvm2 git tar curl gcc keepalived haproxy bash-completion chrony sshpass ipvsadm ipset sysstat conntrack libseccomp bash-completion make automake autoconf libtool

# Remove libseccomp
rm -rf libseccomp-*.rpm

# Download libseccomp
wget https://repo.huaweicloud.com/openeuler/openEuler-20.03-LTS-SP1/OS/x86_64/Packages/libseccomp-2.5.0-3.oe1.x86_64.rpm

# Create the yum repository metadata
createrepo -u -d /data/openEuler2203/

# Copy the downloaded packages to the offline hosts
scp -r /data/openEuler2203/ root@192.168.218.100:
scp -r /data/openEuler2203/ root@192.168.218.101:
scp -r /data/openEuler2203/ root@192.168.218.102:
scp -r /data/openEuler2203/ root@192.168.218.103:
scp -r /data/openEuler2203/ root@192.168.218.104:

# Create the repo configuration file on the offline hosts
mkdir /etc/yum.repos.d/backup
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/backup
cat > /etc/yum.repos.d/local.repo << EOF
[base]
name=base software
baseurl=file:///root/openEuler2203/
gpgcheck=0
enabled=1
EOF
# Install the downloaded packages
dnf clean all
dnf makecache
dnf -y install /root/openEuler2203/* --skip-broken
1.2.6 Download tools selectively
Run the following shell script to download the required tools. This only needs to be done on a host with Internet access; the files can then be copied to the other hosts as needed.
[root@k8s-master01 ~]# vim download.sh
#!/bin/bash

# Release pages for checking versions:
#
# https://github.com/containernetworking/plugins/releases/
# https://github.com/containerd/containerd/releases/
# https://github.com/kubernetes-sigs/cri-tools/releases/
# https://github.com/Mirantis/cri-dockerd/releases/
# https://github.com/etcd-io/etcd/releases/
# https://github.com/cloudflare/cfssl/releases/
# https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG
# https://download.docker.com/linux/static/stable/x86_64/
# https://github.com/opencontainers/runc/releases/

cni_plugins='v1.2.0'
cri_containerd_cni='1.6.16'
crictl='v1.26.0'
cri_dockerd='0.3.1'
etcd='v3.5.7'
cfssl='1.6.3'
cfssljson='1.6.3'
kubernetes_server='v1.26.1'
docker_v='20.10.23'
runc='1.1.4'

if [ ! -f "runc.amd64" ];then
  wget https://ghproxy.com/https://github.com/opencontainers/runc/releases/download/v${runc}/runc.amd64
else
  echo "file exists"
fi

if [ ! -f "docker-${docker_v}.tgz" ];then
  wget https://download.docker.com/linux/static/stable/x86_64/docker-${docker_v}.tgz
else
  echo "file exists"
fi

if [ ! -f "cni-plugins-linux-amd64-${cni_plugins}.tgz" ];then
  wget https://ghproxy.com/https://github.com/containernetworking/plugins/releases/download/${cni_plugins}/cni-plugins-linux-amd64-${cni_plugins}.tgz
else
  echo "file exists"
fi

if [ ! -f "cri-containerd-cni-${cri_containerd_cni}-linux-amd64.tar.gz" ];then
  wget https://ghproxy.com/https://github.com/containerd/containerd/releases/download/v${cri_containerd_cni}/cri-containerd-cni-${cri_containerd_cni}-linux-amd64.tar.gz
else
  echo "file exists"
fi

if [ ! -f "crictl-${crictl}-linux-amd64.tar.gz" ];then
  wget https://ghproxy.com/https://github.com/kubernetes-sigs/cri-tools/releases/download/${crictl}/crictl-${crictl}-linux-amd64.tar.gz
else
  echo "file exists"
fi

if [ ! -f "cri-dockerd-${cri_dockerd}.amd64.tgz" ];then
  wget https://ghproxy.com/https://github.com/Mirantis/cri-dockerd/releases/download/v${cri_dockerd}/cri-dockerd-${cri_dockerd}.amd64.tgz
else
  echo "file exists"
fi

if [ ! -f "kubernetes-server-linux-amd64.tar.gz" ];then
  wget https://dl.k8s.io/${kubernetes_server}/kubernetes-server-linux-amd64.tar.gz
else
  echo "file exists"
fi

if [ ! -f "etcd-${etcd}-linux-amd64.tar.gz" ];then
  wget https://ghproxy.com/https://github.com/etcd-io/etcd/releases/download/${etcd}/etcd-${etcd}-linux-amd64.tar.gz
else
  echo "file exists"
fi

if [ ! -f "cfssl" ];then
  wget https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v${cfssl}/cfssl_${cfssl}_linux_amd64 -O cfssl
else
  echo "file exists"
fi

if [ ! -f "cfssljson" ];then
  wget https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v${cfssljson}/cfssljson_${cfssljson}_linux_amd64 -O cfssljson
else
  echo "file exists"
fi

if [ ! -f "helm-canary-linux-amd64.tar.gz" ];then
  wget https://get.helm.sh/helm-canary-linux-amd64.tar.gz
else
  echo "file exists"
fi

if [ ! -f "nginx-1.22.1.tar.gz" ];then
  wget http://nginx.org/download/nginx-1.22.1.tar.gz
else
  echo "file exists"
fi

if [ ! -f "calico.yaml" ];then
  curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O
else
  echo "file exists"
fi

if [ ! -f "get_helm.sh" ];then
  curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
else
  echo "file exists"
fi
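The repeated `if [ ! -f … ]; then wget …; else echo …; fi` blocks in the script above can be factored into one helper. A hedged sketch (the `fetch` function is our own illustration, not part of the original script):

```shell
# fetch URL [OUTPUT]: download only when the target file is absent,
# mirroring the if/else pattern repeated throughout download.sh.
fetch() {
    url=$1
    out=${2:-${url##*/}}        # default output name: last URL component
    if [ -f "$out" ]; then
        echo "file exists: $out"
    else
        wget -O "$out" "$url"
    fi
}

# Example: a file that is already present is not downloaded again.
touch runc.amd64.demo
fetch https://example.invalid/runc.amd64 runc.amd64.demo
rm -f runc.amd64.demo
```

Each `wget` call in the script would then become a one-line `fetch <url> [name]` entry, making version bumps less error-prone.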
1.2.7 Disable the firewalld firewall and SELinux
Run on all hosts.
[root@k8s-master01 ~]# systemctl disable --now firewalld
[root@k8s-master01 ~]# setenforce 0
[root@k8s-master01 ~]# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
1.2.8 Disable the swap partition
Run on all hosts.
[root@k8s-master01 ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab
[root@k8s-master01 ~]# swapoff -a && sysctl -w vm.swappiness=0
[root@k8s-master01 ~]# cat /etc/fstab
……
#/dev/mapper/openeuler-swap none swap defaults 0 0
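After `swapoff -a`, the SwapTotal field in /proc/meminfo should read 0. A small sketch of that check (the `swap_disabled` helper is our own); it takes the meminfo text as an argument so the logic can be verified without root:

```shell
# swap_disabled MEMINFO_TEXT: succeed when SwapTotal is 0 kB.
swap_disabled() {
    printf '%s\n' "$1" | awk '/^SwapTotal:/ { exit ($2 == 0 ? 0 : 1) }'
}

# On a real node:
#   swap_disabled "$(cat /proc/meminfo)" && echo "swap is off"
swap_disabled "SwapTotal:             0 kB" && echo "swap is off"
```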
1.2.9 Network configuration
Run on all hosts.
[root@k8s-master01 ~]# vim /etc/NetworkManager/conf.d/calico.conf
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*

[root@k8s-master01 ~]# systemctl restart NetworkManager
1.2.10 Time synchronization
openEuler 22.09 installs and starts the chrony time synchronization service by default, so only the server and the clients need to be configured.
1. Configure the server; master01 serves as the time server here
[root@k8s-master01 ~]# vim /etc/chrony.conf
pool pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.218.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony

[root@k8s-master01 ~]# systemctl restart chronyd ; systemctl enable chronyd
2. Configure the clients
All other nodes are clients and must be configured.
[root@k8s-master02 ~]# vim /etc/chrony.conf
pool 192.168.218.100 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony

[root@k8s-master02 ~]# systemctl restart chronyd ; systemctl enable chronyd
3. Verify on a client
[root@k8s-master02 ~]# chronyc sources
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 192.168.218.100 3 6 17 35 +428ns[-4918ns] +/- 55ms
1.2.11 Configure ulimit
Run on all hosts.
[root@k8s-master01 ~]# ulimit -SHn 65535
# Edit the file and append the following at the end; the comments in the file provide help
[root@k8s-master01 ~]# vim /etc/security/limits.conf
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
1.2.12 Configure passwordless SSH login
This step is performed on master01 only.
[root@k8s-master01 ~]# ssh-keygen -f /root/.ssh/id_rsa -P ''
[root@k8s-master01 ~]# export IP="192.168.218.100 192.168.218.101 192.168.218.102 192.168.218.103 192.168.218.104"

# Replace the password below with the hosts' root password; all hosts must use the same password
[root@k8s-master01 ~]# export SSHPASS=mima1234
[root@k8s-master01 ~]# for HOST in $IP;do sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST; done
1.2.13 Install ipvsadm
IPVS (IP Virtual Server) is a kernel load-balancing technology that underpins LVS (Linux Virtual Server). ipvsadm is used to inspect virtual server state and to troubleshoot, e.g. backend assignment and connection counts.
Run on all nodes.
# Create the ipvs.conf configuration file
[root@k8s-master01 ~]# vim /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip

# Restart the service
[root@k8s-master01 ~]# systemctl restart systemd-modules-load.service

# Check the kernel modules
[root@k8s-master01 ~]# lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 188416 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 180224 3 nf_nat,nft_ct,ip_vs
nf_defrag_ipv6 24576 2 nf_conntrack,ip_vs
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 4 nf_conntrack,nf_nat,nf_tables,ip_vs
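The `lsmod | grep` check above can be scripted for all five nodes. A sketch (the `modules_loaded` helper is our own illustration) that takes lsmod-style output as text, so the logic is testable without loading modules:

```shell
# modules_loaded LSMOD_TEXT MODULE...: fail and name the first required
# module that does not appear in the first column of the lsmod output.
modules_loaded() {
    text=$1; shift
    for m in "$@"; do
        printf '%s\n' "$text" | awk '{ print $1 }' | grep -qx "$m" || {
            echo "missing: $m"
            return 1
        }
    done
    echo "all modules loaded"
}

# On a real node:
#   modules_loaded "$(lsmod)" ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
modules_loaded "ip_vs 188416 6
nf_conntrack 180224 3" ip_vs nf_conntrack
```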
1.2.14 Modify kernel parameters
Run on all nodes.
[root@k8s-master01 ~]# vim /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1

# Load and apply the configuration
sysctl --system
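After `sysctl --system`, individual keys can be spot-checked with `sysctl -n <key>`. A batch-check sketch (the `sysctl_is` helper is our own), written against `key = value` text so it can be verified offline:

```shell
# sysctl_is SYSCTL_OUTPUT KEY EXPECTED: check "key = value" lines,
# as produced by `sysctl -a` or found in /etc/sysctl.d/k8s.conf.
sysctl_is() {
    printf '%s\n' "$1" | awk -F ' *= *' -v k="$2" -v v="$3" \
        '$1 == k { found = 1; exit (v == $2 ? 0 : 1) } END { if (!found) exit 1 }'
}

# On a real node:
#   sysctl_is "$(sysctl -a 2>/dev/null)" net.ipv4.ip_forward 1 || echo "ip_forward is off"
sysctl_is "net.ipv4.ip_forward = 1" net.ipv4.ip_forward 1 && echo "ok"
```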
1.2.15 Configure local hosts resolution
Run on all nodes.
[root@k8s-master01 ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.218.100 k8s-master01
192.168.218.101 k8s-master02
192.168.218.102 k8s-master03
192.168.218.103 k8s-node01
192.168.218.104 k8s-node02
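Rather than editing /etc/hosts by hand on five nodes, the cluster entries can be generated from one list. A sketch (the `gen_hosts` helper and its IP=NAME argument format are our own illustration):

```shell
# gen_hosts IP=NAME...: print one hosts(5) line per pair.
gen_hosts() {
    for pair in "$@"; do
        printf '%s %s\n' "${pair%%=*}" "${pair#*=}"
    done
}

# Generate the cluster block; append to each node with:
#   gen_hosts ... >> /etc/hosts
gen_hosts \
    192.168.218.100=k8s-master01 \
    192.168.218.101=k8s-master02 \
    192.168.218.102=k8s-master03 \
    192.168.218.103=k8s-node01 \
    192.168.218.104=k8s-node02
```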
2 Installing the basic k8s components
Since v1.24.0, k8s no longer ships the dockershim integration for the Docker runtime; Containerd is the recommended runtime.
Complete the following operations on all hosts.
2.1 Install Containerd
2.1.1 Download or copy the packages
Copy the previously downloaded cni-plugins-linux-amd64-v1.2.0.tgz, cri-containerd-cni-1.6.16-linux-amd64.tar.gz, and crictl-v1.26.0-linux-amd64.tar.gz packages to every host.
scp cni-plugins-linux-amd64-v1.2.0.tgz root@192.168.218.100:
scp cni-plugins-linux-amd64-v1.2.0.tgz root@192.168.218.101:
scp cni-plugins-linux-amd64-v1.2.0.tgz root@192.168.218.102:
scp cni-plugins-linux-amd64-v1.2.0.tgz root@192.168.218.103:
scp cni-plugins-linux-amd64-v1.2.0.tgz root@192.168.218.104:

scp cri-containerd-cni-1.6.16-linux-amd64.tar.gz root@192.168.218.100:
scp cri-containerd-cni-1.6.16-linux-amd64.tar.gz root@192.168.218.101:
scp cri-containerd-cni-1.6.16-linux-amd64.tar.gz root@192.168.218.102:
scp cri-containerd-cni-1.6.16-linux-amd64.tar.gz root@192.168.218.103:
scp cri-containerd-cni-1.6.16-linux-amd64.tar.gz root@192.168.218.104:

scp crictl-v1.26.0-linux-amd64.tar.gz root@192.168.218.100:
scp crictl-v1.26.0-linux-amd64.tar.gz root@192.168.218.101:
scp crictl-v1.26.0-linux-amd64.tar.gz root@192.168.218.102:
scp crictl-v1.26.0-linux-amd64.tar.gz root@192.168.218.103:
scp crictl-v1.26.0-linux-amd64.tar.gz root@192.168.218.104:
2.1.2 Extract the packages
[root@k8s-master01 ~]# mkdir -p /etc/cni/net.d /opt/cni/bin

# Extract the cni-plugins binary package
[root@k8s-master01 ~]# tar xf cni-plugins-linux-amd64-v1.2.0.tgz -C /opt/cni/bin/

# Extract cri-containerd
[root@k8s-master01 ~]# tar -xzf cri-containerd-cni-*-linux-amd64.tar.gz -C /
2.1.3 Create the containerd service
[root@k8s-master01 ~]# cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
2.1.4 Configure the kernel parameters required by Containerd
[root@k8s-master01 ~]# cat <<EOF | tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Load the kernel parameters
[root@k8s-master01 ~]# sysctl --system
2.1.5 Create the Containerd configuration file
# Create the default configuration file
[root@k8s-master01 ~]# mkdir -p /etc/containerd
[root@k8s-master01 ~]# containerd config default | tee /etc/containerd/config.toml

# Modify the Containerd configuration file
[root@k8s-master01 ~]# sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
[root@k8s-master01 ~]# cat /etc/containerd/config.toml | grep SystemdCgroup
[root@k8s-master01 ~]# sed -i "s#registry.k8s.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.toml
[root@k8s-master01 ~]# cat /etc/containerd/config.toml | grep sandbox_image
[root@k8s-master01 ~]# sed -i "s#config_path\ \=\ \"\"#config_path\ \=\ \"/etc/containerd/certs.d\"#g" /etc/containerd/config.toml
[root@k8s-master01 ~]# cat /etc/containerd/config.toml | grep certs.d
[root@k8s-master01 ~]# mkdir /etc/containerd/certs.d/docker.io -pv
[root@k8s-master01 ~]# cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"

[host."https://hub-mirror.c.163.com"]
  capabilities = ["pull", "resolve"]
EOF
2.1.6 Start the service and enable it at boot
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now containerd
[root@k8s-master01 ~]# systemctl restart containerd
2.1.7 Point the crictl client at the runtime
# Extract
[root@k8s-master01 ~]# tar xf crictl-v1.26.0-linux-amd64.tar.gz -C /usr/bin/

# Generate the configuration file
[root@k8s-master01 ~]# cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

# Test
[root@k8s-master01 ~]# systemctl restart containerd
[root@k8s-master01 ~]# crictl info
2.2 Download and install k8s and etcd
This step is performed on master01 only.
2.2.1 Copy and extract the packages
# Copy the previously downloaded packages to master01
scp kubernetes-server-linux-amd64.tar.gz root@192.168.218.100:
scp etcd-v3.5.7-linux-amd64.tar.gz root@192.168.218.100:

# Extract the packages on master01
[root@k8s-master01 ~]# tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
[root@k8s-master01 ~]# tar -xf etcd*.tar.gz && mv etcd-*/etcd /usr/local/bin/ && mv etcd-*/etcdctl /usr/local/bin/

# List /usr/local/bin; there should be 17 files in total
[root@k8s-master01 ~]# ls /usr/local/bin/
containerd crictl etcdctl kube-proxy
containerd-shim critest kube-apiserver kube-scheduler
containerd-shim-runc-v1 ctd-decoder kube-controller-manager
containerd-shim-runc-v2 ctr kubectl
containerd-stress etcd kubelet
2.2.2 Check the versions
[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.26.1
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.7
API version: 3.5
2.2.3 Send the components to the other k8s nodes
[root@k8s-master01 ~]# Master='k8s-master02 k8s-master03'
[root@k8s-master01 ~]# Work='k8s-node01 k8s-node02'
[root@k8s-master01 ~]# for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
[root@k8s-master01 ~]# for NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done
[root@k8s-master01 ~]# mkdir -p /opt/cni/bin
2.3 Create the certificate files
[root@k8s-master01 ~]# mkdir pki
[root@k8s-master01 ~]# cd pki

# Create the admin-csr.json certificate request file
[root@k8s-master01 ~]# cat > admin-csr.json << EOF
{"CN": "admin","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "system:masters","OU": "Kubernetes-manual"}]
}
EOF

# Create the ca-config.json certificate configuration file
[root@k8s-master01 ~]# cat > ca-config.json << EOF
{"signing": {"default": {"expiry": "876000h"},"profiles": {"kubernetes": {"usages": ["signing","key encipherment","server auth","client auth"],"expiry": "876000h"}}}
}
EOF

# Create the etcd-ca-csr.json certificate request file
[root@k8s-master01 ~]# cat > etcd-ca-csr.json << EOF
{"CN": "etcd","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "etcd","OU": "Etcd Security"}],"ca": {"expiry": "876000h"}
}
EOF

# Create the front-proxy-ca-csr.json certificate request file
[root@k8s-master01 ~]# cat > front-proxy-ca-csr.json << EOF
{"CN": "kubernetes","key": {"algo": "rsa","size": 2048},"ca": {"expiry": "876000h"}
}
EOF

# Create the kubelet-csr.json certificate request file
[root@k8s-master01 ~]# cat > kubelet-csr.json << EOF
{"CN": "system:node:\$NODE","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "Beijing","ST": "Beijing","O": "system:nodes","OU": "Kubernetes-manual"}]
}
EOF

# Create the manager-csr.json certificate request file
[root@k8s-master01 ~]# cat > manager-csr.json << EOF
{"CN": "system:kube-controller-manager","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "system:kube-controller-manager","OU": "Kubernetes-manual"}]
}
EOF

# Create the apiserver-csr.json certificate request file
[root@k8s-master01 ~]# cat > apiserver-csr.json << EOF
{"CN": "kube-apiserver","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "Kubernetes","OU": "Kubernetes-manual"}]
}
EOF

# Create the ca-csr.json certificate request file
[root@k8s-master01 ~]# cat > ca-csr.json << EOF
{"CN": "kubernetes","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "Kubernetes","OU": "Kubernetes-manual"}],"ca": {"expiry": "876000h"}
}
EOF

# Create the etcd-csr.json certificate request file
[root@k8s-master01 ~]# cat > etcd-csr.json << EOF
{"CN": "etcd","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "etcd","OU": "Etcd Security"}]
}
EOF

# Create the front-proxy-client-csr.json certificate request file
[root@k8s-master01 ~]# cat > front-proxy-client-csr.json << EOF
{"CN": "front-proxy-client","key": {"algo": "rsa","size": 2048}
}
EOF

# Create the kube-proxy-csr.json certificate request file
[root@k8s-master01 ~]# cat > kube-proxy-csr.json << EOF
{"CN": "system:kube-proxy","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "system:kube-proxy","OU": "Kubernetes-manual"}]
}
EOF

# Create the scheduler-csr.json certificate request file
[root@k8s-master01 ~]# cat > scheduler-csr.json << EOF
{"CN": "system:kube-scheduler","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "system:kube-scheduler","OU": "Kubernetes-manual"}]
}
EOF

[root@k8s-master01 ~]# cd ..
[root@k8s-master01 ~]# mkdir bootstrap
[root@k8s-master01 ~]# cd bootstrap

# Create the bootstrap.secret.yaml file
[root@k8s-master01 bootstrap]# cat > bootstrap.secret.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-c8ad9c
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: c8ad9c
  token-secret: 2e4d610cf3e7426e
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver
EOF

[root@k8s-master01 ~]# cd ..
[root@k8s-master01 ~]# mkdir coredns
[root@k8s-master01 ~]# cd coredns

# Create the coredns.yaml file
[root@k8s-master01 coredns]# cat > coredns.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: registry.cn-hangzhou.aliyuncs.com/chenby/coredns:v1.10.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF

[root@k8s-master01 ~]# cd ..
[root@k8s-master01 ~]# mkdir metrics-server
[root@k8s-master01 ~]# cd metrics-server

# Create the metrics-server.yaml file
[root@k8s-master01 metrics-server]# cat > metrics-server.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      hostNetwork: true
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        image: registry.cn-hangzhou.aliyuncs.com/chenby/metrics-server:v0.5.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
EOF
3 Generating certificates
3.1 Prepare the tools
Copy the previously downloaded tools to master01.
# Copy the downloaded files to master01
scp cfssl root@192.168.218.100:
scp cfssljson root@192.168.218.100:

# On master01, copy the files into place and make them executable
[root@k8s-master01 ~]# cp cfssl /usr/local/bin/cfssl
[root@k8s-master01 ~]# cp cfssljson /usr/local/bin/cfssljson
[root@k8s-master01 ~]# chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
3.2 Generate the etcd certificates
Unless otherwise noted, run the following on all master nodes.
3.2.1 Create the certificate directory on all master nodes
mkdir -p /etc/etcd/ssl
3.2.2 Generate the etcd certificates on master01
[root@k8s-master01 ~]# cd pki
# Generate the etcd certificate and key (if you expect to scale out later,
# add a few spare IPs to the hostname list as reserves)
# If IPv6 is not used, the related entries may be removed or kept
[root@k8s-master01 pki]# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca

[root@k8s-master01 pki]# cfssl gencert \
-ca=/etc/etcd/ssl/etcd-ca.pem \
-ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
-config=ca-config.json \
-hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.218.100,192.168.218.101,192.168.218.102 \
-profile=kubernetes \
etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
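The comma-separated list passed to cfssl's `-hostname` flag (the SANs of the issued certificate) is easy to get wrong when nodes are added later. A sketch that builds it from a plain argument list (the `join_hosts` helper is our own illustration, not a cfssl feature):

```shell
# join_hosts ARG...: join the arguments with commas, suitable for
# cfssl's -hostname flag (the SANs of the issued certificate).
join_hosts() {
    old=$IFS
    IFS=,
    printf '%s\n' "$*"   # "$*" joins positional parameters with IFS
    IFS=$old
}

ETCD_SANS=$(join_hosts 127.0.0.1 k8s-master01 k8s-master02 k8s-master03 \
    192.168.218.100 192.168.218.101 192.168.218.102)
echo "$ETCD_SANS"
# Then pass it as: cfssl gencert ... -hostname="$ETCD_SANS" ...
```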
3.2.3 Copy the certificates to the other master nodes
[root@k8s-master01 pki]# Master='k8s-master02 k8s-master03'
[root@k8s-master01 pki]# for NODE in $Master; do ssh $NODE "mkdir -p /etc/etcd/ssl"; for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}; done; done
3.3 Generate the k8s-related certificates
Unless otherwise noted, run the following on all master nodes; note the shell prompts.
3.3.1 Create the certificate directory on all master nodes
mkdir -p /etc/kubernetes/pki
3.3.2 Generate the k8s certificates on master01
[root@k8s-master01 pki]# cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

# Generate a root certificate; extra IPs are listed as reserves for adding nodes later
# 10.96.0.1 is the first address of the service CIDR
[root@k8s-master01 pki]# cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-hostname=10.96.0.1,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,192.168.218.100,192.168.218.101,192.168.218.102,192.168.218.103,192.168.218.104,192.168.218.105,192.168.218.106,192.168.218.107,192.168.218.108,192.168.218.109 \
-profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
3.3.3 Generate the apiserver aggregation certificate on master01
[root@k8s-master01 pki]# cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca

# The following command prints a warning, which can be ignored
[root@k8s-master01 pki]# cfssl gencert \
-ca=/etc/kubernetes/pki/front-proxy-ca.pem \
-ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
-config=ca-config.json \
-profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
3.3.4 Generate the controller-manager, scheduler, and admin certificates on master01
[root@k8s-master01 pki]# cfssl gencert \-ca=/etc/kubernetes/pki/ca.pem \-ca-key=/etc/kubernetes/pki/ca-key.pem \-config=ca-config.json \-profile=kubernetes \manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager# 设置一个集群项,这里使用nginx方案实现高可用
[root@k8s-master01 pki]# kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:8443 \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# Set a context entry
[root@k8s-master01 pki]# kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# Set a user entry
[root@k8s-master01 pki]# kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/pki/controller-manager.pem \
--client-key=/etc/kubernetes/pki/controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# Set the default context
[root@k8s-master01 pki]# kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
[root@k8s-master01 pki]# cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
[root@k8s-master01 pki]# kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:8443 \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
[root@k8s-master01 pki]# kubectl config set-credentials system:kube-scheduler \
--client-certificate=/etc/kubernetes/pki/scheduler.pem \
--client-key=/etc/kubernetes/pki/scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
[root@k8s-master01 pki]# kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
[root@k8s-master01 pki]# kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
[root@k8s-master01 pki]# cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
[root@k8s-master01 pki]# kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:8443 \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
[root@k8s-master01 pki]# kubectl config set-credentials kubernetes-admin \
--client-certificate=/etc/kubernetes/pki/admin.pem \
--client-key=/etc/kubernetes/pki/admin-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
[root@k8s-master01 pki]# kubectl config set-context kubernetes-admin@kubernetes \
--cluster=kubernetes \
--user=kubernetes-admin \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
[root@k8s-master01 pki]# kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
3.2.5 Create the kube-proxy certificate on master01
[root@k8s-master01 pki]# cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy
[root@k8s-master01 pki]# kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:8443 \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
[root@k8s-master01 pki]# kubectl config set-credentials kube-proxy \
--client-certificate=/etc/kubernetes/pki/kube-proxy.pem \
--client-key=/etc/kubernetes/pki/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
[root@k8s-master01 pki]# kubectl config set-context kube-proxy@kubernetes \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
[root@k8s-master01 pki]# kubectl config use-context kube-proxy@kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
3.2.6 Create the ServiceAccount key pair (secret) on master01
[root@k8s-master01 pki]# openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
[root@k8s-master01 pki]# openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
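The sa.key/sa.pub pair signs and verifies ServiceAccount tokens, so the public key must correspond to the private key. One hedged way to confirm that is to compare the RSA moduli; the sketch below runs on a scratch key pair generated the same way as above, and on a real node you would substitute /etc/kubernetes/pki/sa.key and sa.pub.

```shell
# Generate a scratch key pair, then check that the modulus of the private
# key matches the modulus of the derived public key.
tmp=$(mktemp -d)
openssl genrsa -out "$tmp/sa.key" 2048
openssl rsa -in "$tmp/sa.key" -pubout -out "$tmp/sa.pub"
priv=$(openssl rsa -in "$tmp/sa.key" -noout -modulus)
pub=$(openssl rsa -pubin -in "$tmp/sa.pub" -noout -modulus)
[ "$priv" = "$pub" ] && echo "key pair matches"
```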
3.2.7 Send the certificates to the other master nodes
# Make sure the /etc/kubernetes/pki/ directory already exists on the other master nodes
[root@k8s-master01 pki]# for NODE in k8s-master02 k8s-master03; do for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}; done; for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}; done; done
3.2.8 Check the certificates
Verify that all 26 of the following files are present on every master node.
[root@k8s-master01 pki]# ls /etc/kubernetes/pki/
admin.csr apiserver.csr ca.csr controller-manager.csr front-proxy-ca.csr front-proxy-client.csr kube-proxy.csr sa.key scheduler-key.pem
admin-key.pem apiserver-key.pem ca-key.pem controller-manager-key.pem front-proxy-ca-key.pem front-proxy-client-key.pem kube-proxy-key.pem sa.pub scheduler.pem
admin.pem admin.pem apiserver.pem ca.pem controller-manager.pem front-proxy-ca.pem front-proxy-client.pem kube-proxy.pem scheduler.csr
[root@k8s-master01 pki]# ls /etc/kubernetes/pki/ | wc -l
26
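`ls | wc -l` counts every directory entry, including subdirectories. A stricter hypothetical variant counts only regular files with find; it is demonstrated here on a scratch directory, and on a master node the target would be /etc/kubernetes/pki/.

```shell
# Count only regular files, ignoring subdirectories such as etcd/.
dir=$(mktemp -d)
mkdir "$dir/etcd"                      # a subdirectory that plain ls would also count
for f in ca.pem ca-key.pem ca.csr; do touch "$dir/$f"; done
count=$(find "$dir" -maxdepth 1 -type f | wc -l)
echo "$count"   # -> 3
```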
4 k8s system component configuration
4.1 etcd configuration
4.1.1 master01 configuration
[root@k8s-master01 ~]# cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.218.100:2380'
listen-client-urls: 'https://192.168.218.100:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.218.100:2380'
advertise-client-urls: 'https://192.168.218.100:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.218.100:2380,k8s-master02=https://192.168.218.101:2380,k8s-master03=https://192.168.218.102:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
4.1.2 master02 configuration
[root@k8s-master02 ~]# cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.218.101:2380'
listen-client-urls: 'https://192.168.218.101:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.218.101:2380'
advertise-client-urls: 'https://192.168.218.101:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.218.100:2380,k8s-master02=https://192.168.218.101:2380,k8s-master03=https://192.168.218.102:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
4.1.3 master03 configuration
[root@k8s-master03 ~]# cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.218.102:2380'
listen-client-urls: 'https://192.168.218.102:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.218.102:2380'
advertise-client-urls: 'https://192.168.218.102:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.218.100:2380,k8s-master02=https://192.168.218.101:2380,k8s-master03=https://192.168.218.102:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
4.2 Create the services
Complete this section on all master nodes.
4.2.1 Create etcd.service and start it
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF
4.2.2 Create the etcd certificate directory
mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd
4.2.3 Check etcd status
Run this check on every master node.
[root@k8s-master01 ~]# export ETCDCTL_API=3
[root@k8s-master01 ~]# etcdctl --endpoints="192.168.218.102:2379,192.168.218.101:2379,192.168.218.100:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
+----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.218.102:2379 | d43da0d39aaa95be | 3.5.7 | 20 kB | false | false | 2 | 9 | 9 | |
| 192.168.218.101:2379 | 11f94c330b77381a | 3.5.7 | 20 kB | false | false | 2 | 9 | 9 | |
| 192.168.218.100:2379 | ee015994ddee02d0 | 3.5.7 | 20 kB | true | false | 2 | 9 | 9 | |
+----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
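When scripting against the table output above, the leader endpoint can be extracted with awk keyed on the IS LEADER column. The sketch below inlines a trimmed copy of the table for illustration; on a live cluster you would pipe the etcdctl command above into the same awk program.

```shell
# Pick the endpoint whose IS LEADER column (6th |-separated field) is true.
table='| 192.168.218.102:2379 | d43da0d39aaa95be | 3.5.7 | 20 kB | false |
| 192.168.218.101:2379 | 11f94c330b77381a | 3.5.7 | 20 kB | false |
| 192.168.218.100:2379 | ee015994ddee02d0 | 3.5.7 | 20 kB | true |'
leader=$(printf '%s\n' "$table" | awk -F'|' '$6 ~ /true/ { gsub(/ /, "", $2); print $2 }')
echo "$leader"   # -> 192.168.218.100:2379
```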
5 Configuring high availability
High availability can be implemented with nginx or with haproxy+keepalived; this document uses the nginx scheme.
5.1 Build and install nginx from source
Copy the previously downloaded nginx package to all nodes, then extract, build, and install it.
# Copy the previously downloaded nginx package to all nodes
scp nginx-1.22.1.tar.gz root@192.168.218.100:
scp nginx-1.22.1.tar.gz root@192.168.218.101:
scp nginx-1.22.1.tar.gz root@192.168.218.102:
scp nginx-1.22.1.tar.gz root@192.168.218.103:
scp nginx-1.22.1.tar.gz root@192.168.218.104:
# Extract the package on each node
tar xvf nginx-*.tar.gz
cd nginx-1.22.1
# Build and install on each node
./configure --with-stream --without-http --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
make && make install
5.2 Write the configuration and service files
Run the following on all hosts.
# Write the nginx configuration file
cat > /usr/local/nginx/conf/kube-nginx.conf <<EOF
worker_processes 1;
events {
    worker_connections  1024;
}
stream {
    upstream backend {
        hash \$remote_addr consistent;
        server 192.168.218.100:6443        max_fails=3 fail_timeout=30s;
        server 192.168.218.101:6443        max_fails=3 fail_timeout=30s;
        server 192.168.218.102:6443        max_fails=3 fail_timeout=30s;
    }
    server {
        listen 127.0.0.1:8443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
EOF
# Write the systemd unit file
cat > /etc/systemd/system/kube-nginx.service <<EOF
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStartPre=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -t
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx
ExecReload=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -s reload
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
# Enable the service at boot, start it, and check that its status is running
systemctl enable --now kube-nginx
systemctl restart kube-nginx
systemctl status kube-nginx
6 k8s component configuration
Run the following command on all k8s nodes to create the required directories.
mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes
6.1 Create the apiserver service
6.1.1 master01 configuration
[root@k8s-master01 ~]# cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--v=2 \\
--allow-privileged=true \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--advertise-address=192.168.218.100 \\
--service-cluster-ip-range=10.96.0.0/12 \\
--service-node-port-range=30000-32767 \\
--etcd-servers=https://192.168.218.100:2379,https://192.168.218.101:2379,https://192.168.218.102:2379 \\
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
--client-ca-file=/etc/kubernetes/pki/ca.pem \\
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\
--service-account-key-file=/etc/kubernetes/pki/sa.pub \\
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
--authorization-mode=Node,RBAC \\
--enable-bootstrap-token-auth=true \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
--requestheader-allowed-names=aggregator \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true
# --feature-gates=IPv6DualStack=true
# --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF
6.1.2 master02 configuration
[root@k8s-master02 ~]# cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--v=2 \\
--allow-privileged=true \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--advertise-address=192.168.218.101 \\
--service-cluster-ip-range=10.96.0.0/12 \\
--service-node-port-range=30000-32767 \\
--etcd-servers=https://192.168.218.100:2379,https://192.168.218.101:2379,https://192.168.218.102:2379 \\
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
--client-ca-file=/etc/kubernetes/pki/ca.pem \\
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\
--service-account-key-file=/etc/kubernetes/pki/sa.pub \\
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
--authorization-mode=Node,RBAC \\
--enable-bootstrap-token-auth=true \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
--requestheader-allowed-names=aggregator \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true
# --feature-gates=IPv6DualStack=true
# --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF
6.1.3 master03 configuration
[root@k8s-master03 ~]# cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--v=2 \\
--allow-privileged=true \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--advertise-address=192.168.218.102 \\
--service-cluster-ip-range=10.96.0.0/12 \\
--service-node-port-range=30000-32767 \\
--etcd-servers=https://192.168.218.100:2379,https://192.168.218.101:2379,https://192.168.218.102:2379 \\
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
--client-ca-file=/etc/kubernetes/pki/ca.pem \\
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\
--service-account-key-file=/etc/kubernetes/pki/sa.pub \\
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
--authorization-mode=Node,RBAC \\
--enable-bootstrap-token-auth=true \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
--requestheader-allowed-names=aggregator \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true
# --feature-gates=IPv6DualStack=true
# --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF
6.1.4 Start the apiserver (all master nodes)
systemctl daemon-reload && systemctl enable --now kube-apiserver
# Check the status and make sure it is normal on every node
systemctl status kube-apiserver
6.2 Configure the kube-controller-manager service
6.2.1 Configure all master nodes (the configuration is identical)
# 172.16.0.0/12 is the pod network; change it to your own CIDR as needed
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--v=2 \\
--bind-address=127.0.0.1 \\
--root-ca-file=/etc/kubernetes/pki/ca.pem \\
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \\
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
--leader-elect=true \\
--use-service-account-credentials=true \\
--node-monitor-grace-period=40s \\
--node-monitor-period=5s \\
--pod-eviction-timeout=2m0s \\
--controllers=*,bootstrapsigner,tokencleaner \\
--allocate-node-cidrs=true \\
--service-cluster-ip-range=10.96.0.0/12 \\
--cluster-cidr=172.16.0.0/12 \\
--node-cidr-mask-size-ipv4=24 \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
6.2.2 Start kube-controller-manager and verify that its status is normal
systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager
6.3 Configure the kube-scheduler service
6.3.1 Configure all master nodes (the configuration is identical)
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--v=2 \\
--bind-address=127.0.0.1 \\
--leader-elect=true \\
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
6.3.2 Start the service and verify that its status is normal
systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl status kube-scheduler
7 Configuring TLS Bootstrapping
TLS bootstrapping is a mechanism that simplifies the steps an administrator must take to set up mutually authenticated (two-way TLS) communication between the kubelet and the apiserver.
7.1 Configure on master01
[root@k8s-master01 ~]# cd bootstrap
[root@k8s-master01 bootstrap]# kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true --server=https://127.0.0.1:8443 \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
# The token is defined in bootstrap.secret.yaml; if you change it, change it there as well
[root@k8s-master01 bootstrap]# kubectl config set-credentials tls-bootstrap-token-user \
--token=c8ad9c.2e4d610cf3e7426e \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
[root@k8s-master01 bootstrap]# kubectl config set-context tls-bootstrap-token-user@kubernetes \
--cluster=kubernetes \
--user=tls-bootstrap-token-user \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
[root@k8s-master01 bootstrap]# kubectl config use-context tls-bootstrap-token-user@kubernetes \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
[root@k8s-master01 bootstrap]# mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
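Bootstrap tokens have a fixed format, a 6-character id and a 16-character secret separated by a dot, both lowercase alphanumeric; the apiserver rejects malformed tokens. A quick hedged format check on the token used above:

```shell
# Validate the bootstrap token format ([a-z0-9]{6}.[a-z0-9]{16}).
token="c8ad9c.2e4d610cf3e7426e"
if printf '%s' "$token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "token format OK"
else
  echo "invalid bootstrap token" >&2
fi
```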
7.2 Check the cluster status
If this step succeeds, continue with the rest of the procedure.
[root@k8s-master01 bootstrap]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-1 Healthy {"health":"true","reason":""}
etcd-0 Healthy {"health":"true","reason":""}
etcd-2               Healthy   {"health":"true","reason":""}
# Be sure to run the following command; do not forget it!
[root@k8s-master01 bootstrap]# kubectl create -f bootstrap.secret.yaml
secret/bootstrap-token-c8ad9c created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
8 Node configuration
8.1 Copy the certificates from master01 to the other nodes
[root@k8s-master01 ~]# cd /etc/kubernetes/
[root@k8s-master01 kubernetes]# for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do ssh $NODE mkdir -p /etc/kubernetes/pki; for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig kube-proxy.kubeconfig; do scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}; done; done
8.2 kubelet configuration
8.2.1 Configure the kubelet service on all nodes
mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \\
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
--config=/etc/kubernetes/kubelet-conf.yml \\
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \\
--node-labels=node.kubernetes.io/node=
# --feature-gates=IPv6DualStack=true
# --container-runtime=remote
# --runtime-request-timeout=15m
# --cgroup-driver=systemd

[Install]
WantedBy=multi-user.target
EOF
8.2.2 Create the kubelet configuration file on all nodes
cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF
8.2.3 Start the kubelet on all nodes
systemctl daemon-reload
systemctl restart kubelet
systemctl enable --now kubelet
8.2.4 Check the cluster
[root@k8s-master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready <none> 10s v1.26.1
k8s-master02 Ready <none> 8s v1.26.1
k8s-master03 Ready <none> 6s v1.26.1
k8s-node01 Ready <none> 4s v1.26.1
k8s-node02 Ready <none> 2s v1.26.1
8.3 kube-proxy configuration
8.3.1 Send the kubeconfig to the other nodes
[root@k8s-master01 ~]# for NODE in k8s-master02 k8s-master03; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done
[root@k8s-master01 ~]# for NODE in k8s-node01 k8s-node02; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done
8.3.2 Add the kube-proxy service file on all nodes
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/etc/kubernetes/kube-proxy.yaml \\
--v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
8.3.3 Add the kube-proxy configuration on all nodes
cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF
8.3.4 Start kube-proxy on all nodes
systemctl daemon-reload
systemctl restart kube-proxy
systemctl enable --now kube-proxy
9 Installing the network plugin
Check the version of libseccomp and make sure it is higher than 2.4; otherwise the network plugin cannot be installed.
Perform the steps in this section on master01 only.
# Copy the previously downloaded package to master01
scp runc.amd64 root@192.168.218.100:
# Upgrade runc
[root@k8s-master01 ~]# install -m 755 runc.amd64 /usr/local/sbin/runc
[root@k8s-master01 ~]# cp -p /usr/local/sbin/runc /usr/local/bin/runc
[root@k8s-master01 ~]# cp -p /usr/local/sbin/runc /usr/bin/runc
# Check the current version
[root@k8s-master01 ~]# runc -v
runc version 1.1.4
commit: v1.1.4-0-g5fd4c4d1
spec: 1.0.2-dev
go: go1.17.10
libseccomp: 2.5.4
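The libseccomp requirement above (higher than 2.4) boils down to a dotted-version comparison, which `sort -V` handles. A hedged helper sketch, with `ver_ge` being an illustrative name:

```shell
# ver_ge A B: succeed when version A >= version B (sort -V ordering).
ver_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}
# The 2.5.4 shown in the runc output above satisfies the >= 2.4 requirement
ver_ge "2.5.4" "2.4" && echo "libseccomp version OK"
```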
9.1 Installing Calico
Before proceeding, it is advisable to take a snapshot so that you can roll back if problems occur later.
9.1.1 Adjust the calico network range
First download the calico.yaml file from the specified site.
If your pod network is 192.168.0.0/16, modify calico.yaml to match your environment: in the ConfigMap named calico-config, set the value of etcd_endpoints to the IP addresses and port of the etcd servers. Otherwise no changes are needed; Calico autodetects the CIDR from the running configuration.
The pod network in this document is 172.16.0.0/12, so calico.yaml does not need to be modified.
# Download the calico.yaml file
[root@k8s-master01 ~]# mkdir yaml
[root@k8s-master01 ~]# cd yaml
[root@k8s-master01 yaml]# curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico-etcd.yaml -o calico.yaml
# Apply the manifest
[root@k8s-master01 yaml]# kubectl apply -f calico.yaml
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
...some output omitted...
9.1.2 Check the container status
Calico initialization is fairly slow; wait patiently (roughly ten minutes), then run the following command and confirm that the containers are in the Running state.
[root@k8s-master01 ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-57b57c56f-drxz8 1/1 Running 0 15m
kube-system calico-node-8xz7p 1/1 Running 0 15m
kube-system calico-node-j5h47 1/1 Running 0 15m
kube-system calico-node-lqtmn 1/1 Running 0 15m
kube-system calico-node-qhrx6 1/1 Running 0 15m
kube-system calico-node-vfvwt 1/1 Running 0 15m
9.2 Installing cilium
Perform the steps in this section on master01 only.
9.2.1 Install helm
# Copy the previously downloaded helm-canary-linux-amd64.tar.gz to master01
scp helm-canary-linux-amd64.tar.gz root@192.168.218.100:
# On master01, extract the package
[root@k8s-master01 ~]# tar xvf helm-canary-linux-amd64.tar.gz
[root@k8s-master01 ~]# cp linux-amd64/helm /usr/local/bin/
9.2.2 Install cilium
# Add the repository
[root@k8s-master01 ~]# helm repo add cilium https://helm.cilium.io
# Pull and extract the cilium chart; for an offline install, copy the pulled cilium-*.tgz to master01
[root@k8s-master01 ~]# helm pull cilium/cilium
[root@k8s-master01 ~]# tar xvf cilium-*.tgz
# Install with default values
[root@k8s-master01 ~]# helm install cilium ./cilium/ -n kube-system
# The matching uninstall command is: helm uninstall cilium -n kube-system
# Alternatively, enable route information and the monitoring plugins; either this
# command or the default install above may be used, and this document uses the following
[root@k8s-master01 ~]# helm install cilium cilium/cilium --namespace kube-system --set hubble.relay.enabled=true --set hubble.ui.enabled=true --set prometheus.enabled=true --set operator.prometheus.enabled=true --set hubble.enabled=true --set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}"
9.2.3 Check the containers
After the previous step, wait a minute or two, then run the following command and confirm that the containers are Running.
[root@k8s-master01 ~]# kubectl get pod -A | grep cil
cilium-monitoring grafana-698bbd89d5-7z86j 1/1 Running 0 57m
cilium-monitoring prometheus-75fd6464d8-hlz72 1/1 Running 0 57m
kube-system cilium-5w885 1/1 Running 0 3m59s
kube-system cilium-c4skr 1/1 Running 0 3m59s
kube-system cilium-dmqpb 1/1 Running 0 3m59s
kube-system cilium-operator-65b585d467-cxqrv 1/1 Running 0 3m59s
kube-system cilium-operator-65b585d467-nrk26 1/1 Running 0 3m59s
kube-system cilium-ps9fl 1/1 Running 0 3m59s
kube-system cilium-z42rr 1/1 Running 0 3m59s
9.2.4 Download the dedicated monitoring dashboards
[root@k8s-master01 ~]# cd yaml
[root@k8s-master01 yaml]# wget https://raw.githubusercontent.com/cilium/cilium/1.12.1/examples/kubernetes/addons/prometheus/monitoring-example.yaml
[root@k8s-master01 yaml]# kubectl apply -f monitoring-example.yaml
namespace/cilium-monitoring created
serviceaccount/prometheus-k8s created
configmap/grafana-config created
...some output omitted...
9.2.5 Downloading and deploying the connectivity test
[root@k8s-master01 yaml]# wget https://raw.githubusercontent.com/cilium/cilium/master/examples/kubernetes/connectivity-check/connectivity-check.yaml
[root@k8s-master01 yaml]# sed -i "s#google.com#oiox.cn#g" connectivity-check.yaml
[root@k8s-master01 yaml]# kubectl apply -f connectivity-check.yaml
deployment.apps/echo-a created
deployment.apps/echo-b created
deployment.apps/echo-b-host created
...some output omitted...
9.2.6 Checking the pods
[root@k8s-master01 yaml]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
cilium-monitoring grafana-698bbd89d5-7z86j 1/1 Running 0 58m
cilium-monitoring prometheus-75fd6464d8-hlz72 1/1 Running 0 58m
default echo-a-568cb98744-nnw59 1/1 Running 0 54m
default echo-b-64db4dfd5d-7rj86 1/1 Running 0 54m
default echo-b-host-6b7bb88666-n722x 1/1 Running 0 54m
default host-to-b-multi-node-headless-5458c6bff-d8cph 0/1 Running 19 (43s ago) 54m
default pod-to-a-allowed-cnp-55cb67b5c5-m9hjm 0/1 Running 19 (54s ago) 54m
default pod-to-a-c9b8bf6f7-2f2vv 0/1 Running 19 (24s ago) 54m
default pod-to-a-denied-cnp-85fb9df657-czx7x 1/1 Running 0 54m
default pod-to-b-intra-node-nodeport-55784cc5c9-bdxrr 0/1 Running 19 (13s ago) 54m
default pod-to-b-multi-node-clusterip-5c46dd6677-6zr97 0/1 Running 19 (4s ago) 54m
default pod-to-b-multi-node-headless-748dfc6fd7-v7r2l 0/1 Running 19 (24s ago) 54m
default pod-to-b-multi-node-nodeport-f6464499f-bj2j6 0/1 Running 19 (43s ago) 54m
default pod-to-external-1111-96c489555-zmp42 0/1 Running 19 (24s ago) 54m
default pod-to-external-fqdn-allow-google-cnp-57694dc7df-7ldpf 0/1 Running 19 (14s ago) 54m
kube-system calico-kube-controllers-57b57c56f-65nqp 1/1 Running 0 3h14m
kube-system calico-node-8xz7p 1/1 Running 0 4h28m
kube-system calico-node-j5h47 1/1 Running 0 4h28m
kube-system calico-node-lqtmn 1/1 Running 0 4h28m
kube-system calico-node-q8xcg 1/1 Running 0 3h14m
kube-system calico-node-vfvwt 1/1 Running 0 4h28m
kube-system cilium-5w885 1/1 Running 0 5m20s
kube-system cilium-c4skr 1/1 Running 0 5m20s
kube-system cilium-dmqpb 1/1 Running 0 5m20s
kube-system cilium-operator-65b585d467-cxqrv 1/1 Running 0 5m20s
kube-system cilium-operator-65b585d467-nrk26 1/1 Running 0 5m20s
kube-system cilium-ps9fl 1/1 Running 0 5m20s
kube-system cilium-z42rr 1/1 Running 0 5m20s
kube-system hubble-relay-69d66476f4-z2fmn 1/1 Running 0 5m20s
kube-system hubble-ui-59588bd5c7-kx4s8 2/2 Running 0 5m20s
9.2.7 Changing the service type to NodePort
[root@k8s-master01 yaml]# kubectl edit svc -n kube-system hubble-ui
# Change "type: ClusterIP" on the third line from the bottom to "type: NodePort"; the editor works like vim, so save and quit with :wq
[root@k8s-master01 yaml]# kubectl edit svc -n cilium-monitoring grafana
# Change "type: ClusterIP" on the third line from the bottom to "type: NodePort"
[root@k8s-master01 yaml]# kubectl edit svc -n cilium-monitoring prometheus
# Change "type: ClusterIP" on the third line from the bottom to "type: NodePort"
9.2.8 Checking the ports
Check the ports of the grafana, prometheus, and hubble-ui services; in this example they are 30550, 30123, and 30092.
[root@k8s-master01 yaml]# kubectl get svc -A | grep monit
cilium-monitoring grafana NodePort 10.103.26.114 <none> 3000:30550/TCP 43m
cilium-monitoring prometheus NodePort 10.98.21.159 <none> 9090:30123/TCP 43m
[root@k8s-master01 yaml]# kubectl get svc -A | grep hubble
kube-system hubble-metrics ClusterIP None <none> 9965/TCP 7m55s
kube-system hubble-peer ClusterIP 10.105.135.226 <none> 443/TCP 7m55s
kube-system hubble-relay ClusterIP 10.109.191.76 <none> 80/TCP 7m55s
kube-system hubble-ui NodePort 10.103.19.102 <none> 80:30092/TCP 7m55s
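Instead of reading the NodePort values off the tables by eye, they can be extracted programmatically. A small sketch; it operates on a captured copy of the output above (stored in a variable so it runs anywhere), splitting the PORT(S) column of the form <port>:<nodePort>/TCP:

```shell
# Captured sample of "kubectl get svc -A" lines for NodePort services
sample='cilium-monitoring grafana NodePort 10.103.26.114 <none> 3000:30550/TCP 43m
cilium-monitoring prometheus NodePort 10.98.21.159 <none> 9090:30123/TCP 43m
kube-system hubble-ui NodePort 10.103.19.102 <none> 80:30092/TCP 7m55s'

# For each NodePort service, split the PORT(S) field on ":" and "/" and
# print "<service> -> <nodePort>"
echo "$sample" | awk '$3 == "NodePort" {
  split($6, p, /[:\/]/)   # p[1]=port, p[2]=nodePort, p[3]=protocol
  printf "%s -> %s\n", $2, p[2]
}'
```

Against a live cluster the same awk pipeline can be fed directly from `kubectl get svc -A`.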
9.2.9 Accessing the dashboards
Access ports 30550, 30123, and 30092 on master01 in turn. If the cluster is deployed on local virtual machines, the URLs are:
http://<master01 IP address>:30550
http://<master01 IP address>:30123
http://<master01 IP address>:30092
This document deploys on Huawei ECS, so master01's EIP is used here; the pages are shown in the figures below.
10 Installing CoreDNS
This section is performed on the master01 node only.
10.1 Checking or modifying the file
The clusterIP can be checked or modified in coredns.yaml; this document uses the default clusterIP.
[root@k8s-master01 ~]# cd coredns/
[root@k8s-master01 coredns]# cat coredns.yaml | grep clusterIP:
  clusterIP: 10.96.0.10
10.2 Installing CoreDNS
[root@k8s-master01 coredns]# kubectl create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
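The clusterIP above (10.96.0.10) must agree with the clusterDNS entry in the kubelet configuration on every node, otherwise pods are handed a resolver address that does not exist. A fragment of the corresponding KubeletConfiguration, shown for cross-checking (field names are from the kubelet.config.k8s.io/v1beta1 API):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# clusterDNS must match the clusterIP of the kube-dns Service created above
clusterDNS:
  - 10.96.0.10
clusterDomain: cluster.local
```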
11 Installing Metrics Server
This section is performed on the master01 node only.
11.1 Installing Metrics Server
In recent Kubernetes versions, system resource metrics are collected by Metrics Server, which reports memory, disk, CPU, and network usage for nodes and Pods.
[root@k8s-master01 ~]# cd metrics-server
[root@k8s-master01 metrics-server]# kubectl apply -f metrics-server.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
11.2 Checking the status
Wait a moment, then check the status.
[root@k8s-master01 metrics-server]# kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-master01 113m 0% 3245Mi 10%
k8s-master02 129m 0% 2086Mi 6%
k8s-master03 129m 0% 2274Mi 7%
k8s-node01 86m 0% 1373Mi 4%
k8s-node02 179m 1% 1739Mi 5%
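The `kubectl top node` table lends itself to the same kind of scripted check. A sketch that flags any node above a memory threshold; it runs against a captured sample of the output above, and the 8% threshold is chosen arbitrarily for illustration:

```shell
# Captured sample of "kubectl top node" data rows
top='k8s-master01 113m 0% 3245Mi 10%
k8s-master02 129m 0% 2086Mi 6%
k8s-node02 179m 1% 1739Mi 5%'

# Strip the "%" from the MEMORY% column and report nodes above the threshold
echo "$top" | awk '{ mem = $5; sub(/%/, "", mem); if (mem + 0 > 8) print $1, "memory", $5 }'
```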
12 Cluster verification
This section is performed on the master01 node only.
12.1 Deploying pod resources
12.1.1 Deploying the pod
[root@k8s-master01 ~]# cat<<EOF | kubectl apply -f -
> apiVersion: v1
> kind: Pod
> metadata:
> name: busybox
> namespace: default
> spec:
> containers:
> - name: busybox
> image: docker.io/library/busybox:1.28.3
> command:
> - sleep
> - "3600"
> imagePullPolicy: IfNotPresent
> restartPolicy: Always
> EOF
12.1.2 Checking the pod
[root@k8s-master01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 20s
...other pod information omitted...
If the image pull fails, run "kubectl describe pod busybox" to see why.
12.2 Resolving kubernetes in the default namespace from a pod
# Check the svc
[root@k8s-master01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echo-a ClusterIP 10.104.127.34 <none> 8080/TCP 18h
echo-b NodePort 10.103.67.31 <none> 8080:31414/TCP 18h
echo-b-headless ClusterIP None <none> 8080/TCP 18h
echo-b-host-headless ClusterIP None <none> <none> 18h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 27h
[root@k8s-master01 ~]# kubectl exec busybox -n default -- nslookup kubernetes
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
If nslookup reports ** server can't find kubernetes.svc.cluster.local: NXDOMAIN, try a different busybox image version; this is very likely a busybox version issue.
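The short name "kubernetes" resolves because kubelet writes a search list into each pod's /etc/resolv.conf, and the resolver tries each suffix in order. A sketch of that expansion; the suffixes assume the default cluster.local domain and a pod in the default namespace:

```shell
# Emulate the resolv.conf search-list expansion for a pod in "default":
# search default.svc.cluster.local svc.cluster.local cluster.local
expand_name() {
  name=$1
  for suffix in default.svc.cluster.local svc.cluster.local cluster.local; do
    echo "${name}.${suffix}"
  done
}

# The first candidate is the one that matched in the nslookup output above
expand_name kubernetes | head -n 1   # kubernetes.default.svc.cluster.local
```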
12.3 Testing cross-namespace resolution
[root@k8s-master01 ~]# kubectl exec busybox -n default -- nslookup kube-dns.kube-system
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
12.4 Testing that every node can reach the kubernetes service on port 443
[root@k8s-master01 ~]# telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.
12.5 Testing that every node can reach the kube-dns service on port 53
[root@k8s-master01 ~]# telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.
[root@k8s-master01 ~]# curl 10.96.0.10:53
curl: (52) Empty reply from server
# The empty reply is expected: it shows the port is open, but DNS does not speak HTTP
12.6 Testing pod-to-pod connectivity
[root@k8s-master01 ~]# kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 1/1 Running 0 8m53s 10.0.2.237 k8s-master03 <none> <none>
echo-a-568cb98744-nnw59 1/1 Running 0 18h 10.0.3.127 k8s-master01 <none> <none>
echo-b-64db4dfd5d-7rj86 1/1 Running 0 18h 10.0.2.175 k8s-master03 <none> <none>
echo-b-host-6b7bb88666-n722x 1/1 Running 0 18h 192.168.218.102 k8s-master03 <none> <none>
host-to-b-multi-node-clusterip-6cfc94d779-96fw6 1/1 Running 18 (17h ago) 18h 192.168.218.101 k8s-master02 <none> <none>
host-to-b-multi-node-headless-5458c6bff-d8cph 1/1 Running 26 (17h ago) 18h 192.168.218.104 k8s-node02 <none> <none>
pod-to-a-allowed-cnp-55cb67b5c5-m9hjm 1/1 Running 26 (17h ago) 18h 10.0.1.145 k8s-master02 <none> <none>
pod-to-a-c9b8bf6f7-2f2vv 1/1 Running 26 (17h ago) 18h 10.0.0.190 k8s-node02 <none> <none>
pod-to-a-denied-cnp-85fb9df657-czx7x 1/1 Running 0 18h 10.0.4.175 k8s-node01 <none> <none>
pod-to-b-intra-node-nodeport-55784cc5c9-bdxrr 1/1 Running 26 (17h ago) 18h 10.0.2.51 k8s-master03 <none> <none>
pod-to-b-multi-node-clusterip-5c46dd6677-6zr97 1/1 Running 26 (17h ago) 18h 10.0.3.76 k8s-master01 <none> <none>
pod-to-b-multi-node-headless-748dfc6fd7-v7r2l 1/1 Running 26 (17h ago) 18h 10.0.3.224 k8s-master01 <none> <none>
pod-to-b-multi-node-nodeport-f6464499f-bj2j6 1/1 Running 26 (17h ago) 18h 10.0.4.66 k8s-node01 <none> <none>
pod-to-external-1111-96c489555-zmp42 0/1 Running 321 (55s ago) 18h 10.0.1.198 k8s-master02 <none> <none>
pod-to-external-fqdn-allow-google-cnp-57694dc7df-7ldpf 0/1 Running 321 (5s ago) 18h 10.0.4.132 k8s-node01 <none> <none>
[root@k8s-master01 ~]# kubectl get pod -n kube-system -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-57b57c56f-65nqp 1/1 Running 0 21h 172.16.58.193 k8s-node02 <none> <none>
calico-node-8xz7p 1/1 Running 0 22h 192.168.218.101 k8s-master02 <none> <none>
calico-node-j5h47 1/1 Running 0 22h 192.168.218.102 k8s-master03 <none> <none>
calico-node-lqtmn 1/1 Running 0 22h 192.168.218.100 k8s-master01 <none> <none>
calico-node-q8xcg 1/1 Running 0 21h 192.168.218.104 k8s-node02 <none> <none>
calico-node-vfvwt 1/1 Running 0 22h 192.168.218.103 k8s-node01 <none> <none>
cilium-5w885 1/1 Running 0 18h 192.168.218.101 k8s-master02 <none> <none>
cilium-c4skr 1/1 Running 0 18h 192.168.218.103 k8s-node01 <none> <none>
cilium-dmqpb 1/1 Running 0 18h 192.168.218.100 k8s-master01 <none> <none>
cilium-operator-65b585d467-cxqrv 1/1 Running 0 18h 192.168.218.102 k8s-master03 <none> <none>
cilium-operator-65b585d467-nrk26 1/1 Running 0 18h 192.168.218.104 k8s-node02 <none> <none>
cilium-ps9fl 1/1 Running 0 18h 192.168.218.104 k8s-node02 <none> <none>
cilium-z42rr 1/1 Running 0 18h 192.168.218.102 k8s-master03 <none> <none>
coredns-568bb5dbff-dz2vx 1/1 Running 0 17h 10.0.4.42 k8s-node01 <none> <none>
hubble-relay-69d66476f4-z2fmn 1/1 Running 0 18h 10.0.1.86 k8s-master02 <none> <none>
hubble-ui-59588bd5c7-kx4s8 2/2 Running 0 18h 10.0.1.34 k8s-master02 <none> <none>
metrics-server-679f8d6774-szn9l 1/1 Running 0 17h 10.0.0.90 k8s-node02 <none> <none>
# Enter busybox and ping a pod on another node
[root@k8s-master01 ~]# kubectl exec -ti busybox -- sh
/ # ping -c4 192.168.218.103
PING 192.168.218.103 (192.168.218.103): 56 data bytes
64 bytes from 192.168.218.103: seq=0 ttl=62 time=0.411 ms
64 bytes from 192.168.218.103: seq=1 ttl=62 time=0.278 ms
64 bytes from 192.168.218.103: seq=2 ttl=62 time=0.269 ms
64 bytes from 192.168.218.103: seq=3 ttl=62 time=0.337 ms

--- 192.168.218.103 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.269/0.323/0.411 ms
/ # exit
[root@k8s-master01 ~]#
# Successful pings show that this pod can communicate across namespaces and across hosts
12.7 Testing a distributed deployment
Create three nginx replicas and observe that they are scheduled on different nodes; delete them when finished.
[root@k8s-master01 ~]# cd yaml/
[root@k8s-master01 yaml]# cat > deployments.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:latest
        ports:
        - containerPort: 80
EOF
[root@k8s-master01 yaml]# kubectl apply -f deployments.yaml
deployment.apps/nginx-deployment created
[root@k8s-master01 yaml]# kubectl get pod | grep nginx
nginx-deployment-7d8f6659d6-bpqk2 1/1 Running 0 87s
nginx-deployment-7d8f6659d6-j4bbc 1/1 Running 0 2m11s
nginx-deployment-7d8f6659d6-mf2gj 1/1 Running 0 115s
# View details of each nginx pod; the three pods are scheduled on different nodes
[root@k8s-master01 yaml]# kubectl describe pod nginx
Name: nginx-deployment-7d8f6659d6-bpqk2
Namespace: default
Priority: 0
Service Account: default
Node: k8s-node01/192.168.218.103
Start Time: Mon, 27 Feb 2023 15:11:52 +0800
Labels:           app=nginx
                  pod-template-hash=7d8f6659d6
Annotations: <none>
Status: Running
IP: 10.0.4.105
IPs:
  IP:  10.0.4.105
Controlled By: ReplicaSet/nginx-deployment-7d8f6659d6
...other output omitted...
# Delete the pods when they are no longer needed
# Delete a single pod
[root@k8s-master01 yaml]# kubectl delete pod <pod-name>
# Delete all the deployed nginx pods
[root@k8s-master01 yaml]# kubectl delete -f deployments.yaml
deployment.apps "nginx-deployment" deleted
13 Installing the dashboard
The steps in this section are performed on master01 only.
13.1 Installing the dashboard
1. Obtain the dashboard deployment file
# Download the deployment file
[root@k8s-master01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
# If the download fails, save the following content instead
[root@k8s-master01 ~]# vim kubernetes-dashboard.yaml
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/kubeapps/k8s-gcr-kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
        - --auto-generate-certificates
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 31080
  selector:
    k8s-app: kubernetes-dashboard
2. Deploy the dashboard
[root@k8s-master01 ~]# kubectl apply -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
13.2 Checking the dashboard pod
The deployment file installs the dashboard in the kube-system namespace with a NodePort service on external port 31080. Wait 1-2 minutes, then view the pod details.
# After about a minute, check the dashboard pod name and state
[root@k8s-master01 ~]# kubectl get pod -n kube-system | grep dashboard
kubernetes-dashboard-549694665c-mpg52 1/1 Running 0 20m
# Using the pod name above, view the pod details; they show which node the pod is scheduled on
[root@k8s-master01 ~]# kubectl describe pod kubernetes-dashboard-549694665c-mpg52 -n kube-system
...some output omitted...
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  22m   default-scheduler  Successfully assigned kube-system/kubernetes-dashboard-549694665c-mpg52 to k8s-node-1
  Normal  Pulled     22m   kubelet            Container image "registry.cn-hangzhou.aliyuncs.com/kubeapps/k8s-gcr-kubernetes-dashboard-amd64:v1.8.3" already present on machine
  Normal  Created    22m   kubelet            Created container kubernetes-dashboard
  Normal  Started    22m   kubelet            Started container kubernetes-dashboard
# Check the dashboard service type and port
[root@k8s-master01 ~]# kubectl get svc -n kube-system | grep dashboard
kubernetes-dashboard NodePort 10.108.153.227 <none> 443:31080/TCP 21m
13.3 Creating an authentication token (RBAC)
1. Create the service account file
[root@k8s-master01 ~]# vim createuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
2. Create the ClusterRoleBinding file
[root@k8s-master01 ~]# vim createClusterRoleBinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
3. Create the account and the cluster role binding
[root@k8s-master01 ~]# kubectl apply -f createuser.yaml
serviceaccount/admin-user created
[root@k8s-master01 ~]# kubectl apply -f createClusterRoleBinding.yaml
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
4. Obtain the account token
[root@k8s-master01 ~]# kubectl -n kube-system create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6Imp3TVliUi1MQUxFMl9TSnlUMXlTYW1HdG8zSnB6NDZLTGpwMDhsSHBMUFkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHquc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjgwNjgxMDAyLCJpYXQiOjE2ODA2Nzc0MDIsimlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiMjdiNDcwMzAtMDk0MS00MmEwLWIzNGMtOTY1M2I1ZTEyOGZmIn19LCJuYmYiOjE2ODA2Nzc0MDIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbi11c2VyIn0.cbgOhkOsIMBet4HDK-ivipIEqL0EVE_m_dFxT4tomLZI_YnTUxRz7l-_QT76dVkid8mqCUQMyBURa2gxYp-BbsBnQbJ1ww6ABTgw-5ZFTyPrBDFem2LmjzktwfeMMviGzr2a_A_vEUr4agw0iA8WDXXXFTMQEDkO_hNMmH4feYITlJiqF6cwUOzIKpmIFYfX1WTjiZMDpGLtRNS1g0JBbajWyJw9_GmFPLn9ofjyTpRa4dque3w920nfalbm9MHBLUG8M7VAm_IP7P6sNn8WbzKa3ahs9eab42Y-oVfF1KgwZipZkBOZsyC41mvFeie2Jpd83qyKHv71mAvSkN8P7w
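The token is a JWT, so its payload claims (subject, expiry) can be inspected offline before use: the middle dot-separated segment is base64url-encoded JSON. A sketch of the decoding steps; the payload below is hypothetical and is encoded on the spot only to keep the example self-contained:

```shell
# Hypothetical JWT payload (a real token's claims look similar)
payload='{"sub":"system:serviceaccount:kube-system:admin-user","exp":1680681002}'
# Produce a base64url segment like the second field of a real token
segment=$(printf '%s' "$payload" | base64 | tr -d '=\n' | tr '+/' '-_')

# Decode it the way you would a real token's second segment:
# restore standard base64 characters, re-pad to a multiple of 4, then decode
seg=$(printf '%s' "$segment" | tr '_-' '/+')
case $(( ${#seg} % 4 )) in
  2) seg="${seg}==" ;;
  3) seg="${seg}=" ;;
esac
decoded=$(printf '%s' "$seg" | base64 -d)
echo "$decoded"
```

On a real token, checking the `exp` claim this way tells you when the token issued by `kubectl create token` will stop working.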
13.4 Logging in to the dashboard
Copy the token generated above and open the URL https://<IP of the node running the dashboard pod>:31080
Paste the token; after a successful login the dashboard looks like the figure below.
14 Installing ingress
14.1 Creating the files
14.1.1 Creating deploy.yaml
[root@k8s-master01 ~]# mkdir ingress
[root@k8s-master01 ~]# cd ingress/
# Create deploy.yaml with the following content
[root@k8s-master01 ingress]# cat deploy.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: 'true'
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: registry.cn-hangzhou.aliyuncs.com/chenby/controller:v1.2.0
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: nginx
  namespace: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-4.0.10
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: registry.cn-hangzhou.aliyuncs.com/chenby/kube-webhook-certgen:v1.2.0
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.10
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: registry.cn-hangzhou.aliyuncs.com/chenby/kube-webhook-certgen:v1.1.1
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
14.1.2 Create the backend.yaml file
[root@k8s-master01 ingress]# cat > backend.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          image: registry.cn-hangzhou.aliyuncs.com/chenby/defaultbackend-amd64:1.5
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    app.kubernetes.io/name: default-http-backend
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
EOF
14.1.3 Create ingress-demo-app.yaml
[root@k8s-master01 ingress]# cat > ingress-demo-app.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
        - name: hello-server
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
          ports:
            - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
        - image: nginx
          name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
    - port: 8000
      protocol: TCP
      targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
    - port: 8000
      protocol: TCP
      targetPort: 9000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
    - host: "hello.ptuxgk.cn"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: hello-server
                port:
                  number: 8000
    - host: "demo.ptuxgk.cn"
      http:
        paths:
          - pathType: Prefix
            path: "/nginx"
            backend:
              service:
                name: nginx-demo
                port:
                  number: 8000
EOF
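The Ingress above routes requests first by the Host header and then by path prefix. The following Python sketch (illustrative only, not the actual ingress-nginx controller code) mirrors what the ingress-host-bar rules express; the rule table is copied from the manifest:

```python
# Host + prefix-path routing as declared by the ingress-host-bar rules.
# Each entry: (host, path prefix, (backend service, service port)).
RULES = [
    ("hello.ptuxgk.cn", "/",      ("hello-server", 8000)),
    ("demo.ptuxgk.cn",  "/nginx", ("nginx-demo",   8000)),
]

def route(host, path):
    """Return (service, port) for a request, or None if no rule matches."""
    for rule_host, prefix, backend in RULES:
        # pathType: Prefix matches path elements; a startswith check
        # approximates it for these simple prefixes.
        if host == rule_host and path.startswith(prefix):
            return backend
    return None

print(route("hello.ptuxgk.cn", "/anything"))   # ('hello-server', 8000)
print(route("demo.ptuxgk.cn", "/nginx/page"))  # ('nginx-demo', 8000)
print(route("demo.ptuxgk.cn", "/other"))       # None
```

Note that a request to demo.ptuxgk.cn outside /nginx matches no rule, which is where the default-http-backend from 14.1.2 comes in.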
14.2 Run the deployment
[root@k8s-master01 ~]# cd ingress/
[root@k8s-master01 ingress]# kubectl apply -f deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
[root@k8s-master01 ingress]# kubectl apply -f backend.yaml
deployment.apps/default-http-backend created
service/default-http-backend created
# Wait until the resources above are fully created (about 2-3 minutes), then run:
[root@k8s-master01 ingress]# kubectl apply -f ingress-demo-app.yaml
deployment.apps/hello-server created
deployment.apps/nginx-demo created
service/nginx-demo created
service/hello-server created
ingress.networking.k8s.io/ingress-host-bar created
[root@k8s-master01 ingress]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-host-bar nginx hello.ptuxgk.cn,demo.ptuxgk.cn 192.168.218.100 80 2m40s
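For the two demo hostnames to resolve from a test client, point them at the ADDRESS shown above (or any node IP). A minimal /etc/hosts fragment for the client machine, assuming the addresses from this document:

```
192.168.218.100 hello.ptuxgk.cn
192.168.218.100 demo.ptuxgk.cn
```

In production these names would instead be real DNS records pointing at the ingress entry point.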
14.3 Filter and view the ingress ports
[root@k8s-master01 ingress]# kubectl get svc -A | grep ingress
ingress-nginx ingress-nginx-controller NodePort 10.110.35.238 <none> 80:30370/TCP,443:31483/TCP 5m36s
ingress-nginx ingress-nginx-controller-admission ClusterIP 10.105.66.156 <none> 443/TCP 5m36s
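In the PORT(S) column, an entry like 80:30370/TCP means service port 80 is exposed as NodePort 30370 on every node, while a bare 443/TCP is cluster-internal only. A small helper (hypothetical, for illustration) that parses this column into a port mapping:

```python
def parse_ports(ports_column):
    """Parse a kubectl PORT(S) column such as '80:30370/TCP,443:31483/TCP'
    into {service_port: node_port}; node_port is None when no NodePort
    is allocated (ClusterIP-only entries like '443/TCP')."""
    mapping = {}
    for entry in ports_column.split(","):
        spec, _, _proto = entry.partition("/")   # drop the protocol suffix
        svc, _, node = spec.partition(":")       # split service:node ports
        mapping[int(svc)] = int(node) if node else None
    return mapping

print(parse_ports("80:30370/TCP,443:31483/TCP"))  # {80: 30370, 443: 31483}
print(parse_ports("443/TCP"))                     # {443: None}
```

With the mapping above, the demo hosts are reachable from outside the cluster at http://hello.ptuxgk.cn:30370/ once the hostnames resolve to a node IP.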