Binary Installation of Kubernetes (k8s) v1.23.6

Background

Kubernetes binary installation.

Documentation and installation packages have been generated for 1.23.3, 1.23.4, 1.23.5, and 1.23.6.

Documentation for new releases will be updated as soon as possible after they ship.

https://github.com/cby-chen/Kubernetes/releases

Script project:

https://github.com/cby-chen/Binary_installation_of_Kubernetes

Manual installation project:

https://github.com/cby-chen/Kubernetes

1. Environment

Hostname   IP Address     Role         Software
Master01   192.168.1.81   master node  kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client
Master02   192.168.1.82   master node  kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client
Master03   192.168.1.83   master node  kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client
Node01     192.168.1.84   worker node  kubelet, kube-proxy, nfs-client
Node02     192.168.1.85   worker node  kubelet, kube-proxy, nfs-client
Node03     192.168.1.86   worker node  kubelet, kube-proxy, nfs-client
Node04     192.168.1.87   worker node  kubelet, kube-proxy, nfs-client
Node05     192.168.1.88   worker node  kubelet, kube-proxy, nfs-client
Lb01       192.168.1.80   LB node      haproxy, keepalived
Lb02       192.168.1.90   LB node      haproxy, keepalived
           192.168.1.89   VIP

Software                                 Version
Kernel                                   4.18.0-373.el8.x86_64
CentOS 8                                 v8 (or v7)
kube-apiserver, kube-controller-manager,
kube-scheduler, kubelet, kube-proxy      v1.23.6
etcd                                     v3.5.3
docker-ce                                v20.10.14
containerd                               v1.5.11
cfssl                                    v1.6.1
cni                                      v1.1.1
crictl                                   v1.23.0
haproxy                                  v1.8.27
keepalived                               v2.1.5

Network segments

Physical hosts: 192.168.1.0/24

service:10.96.0.0/12

pod:172.16.0.0/12

If resources allow, it is recommended to run the etcd cluster on machines separate from the k8s cluster.
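The three ranges above must not overlap. As a quick sanity check (a minimal sketch assuming the stock ipcalc utility is present; the output format can vary slightly between CentOS releases), print the network address of each range and confirm all three differ:

ipcalc -n 192.168.1.0/24   # NETWORK=192.168.1.0
ipcalc -n 10.96.0.0/12     # NETWORK=10.96.0.0 -> the first service IP is 10.96.0.1
ipcalc -n 172.16.0.0/12    # NETWORK=172.16.0.0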

1.1. Basic system environment configuration for k8s

1.2. Configure IPs

ssh root@192.168.1.161 "nmcli con mod ens18 ipv4.addresses 192.168.1.81/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.167 "nmcli con mod ens18 ipv4.addresses 192.168.1.82/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.137 "nmcli con mod ens18 ipv4.addresses 192.168.1.83/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.152 "nmcli con mod ens18 ipv4.addresses 192.168.1.84/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.198 "nmcli con mod ens18 ipv4.addresses 192.168.1.85/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.166 "nmcli con mod ens18 ipv4.addresses 192.168.1.86/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.171 "nmcli con mod ens18 ipv4.addresses 192.168.1.87/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.159 "nmcli con mod ens18 ipv4.addresses 192.168.1.88/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.122 "nmcli con mod ens18 ipv4.addresses 192.168.1.80/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.125 "nmcli con mod ens18 ipv4.addresses 192.168.1.90/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"

1.3. Set the hostnames (run the matching command on each host)

hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
hostnamectl set-hostname k8s-node03
hostnamectl set-hostname k8s-node04
hostnamectl set-hostname k8s-node05
hostnamectl set-hostname lb01
hostnamectl set-hostname lb02

1.4. Configure yum repositories

# For CentOS 7
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
         -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \
         -i.bak \
         /etc/yum.repos.d/CentOS-*.repo

# For CentOS 8
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
         -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \
         -i.bak \
         /etc/yum.repos.d/CentOS-*.repo

# Variant pointing at a local mirror instead
sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://192.168.1.123/centos|g' -i.bak  /etc/yum.repos.d/CentOS-*.repo

1.5. Install essential tools

yum -y install wget jq psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl

1.6. Install Docker (skip on the LB nodes)

yum install -y yum-utils device-mapper-persistent-data lvm2
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
sudo sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum makecache
yum -y install docker-ce
systemctl  enable --now docker

1.7. Disable the firewall

systemctl disable --now firewalld

1.8. Disable SELinux

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

1.9. Disable swap

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0

cat /etc/fstab
# /dev/mapper/centos-swap swap                    swap    defaults        0 0

1.10. Disable NetworkManager and enable network (skip on the LB nodes)

systemctl disable --now NetworkManager
systemctl start network && systemctl enable network

1.11. Set up time synchronization (skip on the LB nodes)

# Server side (k8s-master01)
yum install chrony -y

cat > /etc/chrony.conf << EOF
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.1.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF

systemctl restart chronyd
systemctl enable chronyd

# Client side (all other nodes)
yum install chrony -y
vim /etc/chrony.conf
cat /etc/chrony.conf | grep -v  "^#" | grep -v "^$"
pool 192.168.1.81 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony

systemctl restart chronyd ; systemctl enable chronyd

# Client one-liner
yum install chrony -y ; sed -i "s#2.centos.pool.ntp.org#192.168.1.81#g" /etc/chrony.conf ; systemctl restart chronyd ; systemctl enable chronyd

# Verify from a client
chronyc sources -v

1.12. Configure ulimit

ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

1.13. Configure passwordless SSH

yum install -y sshpass
ssh-keygen -f /root/.ssh/id_rsa -P ''
export IP="192.168.1.81 192.168.1.82 192.168.1.83 192.168.1.84 192.168.1.85 192.168.1.86 192.168.1.87 192.168.1.88 192.168.1.80 192.168.1.90"
export SSHPASS=123123
for HOST in $IP;do
     sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
done
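To verify the distribution, a quick loop (reusing the $IP list exported above) should print each host's name without prompting for a password:

for HOST in $IP; do ssh -o BatchMode=yes root@$HOST hostname; done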

1.14. Add the ELRepo repository (skip on the LB nodes)

# For RHEL-8 or CentOS-8
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm

# For RHEL-7, SL-7, or CentOS-7
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm

# List the available packages
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

1.15. Upgrade the kernel to 4.18 or newer (skip on the LB nodes)

# Install the latest kernel
# kernel-ml is the stable mainline branch; use kernel-lt if you want the long-term branch
yum --enablerepo=elrepo-kernel install kernel-ml

# List the installed kernels
rpm -qa | grep kernel
kernel-core-4.18.0-358.el8.x86_64
kernel-tools-4.18.0-358.el8.x86_64
kernel-ml-core-5.16.7-1.el8.elrepo.x86_64
kernel-ml-5.16.7-1.el8.elrepo.x86_64
kernel-modules-4.18.0-358.el8.x86_64
kernel-4.18.0-358.el8.x86_64
kernel-tools-libs-4.18.0-358.el8.x86_64
kernel-ml-modules-5.16.7-1.el8.elrepo.x86_64

# Check the default kernel
grubby --default-kernel
/boot/vmlinuz-5.16.7-1.el8.elrepo.x86_64

# If it is not the new one, set it explicitly
grubby --set-default /boot/vmlinuz-<your kernel version>.x86_64

# Reboot to take effect
reboot

# Combined one-liner for v8:
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --default-kernel ; reboot

# Combined one-liner for v7:
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel

1.16. Install ipvsadm (skip on the LB nodes)

yum install ipvsadm ipset sysstat conntrack libseccomp -y

cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl restart systemd-modules-load.service

lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 180224  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          176128  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs

1.17. Tune kernel parameters (skip on the LB nodes)

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

sysctl --system

1.18. Configure local hosts resolution on all nodes

cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.81 k8s-master01
192.168.1.82 k8s-master02
192.168.1.83 k8s-master03
192.168.1.84 k8s-node01
192.168.1.85 k8s-node02
192.168.1.86 k8s-node03
192.168.1.87 k8s-node04
192.168.1.88 k8s-node05
192.168.1.80 lb01
192.168.1.90 lb02
192.168.1.89 lb-vip
EOF

2. Install the basic k8s components

2.1. Install containerd as the runtime on all k8s nodes

yum install containerd -y

2.1.1 Configure the modules required by containerd

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

2.1.2 Load the modules

systemctl restart systemd-modules-load.service

2.1.3 Configure the kernel parameters required by containerd

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply the settings
sysctl --system

2.1.4 Create the containerd configuration file

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

# Modify the containerd configuration file
sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep SystemdCgroup

# Under containerd.runtimes.runc.options, make sure SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true

# In the [plugins."io.containerd.grpc.v1.cri"] section, change the default sandbox_image to an address matching this version
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/chenby/pause:3.6"

2.1.5 Start containerd and enable it at boot

systemctl daemon-reload
systemctl enable --now containerd

2.1.6 Point the crictl client at the containerd socket

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

systemctl restart containerd
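Once containerd has restarted, crictl should be able to reach the socket configured above; a quick sanity check (output will vary by host):

crictl info | head    # runtime status as JSON
crictl ps             # running containers (empty at this stage)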

2.2. Download and install k8s and etcd (master01 only)

2.2.1 Download the k8s packages (download whichever you need)

1. Download the Kubernetes 1.23.x binary package
GitHub download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md

wget https://dl.k8s.io/v1.23.6/kubernetes-server-linux-amd64.tar.gz

2. Download the etcd binary package
GitHub release page: https://github.com/etcd-io/etcd/releases

wget https://github.com/etcd-io/etcd/releases/download/v3.5.3/etcd-v3.5.3-linux-amd64.tar.gz

3. docker-ce binary package
Download page: https://download.docker.com/linux/static/stable/x86_64/ (use a 20.10.x release)

wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.14.tgz

4. containerd binary package
GitHub release page: https://github.com/containerd/containerd/releases (download the package that bundles the CNI plugins)

wget https://github.com/containerd/containerd/releases/download/v1.6.2/cri-containerd-cni-1.6.2-linux-amd64.tar.gz

5. cfssl binary packages
GitHub release page: https://github.com/cloudflare/cfssl/releases

wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64

6. CNI plugins
GitHub release page: https://github.com/containernetworking/plugins/releases

wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz

7. crictl client binary
GitHub release page: https://github.com/kubernetes-sigs/cri-tools/releases

wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz

# Unpack the k8s binaries
tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

# Unpack the etcd binaries
tar -xf etcd-v3.5.3-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.3-linux-amd64/etcd{,ctl}

# Check the contents of /usr/local/bin
ls /usr/local/bin/
etcd  etcdctl  kube-apiserver  kube-controller-manager  kubectl  kubelet  kube-proxy  kube-scheduler

A pre-packaged bundle with everything above:
wget https://github.com/cby-chen/Kubernetes/releases/download/v1.23.6/kubernetes-v1.23.6.tar

2.2.2 Check the versions

[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.23.6
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.3
API version: 3.5
[root@k8s-master01 ~]#

2.2.3 Copy the components to the other k8s nodes

Master='k8s-master02 k8s-master03'
Work='k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05'

for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done

for NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done

2.2.4 Clone the certificate-related files

git clone https://github.com/cby-chen/Kubernetes.git

2.2.5 Create the directory on all k8s nodes

mkdir -p /opt/cni/bin

3. Generate the certificates

Download the certificate tools on master01
wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64" -O /usr/local/bin/cfssl
wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
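A quick check that the tools are on the PATH and executable:

cfssl version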

3.1. Generate the etcd certificates

Unless stated otherwise, run the following on all master nodes.

3.1.1 Create the certificate directory on all master nodes

mkdir /etc/etcd/ssl -p

3.1.2 Generate the etcd certificates on master01

cd Kubernetes/pki/

# Generate the etcd CA, then the etcd certificate and key
# (if you may scale etcd later, reserve a few extra IPs in -hostname)
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca

cfssl gencert \
-ca=/etc/etcd/ssl/etcd-ca.pem \
-ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
-config=ca-config.json \
-hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.1.81,192.168.1.82,192.168.1.83 \
-profile=kubernetes \
etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd

3.1.3 Copy the certificates to the other nodes

Master='k8s-master02 k8s-master03'

for NODE in $Master; do ssh $NODE "mkdir -p /etc/etcd/ssl"; for FILE in etcd-ca-key.pem  etcd-ca.pem  etcd-key.pem  etcd.pem; do scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}; done; done

3.2. Generate the k8s certificates

Unless stated otherwise, run the following on all master nodes.

3.2.1 Create the certificate directory on all k8s nodes

mkdir -p /etc/kubernetes/pki

3.2.2 Generate the k8s certificates on master01

# Generate the root CA
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

# 10.96.0.1 is the first address of the service CIDR; 192.168.1.89 is the high-availability VIP
cfssl gencert   \
-ca=/etc/kubernetes/pki/ca.pem   \
-ca-key=/etc/kubernetes/pki/ca-key.pem   \
-config=ca-config.json   \
-hostname=10.96.0.1,192.168.1.89,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,192.168.1.81,192.168.1.82,192.168.1.83,192.168.1.84,192.168.1.85,192.168.1.86,192.168.1.87,192.168.1.88,192.168.1.80,192.168.1.90,192.168.1.40,192.168.1.41   \
-profile=kubernetes   apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

3.2.3 Generate the apiserver aggregation certificates

cfssl gencert   -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca

# The next command prints a warning that can be ignored
cfssl gencert  \
-ca=/etc/kubernetes/pki/front-proxy-ca.pem   \
-ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem   \
-config=ca-config.json   \
-profile=kubernetes   front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

3.2.4 Generate the controller-manager, scheduler, and admin certificates and kubeconfigs

cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

# Set a cluster entry
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.1.89:8443 \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set a context
kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set a user entry
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/pki/controller-manager.pem \
--client-key=/etc/kubernetes/pki/controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set the default context
kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.1.89:8443 \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
--client-certificate=/etc/kubernetes/pki/scheduler.pem \
--client-key=/etc/kubernetes/pki/scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig

cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

kubectl config set-cluster kubernetes     \
--certificate-authority=/etc/kubernetes/pki/ca.pem     \
--embed-certs=true     \
--server=https://192.168.1.89:8443     \
--kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-credentials kubernetes-admin  \
--client-certificate=/etc/kubernetes/pki/admin.pem     \
--client-key=/etc/kubernetes/pki/admin-key.pem     \
--embed-certs=true     \
--kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-context kubernetes-admin@kubernetes    \
--cluster=kubernetes     \
--user=kubernetes-admin     \
--kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config use-context kubernetes-admin@kubernetes  --kubeconfig=/etc/kubernetes/admin.kubeconfig

3.2.5 Create the ServiceAccount key pair

openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

3.2.6 Copy the certificates to the other master nodes

for NODE in k8s-master02 k8s-master03; do  for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do  scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}; done;  for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do  scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}; done; done

3.2.7 Check the certificates

ls /etc/kubernetes/pki/
admin.csr      apiserver-key.pem  ca.pem                      front-proxy-ca.csr      front-proxy-client-key.pem  scheduler.csr
admin-key.pem  apiserver.pem      controller-manager.csr      front-proxy-ca-key.pem  front-proxy-client.pem      scheduler-key.pem
admin.pem      ca.csr             controller-manager-key.pem  front-proxy-ca.pem      sa.key                      scheduler.pem
apiserver.csr  ca-key.pem         controller-manager.pem      front-proxy-client.csr  sa.pub

# There should be 23 files in total
ls /etc/kubernetes/pki/ | wc -l
23
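To confirm the apiserver certificate really carries the SANs passed via -hostname above, openssl can print them (a quick optional check):

openssl x509 -in /etc/kubernetes/pki/apiserver.pem -noout -text | grep -A 1 "Subject Alternative Name"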

4. Configure the k8s system components

4.1. etcd configuration

4.1.1 master01 configuration

cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.81:2380'
listen-client-urls: 'https://192.168.1.81:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.81:2380'
advertise-client-urls: 'https://192.168.1.81:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.81:2380,k8s-master02=https://192.168.1.82:2380,k8s-master03=https://192.168.1.83:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.1.2 master02 configuration

cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.82:2380'
listen-client-urls: 'https://192.168.1.82:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.82:2380'
advertise-client-urls: 'https://192.168.1.82:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.81:2380,k8s-master02=https://192.168.1.82:2380,k8s-master03=https://192.168.1.83:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.1.3 master03 configuration

cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.83:2380'
listen-client-urls: 'https://192.168.1.83:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.83:2380'
advertise-client-urls: 'https://192.168.1.83:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.81:2380,k8s-master02=https://192.168.1.82:2380,k8s-master03=https://192.168.1.83:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.2. Create the service units (all master nodes)

4.2.1 Create etcd.service and start it

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF

4.2.2 Create the etcd certificate directory

mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd

4.2.3 Check the etcd status

export ETCDCTL_API=3
etcdctl --endpoints="192.168.1.83:2379,192.168.1.82:2379,192.168.1.81:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem  endpoint status --write-out=table
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|     ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.1.83:2379 | 7cb7be3df5c81965 |   3.5.2 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 192.168.1.82:2379 | c077939949ab3f8b |   3.5.2 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 192.168.1.81:2379 | 2ee388f67565dac9 |   3.5.2 |   20 kB |      true |      false |         2 |          9 |                  9 |        |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@k8s-master01 pki]#
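endpoint status shows the leader; for a simple pass/fail view you can also run endpoint health with the same certificates:

etcdctl --endpoints="192.168.1.83:2379,192.168.1.82:2379,192.168.1.81:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health --write-out=table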

5. High-availability configuration

5.1 Perform on both lb01 and lb02

5.1.1 Install keepalived and haproxy

systemctl disable --now firewalld

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

yum -y install keepalived haproxy

5.1.2 Configure haproxy (identical on both machines)

# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak

cat >/etc/haproxy/haproxy.cfg<<"EOF"
global
 maxconn 2000
 ulimit-n 16384
 log 127.0.0.1 local0 err
 stats timeout 30s

defaults
 log global
 mode http
 option httplog
 timeout connect 5000
 timeout client 50000
 timeout server 50000
 timeout http-request 15s
 timeout http-keep-alive 15s

frontend monitor-in
 bind *:33305
 mode http
 option httplog
 monitor-uri /monitor

frontend k8s-master
 bind 0.0.0.0:8443
 bind 127.0.0.1:8443
 mode tcp
 option tcplog
 tcp-request inspect-delay 5s
 default_backend k8s-master

backend k8s-master
 mode tcp
 option tcplog
 option tcp-check
 balance roundrobin
 default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
 server  k8s-master01  192.168.1.81:6443 check
 server  k8s-master02  192.168.1.82:6443 check
 server  k8s-master03  192.168.1.83:6443 check
EOF

5.1.3 Configure keepalived on lb01 (MASTER node)

# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens18
    mcast_src_ip 192.168.1.80
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.89
    }
    track_script {
        chk_apiserver
    }
}
EOF

5.1.4 Configure keepalived on lb02 (BACKUP node)

# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens18
    mcast_src_ip 192.168.1.90
    virtual_router_id 51
    priority 50
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.89
    }
    track_script {
        chk_apiserver
    }
}
EOF

5.1.5 Configure the health-check script (both LB hosts)

cat >  /etc/keepalived/check_apiserver.sh << EOF
#!/bin/bash

err=0
for k in \$(seq 1 3)
do
    check_code=\$(pgrep haproxy)
    if [[ \$check_code == "" ]]; then
        err=\$(expr \$err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ \$err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF

# Make the script executable
chmod +x /etc/keepalived/check_apiserver.sh

5.1.6 Start the services

systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived

5.1.7 Test the high availability

# The VIP should answer ping
[root@k8s-node02 ~]# ping 192.168.1.89

# And accept telnet connections
[root@k8s-node02 ~]# telnet 192.168.1.89 8443

# Shut down the master LB node and check whether the VIP fails over to the backup
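One way to exercise the failover (a sketch using the interface and VIP configured above; because nopreempt is set, the VIP stays on the backup until keepalived there goes away):

# On lb01
systemctl stop keepalived

# On lb02: the VIP should now be bound to ens18
ip addr show ens18 | grep 192.168.1.89

# On lb01: bring keepalived back up
systemctl start keepalived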

6. Configure the k8s components (distinct from section 4)

Create the following directories on all k8s nodes

mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

6.1. Create the apiserver service (all master nodes)

6.1.1 master01 configuration

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
      --v=2  \\
      --logtostderr=true  \\
      --allow-privileged=true  \\
      --bind-address=0.0.0.0  \\
      --secure-port=6443  \\
      --insecure-port=0  \\
      --advertise-address=192.168.1.81 \\
      --service-cluster-ip-range=10.96.0.0/12  \\
      --service-node-port-range=30000-32767  \\
      --etcd-servers=https://192.168.1.81:2379,https://192.168.1.82:2379,https://192.168.1.83:2379 \\
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \\
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \\
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \\
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \\
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \\
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \\
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \\
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \\
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \\
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \\
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
      --authorization-mode=Node,RBAC  \\
      --enable-bootstrap-token-auth=true  \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \\
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \\
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \\
      --requestheader-allowed-names=aggregator  \\
      --requestheader-group-headers=X-Remote-Group  \\
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
      --requestheader-username-headers=X-Remote-User \\
      --enable-aggregator-routing=true
# --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

6.1.2 master02 configuration

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
      --v=2  \\
      --logtostderr=true  \\
      --allow-privileged=true  \\
      --bind-address=0.0.0.0  \\
      --secure-port=6443  \\
      --insecure-port=0  \\
      --advertise-address=192.168.1.82 \\
      --service-cluster-ip-range=10.96.0.0/12  \\
      --service-node-port-range=30000-32767  \\
      --etcd-servers=https://192.168.1.81:2379,https://192.168.1.82:2379,https://192.168.1.83:2379 \\
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \\
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \\
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \\
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \\
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \\
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \\
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \\
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \\
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \\
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \\
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
      --authorization-mode=Node,RBAC  \\
      --enable-bootstrap-token-auth=true  \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \\
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \\
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \\
      --requestheader-allowed-names=aggregator  \\
      --requestheader-group-headers=X-Remote-Group  \\
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
      --requestheader-username-headers=X-Remote-User \\
      --enable-aggregator-routing=true
# --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

6.1.3 master03 configuration

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
      --v=2  \\
      --logtostderr=true  \\
      --allow-privileged=true  \\
      --bind-address=0.0.0.0  \\
      --secure-port=6443  \\
      --insecure-port=0  \\
      --advertise-address=192.168.1.83 \\
      --service-cluster-ip-range=10.96.0.0/12  \\
      --service-node-port-range=30000-32767  \\
      --etcd-servers=https://192.168.1.81:2379,https://192.168.1.82:2379,https://192.168.1.83:2379 \\
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \\
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \\
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \\
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \\
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \\
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \\
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \\
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \\
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \\
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \\
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
      --authorization-mode=Node,RBAC  \\
      --enable-bootstrap-token-auth=true  \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \\
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \\
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \\
      --requestheader-allowed-names=aggregator  \\
      --requestheader-group-headers=X-Remote-Group  \\
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
      --requestheader-username-headers=X-Remote-User \\
      --enable-aggregator-routing=true
# --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

6.1.4 Start the apiserver (all master nodes)

systemctl daemon-reload && systemctl enable --now kube-apiserver

# Check that the service started correctly
systemctl status kube-apiserver
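A quick way to confirm each apiserver is listening on its secure port (ss is part of iproute and installed by default):

ss -lntp | grep 6443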

6.2. Configure the kube-controller-manager service

All master nodes use the same configuration.
172.16.0.0/12 is the pod CIDR; change it to your own network if needed.

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
      --v=2 \\
      --logtostderr=true \\
      --address=127.0.0.1 \\
      --root-ca-file=/etc/kubernetes/pki/ca.pem \\
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \\
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
      --leader-elect=true \\
      --use-service-account-credentials=true \\
      --node-monitor-grace-period=40s \\
      --node-monitor-period=5s \\
      --pod-eviction-timeout=2m0s \\
      --controllers=*,bootstrapsigner,tokencleaner \\
      --allocate-node-cidrs=true \\
      --cluster-cidr=172.16.0.0/12 \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
      --node-cidr-mask-size=24
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

6.2.1 Start kube-controller-manager and check its status

systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl  status kube-controller-manager

6.3. Configure the kube-scheduler service

6.3.1 All master nodes use the same configuration

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
      --v=2 \\
      --logtostderr=true \\
      --address=127.0.0.1 \\
      --leader-elect=true \\
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

6.3.2 Start the service and check its status

systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl status kube-scheduler

7. TLS Bootstrapping configuration

7.1 Configure on master01

cd /root/Kubernetes/bootstrap

kubectl config set-cluster kubernetes     \
--certificate-authority=/etc/kubernetes/pki/ca.pem     \
--embed-certs=true     --server=https://192.168.1.89:8443     \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-credentials tls-bootstrap-token-user     \
--token=c8ad9c.2e4d610cf3e7426e \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-context tls-bootstrap-token-user@kubernetes     \
--cluster=kubernetes     \
--user=tls-bootstrap-token-user     \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config use-context tls-bootstrap-token-user@kubernetes     \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

# The token is defined in bootstrap.secret.yaml; if you change it, change it in that file as well
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config

7.2 Check the cluster status; continue only if everything is healthy

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
etcd-2               Healthy   {"health":"true","reason":""}
etcd-1               Healthy   {"health":"true","reason":""}

kubectl create -f bootstrap.secret.yaml

8. Node configuration

8.1. Copy the certificates from master01 to the worker nodes

cd /etc/kubernetes/

for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05; do ssh $NODE mkdir -p /etc/kubernetes/pki; for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}; done; done

8.2. kubelet configuration

8.2.1 Create the required directories on all k8s nodes

mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

# Configure the kubelet service on all k8s nodes
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
    --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig  \\
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
    --config=/etc/kubernetes/kubelet-conf.yml \\
    --network-plugin=cni  \\
    --cni-conf-dir=/etc/cni/net.d  \\
    --cni-bin-dir=/opt/cni/bin  \\
    --container-runtime=remote  \\
    --runtime-request-timeout=15m  \\
    --container-runtime-endpoint=unix:///run/containerd/containerd.sock  \\
    --cgroup-driver=systemd \\
    --node-labels=node.kubernetes.io/node=''
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

8.2.2 Create the kubelet configuration file on all k8s nodes

cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF

8.2.3 Start the kubelet

systemctl daemon-reload
systemctl restart kubelet
systemctl enable --now kubelet
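If a node does not register, the kubelet log on that node is the first place to look; on master01 you can also confirm the bootstrap CSRs were approved (this assumes the bootstrap RBAC from bootstrap.secret.yaml in section 7 is in place):

journalctl -u kubelet --no-pager | tail -n 20
kubectl get csr    # run on master01; each node's CSR should show Approved,Issued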

8.2.4 Check the cluster

[root@k8s-master01 ~]# kubectl  get node
NAME           STATUS     ROLES    AGE   VERSION
k8s-master01   NotReady   <none>   14h   v1.23.5
k8s-master02   NotReady   <none>   14h   v1.23.5
k8s-master03   NotReady   <none>   14h   v1.23.5
k8s-node01     NotReady   <none>   14h   v1.23.5
k8s-node02     NotReady   <none>   14h   v1.23.5
k8s-node03     NotReady   <none>   14h   v1.23.5
k8s-node04     NotReady   <none>   14h   v1.23.5
k8s-node05     NotReady   <none>   14h   v1.23.5
[root@k8s-master01 ~]#

8.3. kube-proxy configuration

8.3.1 Perform this configuration only on master01

cd /root/Kubernetes/
kubectl -n kube-system create serviceaccount kube-proxy

kubectl create clusterrolebinding system:kube-proxy \
--clusterrole system:node-proxier \
--serviceaccount kube-system:kube-proxy

SECRET=$(kubectl -n kube-system get sa/kube-proxy \
--output=jsonpath='{.secrets[0].name}')

JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
--output=jsonpath='{.data.token}' | base64 -d)

PKI_DIR=/etc/kubernetes/pki
K8S_DIR=/etc/kubernetes

kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.1.89:8443 \
--kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig

kubectl config set-credentials kubernetes \
--token=${JWT_TOKEN} \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=kubernetes \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config use-context kubernetes \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

8.3.2 Copy the kubeconfig to the other nodes

for NODE in k8s-master02 k8s-master03; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done

for NODE in k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig;  done

8.3.3 Add the kube-proxy configuration and service file on all k8s nodes

cat >  /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy.yaml \\
  --v=2
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF

8.3.4 Start kube-proxy

systemctl daemon-reload
systemctl enable --now kube-proxy
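Since kube-proxy runs in ipvs mode, the IPVS table should start to fill in as services are created; the ipvsadm installed in section 1.16 can display it (a quick check; the exact entries depend on the services that exist):

ipvsadm -Ln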

9. Install Calico

9.1 The following steps are performed only on master01

9.1.1 Set the Calico pod CIDR

cd /root/Kubernetes/calico/
sed -i "s#POD_CIDR#172.16.0.0/12#g" calico.yaml
grep "IPV4POOL_CIDR" calico.yaml  -A 1- name: CALICO_IPV4POOL_CIDRvalue: "172.16.0.0/12"# 创建kubectl apply -f calico.yaml

9.1.2 Check the pod status

[root@k8s-master01 ~]# kubectl  get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6f6595874c-nb95g   1/1     Running   0          2m54s
kube-system   calico-node-67dn4                          1/1     Running   0          2m54s
kube-system   calico-node-79zxj                          1/1     Running   0          2m54s
kube-system   calico-node-85bsf                          1/1     Running   0          2m54s
kube-system   calico-node-8trsm                          1/1     Running   0          2m54s
kube-system   calico-node-dvz72                          1/1     Running   0          2m54s
kube-system   calico-node-qqzwx                          1/1     Running   0          2m54s
kube-system   calico-node-rngzq                          1/1     Running   0          2m55s
kube-system   calico-node-w8gqp                          1/1     Running   0          2m54s
kube-system   calico-typha-6b6cf8cbdf-2b454              1/1     Running   0          2m55s
[root@k8s-master01 ~]# kubectl  get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    <none>   14h   v1.23.5
k8s-master02   Ready    <none>   14h   v1.23.5
k8s-master03   Ready    <none>   14h   v1.23.5
k8s-node01     Ready    <none>   14h   v1.23.5
k8s-node02     Ready    <none>   14h   v1.23.5
k8s-node03     Ready    <none>   14h   v1.23.5
k8s-node04     Ready    <none>   14h   v1.23.5
k8s-node05     Ready    <none>   14h   v1.23.5
[root@k8s-master01 ~]#

10. Install CoreDNS

10.1 The following steps are performed only on master01

10.1.1 Modify the file

cd /root/Kubernetes/CoreDNS/
sed -i "s#KUBEDNS_SERVICE_IP#10.96.0.10#g" coredns.yaml

cat coredns.yaml | grep clusterIP:
  clusterIP: 10.96.0.10

10.1.2 Install

kubectl  create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

11. Install Metrics Server

11.1 The following steps are performed only on master01

11.1.1 Install metrics-server

In newer Kubernetes versions, system resource metrics are collected through metrics-server, which reports memory, disk, CPU, and network usage for nodes and Pods.

# Install metrics-server
cd /root/Kubernetes/metrics-server/
kubectl create -f .

serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

11.1.2 Wait a moment, then check the status

kubectl  top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   154m         1%     1715Mi          21%
k8s-master02   151m         1%     1274Mi          16%
k8s-master03   523m         6%     1345Mi          17%
k8s-node01     84m          1%     671Mi           8%
k8s-node02     73m          0%     727Mi           9%
k8s-node03     96m          1%     769Mi           9%
k8s-node04     68m          0%     673Mi           8%
k8s-node05     82m          1%     679Mi           8%

12. Cluster verification

12.1 Deploy a test Pod

cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:name: busyboxnamespace: default
spec:containers:- name: busyboximage: busybox:1.28command:- sleep- "3600"imagePullPolicy: IfNotPresentrestartPolicy: Always
EOF# 查看kubectl  get pod
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          17s

12.2 Use the Pod to resolve the kubernetes service in the default namespace

kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17h

kubectl exec  busybox -n default -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

12.3 Test cross-namespace resolution

kubectl exec  busybox -n default -- nslookup kube-dns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

12.4 Every node must be able to reach the kubernetes service on port 443 and the kube-dns service on port 53

telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.

curl 10.96.0.10:53
curl: (52) Empty reply from server

12.5 Pods must be able to reach each other

kubectl get po -owide
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          17m   172.27.14.193   k8s-node02   <none>           <none>

kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS      AGE   IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-5dffd5886b-4blh6   1/1     Running   0             77m   172.25.244.193   k8s-master01   <none>           <none>
calico-node-fvbdq                          1/1     Running   1 (75m ago)   77m   192.168.1.81     k8s-master01   <none>           <none>
calico-node-g8nqd                          1/1     Running   0             77m   192.168.1.84     k8s-node01     <none>           <none>
calico-node-mdps8                          1/1     Running   0             77m   192.168.1.85     k8s-node02     <none>           <none>
calico-node-nf4nt                          1/1     Running   0             77m   192.168.1.83     k8s-master03   <none>           <none>
calico-node-sq2ml                          1/1     Running   0             77m   192.168.1.82     k8s-master02   <none>           <none>
calico-typha-8445487f56-mg6p8              1/1     Running   0             77m   192.168.1.85     k8s-node02     <none>           <none>
calico-typha-8445487f56-pxbpj              1/1     Running   0             77m   192.168.1.81     k8s-master01   <none>           <none>
calico-typha-8445487f56-tnssl              1/1     Running   0             77m   192.168.1.84     k8s-node01     <none>           <none>
coredns-5db5696c7-67h79                    1/1     Running   0             63m   172.25.92.65     k8s-master02   <none>           <none>
metrics-server-6bf7dcd649-5fhrw            1/1     Running   0             61m   172.18.195.1     k8s-master03   <none>           <none>

# Enter busybox and ping a pod on another node
kubectl exec -ti busybox -- sh
/ # ping 192.168.1.84
PING 192.168.1.84 (192.168.1.84): 56 data bytes
64 bytes from 192.168.1.84: seq=0 ttl=63 time=0.358 ms
64 bytes from 192.168.1.84: seq=1 ttl=63 time=0.668 ms
64 bytes from 192.168.1.84: seq=2 ttl=63 time=0.637 ms
64 bytes from 192.168.1.84: seq=3 ttl=63 time=0.624 ms
64 bytes from 192.168.1.84: seq=4 ttl=63 time=0.907 ms

# Connectivity here shows the pod can communicate across namespaces and across hosts

12.6 Create three replicas and confirm they land on different nodes (delete them afterwards)

cat > deployments.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
EOF

kubectl  apply -f deployments.yaml
deployment.apps/nginx-deployment created

kubectl  get pod
NAME                               READY   STATUS    RESTARTS   AGE
busybox                            1/1     Running   0          6m25s
nginx-deployment-9456bbbf9-4bmvk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-9rcdk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-dqv8s   1/1     Running   0          8s

# Delete the nginx deployment when you are done
[root@k8s-master01 ~]# kubectl delete -f deployments.yaml

13. Install the dashboard

cd /root/Kubernetes/dashboard/
kubectl  create -f .
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

13.1 Create an administrator user

cat > admin.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

13.2 Apply the yaml file

kubectl apply -f admin.yaml -n kube-system

serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

13.3 Change the dashboard service type to NodePort (skip if it already is)

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

  type: NodePort
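Equivalently, the type can be changed non-interactively with a single patch command instead of the editor:

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'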

13.4 Check the port number

kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.98.201.22   <none>        443:31245/TCP   10m

13.5 Retrieve the token

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-5vfk4
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: fc2535ae-8760-4037-9026-966f03ab9bf9

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1363 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InVOMnhMdHFTRWxweUlfUm93VmhMZTVXZW1FXzFrT01nQ0dTcE5uYjJlNWMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTV2Zms0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJmYzI1MzVhZS04NzYwLTQwMzctOTAyNi05NjZmMDNhYjliZjkiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.HSU1FeqY6pDVoXVIv4Lu27TDhCYHM-FzGsGybYL5QPJ5-P0b3tQqUH9i3AQlisiGPB--jCFT5CUeOeXneOyfV7XkC7frbn6VaQoh51n6ztkIvjUm8Q4xj_LQ2OSFfWlFUnaZsaYTdD-RCldwh63pX362T_FjgDknO4q1wtKZH5qR0mpL1dOjas50gnOSyBY0j-nSPrifhnNq3_GcDLE4LxjuzO1DfGNTEHZ6TojPJ_5ZElMolaYJsVejn2slfeUQEWdiD5AHFZlRd4exODCHyvUhRpzb9jO2rovN2LMqdE_vxBtNgXp19evQB9AgZyMMSmu1Ch2C2UAi4NxjKw8HNA

13.6 Log in to the dashboard

https://192.168.1.81:31245/

eyJhbGciOiJSUzI1NiIsImtpZCI6InYzV2dzNnQzV3hHb2FQWnYzdnlOSmpudmtpVmNjQW5VM3daRi12SFM4dEEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWs1NDVrIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjMzA4MDcxYy00Y2Y1LTQ1ODMtODNhMi1lYWY3ODEyNTEyYjQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.pshvZPi9ZJkXUWuWilcYs1wawTpzV-nMKesgF3d_l7qyTPaK2N5ofzIThd0SjzU7BFNb4_rOm1dw1Be5kLeHjY_YW5lDnM5TAxVPXmZQ0HJ2pAQ0pjQqCHFnPD0bZFIYkeyz8pZx0Hmwcd3ZdC1yztr0ADpTAmMgI9NC2ZFIeoFFo4Ue9ZM_ulhqJQjmgoAlI_qbyjuKCNsWeEQBwM6HHHAsH1gOQIdVxqQ83OQZUuynDQRpqlHHFIndbK2zVRYFA3GgUnTu2-VRQ-DXBFRjvZR5qArnC1f383jmIjGT6VO7l04QJteG_LFetRbXa-T4mcnbsd8XutSgO0INqwKpjw

14. Install ingress

14.1 Write the configuration file

[root@hello ~/yaml]# vim deploy.yaml
[root@hello ~/yaml]#
[root@hello ~/yaml]#
[root@hello ~/yaml]# cat deploy.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: 'true'
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: registry.cn-hangzhou.aliyuncs.com/chenby/controller:v1.1.3
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: nginx
  namespace: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-4.0.10
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: registry.cn-hangzhou.aliyuncs.com/chenby/kube-webhook-certgen:v1.1.1
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.10
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: registry.cn-hangzhou.aliyuncs.com/chenby/kube-webhook-certgen:v1.1.1
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
[root@hello ~/yaml]#

14.2 Enable the default backend: write the configuration file

[root@hello ~/yaml]# vim backend.yaml
[root@hello ~/yaml]# cat backend.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          image: registry.cn-hangzhou.aliyuncs.com/chenby/defaultbackend-amd64:1.5
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    app.kubernetes.io/name: default-http-backend
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
[root@hello ~/yaml]#

14.3 Install a test application

[root@hello ~/yaml]# vim ingress-demo-app.yaml
[root@hello ~/yaml]#
[root@hello ~/yaml]# cat ingress-demo-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
        - name: hello-server
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
          ports:
            - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
        - image: nginx
          name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
    - port: 8000
      protocol: TCP
      targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
    - port: 8000
      protocol: TCP
      targetPort: 9000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
    - host: "hello.chenby.cn"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: hello-server
                port:
                  number: 8000
    - host: "demo.chenby.cn"
      http:
        paths:
          - pathType: Prefix
            path: "/nginx"
            backend:
              service:
                name: nginx-demo
                port:
                  number: 8000
[root@hello ~/yaml]#
[root@hello ~/yaml]# kubectl  get ingress
NAME               CLASS    HOSTS                            ADDRESS        PORTS   AGE
ingress-demo-app   <none>   app.demo.com                     192.168.1.11   80      20m
ingress-host-bar   nginx    hello.chenby.cn,demo.chenby.cn   192.168.1.11   80      2m17s
[root@hello ~/yaml]#
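
Note that hello.chenby.cn and demo.chenby.cn must resolve to a node running the ingress controller before the rules can be exercised; for a quick test, an /etc/hosts entry on the client works (the node IP here is just an example from this cluster):

echo "192.168.1.84 hello.chenby.cn demo.chenby.cn" >> /etc/hosts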

14.4 Apply the manifests

root@hello:~# kubectl  apply -f deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
root@hello:~# kubectl  apply -f backend.yaml
deployment.apps/default-http-backend created
service/default-http-backend created
root@hello:~# kubectl  apply -f ingress-demo-app.yaml
deployment.apps/hello-server created
deployment.apps/nginx-demo created
service/nginx-demo created
service/hello-server created
ingress.networking.k8s.io/ingress-host-bar created
root@hello:~#
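
Before testing, it can help to wait until the controller pod reports Ready; a minimal check using the component label from the manifest above:

kubectl -n ingress-nginx get pods
kubectl -n ingress-nginx wait --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=120s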

14.5 Check the ingress service ports

[root@hello ~/yaml]# kubectl  get svc -A | grep ingress
default         ingress-demo-app                     ClusterIP   10.68.231.41    <none>        80/TCP                       51m
ingress-nginx   ingress-nginx-controller             NodePort    10.68.93.71     <none>        80:32746/TCP,443:30538/TCP   32m
ingress-nginx   ingress-nginx-controller-admission   ClusterIP   10.68.146.23    <none>        443/TCP                      32m
[root@hello ~/yaml]#
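
With the HTTP NodePort shown above (32746) and the Ingress rules from 14.3, the routing can be exercised against any node by sending the matching Host header (the node IP below is an example):

curl -H "Host: hello.chenby.cn" http://192.168.1.84:32746/
curl -H "Host: demo.chenby.cn" http://192.168.1.84:32746/nginx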

15. Install command-line auto-completion

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Appendix:

Configure kube-controller-manager to sign certificates with a 100-year validity (harmless to set even if your version ignores it)

vim /usr/lib/systemd/system/kube-controller-manager.service
# Under [Service], add the flag:
#   --cluster-signing-duration=876000h0m0s \
# then reload and restart:
systemctl daemon-reload
systemctl restart kube-controller-manager
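
To confirm whether newly issued certificates actually pick up the longer validity, inspect one signed by the controller manager after the next rotation; the path below is an assumption based on the kubelet's default rotated-certificate location:

openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -dates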

Prevent vulnerability-scan findings (restrict the kubelet's TLS cipher suites)

vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.kubeconfig --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
Environment="KUBELET_EXTRA_ARGS=--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384    --image-pull-progress-deadline=30m"
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
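
To verify that the kubelet port (10250) now only offers the two configured cipher suites, an external scan can enumerate them; nmap's ssl-enum-ciphers script is one option (the node IP is an example):

nmap --script ssl-enum-ciphers -p 10250 192.168.1.84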

Reserve resources (size as needed)

vim /etc/kubernetes/kubelet-conf.yml

rotateServerCertificates: true
allowedUnsafeSysctls:
  - "net.core*"
  - "net.ipv4.*"
kubeReserved:
  cpu: "1"
  memory: 1Gi
  ephemeral-storage: 10Gi
systemReserved:
  cpu: "1"
  memory: 1Gi
  ephemeral-storage: 10Gi
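
After restarting the kubelet, the node's Allocatable should shrink by the reserved amounts; one way to confirm (the node name is an example):

kubectl describe node k8s-node01 | grep -A 7 "Allocatable:"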

Keep data disks separate from the system disk, and run etcd on SSD storage.

https://www.oiox.cn/

https://www.chenby.cn/

https://cby-chen.github.io/

https://blog.csdn.net/qq_33921750

https://my.oschina.net/u/3981543

https://www.zhihu.com/people/chen-bu-yun-2

https://segmentfault.com/u/hppyvyv6/articles

https://juejin.cn/user/3315782802482007

https://cloud.tencent.com/developer/column/93230

https://www.jianshu.com/u/0f894314ae2c

https://www.toutiao.com/c/user/token/MS4wLjABAAAAeqOrhjsoRZSj7iBJbjLJyMwYT5D0mLOgCoo4pEmpr4A/

All content can be found by searching 《小陈运维》 on CSDN, GitHub, Zhihu, OSChina, SegmentFault, Juejin, Jianshu, Tencent Cloud, Toutiao, or my personal blog.

Articles are mainly published on the WeChat official account 《Linux运维交流社区》.
