Basic Configuration

Cluster Network Segments

192.168.1.0/24   host network
10.244.0.0/16    Service network
172.16.0.0/12    Pod network

Testing: use a recent release listed on the official site, e.g. v1.23 or v1.22
Production: v1.23.5 or later (pick a patch release greater than 5)

k8s-ha-install

cd /root/ ; git clone https://github.com/dotbalo/k8s-ha-install.git
git checkout manual-installation-v1.23.x

CentOS 7 Base Setup

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum -y install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
systemctl disable --now firewalld
systemctl disable --now dnsmasq
#systemctl disable --now NetworkManager
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
yum -y  install ntpdate
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
ulimit -SHn 65535
cat >>/etc/security/limits.conf<<EOF
# append the following at the end
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
yum update -y --exclude=kernel* && reboot   # CentOS 7 must be updated; on CentOS 8, update as needed

Add ntpdate to crontab

*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
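
A hedged way to install this entry non-interactively (assumes root's crontab):

(crontab -l 2>/dev/null; echo '*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com') | crontab -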

Configure hosts

cat >>/etc/hosts<<EOF
192.168.1.10 k8s-master-lb # if this is not an HA cluster, use Master01's IP here
192.168.1.11 k8s-master01
192.168.1.12 k8s-master02
192.168.1.13 k8s-master03
192.168.1.14 k8s-node01
192.168.1.15 k8s-node02
EOF

Upgrade the Kernel

CentOS 7 needs a 4.18+ kernel; this guide upgrades to 4.19.

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
# List the latest kernels available:
[root@k8s-node01 ~]# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
Available Packages
elrepo-release.noarch                                              7.0-6.el7.elrepo                                      elrepo-kernel
kernel-lt.x86_64                                                   5.4.225-1.el7.elrepo                                  elrepo-kernel
kernel-lt-devel.x86_64                                             5.4.225-1.el7.elrepo                                  elrepo-kernel
kernel-lt-doc.noarch                                               5.4.225-1.el7.elrepo                                  elrepo-kernel
kernel-lt-headers.x86_64                                           5.4.225-1.el7.elrepo                                  elrepo-kernel
kernel-lt-tools.x86_64                                             5.4.225-1.el7.elrepo                                  elrepo-kernel
kernel-lt-tools-libs.x86_64                                        5.4.225-1.el7.elrepo                                  elrepo-kernel
kernel-lt-tools-libs-devel.x86_64                                  5.4.225-1.el7.elrepo                                  elrepo-kernel
kernel-ml.x86_64                                                   6.0.9-1.el7.elrepo                                    elrepo-kernel
kernel-ml-devel.x86_64                                             6.0.9-1.el7.elrepo                                    elrepo-kernel
cd /root
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
cd /root && yum localinstall -y kernel-ml*

Change the default boot kernel

grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
grubby --default-kernel   # verify the default kernel is 4.19: /boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
reboot
uname -a    # reboot all nodes, then verify the running kernel is 4.19

Passwordless SSH from Master01

ssh-keygen -t rsa -b 2048 -N '' -f /root/.ssh/id_rsa
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp /etc/hosts $i:/etc/hosts;done

Copy the kernel packages from master01 to the other nodes; the upgrade steps there are the same as on master01.

for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done
cd /root && yum localinstall -y kernel-ml*

Install ipvsadm and related packages on all nodes:

yum install ipvsadm ipset sysstat conntrack libseccomp -y

Configure the ipvs modules

On kernels 4.19+, nf_conntrack_ipv4 was renamed to nf_conntrack; on kernels below 4.19, load nf_conntrack_ipv4 instead:
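
For example, a small hedged sketch that picks the right module name from the running kernel version (illustrative only):

KVER=$(uname -r | awk -F. '{printf "%d%03d", $1, $2}')   # e.g. 4.19.x -> 4019
if [ "$KVER" -ge 4019 ]; then modprobe -- nf_conntrack; else modprobe -- nf_conntrack_ipv4; fi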

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
cat >>/etc/modules-load.d/ipvs.conf<<EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
systemctl enable --now systemd-modules-load.service

Enable kernel parameters required by the Kubernetes cluster

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.conf.all.route_localnet = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
reboot
lsmod | grep  -e ip_vs -e nf_conntrack

Kubernetes Configuration & Runtime Installation

If you are installing a version below 1.24, either Docker or Containerd works; for 1.24 and above, use Containerd as the runtime.

Required on all nodes

yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y

Docker itself does not need to be started; only Containerd needs to be configured and started.

Configure the kernel modules Containerd needs (all nodes):

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
modprobe -- overlay
modprobe -- br_netfilter

Configure the kernel parameters Containerd needs

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system

Generate Containerd's configuration file:

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

Find containerd.runtimes.runc.options and add SystemdCgroup = true under it (if the key already exists, modify it in place; a duplicate key will cause an error):

vim /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  ...
  SystemdCgroup = true
# Also point sandbox_image at a pause image matching your setup, e.g. change
# sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
# to:
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
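
If you prefer to script the two edits instead of using vim, a hedged sed sketch (assumes the default dump contains SystemdCgroup = false and a single sandbox_image line; verify the file afterwards):

sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml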
systemctl daemon-reload
systemctl enable --now containerd

Configure the runtime endpoint for the crictl client

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
ctr image ls   # verify containerd is responding
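
crictl can be used for the same check once the endpoint above is configured (both are standard crictl subcommands):

crictl info    # runtime status and CNI configuration, as JSON
crictl ps -a   # containers (empty on a fresh install)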

Docker as the Runtime

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{"registry-mirrors": ["https://registry.docker-cn.com","http://hub-mirror.c.163.com","https://docker.mirrors.ustc.edu.cn"],"exec-opts": ["native.cgroupdriver=systemd"],"max-concurrent-downloads": 10,"max-concurrent-uploads": 5,"log-opts": {"max-size": "300m","max-file": "2"},"live-restore": true
}
EOFsystemctl daemon-reload && systemctl enable --now docker

K8s and etcd Certificate Preparation

Version changes are tracked at https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG; download a stable (non-beta) release.

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md

Extract the Kubernetes binaries

wget https://dl.k8s.io/v1.23.0/kubernetes-server-linux-amd64.tar.gz
tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
ls /usr/local/bin
kube-apiserver  kube-controller-manager  kubectl  kubelet  kube-proxy  kube-scheduler

Download the etcd package

wget https://github.com/etcd-io/etcd/releases/download/v3.5.1/etcd-v3.5.1-linux-amd64.tar.gz
tar -zxvf etcd-v3.5.1-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.1-linux-amd64/etcd{,ctl}

Check versions

[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.23.0
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.1
API version: 3.5

Send the binaries to the other nodes

MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'
for NODE in $MasterNodes; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
for NODE in $WorkNodes; do     scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done

Generate Certificates

wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
# Create the etcd certificate directory on all Master nodes
mkdir /etc/etcd/ssl -p
# On all nodes
mkdir -p /etc/kubernetes/pki
mkdir -p /opt/cni/bin

Generate etcd certificates on the Master01 node

cd /root/;git clone https://github.com/dotbalo/k8s-ha-install.git
git branch -a
git checkout manual-installation-v1.23.x

Generate the etcd CA certificate and its key

etcd-ca-csr.json

cat etcd-ca-csr.json
{"CN": "etcd","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "etcd","OU": "Etcd Security"}],"ca": {"expiry": "876000h"}
}
cd /root/k8s-ha-install/pki
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
ls /etc/etcd/ssl/
etcd-ca.csr  etcd-ca-key.pem  etcd-ca.pem

Generate the etcd server certificate (ca-config.json is the signing profile)

ca-config.json

cat ca-config.json
{"signing": {"default": {"expiry": "876000h"},"profiles": {"kubernetes": {"usages": ["signing","key encipherment","server auth","client auth"],"expiry": "876000h"}}}
}
cfssl gencert \
  -ca=/etc/etcd/ssl/etcd-ca.pem \
  -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
  -config=ca-config.json \
  -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.1.11,192.168.1.12,192.168.1.13 \
  -profile=kubernetes \
  etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
# -hostname lists the etcd nodes' hostnames and IPs; you can reserve a few spares
ls  /etc/etcd/ssl/
...   etcd.csr  etcd-key.pem  etcd.pem

Copy the certificates to the other etcd nodes

for NODE in k8s-master02 k8s-master03; do
    ssh $NODE "mkdir -p /etc/etcd/ssl"
    for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
        scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
    done
done

Kubernetes certificates

cat ca-csr.json
{"CN": "kubernetes","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "Kubernetes","OU": "Kubernetes-manual"}],"ca": {"expiry": "876000h"}
}
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
ls  /etc/kubernetes/pki/
ca.csr  ca-key.pem  ca.pem
cat apiserver-csr.json
{"CN": "kube-apiserver","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "Kubernetes","OU": "Kubernetes-manual"}]
}
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -hostname=10.244.0.1,192.168.1.10,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.1.11,192.168.1.12,192.168.1.13 \
  -profile=kubernetes \
  apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
# In -hostname, the first IP is the first address of the Kubernetes Service network; the second is the HA VIP
# (for a non-HA setup, use Master01's IP). You can reserve a few extra IPs.
ls  /etc/kubernetes/pki/
apiserver.csr  apiserver-key.pem  apiserver.pem   ...

Generate the apiserver aggregation (front-proxy) certificates

cat front-proxy-ca-csr.json
{"CN": "kubernetes","key": {"algo": "rsa","size": 2048},"ca": {"expiry": "876000h"}
}
cfssl gencert   -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
ls /etc/kubernetes/pki/
front-proxy-ca.csr  front-proxy-ca-key.pem    front-proxy-ca.pem  ...
cat front-proxy-client-csr.json
{"CN": "front-proxy-client","key": {"algo": "rsa","size": 2048}
}
cfssl gencert   -ca=/etc/kubernetes/pki/front-proxy-ca.pem   -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem   -config=ca-config.json   -profile=kubernetes   front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
ls /etc/kubernetes/pki/
front-proxy-client.csr  front-proxy-client-key.pem  front-proxy-client.pem

Generate the controller-manager certificate

manager-csr.json

cat manager-csr.json
{"CN": "system:kube-controller-manager","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "system:kube-controller-manager","OU": "Kubernetes-manual"}]
}
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
ls /etc/kubernetes/pki/
controller-manager.csr  controller-manager-key.pem   controller-manager.pem

Generate kubeconfig files

Note: if this is not an HA cluster, change 192.168.1.10:8443 to Master01's address and 8443 to the apiserver port (default 6443).

# set-cluster: define a cluster entry
kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=https://192.168.1.10:8443 \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# set-credentials: define a user entry
kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
    --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# set-context: define a context tying the user to the cluster
kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# use-context: make it the default context
kubectl config use-context system:kube-controller-manager@kubernetes \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
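
An optional hedged sanity check that the kubeconfig was assembled correctly:

kubectl config view --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig --minify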

scheduler certificate

cat  scheduler-csr.json
{"CN": "system:kube-scheduler","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "system:kube-scheduler","OU": "Kubernetes-manual"}]
}
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
ls /etc/kubernetes/pki/
scheduler.csr scheduler-key.pem  scheduler.pem

Note: if this is not an HA cluster, change 192.168.1.10:8443 to Master01's address and 8443 to the apiserver port (default 6443).

kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=https://192.168.1.10:8443 \
    --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
    --client-certificate=/etc/kubernetes/pki/scheduler.pem \
    --client-key=/etc/kubernetes/pki/scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-context system:kube-scheduler@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-scheduler \
    --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config use-context system:kube-scheduler@kubernetes \
    --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Administrator credentials: admin.kubeconfig

admin-csr.json

{"CN": "admin","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "system:masters","OU": "Kubernetes-manual"}]
}
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
ls /etc/kubernetes/pki/
admin.csr    admin-key.pem   admin.pem

If this is not an HA cluster, change 192.168.1.10:8443 to Master01's address and 8443 to the apiserver port (default 6443).

kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=https://192.168.1.10:8443 \
    --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-credentials kubernetes-admin \
    --client-certificate=/etc/kubernetes/pki/admin.pem \
    --client-key=/etc/kubernetes/pki/admin-key.pem \
    --embed-certs=true \
    --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-context kubernetes-admin@kubernetes \
    --cluster=kubernetes \
    --user=kubernetes-admin \
    --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config use-context kubernetes-admin@kubernetes \
    --kubeconfig=/etc/kubernetes/admin.kubeconfig

Create the ServiceAccount key pair (ServiceAccount Key → secret)

openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
ls -lrt /etc/kubernetes/pki/
-rw-r--r-- 1 root root 1675 Oct  9 00:39 sa.key
-rw-r--r-- 1 root root  451 Oct  9 00:40 sa.pub
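
An optional hedged check that the key pair matches (re-derives the public key and compares):

diff <(openssl rsa -in /etc/kubernetes/pki/sa.key -pubout 2>/dev/null) /etc/kubernetes/pki/sa.pub && echo "sa key pair OK"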

Send the certificates to the other master nodes

for NODE in k8s-master02 k8s-master03
do
    echo $NODE
    for FILE in $(ls /etc/kubernetes/pki | grep -v etcd)
    do
        scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}
    done
    for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig
    do
        scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}
    done
done

Inspect the certificate files

ls /etc/kubernetes/pki/
admin.csr      apiserver.csr      ca.csr      controller-manager.csr      front-proxy-ca.csr      front-proxy-client.csr      sa.key         scheduler-key.pem
admin-key.pem  apiserver-key.pem  ca-key.pem  controller-manager-key.pem  front-proxy-ca-key.pem  front-proxy-client-key.pem  sa.pub         scheduler.pem
admin.pem      apiserver.pem      ca.pem      controller-manager.pem      front-proxy-ca.pem      front-proxy-client.pem      scheduler.csr
[root@k8s-master01 pki]# ls /etc/kubernetes/pki/ |wc -l
23

Kubernetes System Component Configuration

etcd

etcd configuration files (one per master node; the name and IPs differ)

cat  >/etc/etcd/etcd.config.yml<<EOF
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.11:2380'
listen-client-urls: 'https://192.168.1.11:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.11:2380'
advertise-client-urls: 'https://192.168.1.11:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.11:2380,k8s-master02=https://192.168.1.12:2380,k8s-master03=https://192.168.1.13:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
cat  >/etc/etcd/etcd.config.yml<<EOF
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.12:2380'
listen-client-urls: 'https://192.168.1.12:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.12:2380'
advertise-client-urls: 'https://192.168.1.12:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.11:2380,k8s-master02=https://192.168.1.12:2380,k8s-master03=https://192.168.1.13:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
cat  >/etc/etcd/etcd.config.yml<<EOF
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.13:2380'
listen-client-urls: 'https://192.168.1.13:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.13:2380'
advertise-client-urls: 'https://192.168.1.13:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.11:2380,k8s-master02=https://192.168.1.12:2380,k8s-master03=https://192.168.1.13:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

etcd service file (identical on all Master nodes)

cat  >/usr/lib/systemd/system/etcd.service<<EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF

Create the etcd certificate directory on all Master nodes and start etcd

mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd

Check the etcd status

export ETCDCTL_API=3
etcdctl --endpoints="192.168.1.13:2379,192.168.1.12:2379,192.168.1.11:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|     ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.1.13:2379 | 40ba37809e1a423f |   3.5.1 |   20 kB |      true |      false |         2 |          9 |                  9 |        |
| 192.168.1.12:2379 |  ac7e57d44f030e8 |   3.5.1 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 192.168.1.11:2379 | ace8d5b0766b3d92 |   3.5.1 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
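
etcdctl also provides endpoint health, which should report all three endpoints healthy:

etcdctl --endpoints="192.168.1.13:2379,192.168.1.12:2379,192.168.1.11:2379" \
    --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
    --cert=/etc/kubernetes/pki/etcd/etcd.pem \
    --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
    endpoint health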

keepalived and haproxy HA Configuration

Install keepalived and haproxy on all Master nodes

yum install keepalived haproxy -y

The haproxy configuration is identical on all Master nodes

cat >/etc/haproxy/haproxy.cfg<<EOF
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01    192.168.1.11:6443  check
  server k8s-master02    192.168.1.12:6443  check
  server k8s-master03    192.168.1.13:6443  check
EOF

The keepalived health-check script is identical on all nodes

vim /etc/keepalived/check_apiserver.sh

#!/bin/bash
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

chmod +x /etc/keepalived/check_apiserver.sh

The keepalived configuration differs on each Master node

# k8s-master01:
cat >/etc/keepalived/keepalived.conf<<EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.1.11
    virtual_router_id 51
    priority 101
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.10
    }
    track_script {
        chk_apiserver
    }
}
EOF
# On k8s-master02 and k8s-master03 the file is identical except:
#   state BACKUP, priority 100, and mcast_src_ip 192.168.1.12 / 192.168.1.13 respectively.
systemctl daemon-reload  && systemctl enable --now haproxy   &&  systemctl enable --now keepalived
ping 192.168.1.10
ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:0e:51:60 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.11/24 brd 192.168.1.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.1.10/32 scope global ens33   # the VIP may take a moment to appear
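
A further hedged check that the VIP answers on the haproxy port (haproxy must be running; a refused connection points at a haproxy or keepalived problem):

telnet 192.168.1.10 8443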

Kubernetes Component Configuration

Create the required directories on all nodes

mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

Apiserver Configuration

Create the kube-apiserver service on all Master nodes (only --advertise-address differs per node)

vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logtostderr=true \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --insecure-port=0 \
      --advertise-address=192.168.1.11 \
      --service-cluster-ip-range=10.244.0.0/16 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
On k8s-master02 and k8s-master03 the unit file is identical except for --advertise-address, which is 192.168.1.12 and 192.168.1.13 respectively.
systemctl daemon-reload  && systemctl enable --now kube-apiserver  && systemctl status kube-apiserver.service

kube-controller-manager service

/usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=172.16.0.0/12 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --node-cidr-mask-size=24

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
systemctl daemon-reload  && systemctl enable --now kube-controller-manager && systemctl status kube-controller-manager

Configure the kube-scheduler service on all Master nodes

/usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --leader-elect=true \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
systemctl daemon-reload  && systemctl enable --now kube-scheduler && systemctl status kube-scheduler

TLS Bootstrapping Configuration

If this is not an HA cluster, change 192.168.1.10:8443 to Master01's address and 8443 to the apiserver port (default 6443).

cd /root/k8s-ha-install/bootstrap
cat bootstrap.secret.yaml
...
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: c8ad9c
  token-secret: 2e4d610cf3e7426e    # token-id and token-secret concatenate to form --token below

kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=https://192.168.1.10:8443 \
    --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-credentials tls-bootstrap-token-user \
    --token=c8ad9c.2e4d610cf3e7426e \
    --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-context tls-bootstrap-token-user@kubernetes \
    --cluster=kubernetes \
    --user=tls-bootstrap-token-user \
    --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config use-context tls-bootstrap-token-user@kubernetes \
    --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
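
The kubectl commands below need cluster credentials. A common step at this point (an assumption here, using the admin.kubeconfig generated earlier) is to install it as the default kubeconfig:

mkdir -p /root/.kube
cp /etc/kubernetes/admin.kubeconfig /root/.kube/config   # assumed path; lets kubectl reach the apiserver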
[root@vm1 bootstrap]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true","reason":""}
etcd-2               Healthy   {"health":"true","reason":""}
etcd-0               Healthy   {"health":"true","reason":""}
kubectl create -f bootstrap.secret.yaml

Node Configuration

Copy the certificates

cd /etc/kubernetes/
for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
    ssh $NODE mkdir -p /etc/kubernetes/pki /etc/etcd/ssl
    for FILE in etcd-ca.pem etcd.pem etcd-key.pem; do
        scp /etc/etcd/ssl/$FILE $NODE:/etc/etcd/ssl/
    done
    for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
        scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
    done
done

Kubelet Configuration

# Create the required directories on all nodes
mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

Configure the kubelet service

cat   >/usr/lib/systemd/system/kubelet.service<<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kubelet

Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

Take a VM snapshot here to make it easy to test the different runtimes.

If the runtime is Containerd, use the following kubelet configuration:

vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock --cgroup-driver=systemd"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

If the runtime is Docker, use the kubelet configuration at the end of this article instead.

Create the kubelet configuration file

If you changed the Kubernetes Service network, update clusterDNS in kubelet-conf.yml to the tenth address of the Service network, e.g. 10.244.0.10 (a small sketch for deriving it follows).
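
An illustrative shell sketch for deriving that address (SVC_CIDR is a hypothetical variable; works for a CIDR like the default 10.244.0.0/16):

SVC_CIDR=10.244.0.0/16
CLUSTER_DNS=$(echo $SVC_CIDR | awk -F'[./]' '{print $1"."$2"."$3"."$4+10}')
echo $CLUSTER_DNS    # 10.244.0.10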

vim /etc/kubernetes/kubelet-conf.yml

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.244.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
# Before starting kubelet, make sure swap is off and clocks are synced on every node (repeating the earlier base setup):
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
yum -y  install ntpdate
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com

Start kubelet on all nodes

Sync the time first; in testing, nodes with unsynced clocks caused failures.

systemctl daemon-reload  && systemctl enable --now kubelet  && systemctl status kubelet
[root@vm1 ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE   VERSION
k8s-master01   NotReady   <none>   14s   v1.23.0
k8s-master02   NotReady   <none>   10s   v1.23.0
k8s-master03   NotReady   <none>   8s    v1.23.0
k8s-node01     NotReady   <none>   5s    v1.23.0
k8s-node02     NotReady   <none>   35s   v1.23.0
# NotReady is expected at this point; nodes turn Ready once the CNI plugin (Calico) is installed below.

kube-proxy Configuration

If this is not an HA cluster, change 192.168.1.10:8443 to Master01's address and 8443 to the apiserver port (default 6443).

cd /root/k8s-ha-install
kubectl -n kube-system create serviceaccount kube-proxy
kubectl create clusterrolebinding system:kube-proxy \
    --clusterrole system:node-proxier \
    --serviceaccount kube-system:kube-proxy
SECRET=$(kubectl -n kube-system get sa/kube-proxy --output=jsonpath='{.secrets[0].name}')
JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET --output=jsonpath='{.data.token}' | base64 -d)
PKI_DIR=/etc/kubernetes/pki
K8S_DIR=/etc/kubernetes
kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=https://192.168.1.10:8443 \
    --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig
kubectl config set-credentials kubernetes \
    --token=${JWT_TOKEN} \
    --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-context kubernetes \
    --cluster=kubernetes \
    --user=kubernetes \
    --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config use-context kubernetes \
    --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

Send the kubeconfig to the other nodes

for NODE in k8s-master02 k8s-master03; do
    scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
done
for NODE in k8s-node01 k8s-node02; do
    scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
done

vim /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
    --config=/etc/kubernetes/kube-proxy.yaml \
    --v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

vim /etc/kubernetes/kube-proxy.yaml

If you changed the cluster's Pod network, update clusterCIDR in kube-proxy.yaml to your Pod network:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
systemctl daemon-reload && systemctl enable --now kube-proxy  && systemctl status kube-proxy
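
A hedged check that kube-proxy really came up in ipvs mode (the /proxyMode endpoint is served on the metrics address configured above):

curl 127.0.0.1:10249/proxyMode   # expected output: ipvs
ipvsadm -ln                      # list the IPVS virtual servers kube-proxy created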

Install Calico

cd /root/k8s-ha-install/calico/
sed -i "s#POD_CIDR#172.16.0.0/12#g" calico.yaml
grep "IPV4POOL_CIDR" calico.yaml  -A 1
...
- name: CALICO_IPV4POOL_CIDR
  value: "172.16.0.0/12"
# Images referenced in calico.yaml (line numbers within the manifest):
#   4106        typha:v3.22.0
#   4222, 4290  cni:v3.22.0
#   4290        pod2daemon-flexvol:v3.22.0
#   4301        node:v3.22.0
#   4526        kube-controllers:v3.22.0
imagePullPolicy: IfNotPresent
kubectl apply -f calico.yaml     # review the images and pull policy before applying; change as needed
kubectl get po -n kube-system
NAME                                       READY   STATUS     RESTARTS   AGE
calico-kube-controllers-6f6595874c-2d6pz   0/1     Pending    0          3m18s
calico-node-5w4d4                          0/1     Init:0/3   0          3m18s
calico-typha-6b6cf8cbdf-ncdgk              0/1     Pending    0          3m18s
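
The pods stay Pending/Init until the images are pulled; a simple way to watch progress (standard kubectl flags):

kubectl get po -n kube-system -o wide -w   # press Ctrl-C once all pods are Running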

Install CoreDNS

If you changed the Kubernetes Service network, change CoreDNS's Service IP to the tenth IP of the Service network.

COREDNS_SERVICE_IP=`kubectl get svc | grep kubernetes | awk '{print $3}'`0   # appends "0" to the kubernetes Service IP: 10.244.0.1 -> 10.244.0.10
echo $COREDNS_SERVICE_IP
10.244.0.10
sed -i "s#KUBEDNS_SERVICE_IP#${COREDNS_SERVICE_IP}#g" CoreDNS/coredns.yaml

Install the latest CoreDNS

COREDNS_SERVICE_IP=`kubectl get svc | grep kubernetes | awk '{print $3}'`0
git clone https://github.com/coredns/deployment.git
cd deployment/kubernetes
./deploy.sh -s -i ${COREDNS_SERVICE_IP} | kubectl apply -f -
# Check the status:
kubectl get po -n kube-system -l k8s-app=kube-dns
NAME                      READY   STATUS    RESTARTS   AGE
coredns-5db5696c7-cqcbr   1/1     Running   0          8m
# image: coredns:1.8.6

Install Metrics Server

cd /root/k8s-ha-install/metrics-server
kubectl  create -f .
kubectl top node
# image: metrics-server:0.5.0

Cluster Validation

Install busybox:
cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF
1. Pods must be able to resolve Services
2. Pods must be able to resolve Services in other namespaces
3. Every node must be able to reach the kubernetes Service on port 443 and the kube-dns Service on port 53
4. Pod-to-Pod connectivity must work:
   a) within the same namespace
   b) across namespaces
   c) across machines
(a check sketch follows the nslookup example below)
kubectl exec  busybox -n default -- nslookup kubernetes
Server:    10.244.0.10
Address 1: 10.244.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.244.0.1 kubernetes.default.svc.cluster.local
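
A hedged sketch of the remaining checks from the list above (the IPs assume the default networks used in this guide):

telnet 10.244.0.1 443     # kubernetes Service, run from every node
telnet 10.244.0.10 53     # kube-dns Service, run from every node
kubectl exec busybox -n default -- nslookup kube-dns.kube-system   # cross-namespace resolution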
---
apiVersion: v1
kind: Service
metadata:
  name: ng-service
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: myapp-ng
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ng-example
spec:
  selector:
    matchLabels:
      app: myapp-ng
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp-ng
    spec:
      containers:
      - name: ng
        image: nginx
        ports:
        - protocol: TCP
          containerPort: 80
      restartPolicy: Always

Install the Dashboard

cd /root/k8s-ha-install/dashboard/
kubectl create -f .
# images: dashboard:v2.4.0, metrics-scraper:v1.0.7

Install the latest version

Official GitHub: https://github.com/kubernetes/dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml
Create an admin user:
vim admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
kubectl apply -f admin.yaml -n kube-system
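
To log in to the dashboard you need the admin-user token. A common way to print it on 1.23 (assuming the ServiceAccount's token secret was auto-created, as it still is in this version):

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')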

Docker Configuration (not needed when Containerd is the runtime)

vim /etc/docker/daemon.json
{"registry-mirrors": ["https://registry.docker-cn.com","http://hub-mirror.c.163.com","https://docker.mirrors.ustc.edu.cn"],"exec-opts": ["native.cgroupdriver=systemd"],"max-concurrent-downloads": 10,"max-concurrent-uploads": 5,"log-opts": {"max-size": "300m","max-file": "2"},"live-restore": true
}
vim /usr/lib/systemd/system/kube-controller-manager.service
# --feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true \
--cluster-signing-duration=876000h0m0s \

vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.kubeconfig --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
Environment="KUBELET_EXTRA_ARGS=--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --image-pull-progress-deadline=30m"
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

vim /etc/kubernetes/kubelet-conf.yml

rotateServerCertificates: true
allowedUnsafeSysctls:
- "net.core*"
- "net.ipv4.*"
kubeReserved:
  cpu: "1"
  memory: 1Gi
  ephemeral-storage: 10Gi
systemReserved:
  cpu: "1"
  memory: 1Gi
  ephemeral-storage: 10Gi
