Kubernetes Binary Deployment Guide: Cluster

Table of Contents

  • Kubernetes Binary Deployment Guide: Cluster
  • 1. System Planning
    • 1.1 Component Layout
    • 1.2 Deployment Topology
    • 1.3 System Environment
  • 2. Initialize the System Environment
    • 2.1 Upgrade the Kernel
    • 2.2 Kernel Tuning
    • 2.3 Enable the IPVS Kernel Modules
    • 2.4 Install Common Tools
    • 2.5 Install the NTP Time Service
    • 2.6 Disable the Swap Partition
    • 2.7 Disable the Firewall and SELinux
    • 2.8 Set Up Passwordless SSH Between Cluster Nodes
  • 3. Deploy the Container Runtime
    • 3.1 Deploy Docker
    • 3.2 Verify the Docker Environment
  • 4. Deploy Kubernetes
    • 4.1 Prepare the Cluster Root Certificates
      • 4.1.1 Install cfssl
      • 4.1.2 Create the JSON config for the CA certificate signing request (CSR)
      • 4.1.3 Generate the CA certificate, private key, and CSR
      • 4.1.4 Create the config file based on the root certificate
      • 4.1.6 Create the metrics-server certificate
        • 4.1.6.1 Create the CSR used by metrics-server
        • 4.1.6.2 Generate the metrics-server certificate and private key
        • 4.1.6.3 Distribute the certificates
      • 4.1.7 Create the Node kubeconfig files
        • 4.1.7.1 Create the TLS Bootstrapping Token, kubelet kubeconfig, and kube-proxy kubeconfig
        • 4.1.7.2 Generate the kubeconfig files
        • 4.1.7.3 Distribute the kubeconfig files
    • 4.2 Deploy the etcd Cluster
      • 4.2.1 Base Setup
      • 4.2.2 Distribute the Certificates
      • 4.2.3 Write the etcd configuration script etcd-server-startup.sh
      • 4.2.4 Run etcd-server-startup.sh to generate the configuration
      • 4.2.5 Check that etcd01 is running
      • 4.2.6 Distribute etcd-server-startup.sh to master02 and master03
      • 4.2.7 Start etcd02 and check its status
      • 4.2.8 Start etcd03 and check its status
      • 4.2.9 Check the etcd cluster status
    • 4.3 Deploy the API Server
      • 4.3.1 Download and unpack the Kubernetes server binaries
      • 4.3.4 Configure and run the master components
        • 4.3.4.1 Create the kube-apiserver configuration script
        • 4.3.4.2 Create the kube-controller-manager configuration script
        • 4.3.4.3 Create the kube-scheduler configuration script
      • 4.3.5 Copy the binaries
      • 4.3.7 Add execute permission and distribute to master02 and master03
      • 4.3.8 Run the configuration scripts
        • 4.3.8.1 Run on k8smaster01
        • 4.3.8.2 Run on k8smaster02
        • 4.3.8.3 Run on k8smaster03
      • 4.3.9 Verify the cluster health status
    • 4.4 Deploy kubelet
      • 4.4.1 Configure automatic kubelet certificate renewal and create the Node authorization user
        • 4.4.1.1 Create the `Node` authorization user `kubelet-bootstrap`
        • 4.4.1.2 Create the ClusterRole that auto-approves the related CSR requests
        • 4.4.1.3 Auto-approve the kubelet-bootstrap user's first TLS bootstrapping CSR request
        • 4.4.1.4 Auto-approve CSR requests from the system:nodes group to renew the kubelet client certificate used to communicate with the apiserver
        • 4.4.1.5 Auto-approve CSR requests from the system:nodes group to renew the kubelet 10250 API port (serving) certificate
      • 4.4.2 Prepare the base image on every node
      • 4.4.3 Create the `kubelet` configuration script
      • 4.4.4 Start `kubelet` on the master nodes
      • 4.4.5 Start `kubelet` on the worker nodes
    • 4.5 Deploy kube-proxy
      • 4.5.6 Create the `kube-proxy` configuration script
      • 4.5.7 Start `kube-proxy` on the master nodes
      • 4.5.8 Start `kube-proxy` on the worker nodes
    • 4.6 Check the status
    • 4.7 Additional settings
      • 4.7.1 Add taints to the master nodes
      • 4.7.2 Command completion
  • 5. Deploy the Network Components
    • 5.1 Deploy the Calico network (BGP mode)
      • 5.1.1 Adjust part of the configuration
      • 5.1.2 Apply the Calico manifest
    • 5.2 Deploy CoreDNS
      • 5.2.1 Edit the CoreDNS manifest
      • 5.2.2 Apply and deploy CoreDNS
  • 6. Deploy Traefik
    • 6.1 Edit the Traefik resource manifests
      • 6.1.1 Edit the Service
      • 6.1.2 Edit the RBAC
      • 6.1.3 Edit ingress-traefik
      • 6.1.4 Edit the Deployment
    • 6.2 Apply the manifests
    • 6.3 Check the status
    • 6.4 Add DNS resolution and the reverse proxy
    • 6.6 Test the Traefik dashboard
  • 7. Deploy the Kubernetes Dashboard
    • 7.1 Download the Dashboard YAML file
    • 7.2 Create the Dashboard namespace and storage
    • 7.3 Apply recommended.yaml
    • 7.4 Create the authentication
      • 7.4.1 Create the login authentication and obtain the token
      • 7.4.2 Create a self-signed certificate
      • 7.4.3 Create a Secret object referencing the certificate files
      • 7.4.4 Create the Dashboard IngressRoute
      • 7.4.5 Configure the Nginx reverse proxy
        • 7.4.5.1 Distribute the pem and key files to Nginx
        • 7.4.5.2 Modify nginx.conf
    • 7.5 Configure DNS resolution
    • 7.6 Test logging in
    • 7.7 Configure Dashboard permission separation
      • 7.7.1 Create the ServiceAccount, ClusterRole, and ClusterRoleBinding
  • Follow-up references:

Author: MappleZF

Version: 1.0.0

1. System Planning

1.1 Component Layout

Hostname IP Components
lbvip.host.com 192.168.13.100 Virtual VIP address, provided by PCS + Haproxy
k8smaster01.host.com 192.168.13.101 etcd kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy docker calico Ipvs Ceph: mon mgr mds osd
k8smaster02.host.com 192.168.13.102 etcd kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy docker calico Ipvs Ceph: mon mgr mds osd
k8smaster03.host.com 192.168.13.103 etcd kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy docker calico Ipvs Ceph: mon mgr mds
k8sworker01.host.com 192.168.13.105 kubelet kube-proxy docker calico Ipvs osd
k8sworker02.host.com 192.168.13.106 kubelet kube-proxy docker calico Ipvs osd
k8sworker03.host.com 192.168.13.107 kubelet kube-proxy docker calico Ipvs osd
k8sworker04.host.com 192.168.13.108 kubelet kube-proxy docker calico Ipvs osd
k8sworker05.host.com 192.168.13.109 kubelet kube-proxy docker calico Ipvs osd
lb01.host.com 192.168.13.97 NFS PCS pacemaker corosync Haproxy nginx
lb02.host.com 192.168.13.98 harbor PCS pacemaker corosync Haproxy nginx
lb03.host.com 192.168.13.99 DNS PCS pacemaker corosync Haproxy nginx

1.2 Deployment Topology

Cluster core components layout diagram

Cluster auxiliary components layout diagram

Cluster network access diagram

1.3 System Environment

Base OS: CentOS Linux release 7.8.2003 (Core), kernel 5.7.10-1.el7.elrepo.x86_64

Kubernetes version: v1.19.0

Backend storage: Ceph cluster + NFS

Network links: Kubernetes business network + Ceph storage data network
Network plugin: Calico in BGP mode (similar to flannel's host-gw mode); IPVS for service proxying

Load balancing and reverse proxying:
Haproxy handles layer-4 TCP load balancing and reverse proxying for traffic entering from outside the cluster;
Nginx handles layer-7 HTTPS/HTTP load balancing and reverse proxying for traffic entering from outside the cluster;
Traefik handles load balancing and reverse proxying inside the cluster.
Note: under very heavy traffic, an LVS load-balancing tier or a hardware load balancer can be placed in front of Haproxy to distribute traffic across multiple Kubernetes clusters.
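For reference, the layer-4 hand-off that the rest of this document relies on (the kubeconfig files later point at https://lbvip.host.com:7443) can be pictured with a minimal Haproxy listen block like the sketch below. This is only an illustration; the real Haproxy/PCS configuration belongs to the load-balancer part of this series, and the balancing and health-check details here are assumptions.

# Sketch: on the lb nodes, forward the VIP's port 7443 to the three kube-apiservers (TCP mode, layer 4).
cat >> /etc/haproxy/haproxy.cfg << 'EOF'
listen kube-apiserver
    bind 192.168.13.100:7443
    mode tcp
    balance roundrobin
    server k8smaster01 192.168.13.101:6443 check
    server k8smaster02 192.168.13.102:6443 check
    server k8smaster03 192.168.13.103:6443 check
EOF
systemctl reload haproxy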

2. Initialize the System Environment

Initialize the system environment on all hosts; the steps are essentially the same on each.

2.1 Upgrade the Kernel

wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum makecache && yum update -y
rpm -Uvh https://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-ml -y
grubby --default-kernel
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
awk -F\' '$1=="menuentry " {print i++ " : " $2}' /boot/grub2/grub.cfg
reboot
yum update -y && yum makecache
Note: if the kernel upgrade fails, enter the BIOS and set Secure Boot to Disabled.
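After the reboot it is worth confirming that the machine actually came up on the new kernel, for example:

uname -r                   # should show the elrepo kernel-ml version that was just installed
grubby --default-kernel    # should point at the same kernel under /boot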

2.2 Kernel Tuning

cat >> /etc/sysctl.d/k8s.conf << EOF
net.ipv4.tcp_fin_timeout = 10
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_keepalive_time = 600
net.ipv4.ip_local_port_range = 4000    65000
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_max_tw_buckets = 1048576
net.ipv4.route.gc_timeout = 100
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_synack_retries = 1
net.core.somaxconn = 32768
net.core.netdev_max_backlog = 16384
net.core.rmem_default=262144
net.core.wmem_default=262144
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.core.optmem_max=16777216
net.netfilter.nf_conntrack_max=2097152
net.nf_conntrack_max=2097152
net.netfilter.nf_conntrack_tcp_timeout_fin_wait=30
net.netfilter.nf_conntrack_tcp_timeout_time_wait=30
net.netfilter.nf_conntrack_tcp_timeout_close_wait=15
net.netfilter.nf_conntrack_tcp_timeout_established=300
net.ipv4.tcp_max_orphans = 524288
fs.file-max=2097152
fs.nr_open=2097152
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
sysctl --system

cat >> /etc/systemd/system.conf << EOF
DefaultLimitNOFILE=1048576
DefaultLimitNPROC=1048576
EOF
cat /etc/systemd/system.conf

cat >> /etc/security/limits.conf << EOF
* soft nofile 1048576
* hard nofile 1048576
* soft nproc  1048576
* hard nproc  1048576
EOF
cat /etc/security/limits.conf
reboot
It is best to reboot and then verify that the settings took effect.
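A quick spot check after the reboot (a sketch; pick any of the keys set above). Note that net.ipv4.tcp_tw_recycle was removed from the kernel in 4.12, so on the 5.x kernel used here that single line may be rejected by sysctl and can simply be dropped.

sysctl net.ipv4.ip_forward net.core.somaxconn    # should echo back the values from k8s.conf
ulimit -n                                        # should report 1048576 in a fresh login shell
cat /proc/sys/fs/file-max                        # should report 2097152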

2.3 Enable the IPVS Kernel Modules

# Create the script /etc/sysconfig/modules/ipvs.modules so that the IPVS kernel modules are loaded automatically. This must be done on every node. The content is as follows:
# Create the script
vim /etc/sysconfig/modules/ipvs.modules

# Script content:
#!/bin/bash
ipvs_modules_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_modules_dir | grep -o "^[^.]*"); do
    /sbin/modinfo -F filename $i &> /dev/null
    if [ $? -eq 0 ]; then
        /sbin/modprobe $i
    fi
done
​
​
# Change the file permissions and load the kernel modules into the running system:
chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
# Check that the modules are loaded
lsmod | grep ip_vs

2.4 Install Common Tools

yum install -y nfs-utils curl yum-utils device-mapper-persistent-data lvm2 net-tools
yum install -y conntrack-tools wget sysstat  vim-enhanced bash-completion psmisc traceroute iproute* tree
yum install -y libseccomp libtool-ltdl ipvsadm ipset

2.5 Install the NTP Time Service

vim /etc/chrony.conf
# Modify the server entries:
server 192.168.20.4 iburst
server time4.aliyun.com iburst

systemctl enable chronyd
systemctl start chronyd
chronyc sources

2.6 Disable the Swap Partition

swapoff -a
vim /etc/fstab    # comment out the swap line
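If you prefer not to edit the file by hand, a sed one-liner that comments out the swap entry, plus a quick check, looks like this (a sketch; back up /etc/fstab first if unsure):

sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab   # comment out the swap line
swapoff -a
free -h | grep -i swap    # the Swap line should show 0B total once swap is off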

2.7 Disable the Firewall and SELinux

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

2.8 Set Up Passwordless SSH Between Cluster Nodes

ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub k8smaster01
... repeat for each of the remaining hosts
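The remaining copies follow the same pattern; a small loop over the host names from section 1.1 (purely a convenience sketch) saves some typing:

for host in k8smaster01 k8smaster02 k8smaster03 \
            k8sworker01 k8sworker02 k8sworker03 k8sworker04 k8sworker05; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub "$host"    # prompts once for each host's root password
done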

3. Deploy the Container Runtime

3.1 Deploy Docker

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce
mkdir -p /data/docker /etc/docker

# Edit /usr/lib/systemd/system/docker.service and add the following line right after
# "ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock":
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT

cat <<EOF > /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "registry-mirrors": ["https://kk0rm6dj.mirror.aliyuncs.com","https://registry.docker-cn.com"],
  "insecure-registries": ["harbor.iot.com"],
  "bip": "10.244.101.1/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}
EOF
Note: https://kk0rm6dj.mirror.aliyuncs.com is an Aliyun registry mirror address; it is best to request your own (it is free).
systemctl enable docker
systemctl daemon-reload
systemctl restart docker
Note: adjust the "bip" address on each host according to its IP, e.g. 10.244.105.1/24 on 192.168.13.105.

3.2 Verify the Docker Environment

[root@k8smaster01.host.com:/root]# docker info
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 19.03.12
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: systemd
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.8.6-2.el7.elrepo.x86_64
 Operating System: CentOS Linux 7 (Core)
 OSType: linux
 Architecture: x86_64
 CPUs: 20
 Total Memory: 30.84GiB
 Name: k8smaster01.host.com
 ID: FZAG:XAY6:AYKI:7SUR:62AP:ANDH:JM33:RG5O:Q2XJ:NHGJ:ROAO:6LEM
 Docker Root Dir: /data/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  harbor.iot.com
  registry.lowaniot.com
  127.0.0.0/8
 Registry Mirrors:
  https://*********.mirror.aliyuncs.com/
  https://registry.docker-cn.com/
 Live Restore Enabled: true

4. Deploy Kubernetes

4.1 Prepare the Cluster Root Certificates

The Kubernetes components use the following certificates:

  • etcd: uses ca.pem, etcd-key.pem, etcd.pem (etcd client traffic and peer-to-peer traffic use the same certificate pair);

  • kube-apiserver: uses ca.pem, ca-key.pem, kube-apiserver-key.pem, kube-apiserver.pem;

  • kubelet: uses ca.pem, ca-key.pem;

  • kube-proxy: uses ca.pem, kube-proxy-key.pem, kube-proxy.pem;

  • kubectl: uses ca.pem, admin-key.pem, admin.pem;

  • kube-controller-manager: uses ca-key.pem, ca.pem, kube-controller-manager.pem, kube-controller-manager-key.pem;

  • kube-scheduler: uses ca-key.pem, ca.pem, kube-scheduler-key.pem, kube-scheduler.pem;

Install the certificate tool cfssl and generate the certificates.

About the cfssl tools:

cfssl: the main certificate-signing tool

cfssl-json: converts the JSON output produced by cfssl into certificate files

cfssl-certinfo: inspects and verifies certificate information (see the example below)
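For example, once the CA below has been generated, the certificate can be inspected like this (shown only as an illustration):

cfssl-certinfo -cert ca.pem    # prints the subject, issuer, validity period and SANs as JSON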

4.1.1 Install cfssl
// Create a directory for the SSL certificates
[root@k8smaster01.host.com:/opt/src]# mkdir /data/ssl -p
// Download the certificate tools
[root@k8smaster01.host.com:/opt/src]# wget -c https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8smaster01.host.com:/opt/src]# wget -c https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8smaster01.host.com:/opt/src]# wget -c https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
// Add execute permission and check
[root@k8smaster01.host.com:/opt/src]# chmod +x cfssl-certinfo_linux-amd64 cfssljson_linux-amd64 cfssl_linux-amd64
[root@k8smaster01.host.com:/opt/src]# ll
total 18808
-rwxr-xr-x. 1 root root  6595195 Mar 30  2016 cfssl-certinfo_linux-amd64
-rwxr-xr-x. 1 root root  2277873 Mar 30  2016 cfssljson_linux-amd64
-rwxr-xr-x. 1 root root 10376657 Mar 30  2016 cfssl_linux-amd64
// Move them into /usr/local/bin
[root@k8smaster01.host.com:/opt/src]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@k8smaster01.host.com:/opt/src]# mv cfssljson_linux-amd64 /usr/local/bin/cfssl-json
[root@k8smaster01.host.com:/opt/src]# mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
// Enter the certificate directory
[root@k8smaster01.host.com:/opt/src]# cd /data/ssl/
PS: cfssl-certinfo -cert serverca
4.1.2 Create the JSON config for the CA certificate signing request (CSR)

A self-signed setup needs a root CA certificate (issued by an authority, or self-signed).

CN: Common Name; browsers use this field to validate a site, so it is usually the domain name and is very important

C: country

ST: state/province

L: locality/city

O: organization/company name

OU: organizational unit / department

[root@k8smaster01.host.com:/data/ssl]#
cat > /data/ssl/ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ],
  "ca": {
    "expiry": "175200h"
  }
}
EOF
4.1.3 Generate the CA certificate, private key, and CSR

This produces the files the CA needs: ca-key.pem (private key) and ca.pem (certificate), plus ca.csr (the certificate signing request, used for cross-signing or re-signing).

[root@k8smaster01.host.com:/data/ssl]# cfssl gencert -initca ca-csr.json | cfssl-json -bare ca
2020/09/09 17:53:11 [INFO] generating a new CA key and certificate from CSR
2020/09/09 17:53:11 [INFO] generate received request
2020/09/09 17:53:11 [INFO] received CSR
2020/09/09 17:53:11 [INFO] generating key: rsa-2048
2020/09/09 17:53:11 [INFO] encoded CSR
2020/09/09 17:53:11 [INFO] signed certificate with serial number 633952028496783804989414964353657856875757870668
[root@k8smaster01.host.com:/data/ssl]# ls
ca.csr  ca-csr.json  ca-key.pem  ca.pem
[root@k8smaster01.host.com:/data/ssl]# ll
total 16
-rw-r--r-- 1 root root 1001 Sep  9 17:53 ca.csr
-rw-r--r-- 1 root root  307 Sep  9 17:52 ca-csr.json
-rw------- 1 root root 1679 Sep  9 17:53 ca-key.pem
-rw-r--r-- 1 root root 1359 Sep  9 17:53 ca.pem
4.1.4 Create the config file based on the root certificate
Explanation:
ca-config.json: can define multiple profiles with different expiry times, usage scenarios, and other parameters; a specific profile is selected later when signing certificates;
signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE;
server auth: a client can use this CA to verify certificates presented by servers;
client auth: a server can use this CA to verify certificates presented by clients;
expiry: certificate lifetime, set here to 10 years; reduce it as appropriate
[root@k8smaster01.host.com:/data/ssl]#
cat > /data/ssl/ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "175200h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "175200h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
[root@k8smaster01.host.com:/data/ssl]#
cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.13.100",
    "192.168.13.101",
    "192.168.13.102",
    "192.168.13.103",
    "192.168.13.104",
    "192.168.13.105",
    "192.168.13.106",
    "192.168.13.107",
    "192.168.13.108",
    "192.168.13.109",
    "192.168.13.110",
    "10.10.0.1",
    "lbvip.host.com",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
[root@k8smaster01.host.com:/data/ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssl-json -bare server
2020/09/09 18:03:25 [INFO] generate received request
2020/09/09 18:03:25 [INFO] received CSR
2020/09/09 18:03:25 [INFO] generating key: rsa-2048
2020/09/09 18:03:25 [INFO] encoded CSR
2020/09/09 18:03:25 [INFO] signed certificate with serial number 520236944854379069364132464186673377003193743849
2020/09/09 18:03:25 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8smaster01.host.com:/data/ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem
[root@k8smaster01.host.com:/data/ssl]#
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
[root@k8smaster01.host.com:/data/ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssl-json -bare admin
2020/09/09 18:08:34 [INFO] generate received request
2020/09/09 18:08:34 [INFO] received CSR
2020/09/09 18:08:34 [INFO] generating key: rsa-2048
2020/09/09 18:08:34 [INFO] encoded CSR
2020/09/09 18:08:34 [INFO] signed certificate with serial number 662043138212200112498221957779202406541973930898
2020/09/09 18:08:34 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8smaster01.host.com:/data/ssl]# ls
admin.csr  admin-csr.json  admin-key.pem  admin.pem  ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem
[root@k8smaster01.host.com:/data/ssl]#
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
[root@k8smaster01.host.com:/data/ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssl-json -bare kube-proxy
2020/09/09 18:11:10 [INFO] generate received request
2020/09/09 18:11:10 [INFO] received CSR
2020/09/09 18:11:10 [INFO] generating key: rsa-2048
2020/09/09 18:11:10 [INFO] encoded CSR
2020/09/09 18:11:10 [INFO] signed certificate with serial number 721336273441573081460131627771975553540104946357
2020/09/09 18:11:10 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8smaster01.host.com:/data/ssl]# ls
admin.csr       admin-key.pem  ca-config.json  ca-csr.json  ca.pem          kube-proxy-csr.json  kube-proxy.pem  server-csr.json  server.pem
admin-csr.json  admin.pem      ca.csr          ca-key.pem   kube-proxy.csr  kube-proxy-key.pem   server.csr      server-key.pem
4.1.6 Create the metrics-server certificate
4.1.6.1 Create the CSR used by metrics-server
# Note: the CN must be exactly "system:metrics-server", because this name is used later when granting permissions; otherwise requests will be rejected as anonymous/forbidden.
[root@k8smaster01.host.com:/data/ssl]#
cat > metrics-server-csr.json <<EOF
{
  "CN": "system:metrics-server",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF
4.1.6.2 Generate the metrics-server certificate and private key
[root@k8smaster01.host.com:/data/ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes metrics-server-csr.json | cfssl-json -bare metrics-server
2020/09/09 18:33:24 [INFO] generate received request
2020/09/09 18:33:24 [INFO] received CSR
2020/09/09 18:33:24 [INFO] generating key: rsa-2048
2020/09/09 18:33:24 [INFO] encoded CSR
2020/09/09 18:33:24 [INFO] signed certificate with serial number 611188048814430694712004119513191220396672578396
2020/09/09 18:33:24 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8smaster01.host.com:/data/ssl]# ls
admin.csr       admin.pem       ca-csr.json  kube-proxy.csr       kube-proxy.pem           metrics-server-key.pem  server-csr.json
admin-csr.json  ca-config.json  ca-key.pem   kube-proxy-csr.json  metrics-server.csr       metrics-server.pem      server-key.pem
admin-key.pem   ca.csr          ca.pem       kube-proxy-key.pem   metrics-server-csr.json  server.csr              server.pem
4.1.6.3 Distribute the certificates
[root@k8smaster01.host.com:/data/ssl]# cp metrics-server-key.pem metrics-server.pem /opt/kubernetes/ssl/
[root@k8smaster01.host.com:/data/ssl]# scp metrics-server-key.pem metrics-server.pem k8smaster02:/opt/kubernetes/ssl/
[root@k8smaster01.host.com:/data/ssl]# scp metrics-server-key.pem metrics-server.pem k8smaster03:/opt/kubernetes/ssl/
4.1.7 Create the Node kubeconfig files
4.1.7.1 Create the TLS Bootstrapping Token, kubelet kubeconfig, and kube-proxy kubeconfig
[root@k8smaster01.host.com:/data/ssl]# vim kubeconfig.sh
# Create the TLS Bootstrapping Token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

#----------------------
# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://lbvip.host.com:7443"

# Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set the client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set the context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------
# Create the kube-proxy kubeconfig

kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
4.1.7.2 Generate the kubeconfig files
[root@k8smaster01.host.com:/data/ssl]# bash kubeconfig.sh
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@k8smaster01.host.com:/data/ssl]# ls
admin.csr       admin.pem             ca.csr       ca.pem          kube-proxy-csr.json    kube-proxy.pem           metrics-server-key.pem  server-csr.json  token.csv
admin-csr.json  bootstrap.kubeconfig  ca-csr.json  kubeconfig.sh   kube-proxy-key.pem     metrics-server.csr       metrics-server.pem      server-key.pem
admin-key.pem   ca-config.json        ca-key.pem   kube-proxy.csr  kube-proxy.kubeconfig  metrics-server-csr.json  server.csr              server.pem
4.1.7.3 Distribute the kubeconfig files
[root@k8smaster01.host.com:/data/ssl]# cp *kubeconfig /opt/kubernetes/cfg
[root@k8smaster01.host.com:/data/ssl]# scp *kubeconfig k8smaster02:/opt/kubernetes/cfg
[root@k8smaster01.host.com:/data/ssl]# scp *kubeconfig k8smaster03:/opt/kubernetes/cfg

4.2 Deploy the etcd Cluster

Work on k8smaster01, then copy the binaries to master02 and master03.

Summary: etcd is deployed on the three master nodes for high availability. The etcd cluster elects a leader using the Raft algorithm; because Raft needs a majority of the nodes to vote on every decision, an etcd cluster should have an odd number of members, typically 3, 5, or 7.

Binary package download: (https://github.com/etcd-io/etcd/releases)
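As a quick illustration of why odd sizes are preferred, the quorum arithmetic can be checked with a few lines of shell (the numbers follow directly from Raft's majority rule):

# Raft quorum: a cluster of n members needs floor(n/2)+1 votes,
# so it tolerates n - quorum member failures.
for n in 1 3 5 7; do
    quorum=$(( n / 2 + 1 ))
    echo "members=${n}  quorum=${quorum}  tolerated_failures=$(( n - quorum ))"
done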

4.2.1 Base Setup

On k8smaster01:


# Create the etcd data directory
[root@k8smaster01.host.com:/root]# mkdir -p /data/etcd/
# Create the etcd log directory
[root@k8smaster01.host.com:/root]# mkdir -p /data/logs/etcd-server
# Create the Kubernetes cluster configuration directories
[root@k8smaster01.host.com:/root]# mkdir -p /opt/kubernetes/{bin,cfg,ssl}
# Download the etcd binary package and place the binaries into /opt/kubernetes/bin/
[root@k8smaster01.host.com:/root]# cd /opt/src/
[root@k8smaster01.host.com:/opt/src]# wget -c https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
[root@k8smaster01.host.com:/opt/src]# tar -xzf etcd-v3.4.13-linux-amd64.tar.gz -C /data/etcd
[root@k8smaster01.host.com:/opt/src]# cd /data/etcd/etcd-v3.4.13-linux-amd64/
[root@k8smaster01.host.com:/data/etcd/etcd-v3.4.13-linux-amd64]# cp -a etcd etcdctl /opt/kubernetes/bin/
[root@k8smaster01.host.com:/data/etcd/etcd-v3.4.13-linux-amd64]# cd
# Add /opt/kubernetes/bin to PATH
[root@k8smaster01.host.com:/root]# echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
[root@k8smaster01.host.com:/root]# source /etc/profile

On k8smaster02 and k8smaster03:


mkdir -p /data/etcd/  /data/logs/etcd-server
mkdir -p /opt/kubernetes/{bin,cfg,ssl}
echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
source /etc/profile

On each worker node:

mkdir -p /data/worker
mkdir -p /opt/kubernetes/{bin,cfg,ssl}
echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
source /etc/profile
4.2.2 Distribute the Certificates
# Enter the cluster certificate directory
[root@k8smaster01.host.com:/root]# cd /data/ssl/
[root@k8smaster01.host.com:/data/ssl]# ls
admin.csr  admin-key.pem  ca-config.json  ca-csr.json  ca.pem  kube-proxy-csr.json  kube-proxy.pem  server-csr.json  server.pem
admin-csr.json  admin.pem ca.csr ca-key.pem kube-proxy.csr kube-proxy-key.pem server.csr   server-key.pem
# Copy the certificates into the /opt/kubernetes/ssl/ directory on k8smaster01
[root@k8smaster01.host.com:/data/ssl]# cp ca*pem  server*pem /opt/kubernetes/ssl/
[root@k8smaster01.host.com:/data/ssl]# cd /opt/kubernetes/ssl/  && ls
ca-key.pem  ca.pem  server-key.pem  server.pem
# The *-key.pem files are private keys; their permissions should be 600.
[root@k8smaster01.host.com:/opt/kubernetes/ssl]# cd ..
# Copy the etcd binaries and certificates to k8smaster02 and k8smaster03
# Copy ca.pem to each worker node
[root@k8smaster01.host.com:/opt/kubernetes]# scp -r /opt/kubernetes/* k8smaster02:/opt/kubernetes/
[root@k8smaster01.host.com:/opt/kubernetes]# scp -r /opt/kubernetes/* k8smaster03:/opt/kubernetes/
4.2.3 Write the etcd configuration script etcd-server-startup.sh

On k8smaster01:

[root@k8smaster01.host.com:/opt/kubernetes]# cd /data/etcd/
[root@k8smaster01.host.com:/data/etcd]# vim etcd-server-startup.sh
#!/bin/bash

ETCD_NAME=${1:-"etcd01"}
ETCD_IP=${2:-"127.0.0.1"}
ETCD_CLUSTER=${3:-"etcd01=https://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/etcd.yml
name: ${ETCD_NAME}
data-dir: /var/lib/etcd/default.etcd
listen-peer-urls: https://${ETCD_IP}:2380
listen-client-urls: https://${ETCD_IP}:2379,https://127.0.0.1:2379

advertise-client-urls: https://${ETCD_IP}:2379
initial-advertise-peer-urls: https://${ETCD_IP}:2380
initial-cluster: ${ETCD_CLUSTER}
initial-cluster-token: etcd-cluster
initial-cluster-state: new

client-transport-security:
  cert-file: /opt/kubernetes/ssl/server.pem
  key-file: /opt/kubernetes/ssl/server-key.pem
  client-cert-auth: false
  trusted-ca-file: /opt/kubernetes/ssl/ca.pem
  auto-tls: false

peer-transport-security:
  cert-file: /opt/kubernetes/ssl/server.pem
  key-file: /opt/kubernetes/ssl/server-key.pem
  client-cert-auth: false
  trusted-ca-file: /opt/kubernetes/ssl/ca.pem
  auto-tls: false

debug: false
logger: zap
log-outputs: [stderr]
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
Documentation=https://github.com/etcd-io/etcd
Conflicts=etcd.service
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
LimitNOFILE=65536
Restart=on-failure
RestartSec=5s
TimeoutStartSec=0
ExecStart=/opt/kubernetes/bin/etcd --config-file=/opt/kubernetes/cfg/etcd.yml

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
4.2.4 Run etcd-server-startup.sh to generate the configuration
[root@k8smaster01.host.com:/data/etcd]# chmod +x etcd-server-startup.sh
[root@k8smaster01.host.com:/data/etcd]# ./etcd-server-startup.sh etcd01 192.168.13.101 etcd01=https://192.168.13.101:2380,etcd02=https://192.168.13.102:2380,etcd03=https://192.168.13.103:2380
Note: the three etcd members talk to each other (peer traffic) on port 2380 and serve client traffic on port 2379.
Note: etcd01 appears to hang while starting here, but the service is actually up; its status becomes normal once etcd02 and etcd03 have been started.
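While waiting for the other members, a few quick checks on etcd01 (a sketch; the exact journal messages vary) are:

systemctl is-active etcd               # may stay "activating" until quorum is reached
netstat -ntplu | grep etcd             # ports 2379/2380 should already be listening
journalctl -u etcd --no-pager -n 20    # peer-connection and leader-election messages appear once etcd02/etcd03 start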
4.2.5 Check that etcd01 is running
[root@k8smaster01.host.com:/data/etcd]# netstat -ntplu | grep etcd
tcp        0      0 192.168.13.101:2379     0.0.0.0:*               LISTEN      25927/etcd
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      25927/etcd
tcp        0      0 192.168.13.101:2380     0.0.0.0:*               LISTEN      25927/etcd

[root@k8smaster01.host.com:/data/etcd]# ps -ef | grep etcd
root     25878  6156  0 15:33 pts/3    00:00:00 /bin/bash ./etcd-server-startup.sh etcd01 192.168.13.101 etcd01=https://192.168.13.101:2380,etcd02=https://192.168.13.102:2380,etcd03=https://192.168.13.103:2380
root     25921 25878  0 15:33 pts/3    00:00:00 systemctl restart etcd
root     25927     1  3 15:33 ?        00:00:16 /opt/kubernetes/bin/etcd --config-file=/opt/kubernetes/cfg/etcd.yml
root     25999  3705  0 15:41 pts/0    00:00:00 grep --color=auto etcd
4.2.6 Distribute etcd-server-startup.sh to master02 and master03
[root@k8smaster01.host.com:/data/etcd]# scp /data/etcd/etcd-server-startup.sh k8smaster02:/data/etcd/
[root@k8smaster01.host.com:/data/etcd]# scp /data/etcd/etcd-server-startup.sh k8smaster03:/data/etcd/
4.2.7 Start etcd02 and check its status
[root@k8smaster02.host.com:/root]# cd /data/etcd
[root@k8smaster02.host.com:/data/etcd]# ./etcd-server-startup.sh etcd02 192.168.13.102 etcd01=https://192.168.13.101:2380,etcd02=https://192.168.13.102:2380,etcd03=https://192.168.13.103:2380
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@k8smaster02.host.com:/data/etcd]# systemctl status etcd -l
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-09-09 12:48:49 CST; 35s ago
     Docs: https://github.com/etcd-io/etcd
 Main PID: 24630 (etcd)
    Tasks: 29
   Memory: 17.6M
   CGroup: /system.slice/etcd.service
           └─24630 /opt/kubernetes/bin/etcd --config-file=/opt/kubernetes/cfg/etcd.yml

Sep 09 12:48:54 k8smaster02.host.com etcd[24630]: {"level":"warn","ts":"2020-09-09T12:48:54.257+0800","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8ebc349f826f93fe","rtt":"0s","error":"dial tcp 192.168.13.103:2380: connect: connection refused"}
Sep 09 12:48:55 k8smaster02.host.com etcd[24630]: {"level":"info","ts":"2020-09-09T12:48:55.144+0800","caller":"rafthttp/stream.go:250","msg":"set message encoder","from":"a1f9be88a19d2f3c","to":"a1f9be88a19d2f3c","stream-type":"stream MsgApp v2"}
Sep 09 12:48:55 k8smaster02.host.com etcd[24630]: {"level":"info","ts":"2020-09-09T12:48:55.144+0800","caller":"rafthttp/peer_status.go:51","msg":"peer became active","peer-id":"8ebc349f826f93fe"}
Sep 09 12:48:55 k8smaster02.host.com etcd[24630]: {"level":"warn","ts":"2020-09-09T12:48:55.144+0800","caller":"rafthttp/stream.go:277","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"a1f9be88a19d2f3c","remote-peer-id":"8ebc349f826f93fe"}
Sep 09 12:48:55 k8smaster02.host.com etcd[24630]: {"level":"info","ts":"2020-09-09T12:48:55.144+0800","caller":"rafthttp/stream.go:250","msg":"set message encoder","from":"a1f9be88a19d2f3c","to":"a1f9be88a19d2f3c","stream-type":"stream Message"}
Sep 09 12:48:55 k8smaster02.host.com etcd[24630]: {"level":"warn","ts":"2020-09-09T12:48:55.144+0800","caller":"rafthttp/stream.go:277","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"a1f9be88a19d2f3c","remote-peer-id":"8ebc349f826f93fe"}
Sep 09 12:48:55 k8smaster02.host.com etcd[24630]: {"level":"info","ts":"2020-09-09T12:48:55.159+0800","caller":"rafthttp/stream.go:425","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a1f9be88a19d2f3c","remote-peer-id":"8ebc349f826f93fe"}
Sep 09 12:48:55 k8smaster02.host.com etcd[24630]: {"level":"info","ts":"2020-09-09T12:48:55.159+0800","caller":"rafthttp/stream.go:425","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a1f9be88a19d2f3c","remote-peer-id":"8ebc349f826f93fe"}
Sep 09 12:48:57 k8smaster02.host.com etcd[24630]: {"level":"info","ts":"2020-09-09T12:48:57.598+0800","caller":"membership/cluster.go:546","msg":"updated cluster version","cluster-id":"6204defa6d59332e","local-member-id":"a1f9be88a19d2f3c","from":"3.0","from":"3.4"}
Sep 09 12:48:57 k8smaster02.host.com etcd[24630]: {"level":"info","ts":"2020-09-09T12:48:57.598+0800","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.4"}

[root@k8smaster02.host.com:/data/etcd]# netstat -ntplu | grep etcd
tcp        0      0 192.168.13.102:2379     0.0.0.0:*               LISTEN      24630/etcd
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      24630/etcd
tcp        0      0 192.168.13.102:2380     0.0.0.0:*               LISTEN      24630/etcd
[root@k8smaster02.host.com:/data/etcd]# ps -ef | grep etcd
root     24630     1  1 12:48 ?        00:00:03 /opt/kubernetes/bin/etcd --config-file=/opt/kubernetes/cfg/etcd.yml
root     24675  2602  0 12:52 pts/0    00:00:00 grep --color=auto etcd
4.2.8 Start etcd03 and check its status
[root@k8smaster03.host.com:/root]#  cd /data/etcd/
[root@k8smaster03.host.com:/data/etcd]# ./etcd-server-startup.sh etcd03 192.168.13.103 etcd01=https://192.168.13.101:2380,etcd02=https://192.168.13.102:2380,etcd03=https://192.168.13.103:2380
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@k8smaster03.host.com:/data/etcd]# systemctl status etcd -l
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-09-09 12:48:55 CST; 7s ago
     Docs: https://github.com/etcd-io/etcd
 Main PID: 1322 (etcd)
    Tasks: 20
   Memory: 27.7M
   CGroup: /system.slice/etcd.service
           └─1322 /opt/kubernetes/bin/etcd --config-file=/opt/kubernetes/cfg/etcd.yml

Sep 09 12:48:55 k8smaster03.host.com systemd[1]: Started Etcd Server.
Sep 09 12:48:55 k8smaster03.host.com etcd[1322]: {"level":"info","ts":"2020-09-09T12:48:55.190+0800","caller":"embed/serve.go:139","msg":"serving client traffic insecurely; this is strongly discouraged!","address":"192.168.13.103:2379"}
Sep 09 12:48:55 k8smaster03.host.com etcd[1322]: {"level":"info","ts":"2020-09-09T12:48:55.190+0800","caller":"embed/serve.go:139","msg":"serving client traffic insecurely; this is strongly discouraged!","address":"127.0.0.1:2379"}
Sep 09 12:48:55 k8smaster03.host.com etcd[1322]: {"level":"info","ts":"2020-09-09T12:48:55.192+0800","caller":"rafthttp/stream.go:250","msg":"set message encoder","from":"8ebc349f826f93fe","to":"8ebc349f826f93fe","stream-type":"stream Message"}
Sep 09 12:48:55 k8smaster03.host.com etcd[1322]: {"level":"warn","ts":"2020-09-09T12:48:55.192+0800","caller":"rafthttp/stream.go:277","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"8ebc349f826f93fe","remote-peer-id":"b255a2baf0f6222c"}
Sep 09 12:48:55 k8smaster03.host.com etcd[1322]: {"level":"info","ts":"2020-09-09T12:48:55.192+0800","caller":"rafthttp/stream.go:250","msg":"set message encoder","from":"8ebc349f826f93fe","to":"8ebc349f826f93fe","stream-type":"stream MsgApp v2"}
Sep 09 12:48:55 k8smaster03.host.com etcd[1322]: {"level":"warn","ts":"2020-09-09T12:48:55.192+0800","caller":"rafthttp/stream.go:277","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"8ebc349f826f93fe","remote-peer-id":"b255a2baf0f6222c"}
Sep 09 12:48:55 k8smaster03.host.com etcd[1322]: {"level":"info","ts":"2020-09-09T12:48:55.193+0800","caller":"etcdserver/server.go:716","msg":"initialized peer connections; fast-forwarding election ticks","local-member-id":"8ebc349f826f93fe","forward-ticks":8,"forward-duration":"800ms","election-ticks":10,"election-timeout":"1s","active-remote-members":2}
Sep 09 12:48:57 k8smaster03.host.com etcd[1322]: {"level":"info","ts":"2020-09-09T12:48:57.597+0800","caller":"membership/cluster.go:546","msg":"updated cluster version","cluster-id":"6204defa6d59332e","local-member-id":"8ebc349f826f93fe","from":"3.0","from":"3.4"}
Sep 09 12:48:57 k8smaster03.host.com etcd[1322]: {"level":"info","ts":"2020-09-09T12:48:57.598+0800","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.4"}

[root@k8smaster03.host.com:/data/etcd]# netstat -ntplu | grep etcd
tcp        0      0 192.168.13.103:2379     0.0.0.0:*               LISTEN      1322/etcd
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      1322/etcd
tcp        0      0 192.168.13.103:2380     0.0.0.0:*               LISTEN      1322/etcd
[root@k8smaster03.host.com:/data/etcd]# ps -ef | grep etcd
root      1322     1  1 12:48 ?        00:00:04 /opt/kubernetes/bin/etcd --config-file=/opt/kubernetes/cfg/etcd.yml
root      1353  6455  0 12:53 pts/0    00:00:00 grep --color=auto etcd
4.2.9 Check the etcd cluster status
[root@k8smaster01.host.com:/root]#  ETCDCTL_API=3 etcdctl --write-out=table \
--cacert=/opt/kubernetes/ssl/ca.pem --cert=/opt/kubernetes/ssl/server.pem --key=/opt/kubernetes/ssl/server-key.pem \
--endpoints=https://192.168.13.101:2379,https://192.168.13.102:2379,https://192.168.13.103:2379 endpoint health
+-----------------------------+--------+-------------+-------+
|          ENDPOINT           | HEALTH |    TOOK     | ERROR |
+-----------------------------+--------+-------------+-------+
| https://192.168.13.103:2379 |   true | 20.491966ms |       |
| https://192.168.13.102:2379 |   true | 22.203277ms |       |
| https://192.168.13.101:2379 |   true | 24.576499ms |       |
+-----------------------------+--------+-------------+-------+

[root@k8smaster01.host.com:/data/etcd]# ETCDCTL_API=3 etcdctl --write-out=table \
--cacert=/opt/kubernetes/ssl/ca.pem --cert=/opt/kubernetes/ssl/server.pem \
--key=/opt/kubernetes/ssl/server-key.pem \
--endpoints=https://192.168.13.101:2379,https://192.168.13.102:2379,https://192.168.13.103:2379 member list
+------------------+---------+--------+-----------------------------+-----------------------------+------------+
|        ID        | STATUS  |  NAME  |         PEER ADDRS          |        CLIENT ADDRS         | IS LEARNER |
+------------------+---------+--------+-----------------------------+-----------------------------+------------+
|   968cc9007004cb | started | etcd03 | https://192.168.13.103:2380 | https://192.168.13.103:2379 |      false |
| 437d840672a51376 | started | etcd02 | https://192.168.13.102:2380 | https://192.168.13.102:2379 |      false |
| fef4dd15ed09253e | started | etcd01 | https://192.168.13.101:2380 | https://192.168.13.101:2379 |      false |
+------------------+---------+--------+-----------------------------+-----------------------------+------------+

4.3 Deploy the API Server

4.3.1 Download and unpack the Kubernetes server binaries
[root@k8smaster01.host.com:/opt/src]# wget -c https://dl.k8s.io/v1.19.0/kubernetes-server-linux-amd64.tar.gz
[root@k8smaster01.host.com:/data]#  mkdir -p /data/package
[root@k8smaster01.host.com:/opt/src]# tar -xf kubernetes-server-linux-amd64.tar.gz -C /data/package/
Note: kubernetes-src.tar.gz inside the archive is the Go source package and can be deleted; the *.tar and *.docker_tag files under server/bin are Docker image files and can also be deleted.
[root@k8smaster01.host.com:/opt/src]# cd /data/package/ && cd kubernetes/
[root@k8smaster01.host.com:/data/package/kubernetes]# rm -rf kubernetes-src.tar.gz
[root@k8smaster01.host.com:/data/package/kubernetes]# cd server/bin
[root@k8smaster01.host.com:/data/package/kubernetes/server/bin]# rm -rf *.tar
[root@k8smaster01.host.com:/data/package/kubernetes/server/bin]# rm -rf *.docker_tag
[root@k8smaster01.host.com:/data/package/kubernetes/server/bin]# ls
apiextensions-apiserver  kubeadm  kube-aggregator  kube-apiserver  kube-controller-manager  kubectl  kubelet  kube-proxy  kube-scheduler  mounter

[root@k8smaster01.host.com:/data/package/kubernetes/server/bin]# cp -a kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kubeadm /opt/kubernetes/bin
4.3.4 Configure and run the master components

Log in to k8smaster01, k8smaster02, and k8smaster03 and perform the following steps on each.

# Create /data/master to hold the master configuration scripts
mkdir -p /data/master
4.3.4.1 Create the kube-apiserver configuration script
[root@k8smaster01.host.com:/data]# cd /data/master/
[root@k8smaster01.host.com:/data/master]# vim apiserver.sh
#!/bin/bash

MASTER_ADDRESS=${1:-"192.168.13.101"}
ETCD_SERVERS=${2:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.10.0.0/16 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=5000-65000 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \\
--etcd-certfile=/opt/kubernetes/ssl/server.pem \\
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem \\
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--proxy-client-cert-file=/opt/kubernetes/ssl/metrics-server.pem \\
--proxy-client-key-file=/opt/kubernetes/ssl/metrics-server-key.pem \\
--runtime-config=api/all=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-truncate-enabled=true \\
--audit-log-path=/var/log/kubernetes/k8s-audit.log"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
4.3.4.2 Create the kube-controller-manager configuration script
1. The Kubernetes controller manager is a daemon that watches the shared state of the cluster through the apiserver and makes changes that try to move the current state toward the desired state.
2. kube-controller-manager is a stateful service: it modifies cluster state. If the instances on several master nodes were active at the same time there would be synchronization and consistency problems, so with multiple masters the kube-controller-manager instances run in an active/standby relationship. Kubernetes implements the leader election with a lease lock; for kube-controller-manager it is enabled with the startup flag "--leader-elect=true".
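To see which master currently holds the lock, one option is to read the leader annotation that the lock object carries (a sketch; in v1.19 the default --leader-elect-resource-lock records it on an Endpoints/Lease object in kube-system):

# holderIdentity inside the annotation names the master whose kube-controller-manager is active.
kubectl -n kube-system get endpoints kube-controller-manager \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}{"\n"}'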
[root@k8smaster01.host.com:/data/master]# vim controller-manager.sh
#!/bin/bash

MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=2 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--bind-address=0.0.0.0 \\
--service-cluster-ip-range=10.10.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s \\
--feature-gates=RotateKubeletServerCertificate=true \\
--feature-gates=RotateKubeletClientCertificate=true \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
4.3.4.3 Create the kube-scheduler configuration script
Summary:
1. kube-scheduler runs as a component on the master nodes. Its main task is to take the unscheduled pods obtained from kube-apiserver, find the most suitable node through a series of scheduling algorithms, and complete the scheduling by writing a Binding object (which records the pod name and the chosen node name) back to kube-apiserver.
2. Like kube-controller-manager, kube-scheduler uses leader election for high availability. After startup the instances compete to elect a leader and the other instances block; if the leader becomes unavailable, the remaining instances hold a new election, which keeps the service available.
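The same kind of check works for the scheduler, for example via the Lease object that v1.19 maintains for the lock (again a sketch, dependent on the --leader-elect-resource-lock default):

# spec.holderIdentity shows which master's kube-scheduler is currently the leader.
kubectl -n kube-system get lease kube-scheduler -o jsonpath='{.spec.holderIdentity}{"\n"}'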
[root@k8smaster01.host.com:/data/master]# vim scheduler.sh
#!/bin/bash

MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=2 \\
--master=${MASTER_ADDRESS}:8080 \\
--address=0.0.0.0 \\
--leader-elect"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
4.3.5 Copy the binaries

Copy the binaries to /opt/kubernetes/bin/ on k8smaster02 and k8smaster03.

[root@k8smaster01.host.com:/data/package/kubernetes/server/bin]# scp kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kubeadm k8smaster02:/opt/kubernetes/bin
[root@k8smaster01.host.com:/data/package/kubernetes/server/bin]# scp kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kubeadm k8smaster03:/opt/kubernetes/bin
4.3.7 Add execute permission and distribute to master02 and master03
[root@k8smaster01.host.com:/data/master]# chmod +x *.sh
[root@k8smaster01.host.com:/data/master]# scp apiserver.sh controller-manager.sh scheduler.sh k8smaster02:/data/master/
[root@k8smaster01.host.com:/data/master]# scp apiserver.sh controller-manager.sh scheduler.sh k8smaster03:/data/master/
4.3.8 Run the configuration scripts
4.3.8.1 Run on k8smaster01
[root@k8smaster01.host.com:/data/master]# ./apiserver.sh 192.168.13.101 https://192.168.13.101:2379,https://192.168.13.102:2379,https://192.168.13.103:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@k8smaster01.host.com:/data/master]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@k8smaster01.host.com:/data/master]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
# Check the status of the three services
[root@k8smaster01.host.com:/data/master]# ps -ef | grep kube
root      8918     1  0 01:11 ?        00:02:36 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=2 --config=/opt/kubernetes/cfg/kube-proxy-config.yml
root     11688     1  2 12:48 ?        00:00:48 /opt/kubernetes/bin/etcd --config-file=/opt/kubernetes/cfg/etcd.yml
root     18680     1  2 11:17 ?        00:02:35 /opt/kubernetes/bin/kubelet --logtostderr=true --v=2 --hostname-override=192.168.13.101 --anonymous-auth=false --cgroup-driver=systemd  --cluster-domain=cluster.local. --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice --client-ca-file=/opt/kubernetes/ssl/ca.pem --tls-cert-file=/opt/kubernetes/ssl/kubelet.pem --tls-private-key-file=/opt/kubernetes/ssl/kubelet-key.pem --image-gc-high-threshold=85 --image-gc-low-threshold=80 --kubeconfig=/opt/kubrnetes/cfg/kubelet.kubeconfig --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root     20978     1  7 13:21 ?        00:00:27 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --v=2 --apiserver-count=3 --authorization-mode=RBAC --client-ca-file=/opt/kubernetes/ssl/ca.pem --requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --etcd-cafile=/opt/kubernetes/ssl/ca.pem --etcd-certfile=/opt/kubernetes/ssl/client.pem --etcd-keyfile=/optkubernetes/ssl/client-key.pem --etcd-servers=http://192.168.13.101:2379,http://192.168.13.102:2379,http://192.168.13.103:2379 --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --service-cluster-ip-range=10.10.0.0/16 --service-node-port-range=30000-50000 --target-ram-mb=1024 --kubelet-client-certificate=/opt/kubernetes/ssl/client.pem --kubelet-client-key=/opt/kubernetes/ssl/client-key.pem --log-dir=/var/log/kubernetes --tls-cert-file=/opt/kubernetes/ssl/apiserver.pem --tls-private-key-file=/opt/kubernetes/ssl/apiserver-key.pem
root     21830     1  2 13:24 ?        00:00:05 /opt/kubernetes/bin/kube-controller-manager --logtostderr=true --v=2 --master=http://127.0.0.1:8080 --leader-elect=true --service-cluster-ip-range=10.10.0.0/16 --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem --cluster-cidr=10.244.0.0/16 --root-ca-file=/opt/kubernetes/ssl/ca.pem
root     22669     1  2 13:26 ?        00:00:01 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=2 --master=http://127.0.0.1:8080 --leader-elect
root     22937 18481  0 13:27 pts/0    00:00:00 grep --color=auto kube

[root@k8smaster01.host.com:/data/master]# netstat -ntlp | grep kube-
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      20978/kube-apiserve
tcp6       0      0 :::10251                :::*                    LISTEN      22669/kube-schedule
tcp6       0      0 :::6443                 :::*                    LISTEN      20978/kube-apiserve
tcp6       0      0 :::10252                :::*                    LISTEN      21830/kube-controll
tcp6       0      0 :::10257                :::*                    LISTEN      21830/kube-controll
tcp6       0      0 :::10259                :::*                    LISTEN      22669/kube-schedule
[root@k8smaster01.host.com:/data/master]# systemctl status kube-apiserver kube-scheduler kube-controller-manager | grep active
   Active: active (running) since Wed 2020-09-09 13:21:27 CST; 7min ago
   Active: active (running) since Wed 2020-09-09 13:26:57 CST; 1min 57s ago
   Active: active (running) since Wed 2020-09-09 13:24:08 CST; 4min 47s ago
4.3.8.2 Run on k8smaster02
[root@k8smaster02.host.com:/data/master]# ./apiserver.sh 192.168.13.102 https://192.168.13.101:2379,https://192.168.13.102:2379,https://192.168.13.103:2379
[root@k8smaster02.host.com:/data/master]# ./controller-manager.sh 127.0.0.1
[root@k8smaster02.host.com:/data/master]# ./scheduler.sh 127.0.0.1
[root@k8smaster02.host.com:/data/master]#  ps -ef | grep kube
root     24630     1  1 12:48 ?        00:00:43 /opt/kubernetes/bin/etcd --config-file=/opt/kubernetes/cfg/etcd.yml
root     25199     1  8 13:23 ?        00:00:31 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --v=2 --apiserver-count=3 --authorization-mode=RBAC --client-ca-file=/opt/kubernetes/ssl/ca.pem --requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --etcd-cafile=/opt/kubernetes/ssl/ca.pem --etcd-certfile=/opt/kubernetes/ssl/client.pem --etcd-keyfile=/optkubernetes/ssl/client-key.pem --etcd-servers=http://192.168.13.101:2379,http://192.168.13.102:2379,http://192.168.13.103:2379 --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --service-cluster-ip-range=10.10.0.0/16 --service-node-port-range=30000-50000 --target-ram-mb=1024 --kubelet-client-certificate=/opt/kubernetes/ssl/client.pem --kubelet-client-key=/opt/kubernetes/ssl/client-key.pem --log-dir=/var/log/kubernetes --tls-cert-file=/opt/kubernetes/ssl/apiserver.pem --tls-private-key-file=/opt/kubernetes/ssl/apiserver-key.pem
root     25301     1  0 13:24 ?        00:00:02 /opt/kubernetes/bin/kube-controller-manager --logtostderr=true --v=2 --master=http://127.0.0.1:8080 --leader-elect=true --service-cluster-ip-range=10.10.0.0/16 --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem --cluster-cidr=10.244.0.0/16 --root-ca-file=/opt/kubernetes/ssl/ca.pem
root     25392     1  2 13:27 ?        00:00:03 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=2 --master=http://127.0.0.1:8080 --leader-elect
root     25429  2602  0 13:29 pts/0    00:00:00 grep --color=auto kube
[root@k8smaster02.host.com:/data/master]# netstat -ntlp | grep kube-
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      25199/kube-apiserve
tcp6       0      0 :::10251                :::*                    LISTEN      25392/kube-schedule
tcp6       0      0 :::6443                 :::*                    LISTEN      25199/kube-apiserve
tcp6       0      0 :::10252                :::*                    LISTEN      25301/kube-controll
tcp6       0      0 :::10257                :::*                    LISTEN      25301/kube-controll
tcp6       0      0 :::10259                :::*                    LISTEN      25392/kube-schedule
[root@k8smaster02.host.com:/data/master]# systemctl status kube-apiserver kube-scheduler kube-controller-manager | grep active
   Active: active (running) since Wed 2020-09-09 13:23:19 CST; 6min ago
   Active: active (running) since Wed 2020-09-09 13:27:18 CST; 2min 26s ago
   Active: active (running) since Wed 2020-09-09 13:24:37 CST; 5min ago
4.3.8.3 Run on k8smaster03
[root@k8smaster03.host.com:/data/master]# ./apiserver.sh 192.168.13.103 https://192.168.13.101:2379,https://192.168.13.102:2379,https://192.168.13.103:2379
[root@k8smaster03.host.com:/data/master]# ./controller-manager.sh 127.0.0.1
[root@k8smaster03.host.com:/data/master]# ./scheduler.sh 127.0.0.1
[root@k8smaster03.host.com:/data/master]# ps -ef | grep kube
root      1322     1  1 12:48 ?        00:00:38 /opt/kubernetes/bin/etcd --config-file=/opt/kubernetes/cfg/etcd.yml
root      1561     1  4 13:25 ?        00:00:12 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --v=2 --apiserver-count=3 --authorization-mode=RBAC --client-ca-file=/opt/kubernetes/ssl/ca.pem --requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --etcd-cafile=/opt/kubernetes/ssl/ca.pem --etcd-certfile=/opt/kubernetes/ssl/client.pem --etcd-keyfile=/optkubernetes/ssl/client-key.pem --etcd-servers=http://192.168.13.101:2379,http://192.168.13.102:2379,http://192.168.13.103:2379 --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --service-cluster-ip-range=10.10.0.0/16 --service-node-port-range=30000-50000 --target-ram-mb=1024 --kubelet-client-certificate=/opt/kubernetes/ssl/client.pem --kubelet-client-key=/opt/kubernetes/ssl/client-key.pem --log-dir=/var/log/kubernetes --tls-cert-file=/opt/kubernetes/ssl/apiserver.pem --tls-private-key-file=/opt/kubernetes/ssl/apiserver-key.pem
root      1633     1  0 13:26 ?        00:00:00 /opt/kubernetes/bin/kube-controller-manager --logtostderr=true --v=2 --master=http://127.0.0.1:8080 --leader-elect=true --service-cluster-ip-range=10.10.0.0/16 --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem --cluster-cidr=10.244.0.0/16 --root-ca-file=/opt/kubernetes/ssl/ca.pem
root      1708     1  0 13:27 ?        00:00:00 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=2 --master=http://127.0.0.1:8080 --leader-elect
root      1739  6455  0 13:30 pts/0    00:00:00 grep --color=auto kube
[root@k8smaster03.host.com:/data/master]# netstat -ntlp | grep kube-
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      1561/kube-apiserver
tcp6       0      0 :::10251                :::*                    LISTEN      1708/kube-scheduler
tcp6       0      0 :::6443                 :::*                    LISTEN      1561/kube-apiserver
tcp6       0      0 :::10252                :::*                    LISTEN      1633/kube-controlle
tcp6       0      0 :::10257                :::*                    LISTEN      1633/kube-controlle
tcp6       0      0 :::10259                :::*                    LISTEN      1708/kube-scheduler
[root@k8smaster03.host.com:/data/master]# systemctl status kube-apiserver kube-scheduler kube-controller-manager | grep active
   Active: active (running) since Wed 2020-09-09 13:25:49 CST; 4min 30s ago
   Active: active (running) since Wed 2020-09-09 13:27:33 CST; 2min 47s ago
   Active: active (running) since Wed 2020-09-09 13:26:17 CST; 4min 2s ago
4.3.9 Verify the cluster health status
[root@k8smaster01.host.com:/data/master]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}   

4.4 Deploy kubelet

4.4.1 Configure automatic kubelet certificate renewal and create the Node authorization user
4.4.1.1 Create the Node authorization user kubelet-bootstrap
[root@k8smaster03.host.com:/data/master]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet-bootstrap
4.4.1.2 Create a ClusterRole that allows auto-approval of the related CSR requests
# Create the certificate-rotation ClusterRole manifest
[root@k8smaster03.host.com:/data/master]# vim tls-instructs-csr.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
# Apply it
[root@k8smaster03.host.com:/data/master]# kubectl apply -f tls-instructs-csr.yaml
clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeserver created
4.4.1.3 Auto-approve the CSR requests issued by the kubelet-bootstrap user during initial TLS bootstrapping
[root@k8smaster03.host.com:/data/master]# kubectl create clusterrolebinding node-client-auto-approve-csr --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/node-client-auto-approve-csr created

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: node-client-auto-approve-csr
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet-bootstrap
4.4.1.4 Auto-approve CSR requests from the system:nodes group for renewing the kubelet client certificate used to talk to the apiserver
[root@k8smaster03.host.com:/data/master]# kubectl create clusterrolebinding node-client-auto-renew-crt --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes
clusterrolebinding.rbac.authorization.k8s.io/node-client-auto-renew-crt created

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: node-client-auto-renew-crt
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
4.4.1.5 Auto-approve CSR requests from the system:nodes group for renewing the kubelet serving certificate on port 10250
[root@k8smaster03.host.com:/data/master]# kubectl create clusterrolebinding node-server-auto-renew-crt --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeserver --group=system:nodes
clusterrolebinding.rbac.authorization.k8s.io/node-server-auto-renew-crt created

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: node-server-auto-renew-crt
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
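To confirm that the ClusterRole and the four ClusterRoleBindings above exist as expected, a quick check (sketch):

# List the bootstrap/auto-approval ClusterRoleBindings created above
kubectl get clusterrolebinding kubelet-bootstrap node-client-auto-approve-csr node-client-auto-renew-crt node-server-auto-renew-crt
# Inspect the self-node-server ClusterRole
kubectl get clusterrole system:certificates.k8s.io:certificatesigningrequests:selfnodeserver -o yaml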
4.4.2 Prepare the pause base image on each node
docker pull registry.cn-hangzhou.aliyuncs.com/mapplezf/google_containers-pause-amd64:3.2
or
docker pull harbor.iot.com/kubernetes/google_containers-pause-amd64:3.2
or
docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
or
docker pull registry.aliyuncs.com/google_containers/pause
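If you prefer to pre-pull the image on every node in one pass, a small loop can help. A minimal sketch, assuming the passwordless SSH set up in section 2.8 and the node hostnames used in this document; the image matches the --pod-infra-container-image value used in kubelet.sh below:

# Pre-pull the pause image on all nodes over SSH (node list is an assumption)
for node in k8smaster0{1..3} k8sworker0{1..4}; do
  ssh "$node" "docker pull registry.cn-hangzhou.aliyuncs.com/mapplezf/google_containers-pause-amd64:3.2"
done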
4.4.3 Create the kubelet configuration script
[root@k8smaster01.host.com:/data/master]# vim kubelet.sh
#!/bin/bash

DNS_SERVER_IP=${1:-"10.10.0.2"}
HOSTNAME=${2:-"`hostname`"}
CLUSTERDOMAIN=${3:-"cluster.local"}

cat <<EOF >/opt/kubernetes/cfg/kubelet.conf
KUBELET_OPTS="--logtostderr=true \\
--v=2 \\
--hostname-override=${HOSTNAME} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--network-plugin=cni \\
--cni-conf-dir=/etc/cni/net.d \\
--cni-bin-dir=/opt/cni/bin \\
--cgroup-driver=systemd \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/mapplezf/google_containers-pause-amd64:3.2"
EOF
# Alternative pause image: harbor.iot.com/kubernetes/google_containers-pause-amd64:3.2

cat <<EOF >/opt/kubernetes/cfg/kubelet-config.yml
kind: KubeletConfiguration                   # object kind
apiVersion: kubelet.config.k8s.io/v1beta1    # API version
address: 0.0.0.0                             # listen address
port: 10250                                  # kubelet port
readOnlyPort: 10255                          # read-only port exposed by kubelet
cgroupDriver: cgroupfs                       # must match the driver shown by "docker info"
clusterDNS:
- ${DNS_SERVER_IP}
clusterDomain: ${CLUSTERDOMAIN}              # cluster domain
failSwapOn: false                            # do not fail when swap is enabled
# authentication
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
# authorization
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
# node resource reservation / eviction thresholds
evictionHard:
  imagefs.available: 15%
  memory.available: 1G
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
# image garbage-collection policy
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
# certificate rotation
rotateCertificates: true                     # rotate the kubelet client certificate
featureGates:
  RotateKubeletServerCertificate: true
  RotateKubeletClientCertificate: true
maxOpenFiles: 1000000
maxPods: 110
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
4.4.4 Enable kubelet on the master nodes
On k8smaster01:
[root@k8smaster01.host.com:/data/master]# chmod +x kubelet.sh
[root@k8smaster01.host.com:/data/master]# ./kubelet.sh 10.10.0.2 k8smaster01.host.com cluster.local.
[root@k8smaster01.host.com:/data/master]# systemctl status kubelet
[root@k8smaster01.host.com:/data/master]# scp kubelet.sh k8smaster02:/data/master/
[root@k8smaster01.host.com:/data/master]# scp kubelet.sh k8smaster03:/data/master/
On k8smaster02:
[root@k8smaster02.host.com:/data/master]# ./kubelet.sh 10.10.0.2 k8smaster02.host.com cluster.local.
[root@k8smaster02.host.com:/data/master]# systemctl status kubelet

On k8smaster03:
[root@k8smaster03.host.com:/data/master]# ./kubelet.sh 10.10.0.2 k8smaster03.host.com cluster.local.
[root@k8smaster03.host.com:/data/master]# systemctl status kubelet
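Because the CSR auto-approval rules from 4.4.1 are already in place, the three masters should obtain certificates and register without manual approval. A quick check (sketch):

# Bootstrap CSRs should show Approved,Issued and the masters should appear as nodes
kubectl get csr
kubectl get nodes -o wide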
4.4.5 Enable kubelet on the worker nodes
Stage the deployment files:
[root@k8smaster01.host.com:/data/master]# scp kubelet.sh k8sworker01:/data/worker/
[root@k8smaster01.host.com:/data/master]# scp kubelet.sh k8sworker02:/data/worker/
[root@k8smaster01.host.com:/data/master]# scp kubelet.sh k8sworker03:/data/worker/
[root@k8smaster01.host.com:/data/master]# scp kubelet.sh k8sworker04:/data/worker/
[root@k8smaster01.host.com:/opt/kubernetes/cfg]# scp kubelet.kubeconfig bootstrap.kubeconfig k8sworker01:/opt/kubernetes/cfg/
[root@k8smaster01.host.com:/opt/kubernetes/cfg]# scp kubelet.kubeconfig bootstrap.kubeconfig k8sworker02:/opt/kubernetes/cfg/
[root@k8smaster01.host.com:/opt/kubernetes/cfg]# scp kubelet.kubeconfig bootstrap.kubeconfig k8sworker03:/opt/kubernetes/cfg/
[root@k8smaster01.host.com:/opt/kubernetes/cfg]# scp kubelet.kubeconfig bootstrap.kubeconfig k8sworker04:/opt/kubernetes/cfg/
[root@k8smaster01.host.com:/opt/kubernetes/ssl]# scp ca.pem k8sworker01:/opt/kubernetes/ssl/
[root@k8smaster01.host.com:/opt/kubernetes/ssl]# scp ca.pem k8sworker02:/opt/kubernetes/ssl/
[root@k8smaster01.host.com:/opt/kubernetes/ssl]# scp ca.pem k8sworker03:/opt/kubernetes/ssl/
[root@k8smaster01.host.com:/opt/kubernetes/ssl]# scp ca.pem k8sworker04:/opt/kubernetes/ssl/
Note: only kubelet and kube-proxy are strictly required on the workers; kubectl and kubeadm are distributed here purely for later convenience, which is generally not recommended.
[root@k8smaster01.host.com:/opt/kubernetes/bin]# scp kubelet kube-proxy kubectl kubeadm k8sworker01:/opt/kubernetes/bin/
[root@k8smaster01.host.com:/opt/kubernetes/bin]# scp kubelet kube-proxy kubectl kubeadm k8sworker02:/opt/kubernetes/bin/
[root@k8smaster01.host.com:/opt/kubernetes/bin]# scp kubelet kube-proxy kubectl kubeadm k8sworker03:/opt/kubernetes/bin/
[root@k8smaster01.host.com:/opt/kubernetes/bin]# scp kubelet kube-proxy kubectl kubeadm k8sworker04:/opt/kubernetes/bin/

On k8sworker01:
[root@k8sworker01.host.com:/data/worker]# ./kubelet.sh 10.10.0.2 k8sworker01.host.com cluster.local.
[root@k8sworker01.host.com:/data/worker]# systemctl status kubelet

On k8sworker02:
[root@k8sworker02.host.com:/data/worker]# ./kubelet.sh 10.10.0.2 k8sworker02.host.com cluster.local.
[root@k8sworker02.host.com:/data/worker]# systemctl status kubelet

On k8sworker03:
[root@k8sworker03.host.com:/data/worker]# ./kubelet.sh 10.10.0.2 k8sworker03.host.com cluster.local.
[root@k8sworker03.host.com:/data/worker]# systemctl status kubelet

On k8sworker04:
[root@k8sworker04.host.com:/data/worker]# ./kubelet.sh 10.10.0.2 k8sworker04.host.com cluster.local.
[root@k8sworker04.host.com:/data/worker]# systemctl status kubelet
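A quick loop to confirm kubelet is active on all four workers, assuming passwordless SSH to the worker hostnames (sketch):

# Check the kubelet service state on every worker node
for node in k8sworker0{1..4}; do
  echo "== $node =="
  ssh "$node" "systemctl is-active kubelet"
done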

4.5 Deploy kube-proxy

4.5.6 Create the kube-proxy configuration script
Summary: kube-proxy is responsible for implementing Services; concretely, it handles pod-to-Service traffic inside the cluster and NodePort-to-Service access from outside.
[root@k8smaster01.host.com:/data/master]# vim proxy.sh
#!/bin/bash

HOSTNAME=${1:-"`hostname`"}

cat <<EOF >/opt/kubernetes/cfg/kube-proxy.conf
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=2 \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

cat <<EOF >/opt/kubernetes/cfg/kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
address: 0.0.0.0                    # listen address
metricsBindAddress: 0.0.0.0:10249   # metrics endpoint scraped by monitoring
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig   # kubeconfig to read
hostnameOverride: ${HOSTNAME}       # unique node name registered in k8s
clusterCIDR: 10.10.0.0/16           # service IP range
#mode: iptables                     # iptables mode
mode: ipvs                          # ipvs mode
ipvs:
  scheduler: "nq"
iptables:
  masqueradeAll: true
EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
4.5.7 Start kube-proxy on the master nodes
On k8smaster01:
[root@k8smaster01.host.com:/root]# chmod +x /data/master/proxy.sh
[root@k8smaster01.host.com:/data/master]# scp /data/package/kubernetes/server/bin/kube-proxy k8smaster02:/opt/kubernetes/bin
[root@k8smaster01.host.com:/data/master]# scp /data/package/kubernetes/server/bin/kube-proxy k8smaster03:/opt/kubernetes/bin
[root@k8smaster01.host.com:/opt/kubernetes/cfg]# scp /opt/kubernetes/cfg/kube-proxy.kubeconfig k8smaster02:/opt/kubernetes/cfg/
[root@k8smaster01.host.com:/opt/kubernetes/cfg]# scp /opt/kubernetes/cfg/kube-proxy.kubeconfig k8smaster03:/opt/kubernetes/cfg/
[root@k8smaster01.host.com:/root]# cp /data/package/kubernetes/server/bin/kube-proxy /opt/kubernetes/bin/
[root@k8smaster01.host.com:/root]# cd /data/master
[root@k8smaster01.host.com:/data/master]# scp proxy.sh k8smaster02:/data/master/
[root@k8smaster01.host.com:/data/master]# scp proxy.sh k8smaster03:/data/master/
[root@k8smaster01.host.com:/data/master]# ./proxy.sh k8smaster01.host.com
[root@k8smaster01.host.com:/data/master]# systemctl status kube-proxy.service
On k8smaster02:
[root@k8smaster02.host.com:/data/master]# ./proxy.sh k8smaster02.host.com
[root@k8smaster02.host.com:/data/master]# systemctl status kube-proxy.service
On k8smaster03:
[root@k8smaster03.host.com:/data/master]# ./proxy.sh k8smaster03.host.com
[root@k8smaster03.host.com:/data/master]# systemctl status kube-proxy.service

Check that the services started:
systemctl status kubelet kube-proxy | grep active

[root@k8smaster01.host.com:/data/master]# netstat -ntpl | egrep "kubelet|kube-proxy"
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      2490/kubelet
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      30250/kube-proxy
tcp        0      0 127.0.0.1:9943          0.0.0.0:*               LISTEN      2490/kubelet
tcp6       0      0 :::10250                :::*                    LISTEN      2490/kubelet
tcp6       0      0 :::10255                :::*                    LISTEN      2490/kubelet
tcp6       0      0 :::10256                :::*                    LISTEN      30250/kube-proxy
[root@k8smaster02.host.com:/data/master]# netstat -ntpl | egrep "kubelet|kube-proxy"
tcp        0      0 127.0.0.1:15495         0.0.0.0:*               LISTEN      13678/kubelet
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      13678/kubelet
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      12346/kube-proxy
tcp6       0      0 :::10250                :::*                    LISTEN      13678/kubelet
tcp6       0      0 :::10255                :::*                    LISTEN      13678/kubelet
tcp6       0      0 :::10256                :::*                    LISTEN      12346/kube-proxy
[root@k8smaster03.host.com:/data/master]# netstat -ntpl | egrep "kubelet|kube-proxy"
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      6342/kubelet
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      6220/kube-proxy
tcp        0      0 127.0.0.1:26793         0.0.0.0:*               LISTEN      6342/kubelet
tcp6       0      0 :::10250                :::*                    LISTEN      6342/kubelet
tcp6       0      0 :::10255                :::*                    LISTEN      6342/kubelet
tcp6       0      0 :::10256                :::*                    LISTEN      6220/kube-proxy
4.5.8 Start kube-proxy on the worker nodes
Stage the deployment files:
[root@k8smaster01.host.com:/opt/kubernetes/cfg]# scp kube-proxy.kubeconfig k8sworker01:/opt/kubernetes/cfg/
[root@k8smaster01.host.com:/opt/kubernetes/cfg]# scp kube-proxy.kubeconfig k8sworker02:/opt/kubernetes/cfg/
[root@k8smaster01.host.com:/opt/kubernetes/cfg]# scp kube-proxy.kubeconfig k8sworker03:/opt/kubernetes/cfg/
[root@k8smaster01.host.com:/opt/kubernetes/cfg]# scp kube-proxy.kubeconfig k8sworker04:/opt/kubernetes/cfg/
[root@k8smaster01.host.com:/data/master]# scp proxy.sh k8sworker01:/data/worker
[root@k8smaster01.host.com:/data/master]# scp proxy.sh k8sworker02:/data/worker
[root@k8smaster01.host.com:/data/master]# scp proxy.sh k8sworker03:/data/worker
[root@k8smaster01.host.com:/data/master]# scp proxy.sh k8sworker04:/data/worker

On k8sworker01:
[root@k8sworker01.host.com:/data/worker]# ./proxy.sh k8sworker01.host.com
[root@k8sworker01.host.com:/data/worker]# systemctl status kube-proxy.service

On k8sworker02:
[root@k8sworker02.host.com:/data/worker]# ./proxy.sh k8sworker02.host.com
[root@k8sworker02.host.com:/data/worker]# systemctl status kube-proxy.service

On k8sworker03:
[root@k8sworker03.host.com:/data/worker]# ./proxy.sh k8sworker03.host.com
[root@k8sworker03.host.com:/data/worker]# systemctl status kube-proxy.service

On k8sworker04:
[root@k8sworker04.host.com:/data/worker]# ./proxy.sh k8sworker04.host.com
[root@k8sworker04.host.com:/data/worker]# systemctl status kube-proxy.service
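Because kube-proxy runs in ipvs mode with the "nq" scheduler, Service rules should show up as IPVS virtual servers rather than long iptables chains. A quick check on any node, assuming the ipvsadm tool is installed (sketch):

# List IPVS virtual servers; the cluster Service IPs (10.10.0.0/16) should appear with the nq scheduler
ipvsadm -Ln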

4.6 Check cluster status

[root@k8smaster01.host.com:/data/yaml]# kubectl get nodes
NAME                   STATUS   ROLES    AGE    VERSION
k8smaster01.host.com   Ready    <none>   165m   v1.19.0
k8smaster02.host.com   Ready    <none>   164m   v1.19.0
k8smaster03.host.com   Ready    <none>   164m   v1.19.0
k8sworker01.host.com   Ready    <none>   15m    v1.19.0
k8sworker02.host.com   Ready    <none>   14m    v1.19.0
k8sworker03.host.com   Ready    <none>   12m    v1.19.0
k8sworker04.host.com   Ready    <none>   12m    v1.19.0

[root@k8smaster01.host.com:/data/master]# kubectl get csr
NAME        AGE     SIGNERNAME                                    REQUESTOR           CONDITION
csr-f5c7c   3m33s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
csr-hhn6n   9m14s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
csr-hqnfv   3m23s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

4.7 Additional settings

4.7.1 Add taints to the master nodes
[root@k8smaster01.host.com:/root]# kubectl describe node | grep -i taint
Taints:             <none>
Taints:             <none>
Taints:             <none>
Taints:             <none>
Taints:             <none>
Taints:             <none>
Taints:             <none>

[root@k8smaster01.host.com:/root]# kubectl describe node k8smaster01 | grep -i taint
Taints:             <none>

[root@k8smaster01.host.com:/root]# kubectl taint nodes k8smaster01.host.com node-role.kubernetes.io/master:NoSchedule
node/k8smaster01.host.com tainted
[root@k8smaster01.host.com:/root]# kubectl taint nodes k8smaster02.host.com node-role.kubernetes.io/master:NoSchedule
node/k8smaster02.host.com tainted
[root@k8smaster01.host.com:/root]# kubectl taint nodes k8smaster03.host.com node-role.kubernetes.io/master:NoSchedule
node/k8smaster03.host.com tainted

[root@k8smaster01.host.com:/root]# kubectl describe node | grep -i taint
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             <none>
Taints:             <none>
Taints:             <none>
Taints:             <none>
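From now on, only pods that tolerate the node-role.kubernetes.io/master:NoSchedule taint will be scheduled onto the masters. If you later need to undo this, the taint can be removed with the trailing "-" syntax (sketch):

# Remove the NoSchedule taint from a master (note the trailing "-")
kubectl taint nodes k8smaster01.host.com node-role.kubernetes.io/master:NoSchedule-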
4.7.2 Command completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> /etc/rc.local

五、Deploy the network components

5.1 Deploy the calico network (BGP mode)

Calico official site: https://docs.projectcalico.org/about/about-calico

5.1.1 Modify parts of the configuration

For the complete manifest, see the attachment or download it from the official site.

The section below sits at roughly line 3500 of the file.
Pay close attention to the places marked "#### modify below".
---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      priorityClassName: system-node-critical
      initContainers:
        # This container performs upgrade from host-local IPAM to calico-ipam.
        # It can be deleted if this is a fresh installation, or if you have already
        # upgraded to use calico-ipam.
        - name: upgrade-ipam
          image: calico/cni:v3.16.1
          command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
          envFrom:
            - configMapRef:
                # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
                name: kubernetes-services-endpoint
                optional: true
          env:
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
          volumeMounts:
            - mountPath: /var/lib/cni/networks
              name: host-local-net-dir
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
          securityContext:
            privileged: true
        # This container installs the CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: calico/cni:v3.16.1
          command: ["/opt/cni/bin/install"]
          envFrom:
            - configMapRef:
                # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
                name: kubernetes-services-endpoint
                optional: true
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # Set the hostname based on the k8s node name.
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
          securityContext:
            privileged: true
        # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
        # to communicate with Felix over the Policy Sync API.
        - name: flexvol-driver
          image: calico/pod2daemon-flexvol:v3.16.1
          volumeMounts:
            - name: flexvol-driver-host
              mountPath: /host/driver
          securityContext:
            privileged: true
      containers:
        # Runs calico-node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.16.1
          envFrom:
            - configMapRef:
                # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
                name: kubernetes-services-endpoint
                optional: true
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Environment variable added to the DaemonSet        #### modify below for dual-network hosts
            - name: IP_AUTODETECTION_METHOD
              value: skip-interface=enp0s20f0u1|em2
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type  #### modify below
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.  #### modify below
            - name: IP
              value: "autodetect"
            # Enable IPIP   #### modify below
            - name: CALICO_IPV4POOL_IPIP
              value: "Never"
            # Enable or Disable VXLAN on the default IP pool.
            - name: CALICO_IPV4POOL_VXLAN
              value: "Never"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Set MTU for the VXLAN tunnel device.
            - name: FELIX_VXLANMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Set MTU for the Wireguard tunnel device.
            - name: FELIX_WIREGUARDMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.101.0/24,10.244.102.0/24,10.244.103.0/24,10.244.105.0/24,10.244.106.0/24,10.244.107.0/24,10.244.108.0/24"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            exec:
              command:
                - /bin/calico-node
                - -felix-live
                - -bird-live
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
                - /bin/calico-node
                - -felix-ready
                - -bird-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - name: policysync
              mountPath: /var/run/nodeagent
            # For eBPF mode, we need to be able to mount the BPF filesystem at /sys/fs/bpf so we mount in the
            # parent directory.
            - name: sysfs
              mountPath: /sys/fs/
              # Bidirectional means that, if we mount the BPF filesystem at /sys/fs/bpf it will propagate to the host.
              # If the host is known to mount that filesystem already then Bidirectional can be omitted.
              mountPropagation: Bidirectional
      volumes:
        # Used by calico-node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        - name: sysfs
          hostPath:
            path: /sys/fs/
            type: DirectoryOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the directory for host-local IPAM allocations. This is
        # used when upgrading from host-local to calico-ipam, and can be removed
        # if not using the upgrade-ipam init container.
        - name: host-local-net-dir
          hostPath:
            path: /var/lib/cni/networks
        # Used to create per-pod Unix Domain Sockets
        - name: policysync
          hostPath:
            type: DirectoryOrCreate
            path: /var/run/nodeagent
        # Used to install Flex Volume Driver
        - name: flexvol-driver-host
          hostPath:
            type: DirectoryOrCreate
            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---
5.1.2 Apply the calico manifest
[root@k8smaster01.host.com:/opt/calicoctl]# kubectl apply -f calico.yaml
configmap/calico-config unchanged
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node configured
serviceaccount/calico-node unchanged
deployment.apps/calico-kube-controllers configured
serviceaccount/calico-kube-controllers unchanged

Note: calico.yaml is too long to include in full here; see the attachment for the complete file.
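After applying, it is worth confirming that calico-node is running on every node and that the BGP sessions are established. A minimal sketch, assuming calicoctl has been installed under /opt/calicoctl as the prompt above suggests:

# All calico pods should be Running/Ready, one calico-node per node
kubectl get pods -n kube-system -o wide | grep calico
# BGP peerings should show "Established" (run on any node where calicoctl is installed)
/opt/calicoctl/calicoctl node status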

5.2 Deploy coredns

Upstream reference: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.base

5.2.1 Edit the coredns manifest
[root@k8smaster01.host.com:/data/yaml/coredns]# vim coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local. in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: registry.cn-hangzhou.aliyuncs.com/mapplezf/coredns:1.7.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 1024Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.10.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
5.2.2 Apply the coredns manifest
[root@k8smaster01.host.com:/data/yaml/coredns]# kubectl apply -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
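A quick way to verify that CoreDNS answers on the cluster IP 10.10.0.2 is to resolve a Service name from a throwaway pod. A sketch, assuming the nodes can pull the busybox image:

# Resolve the kubernetes Service through the in-cluster DNS (should return 10.10.0.1)
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default.svc.cluster.local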

六、Deploy traefik

6.1 Edit the traefik resource manifests

6.1.1 Edit the Service
[root@k8smaster01.host.com:/data/yaml/traefik/ingress]# vim svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
spec:
  type: LoadBalancer
  selector:
    app: traefik
  ports:
    - protocol: TCP
      port: 80
      name: web
      targetPort: 80
    - protocol: TCP
      port: 8080
      name: admin
      targetPort: 8080
6.1.2 Edit the RBAC rules
[root@k8smaster01.host.com:/data/yaml/traefik/ingress]# vim rbac-traefik-ingress.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses/status
    verbs:
      - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: kube-system
6.1.3 Edit the traefik Ingress
[root@k8smaster01.host.com:/data/yaml/traefik/ingress]# vim ingress-traefik.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: traefik-ingress
  namespace: kube-system
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
  - host: traefik.lowan.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik    # must match the Service name
          servicePort: 8080
6.1.4 Edit the Deployment
[root@k8smaster01.host.com:/data/yaml/traefik/ingress]# vim deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik
  namespace: kube-system
  labels:
    app: traefik
  annotations:
    prometheus_io_scheme: "traefik"
    prometheus_io_path: "/metrics"
    prometheus_io_port: "8080"
spec:
  replicas: 7
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      containers:
        - name: traefik
          image: harbor.iot.com/kubernetes/traefik:v2.3
          args:
            - --log.level=DEBUG
            - --api
            - --api.insecure
            - --entrypoints.web.address=:80
            - --providers.kubernetesingress
            - --metrics.prometheus
            - --accesslog
            - --accesslog.filepath=/var/log/traefik_access.log
          ports:
            - name: web
              containerPort: 80
              hostPort: 81
            - name: admin
              containerPort: 8080
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE

6.2 Apply the manifests

kubectl apply -f svc.yaml
kubectl apply -f rbac-traefik-ingress.yaml
kubectl apply -f ingress-traefik.yaml
kubectl apply -f deployment.yaml

6.3 Check status

[root@k8smaster01.host.com:/data/yaml/traefik/ingress]# kubectl get pods -n kube-system -o wide | grep -i traefik
traefik-6787756cf4-7csbq        1/1     Running   0    12h   172.16.234.142   k8smaster02.host.com
traefik-6787756cf4-f6zvw        1/1     Running   0          12h   172.16.222.139   k8sworker04.host.com
traefik-6787756cf4-glkn8        1/1     Running   0          12h   172.16.15.203    k8sworker01.host.com
traefik-6787756cf4-glq9s        1/1     Running   0          12h   172.16.111.204   k8sworker02.host.com
traefik-6787756cf4-gsxqd        1/1     Running   0          12h   172.16.236.11    k8sworker03.host.com
traefik-6787756cf4-hdc79        1/1     Running   0          12h   172.16.53.78     k8smaster03.host.com
traefik-6787756cf4-scqzg        1/1     Running   0          12h   172.16.170.77    k8smaster01.host.com  
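Before the DNS record and the nginx reverse proxy are configured, the ingress path can be tested directly against any node, because the traefik pods publish hostPort 81 for the web entrypoint. A sketch using one master's address:

# Send a request with the expected Host header straight to traefik's hostPort
curl -s -o /dev/null -w "%{http_code}\n" -H "Host: traefik.lowan.com" http://192.168.13.101:81/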

6.4 Add DNS records and the reverse proxy

[root@lb03.host.com:/root]# vim /var/named/lowan.com.zone
$ORIGIN lowan.com.
$TTL 600        ; 10 minutes
@       IN SOA  dns.lowan.com.   dnsadmin.lowan.com. (
                2020101004        ; serial
                10800             ; refresh (3 hours)
                900               ; retry (15 minutes)
                604800            ; expire (1 week)
                86400 )           ; minimum (1 day)
        NS      dns.lowan.com.
$TTL 60         ; 1 minutes
dns             A       192.168.13.99
traefik         A       192.168.13.100
:x to save and exit
[root@lb03.host.com:/root]# systemctl restart named
[root@lb03.host.com:/root]# dig -t A traefik.lowan.com @192.168.13.99 +short
192.168.13.100

6.6 Test the traefik dashboard

Visit: http://traefik.lowan.com/

七、Deploy Kubernetes Dashboard

7.1 Download the Dashboard yaml file

[root@k8smaster01.host.com:/data/yaml/dashboard]# wget -c https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml

7.2 Create the Dashboard namespace and storage

[root@k8smaster01.host.com:/data/yaml/dashboard]# vim namespace-dashboard.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

[root@k8smaster01.host.com:/data/yaml/dashboard]# vim pv-nfs-dashboard.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8sdashboard
  labels:
    volume: k8sdashboard
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /nfs/k8sdashboard
    server: lb01.host.com

[root@k8smaster01.host.com:/data/yaml/dashboard]# vim pvc-nfs-dashboard.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: k8sdashboard
  namespace: kubernetes-dashboard
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 100Gi

7.3 Apply recommended.yaml

[root@k8smaster01.host.com:/data/yaml/dashboard]# vim recommended.yaml
Modify: add the NFS volume
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          nfs:
            path: /nfs/k8sdashboard
            server: lb01.host.com

[root@k8smaster01.host.com:/data/yaml/dashboard]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

[root@k8smaster01.host.com:/data/yaml/dashboard]# kubectl get pods -n kubernetes-dashboard -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP         NODE
dashboard-metrics-scraper-66f98b8bf9-fg5z4   1/1    Running   0    32s  172.16.170.82 k8smaster01.host.com
kubernetes-dashboard-6bc5bd9b79-v7xjz        1/1    Running  0     32s  172.16.170.83 k8smaster01.host.com 

7.4 Create authentication

7.4.1 Create the login account and obtain a token
[root@k8smaster01.host.com:/data/yaml/dashboard]# vim dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

[root@k8smaster01.host.com:/data/yaml/dashboard]# kubectl apply -f dashboard-adminuser.yaml
serviceaccount/admin-user created

[root@k8smaster01.host.com:/data/yaml/dashboard]# kubectl describe secrets -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-92zvt
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 91a5c456-2ac0-47ac-b664-95b3f3d000b9

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1359 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkQ4QldRbFA2emtGNzktSHVUdjRQWU11dkppRTRrZ2tsV2dwelUzeXA3OVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTkyenZ0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5MWE1YzQ1Ni0yYWMwLTQ3YWMtYjY2NC05NWIzZjNkMDAwYjkiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.HhHMq34IWWjE8mv-hu573IUfMdBCOjj2Ys9kqBzfA-phJm5HzmXAtppvnpjuTXrkh80MPwcuYaV2kMnQ74jLwiXpX1jNQLz4Dyq6Ymzkjjrxx2WIJwlCX-FBk2iyy9bjhLyfqk9ytofgKvLnqcmvT_hDo-evsSD4RHN8S_kt_KT4CT1PxRZ3BwHKviC5KS0voFNZtg3F5ulDEHW2CT5U2HoPYdzJt5xcbBh-o9Qpm44j7OjF6y3eEkNmG6DGxva-lGN2ltoJdIBRhlAlrxB0jA0FnAqWuPgBqjHwg4edKNzmuMsuGuSr-PHeYbN-8RNjc-CR9bVeSa4JLUo0LhL59A
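If you only need the raw token (for example to paste into the dashboard login page or into a kubeconfig), it can be extracted directly from the secret; a sketch:

# Print only the admin-user token
kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d; echo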
7.4.2 Create a self-signed certificate

Here openssl is used to generate the certificate; you can also generate it with the cfssl tool, as done earlier in this document.

[root@k8smaster01.host.com:/data/ssl]# cp ca.pem ca-key.pem /data/yaml/dashboard/
[root@k8smaster01.host.com:/data/yaml/dashboard]# (umask 077; openssl genrsa -out kubernetes.lowan.com.key 2048)
Generating RSA private key, 2048 bit long modulus
...........................................................+++
................................................................+++
e is 65537 (0x10001)

[root@k8smaster01.host.com:/data/yaml/dashboard]# openssl req -new -key kubernetes.lowan.com.key -out kubernetes.lowan.com.csr -subj "/CN=kubernetes.lowan.com/C=CN/ST=Beijing/L=Beijing/O=k8s/OU=system"
[root@k8smaster01.host.com:/data/yaml/dashboard]# openssl x509 -req -in kubernetes.lowan.com.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out kubernetes.lowan.com.pem -days 3650
Signature ok
subject=/CN=kubernetes.lowan.com/C=CN/ST=Beijing/L=Beijing/O=k8s/OU=system
Getting CA Private Key
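Optionally verify the resulting certificate before handing it to nginx; a sketch:

# Check the subject, validity period and issuer of the new certificate
openssl x509 -in kubernetes.lowan.com.pem -noout -subject -dates -issuer
# Confirm it chains to the cluster CA
openssl verify -CAfile ca.pem kubernetes.lowan.com.pem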
7.4.3 Create a Secret object that references the certificate files
[root@k8smaster01.host.com:/data/yaml/dashboard]# kubectl create secret tls kubedashboard-tls --cert=/data/yaml/dashboard/kubernetes.lowan.com.pem --key=kubernetes.lowan.com.key -n kubernetes-dashboard
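Confirm the TLS secret was created and contains both the certificate and the key (sketch):

# The secret should be of type kubernetes.io/tls with tls.crt and tls.key populated
kubectl -n kubernetes-dashboard describe secret kubedashboard-tls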
7.4.4 Create the IngressRoute for the dashboard
[root@k8smaster01.host.com:/data/yaml/dashboard]# vim ingressroute-kube.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: kubernetes-dashboard-route
  namespace: kubernetes-dashboard
  annotations:
    # traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.entrypoints: web
    kubernetes.io/ingress.class: "traefik"
spec:
  entryPoints:
    - web
  # tls:
  #   secretName: kubedashboard-tls
  routes:
    - match: Host(`kubernetes.lowan.com`) && PathPrefix(`/`)
      kind: Rule
      services:
        - name: kubernetes-dashboard
          port: 443
7.4.5 Configure the nginx reverse proxy
7.4.5.1 Distribute the pem and key files to the nginx hosts
[root@k8smaster01.host.com:/data/yaml/dashboard]# scp kubernetes.lowan.com.key  kubernetes.lowan.com.pem lb01:/data/nginx/conf/certs/
[root@k8smaster01.host.com:/data/yaml/dashboard]# scp kubernetes.lowan.com.key  kubernetes.lowan.com.pem lb02:/data/nginx/conf/certs/
[root@k8smaster01.host.com:/data/yaml/dashboard]# scp kubernetes.lowan.com.key  kubernetes.lowan.com.pem lb03:/data/nginx/conf/certs/

or

for i in {01..03}; do
  scp kubernetes.lowan.com.key kubernetes.lowan.com.pem lb$i:/data/nginx/conf/certs/
  scp rancher.lowan.com.key rancher.lowan.com.pem lb$i:/data/nginx/conf/certs/
done
7.4.5.2 Modify nginx.conf
[root@lb03.host.com:/data/nginx/conf]# vim nginx.conf
#user  nobody;
worker_processes  4;
worker_rlimit_nofile 40000;
#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  10240;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    include /data/nginx/conf/conf.d/*.conf;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    upstream default_backend_traefik {
        server 192.168.13.101:81     max_fails=3     fail_timeout=10s;
        server 192.168.13.102:81     max_fails=3     fail_timeout=10s;
        server 192.168.13.103:81     max_fails=3     fail_timeout=10s;
        server 192.168.13.105:81     max_fails=3     fail_timeout=10s;
        server 192.168.13.106:81     max_fails=3     fail_timeout=10s;
        server 192.168.13.107:81     max_fails=3     fail_timeout=10s;
        server 192.168.13.108:81     max_fails=3     fail_timeout=10s;
    }
    upstream https_traefik {
        server 192.168.13.101:10443     max_fails=3     fail_timeout=10s;
        server 192.168.13.102:10443     max_fails=3     fail_timeout=10s;
        server 192.168.13.103:10443     max_fails=3     fail_timeout=10s;
        server 192.168.13.105:10443     max_fails=3     fail_timeout=10s;
        server 192.168.13.106:10443     max_fails=3     fail_timeout=10s;
        server 192.168.13.107:10443     max_fails=3     fail_timeout=10s;
        server 192.168.13.108:10443     max_fails=3     fail_timeout=10s;
    }
    upstream harbor_registry {
        server harbor.iot.com:1443     max_fails=3     fail_timeout=10s;
    }

    server {
        listen       80;
        server_name  *.lowan.com;

        #charset koi8-r;
        #access_log  logs/host.access.log  main;

        location / {
            proxy_pass http://default_backend_traefik;
            proxy_set_header Host   $http_host;
            proxy_set_header x-Forwarded-For $proxy_add_x_forwarded_for;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

    # HTTPS server
    #
    server {
        listen       443 ssl http2;
        server_name  kubernetes.lowan.com;

        ssl_certificate      "certs/kubernetes.lowan.com.pem";
        ssl_certificate_key  "certs/kubernetes.lowan.com.key";
        ssl_session_cache    shared:SSL:1m;
        ssl_session_timeout  10m;
        ssl_ciphers  HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers  on;

        location / {
            proxy_pass http://default_backend_traefik;
            client_max_body_size  1024m;
            proxy_set_header Host   $http_host;
            proxy_set_header x-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    server {
        listen       443 ssl http2;
        server_name  harbor.iot.com;

        #   ssl_certificate      "certs/harbor.iot.com.pem";
        #   ssl_certificate_key  "certs/harbor.iot.com.key";
        ssl_session_cache    shared:SSL:1m;
        ssl_session_timeout  10m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE;
        ssl_prefer_server_ciphers  on;

        location / {
            proxy_pass https://harbor_registry;
            client_max_body_size  2048m;
            proxy_set_header Host   $host:$server_port;
            proxy_set_header    X-Real-IP      $remote_addr;
            proxy_set_header x-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
:x to save and exit
[root@lb03.host.com:/data/nginx/conf]# nginx -s reload
Likewise, scp nginx.conf to the lb01 and lb02 nodes so that all three keep identical files.

7.5 Configure DNS resolution

[root@lb03.host.com:/root]# vim /var/named/lowan.com.zone
Add:
kubernetes      A       192.168.13.100

[root@lb03.host.com:/root]# systemctl restart named

7.6 Login test

Login URL: https://kubernetes.lowan.com

7.7 Configure dashboard permission separation

Note: dashboard permission separation is mainly used so that developers and testers can log in to the k8s dashboard with limited permissions.

7.7.1 Create the ServiceAccount, ClusterRole and ClusterRoleBinding

Note: ClusterRole and ClusterRoleBinding are used exactly like Role and RoleBinding, except that their definitions have no Namespace field.

[root@k8smaster01.host.com:/data/yaml/role-based-access-control]# vim lister-cluster-role.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: lister-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: lister-clusterrole
rules:
- apiGroups: [""]
  resources: ["pods", "daemonsets", "deployments", "jobs", "replicasets", "statefulsets", "replicationcontrollers", "services", "ingresses", "configmaps", "namespaces", "nodes"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: lister-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: lister-clusterrole
subjects:
- kind: ServiceAccount
  name: lister-user
  namespace: kubernetes-dashboard

[root@k8smaster01.host.com:/data/yaml/role-based-access-control]# kubectl apply -f lister-cluster-role.yaml
serviceaccount/lister-user created
clusterrole.rbac.authorization.k8s.io/lister-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/lister-clusterrolebinding created

[root@k8smaster01.host.com:/data/yaml/role-based-access-control]# kubectl describe secrets -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret | grep lister-user | awk '{print $1}')
Name:         lister-user-token-nbzzv
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: lister-user
              kubernetes.io/service-account.uid: b8d2892e-9848-4c17-a75d-4d7e78f160ef

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1359 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkQ4QldRbFA2emtGNzktSHVUdjRQWU11dkppRTRrZ2tsV2dwelUzeXA3OVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJsaXN0ZXItdXNlci10b2tlbi1uYnp6diIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJsaXN0ZXItdXNlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImI4ZDI4OTJlLTk4NDgtNGMxNy1hNzVkLTRkN2U3OGYxNjBlZiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDpsaXN0ZXItdXNlciJ9.a2g9BTnrz2CB0XgkeUcdnvJOX6ceR_9-t_ux3zNcc1ktcjqSfe1-9J1I4tS0taKpJ8BesJQ8tD8IjOGYAv_jDVHpBTcEFdJlFwXKO9ot3YWRZ1Rpoic6b8NfvAAfZnuYCDoMbcIoRPngFIPpC_oFG2DkkGqWbHc2mJHoA5XiQyudAt-uPaTavdlw1cDC3fsKkwh_lVnEHaX8eFyMslUd_voAATAXTLYgMyYk_adElfa2tL63pE2IirRWmbVow4ZoZ2tmi--ORQI6ChXqxM4p9xdSpnnz8P43rBI9HK3ewNuc50iu4Zs7ANuP1qk9W0kWiEqQ5haHpoJB_JzcMtf7Rg

Note: this user has view-only permissions.
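The restricted token can be sanity-checked from the command line before handing it to users: kubectl auth can-i with the lister-user token should allow reads but deny writes. A sketch, assuming the token above is stored in $LISTER_TOKEN and that the apiserver is reachable through the VIP at https://192.168.13.100:6443 (run it against the secure endpoint, not the local insecure port 8080, otherwise RBAC is bypassed):

LISTER_TOKEN='<token printed above>'   # hypothetical variable holding the lister-user token
kubectl auth can-i list pods --all-namespaces --token="$LISTER_TOKEN" \
  --server=https://192.168.13.100:6443 --certificate-authority=/opt/kubernetes/ssl/ca.pem   # expect: yes
kubectl auth can-i delete pods --token="$LISTER_TOKEN" \
  --server=https://192.168.13.100:6443 --certificate-authority=/opt/kubernetes/ssl/ca.pem   # expect: no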


Follow-up references:

02 Kubernetes auxiliary environment setup
03 K8S cluster network ACL rules
04 Ceph cluster deployment
05 Deploy the zookeeper and kafka clusters
06 Deploy the logging system
07 Deploy Influxdb-telegraf
08 Deploy jenkins
09 Deploy k3s and Helm-Rancher
10 Deploy maven
11 Deploy the Armory distribution of Spinnaker
