Table of Contents

Foreword

I. A brief introduction to k8s

1. Overall architecture of k8s

2. Deployment notes

3. The ansible hosts file

II. Base system preparation

1. Disable the firewall and SELinux

2. Disable swap

3. Upgrade the Linux kernel

4. Tune kernel parameters

5. Enable IPVS

6. Set the system time zone

7. Create the working directories

8. Set up the environment variable script

III. Create the certificates

1. Install the cfssl toolset

2. Create the CA certificate

3. Distribute the certificates

4. Deploy the kubectl command-line tool

5. Generate the admin certificate

6. Generate the kubeconfig file

IV. etcd deployment

1. Distribute the files

2. Create the etcd certificate and private key

3. Create the systemd unit

V. Deploy the Flannel network

1. Distribute the files

2. Create the certificate and key

3. Write the network config into etcd

4. Create the systemd unit and start the service

VI. Deploy the HA layer

1. kube-nginx deployment

2. keepalived deployment

VII. kube-apiserver deployment

1. Unpack and distribute the files

2. Create the Kubernetes certificate and private key

3. Create the encryption config file

4. Create the audit policy file

5. Create the certificate signing request

6. Create the kube-apiserver systemd unit

VIII. Deploy kube-controller-manager

1. Create the certificate signing request

2. Create and distribute the kubeconfig file

3. Create the kube-controller-manager systemd unit

4. Start the service

5. kube-controller-manager permissions

IX. Deploy kube-scheduler

1. Create the certificate signing request

2. Create and distribute the kubeconfig file

3. Create the kube-scheduler config file

4. Create the systemd unit

5. Start the service and verify

X. Install docker

1. Install the docker service

2. Create the config files

3. Start the service and verify

XI. Deploy the kubelet component

1. Create the kubelet bootstrap kubeconfig file

2. Check the tokens kubeadm created for each node

3. Create the config file

4. Create and distribute the kubelet systemd unit

5. Approve the CSR requests

6. Bearer token authentication and authorization

XII. Deploy the kube-proxy component

1. Create the kube-proxy certificate signing request

2. Create and distribute the kubeconfig file

3. Create the kube-proxy config file

4. Start the kube-proxy service

XIII. Test cluster functionality

1. Create the yml file

2. Check the pods

3. Test connectivity

4. Check the service

XIV. Install CoreDNS

1. Create the yaml file

2. Create coredns

3. Test DNS functionality

XV. Afterword


Foreword

I have been working with k8s on and off since 2017, so it has been a few years now; I started around 1.11 and we are almost at 1.20. Time flies! I have been meaning to write down everything I use for a while. After all I am 35 and have been knocking around this trade for 12 years, so it is about time to take stock; honestly, you cannot do this work forever. Enough of that, I will save the sentiment for a separate post. Let's get started!

I. A brief introduction to k8s

I will not go into much depth here; k8s is a very large system and cannot be explained in a sentence or two. If I went too deep I probably could not explain it clearly anyway. Still, the basics are worth covering, if only for interviews. To be fair, theory really does matter once you go deeper: when troubleshooting, a solid theoretical background makes it much easier to pinpoint the fault; without it you are basically relying on luck. Should I write more theory? Maybe later, when I have the time.

1. Overall architecture of k8s

Not much to say here, so let's start with a diagram. It is not mine; I copied it from some guru online, so if there is a copyright problem let me know and I will swap in someone else's.

Broadly speaking, k8s splits into two parts: the master and the nodes.

  • The master consists of four components: APIServer, scheduler, controller-manager and etcd.

APIServer: the APIServer exposes the RESTful Kubernetes API and is the single entry point for management operations. Every create, delete, update or get on a resource goes through the APIServer before being persisted to etcd. kubectl (the client tool shipped with Kubernetes) is simply a wrapper around the Kubernetes API and talks to the APIServer directly.

scheduler: the scheduler places Pods onto suitable Nodes. Treating it as a black box, its input is a Pod plus a list of candidate Nodes, and its output is a binding of that Pod to one Node. Kubernetes ships a default scheduling algorithm and also exposes an interface, so users can plug in scheduling algorithms of their own.

controller-manager: if the APIServer handles the front-office work, the controller-manager is the back office. Every resource type has a corresponding controller, and the controller-manager is responsible for running all of them. For example, when we create a Pod through the APIServer, the APIServer's job is done as soon as the Pod object is created; the controllers take over from there.

etcd: etcd is a highly available key-value store that Kubernetes uses to persist the state of every resource, which is what backs the RESTful API.

  • A Node runs three main components: kubelet, kube-proxy and Docker.

kube-proxy: this component implements service discovery and reverse proxying in Kubernetes. It supports TCP and UDP forwarding and by default balances client traffic across the backend Pods of a service using Round Robin. For service discovery, kube-proxy uses etcd's watch mechanism to track changes to Service and Endpoint objects and maintains a service-to-endpoint mapping, so changes to backend Pod IPs are transparent to callers. It also supports session affinity.

kubelet: the kubelet is the master's agent on every Node and the most important component there. It maintains and manages all containers on the Node (except containers not created through Kubernetes, which it ignores). In essence, it drives a Pod's actual state towards its desired state.

Docker: creates, configures and runs the containers, and is the runtime the Pods rely on.
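To make this a bit more concrete, especially the point that kubectl is nothing more than a client of the APIServer's REST API, here is a small check you can run once the cluster built in this document is up (the VIP and certificate paths are the ones set up in the later sections):

# -v=8 makes kubectl print the underlying HTTP calls it sends to the kube-apiserver
kubectl get nodes -v=8
# The same resource can be fetched with curl against the API directly, using the admin cert generated in section III
curl --cacert /etc/kubernetes/cert/ca.pem \
     --cert /opt/k8s/work/admin.pem \
     --key /opt/k8s/work/admin-key.pem \
     https://10.120.200.100:8443/api/v1/nodes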

That is as far as I will go here; I do not want to just repeat what is already all over the internet. Interviews mostly stick to these basics; going deeper they ask about Pods, PVs, PVCs and the like, which I will cover in a separate write-up. There is not much else to it; most interviewers do not dare to dig too deep either, because they would have to know the answers themselves. That covers maybe eighty percent of them, although there are of course plenty of genuinely strong people out there too.

2. Deployment notes

First of all, to save time a lot of this is driven by ansible, using plain ad-hoc ansible commands rather than playbooks; I may turn it into a playbook set later, but for now I do not see the need. If anyone wants to work on that part, get in touch and we can dig into it together. Also, I run a local DNS inside my network and use hostnames throughout, so I did not set up /etc/hosts entries on the machines. Finally, I built a local yum repository; a few packages may be missing if you install straight from the internet, but do not worry, they are easy enough to find yourself. You cannot expect a single document to do all the work; most of the job is still on you. What follows is our test environment; production is much the same, except that there we have a dozen or so node machines. Here I will describe one node, or at most two where one is not enough to make things clear. My ansible control machine is a separate box. Adjust to your own environment as needed. The plan is as follows:

Role       | IP            | Hostname              | Components
master-01  | 10.120.200.2  | k8s-master01.k8s.com  | kube-apiserver, kube-controller-manager, kube-scheduler, etcd
master-02  | 10.120.200.3  | k8s-master02.k8s.com  | kube-apiserver, kube-controller-manager, kube-scheduler, etcd
node-01    | 10.120.200.10 | k8s-node01.k8s.com    | kubelet, kube-proxy, docker, etcd
node-02    | 10.120.200.4  | k8s-node02.k8s.com    | kubelet, kube-proxy, docker, ansible
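As mentioned above, I resolve these hostnames through an internal DNS. If you do not have one, a rough equivalent (a sketch based on the table above, not what I actually run) is to append the entries to /etc/hosts on every machine:

# Name resolution for the cluster hosts (only needed when there is no internal DNS)
cat >> /etc/hosts <<EOF
10.120.200.2  k8s-master01.k8s.com
10.120.200.3  k8s-master02.k8s.com
10.120.200.10 k8s-node01.k8s.com
10.120.200.4  k8s-node02.k8s.com
EOF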

3. The ansible hosts file

[master]

k8s-master01.k8s.com
k8s-master02.k8s.com

[node]
k8s-node01.k8s.com
k8s-node02.k8s.com

[etcd]
k8s-master01.k8s.com
k8s-master02.k8s.com
k8s-node07.k8s.com
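Before running anything else it is worth confirming that ansible can actually reach every host in this inventory; a minimal connectivity check looks like this:

# Every host should answer with "pong"
ansible all -m ping
# Or check a single group
ansible etcd -m ping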

II. Base system preparation

1. Disable the firewall and SELinux

ansible all -m shell -a "systemctl stop firewalld && systemctl disable firewalld"
ansible all -m shell -a "setenforce 0"
ansible all -m shell -a "yum install yum-utils -y"
ansible all -m shell -a "sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config"

2. Disable swap

ansible all -m shell -a "swapoff -a && sysctl -w vm.swappiness=0"
ansible all -m shell -a "sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab"

3. Upgrade the Linux kernel

ansible all -m shell -a "mkdir /data/"
ansible all -m copy -a "src=./kernel-ml-5.11.8-1.el7.elrepo.x86_64.rpm dest=/data/ "
ansible all -m shell -a "chdir=/data rpm -ivh kernel-ml-5.11.8-1.el7.elrepo.x86_64.rpm"
ansible all -m shell -a "grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg"
#Check that the default boot kernel now points at the kernel installed above
ansible all -m shell -a "grubby --default-kernel"
#While we are at it, update the system. Running yum update through ansible can error out due to timeouts or display issues; if so, run the command on each machine directly
ansible all -m shell -a "yum update -y"
ansible all -m shell -a "reboot"

4. Tune kernel parameters

cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
#net.ipv4.tcp_tw_recycle=0
vm.swappiness=0 # never swap; only allow it when the system actually OOMs
vm.overcommit_memory=1 # do not check whether physical memory is sufficient
vm.panic_on_oom=0 # do not panic on OOM; let the OOM killer handle it
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
ansible all -m copy -a "src=./kubernetes.conf dest=/etc/sysctl.d/"
ansible all -m shell -a "sysctl -p /etc/sysctl.d/kubernetes.conf"
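A quick spot-check that the new values are actually live on every node (these two keys always exist, so the command is safe anywhere):

ansible all -m shell -a "sysctl net.ipv4.ip_forward vm.swappiness"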

5. Enable IPVS

cat > ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
ansible all -m copy -a "src=./ipvs.modules dest=/etc/sysconfig/modules/"
ansible all -m shell -a "chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4"

6. Set the system time zone

ansible all -m shell -a "timedatectl set-timezone Asia/Shanghai"
#Write the current UTC time to the hardware clock
ansible all -m shell -a "timedatectl set-local-rtc 0"
#Restart the services that depend on the system time
ansible all -m shell -a "systemctl restart rsyslog "
ansible all -m shell -a "systemctl restart crond"

7. Create the working directories

ansible all -m shell -a "mkdir -p  /opt/k8s/{bin,work} /etc/{kubernetes,etcd}/cert"

8. Set up the environment variable script

All environment variables are defined in environment.sh and need to be adjusted for your environment. The script must also be copied to /opt/k8s/bin on every node.

#!/usr/bin/bash
# Encryption key used when generating the EncryptionConfig
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
# Array of all cluster machine IPs
export NODE_IPS=( 10.120.200.2 10.120.200.3  10.120.200.10 )
# Hostnames corresponding to the IPs above
export NODE_NAMES=(k8s-master01 k8s-master02 k8s-node07)
# Array of master machine IPs
export MASTER_IPS=(10.120.200.2 10.120.200.3  )
# Hostnames corresponding to the master IPs
export MASTER_NAMES=(k8s-master01 k8s-master02)
# etcd host list
export ETCD_NAMES=(k8s-master01 k8s-master02 k8s-node07)
# etcd cluster client endpoints
export ETCD_ENDPOINTS="https://10.120.200.2:2379,https://10.120.200.3:2379,https://10.120.200.10:2379"
# IPs and ports used for etcd peer communication
export ETCD_NODES="k8s-01=https://10.120.200.2:2380,k8s-02=https://10.120.200.3:2380,k8s-03=https://10.120.200.10:2380"
# IPs of all etcd nodes
export ETCD_IPS=( 10.120.200.2 10.120.200.3  10.120.200.10 )
# Address and port of the kube-apiserver reverse proxy (kube-nginx)
export KUBE_APISERVER="https://10.120.200.100:8443"
# Network interface used for inter-node traffic
export IFACE="eth0"
# etcd data directory
export ETCD_DATA_DIR="/data/k8s/etcd/data"
# etcd WAL directory; ideally an SSD partition, or at least a different partition from ETCD_DATA_DIR
export ETCD_WAL_DIR="/data/k8s/etcd/wal"
# Data directory for the k8s components
export K8S_DIR="/data/k8s/k8s"
# docker data directory
#export DOCKER_DIR="/data/k8s/docker"
## The parameters below normally do not need to be changed
# Token used for TLS bootstrapping; can be generated with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
#BOOTSTRAP_TOKEN="41f7e4ba8b7be874fcff18bf5cf41a7c"
# Use network ranges that are not currently in use for the service and Pod networks
# Service CIDR; unroutable before deployment, routable inside the cluster afterwards (guaranteed by kube-proxy)
SERVICE_CIDR="10.254.0.0/16"
# Pod CIDR; a /16 is recommended; unroutable before deployment, routable inside the cluster afterwards (guaranteed by flanneld)
CLUSTER_CIDR="172.30.0.0/16"
# Service port range (NodePort range)
export NODE_PORT_RANGE="1024-32767"
# flanneld network config prefix in etcd
export FLANNEL_ETCD_PREFIX="/kubernetes/network"
# kubernetes service IP (usually the first IP in SERVICE_CIDR)
export CLUSTER_KUBERNETES_SVC_IP="10.254.0.1"
# Cluster DNS service IP (pre-allocated from SERVICE_CIDR)
export CLUSTER_DNS_SVC_IP="10.254.0.2"
# Cluster DNS domain (no trailing dot)
export CLUSTER_DNS_DOMAIN="cluster.local"
# Add the binary directory /opt/k8s/bin to PATH
export PATH=/opt/k8s/bin:$PATH
ansible all -m copy -a "src=./environment.sh dest=/opt/k8s/bin"
ansible all -m shell -a "chdir=/opt/k8s/bin chmod 755 ./environment.sh"
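After editing environment.sh for your own addresses, a quick sanity check that it sources cleanly and the arrays look right (run it wherever the script lives):

source /opt/k8s/bin/environment.sh
echo "${NODE_IPS[@]}"
echo "${ETCD_ENDPOINTS}"
echo "${KUBE_APISERVER}"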

That completes the basic system preparation. Everything above has to be run on every master and node, which is why the ansible commands use the all group. Next we move on to the master components.

III. Create the certificates

1. Install the cfssl toolset

ansible k8s-master01.k8s.com -m shell -a "mkdir -p /opt/k8s/cert"
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
mv cfssl_linux-amd64 cfssl
ansible k8s-master01.k8s.com -m copy  -a "src=./cfssl dest=/opt/k8s/bin/"
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
mv cfssljson_linux-amd64 cfssljson
ansible k8s-master01.k8s.com -m copy  -a "src=./cfssljson dest=/opt/k8s/bin/"
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 cfssl-certinfo
ansible k8s-master01.k8s.com -m copy  -a "src=./cfssl-certinfo dest=/opt/k8s/bin/"
ansible k8s-master01.k8s.com -m shell  -a "chmod +x /opt/k8s/bin/*"
ansible k8s-master01.k8s.com -m shell  -a "export PATH=/opt/k8s/bin:$PATH"

This part could be done directly on master01, but my masters have no internet access, so I download on the ansible machine and push the binaries over.
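Either way, confirm the binaries ended up where expected and actually run (cfssl prints its version):

ansible k8s-master01.k8s.com -m shell -a "/opt/k8s/bin/cfssl version"
ansible k8s-master01.k8s.com -m shell -a "ls -l /opt/k8s/bin/"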

2. Create the CA certificate

cat > ca-config.json <<EOF
{"signing": {"default": {"expiry": "87600h"},"profiles": {"kubernetes": {"usages": ["signing","key encipherment","server auth","client auth"],"expiry": "87600h"}}}
}
EOF
cat > ca-csr.json <<EOF
{"CN": "kubernetes","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "TJ","L": "TJ","O": "k8s","OU": "4Paradigm"}],"ca": {"expiry": "876000h"}
}
EOF
ansible k8s-master01.k8s.com -m copy  -a "src=./ca-config.json dest=/opt/k8s/work/"
ansible k8s-master01.k8s.com -m copy  -a "src=./ca-csr.json dest=/opt/k8s/work/" 

Push the generated json files to master01.


ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work  /opt/k8s/bin/cfssl gencert -initca ca-csr.json | /opt/k8s/bin/cfssljson -bare ca"
#Check the generated files
ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work  ls ca*"
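If you want to double-check what was just signed, cfssl-certinfo can dump the CA certificate (validity, CN and so on); this step is purely optional:

ansible k8s-master01.k8s.com -m shell -a "chdir=/opt/k8s/work /opt/k8s/bin/cfssl-certinfo -cert ca.pem"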

3. Distribute the certificates

ansible k8s-master01.k8s.com -m shell  -a "source /opt/k8s/bin/environment.sh"
source /opt/k8s/bin/environment.sh
ansible all -m shell  -a "mkdir -p /etc/kubernetes/cert"
for node_ip in ${NODE_IPS[@]};   do     echo ">>> ${node_ip}"; ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work scp ca*.pem ca-config.json root@${node_ip}:/etc/kubernetes/cert"; done

4. Deploy the kubectl command-line tool

ansible all -m copy -a "src=./kubectl dest=/opt/k8s/bin"
ansible all -m shell  -a "chdir=/opt/k8s/bin chmod +x ./* "

5. Generate the admin certificate

Create the admin-csr.json file:

{"CN": "admin","hosts": [],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "TJ","L": "TJ","O": "system:masters","OU": "4Paradigm"}]
}
ansible k8s-master01.k8s.com -m copy  -a "src=./admin-csr.json dest=/opt/k8s/work/"
ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work  /opt/k8s/bin/cfssl gencert -ca=/opt/k8s/work/ca.pem -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json -profile=kubernetes admin-csr.json | /opt/k8s/bin/cfssljson -bare admin"
#Check the files
ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work  ls admin*"
admin.csr
admin-csr.json
admin-key.pem
admin.pem

6. Generate the kubeconfig file

ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work  source /opt/k8s/bin/environment.sh"
## Set cluster parameters
ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work  /opt/k8s/bin/kubectl config set-cluster kubernetes --certificate-authority=/opt/k8s/work/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=kubectl.kubeconfig"
#Set client authentication parameters
ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work  /opt/k8s/bin/kubectl config set-credentials admin --client-certificate=/opt/k8s/work/admin.pem --client-key=/opt/k8s/work/admin-key.pem --embed-certs=true --kubeconfig=kubectl.kubeconfig"
# Set context parameters
ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work  /opt/k8s/bin/kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kubectl.kubeconfig"
# Set the default context
ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work  /opt/k8s/bin/kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig"
#Distribute the file
ansible all  -m shell  -a "mkdir -p ~/.kube"
for node_ip in ${NODE_IPS[@]};   do     echo ">>> ${node_ip}"; ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work scp kubectl.kubeconfig root@${node_ip}:~/.kube/config"; done

IV. etcd deployment

1. Distribute the files

tar -zxf etcd-v3.3.25-linux-amd64.tar.gz
cd etcd-v3.3.25-linux-amd64
ansible etcd  -m copy -a "src=./etcd  dest=/opt/k8s/bin"
ansible etcd  -m copy -a "src=./etcdctl  dest=/opt/k8s/bin"
ansible etcd  -m shell -a "chdir=/opt/k8s/bin chmod +x *"

2. Create the etcd certificate and private key

Create the etcd-csr.json file:

{"CN": "etcd","hosts": ["127.0.0.1","10.120.200.2","10.120.200.3","10.120.200.10"],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "TJ","L": "TJ","O": "k8s","OU": "4Paradigm"}]
}
ansible k8s-master01.k8s.com -m copy -a "src=./etcd-csr.json dest=/opt/k8s/work"
ansible k8s-master01.k8s.com -m shell -a "chdir=/opt/k8s/work /opt/k8s/bin/cfssl gencert -ca=/opt/k8s/work/ca.pem \-ca-key=/opt/k8s/work/ca-key.pem \-config=/opt/k8s/work/ca-config.json \-profile=kubernetes etcd-csr.json | /opt/k8s/bin/cfssljson -bare etcd"
ansible all -m shell -a "source /opt/k8s/bin/environment.sh"
for node_ip in ${ETCD_IPS[@]};   do     echo ">>> ${node_ip}"; ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work scp ./etcd.pem root@${node_ip}:/etc/etcd/cert"; done
for node_ip in ${ETCD_IPS[@]};   do     echo ">>> ${node_ip}"; ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work scp ./etcd-key.pem root@${node_ip}:/etc/etcd/cert"; done

3. Create the systemd unit

Create the etcd startup template file etcd.service.template:

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/data/k8s/etcd/data
ExecStart=/opt/k8s/bin/etcd \
  --data-dir=/data/k8s/etcd/data \
  --wal-dir=/data/k8s/etcd/wal \
  --name=##NODE_NAME## \
  --cert-file=/etc/etcd/cert/etcd.pem \
  --key-file=/etc/etcd/cert/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-cert-file=/etc/etcd/cert/etcd.pem \
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls=https://##NODE_IP##:2380 \
  --initial-advertise-peer-urls=https://##NODE_IP##:2380 \
  --listen-client-urls=https://##NODE_IP##:2379 \
  --advertise-client-urls=https://##NODE_IP##:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=k8s-master01=https://10.120.200.2:2380,k8s-master02=https://10.120.200.3:2380,k8s-node07=https://10.120.200.10:2380 \
  --initial-cluster-state=new \
  --auto-compaction-mode=periodic \
  --auto-compaction-retention=1 \
  --max-request-bytes=33554432 \
  --quota-backend-bytes=6442450944 \
  --heartbeat-interval=250 \
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Generate the per-node unit files:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ )); do sed -e "s/##NODE_NAME##/${ETCD_NAMES[i]}/" -e "s/##NODE_IP##/${ETCD_IPS[i]}/" etcd.service.template > etcd-${ETCD_IPS[i]}.service; done
ls *.service
ansible k8s-master01.k8s.com  -m copy -a "src=./etcd-10.120.200.2.service dest=/etc/systemd/system/etcd.service"
ansible k8s-master02.k8s.com  -m copy -a "src=./etcd-10.120.200.3.service dest=/etc/systemd/system/etcd.service"
ansible k8s-node01.k8s.com  -m copy -a "src=./etcd-10.120.200.10.service dest=/etc/systemd/system/etcd.service"
ansible etcd -m shell -a "mkdir /data/k8s/etcd/data/ -p"
ansible etcd -m shell -a "chmod 700 /data/k8s/etcd/data/ -R"
ansible etcd -m shell -a "mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR}"
ansible etcd -m shell -a "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd"

Check the result:

ansible etcd -m shell -a "systemctl status etcd|grep Active"
Result:
k8s-node07.k8s.com | CHANGED | rc=0 >>Active: active (running) since 三 2020-12-23 19:55:56 CST; 2h 5min ago
k8s-master01.k8s.com | CHANGED | rc=0 >>Active: active (running) since 三 2020-12-23 19:55:06 CST; 2h 5min ago
k8s-master02.k8s.com | CHANGED | rc=0 >>Active: active (running) since 三 2020-12-23 19:55:06 CST; 2h 5min ago

Run on any one of the etcd nodes:

for node_ip in ${ETCD_IPS[@]};   do     echo ">>> ${node_ip}";     ETCDCTL_API=3 /opt/k8s/bin/etcdctl     --endpoints=https://${node_ip}:2379     --cacert=/etc/kubernetes/cert/ca.pem     --cert=/etc/etcd/cert/etcd.pem     --key=/etc/etcd/cert/etcd-key.pem endpoint health;   done
Result:
>>> 10.120.200.2
https://10.120.200.2:2379 is healthy: successfully committed proposal: took = 25.451793ms
>>> 10.120.200.3
https://10.120.200.3:2379 is healthy: successfully committed proposal: took = 24.119252ms
>>> 10.120.200.10
https://10.120.200.10:2379 is healthy: successfully committed proposal: took = 28.700118ms

Check the etcd leader:

ETCDCTL_API=3 /opt/k8s/bin/etcdctl \-w table --cacert=/etc/kubernetes/cert/ca.pem \--cert=/etc/etcd/cert/etcd.pem \--key=/etc/etcd/cert/etcd-key.pem \--endpoints=${ETCD_ENDPOINTS} endpoint status
Result:
+----------------------------+------------------+---------+---------+-----------+-----------+------------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+----------------------------+------------------+---------+---------+-----------+-----------+------------+
|  https://10.120.200.2:2379 | ae550aa9096a80a7 |  3.3.25 |   20 kB |      true |        49 |         11 |
|  https://10.120.200.3:2379 | 3cab77a8b9fc02e5 |  3.3.25 |   20 kB |     false |        49 |         11 |
| https://10.120.200.10:2379 | ab964a290e2d8de9 |  3.3.25 |   20 kB |     false |        49 |         11 |
+----------------------------+------------------+---------+---------+-----------+-----------+------------+

V. Deploy the Flannel network

The k8s cluster needs the Pod network to be reachable from every master and node. We use flannel's vxlan backend to create a virtual interface on each host and form the Pod network. The Pod network uses UDP port 8472. On first start-up, flanneld reads the Pod network configuration from etcd, allocates an unused subnet for the local node, and then creates the flannel.1 interface.
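Once flanneld is running (the steps follow below), the claims above can be checked on any host: the flannel.1 interface should exist, carry an address from the node's allocated subnet, and the vxlan details should show UDP port 8472 (a verification sketch, not part of the install itself):

# Show the vxlan device created by flanneld; the dstport field should be 8472
ip -d link show flannel.1
# Show the address flanneld picked for this node out of the Pod network
ip addr show flannel.1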

1. Distribute the files

tar -zxf flannel-v0.12.0-linux-amd64.tar.gz
ansible all -m copy -a "src=./flanneld dest=/opt/k8s/bin/ "
ansible all -m copy -a "src=./mk-docker-opts.sh dest=/opt/k8s/bin/ "
ansible all -m shell -a " chmod +x /opt/k8s/bin/*  " 

2. Create the certificate and key


cat > flanneld-csr.json <<EOF
{"CN": "flanneld","hosts": [],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "TJ","L": "TJ","O": "k8s","OU": "4Paradigm"}]
}
EOF
#Push the CSR to master01 and sign it
ansible k8s-master01.k8s.com  -m copy -a 'src=./flanneld-csr.json  dest=/opt/k8s/work'
ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work /opt/k8s/bin/cfssl gencert -ca=/opt/k8s/work/ca.pem  -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json -profile=kubernetes flanneld-csr.json | /opt/k8s/bin/cfssljson -bare flanneld"
#Distribute the key and certificate to every server, both master and node
ansible all -m shell -a "mkdir -p /etc/flannel/cert"
for node_ip in ${NODE_IPS[@]};   do     echo ">>> ${node_ip}"; ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work scp ./flanneld.pem root@${node_ip}:/etc/flannel/cert"; done
for node_ip in ${MASTER_IPS[@]};   do     echo ">>> ${node_ip}"; ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work scp ./flanneld.pem root@${node_ip}:/etc/flannel/cert"; done
for node_ip in ${NODE_IPS[@]};   do     echo ">>> ${node_ip}"; ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work scp ./flanneld-key.pem root@${node_ip}:/etc/flannel/cert"; done
for node_ip in ${MASTER_IPS[@]};   do     echo ">>> ${node_ip}"; ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work scp ./flanneld-key.pem root@${node_ip}:/etc/flannel/cert"; done

3. Write the network config into etcd

#Because of quoting issues, run this directly on any one of the etcd nodes
source /opt/k8s/bin/environment.sh
etcdctl   --endpoints=${ETCD_ENDPOINTS}   --ca-file=/etc/kubernetes/cert/ca.pem   --cert-file=/etc/flannel/cert/flanneld.pem   --key-file=/etc/flannel/cert/flanneld-key.pem   mk ${FLANNEL_ETCD_PREFIX}/config '{"Network":"'${CLUSTER_CIDR}'", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}'

4. Create the systemd unit and start the service

Create the unit file:

cat > flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
ExecStart=/opt/k8s/bin/flanneld \\
  -etcd-cafile=/etc/kubernetes/cert/ca.pem \\
  -etcd-certfile=/etc/flannel/cert/flanneld.pem \\
  -etcd-keyfile=/etc/flannel/cert/flanneld-key.pem \\
  -etcd-endpoints=${ETCD_ENDPOINTS} \\
  -etcd-prefix=${FLANNEL_ETCD_PREFIX} \\
  -iface=${IFACE} \\
  -ip-masq
ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=always
RestartSec=5
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF
#Distribute to every node
ansible all -m copy -a 'src=./flanneld.service dest=/etc/systemd/system/'

Start the service

ansible all -m shell -a 'systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld' 

Check the service process

ansible all  -m shell  -a "systemctl status flanneld|grep Active" 

Check the Pod network configuration written for flanneld

etcdctl --endpoints=${ETCD_ENDPOINTS} --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/flannel/cert/flanneld.pem --key-file=/etc/flannel/cert/flanneld-key.pem get ${FLANNEL_ETCD_PREFIX}/config
Result:
{"Network":"172.30.0.0/16", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}

List the Pod subnets that have been allocated

etcdctl --endpoints=${ETCD_ENDPOINTS} --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/flannel/cert/flanneld.pem --key-file=/etc/flannel/cert/flanneld-key.pem ls ${FLANNEL_ETCD_PREFIX}/subnets
Result:
/kubernetes/network/subnets/172.30.80.0-21
/kubernetes/network/subnets/172.30.96.0-21
/kubernetes/network/subnets/172.30.184.0-21
/kubernetes/network/subnets/172.30.32.0-21

Check which node IP and flannel interface address a given Pod subnet maps to

etcdctl --endpoints=${ETCD_ENDPOINTS} --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/flannel/cert/flanneld.pem --key-file=/etc/flannel/cert/flanneld-key.pem get ${FLANNEL_ETCD_PREFIX}/subnets/172.30.184.0-21
Result:
{"PublicIP":"10.120.200.4","BackendType":"vxlan","BackendData":{"VtepMAC":"36:da:bd:63:39:f9"}}

VI. Deploy the HA layer

1. kube-nginx deployment

##Since this is just building nginx, for convenience it is done directly on master01.

#Build from source
cd /opt/k8s/work/nginx-1.15.3
mkdir nginx-prefix
./configure --with-stream --without-http --prefix=$(pwd)/nginx-prefix --without-http_uwsgi_module
make && make install

Check the linked dynamic libraries

ldd ./nginx-prefix/sbin/nginx

Distribute the files

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}; do echo ">>> ${node_ip}"; ssh root@${node_ip} "mkdir -p /opt/k8s/kube-nginx/{conf,logs,sbin}"; done
#The copy may fail part-way through; just re-run it if it does
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp /opt/k8s/work/nginx-1.15.3/nginx-prefix/sbin/nginx  root@${node_ip}:/opt/k8s/kube-nginx/sbin/kube-nginx
  ssh root@${node_ip} "chmod a+x /opt/k8s/kube-nginx/sbin/*"
  ssh root@${node_ip} "mkdir -p /opt/k8s/kube-nginx/{conf,logs,sbin}"
  sleep 3
done

Create the config file and distribute it

cat > kube-nginx.conf <<EOF
worker_processes 1;
events {
    worker_connections  1024;
}
stream {
    upstream backend {
        hash \$remote_addr consistent;
        server 10.120.200.2:6443        max_fails=3 fail_timeout=30s;
        server 10.120.200.3:6443        max_fails=3 fail_timeout=30s;
    }
    server {
        listen *:8443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
EOF
for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-nginx.conf  root@${node_ip}:/opt/k8s/kube-nginx/conf/kube-nginx.conf
done

Create the unit file and start the service

cat > kube-nginx.service <<EOF
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=forking
ExecStartPre=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -t
ExecStart=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx
ExecReload=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -s reload
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
#Distribute
for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-nginx.service  root@${node_ip}:/etc/systemd/system/
done
#Start
for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-nginx && systemctl start kube-nginx"
done
#Check the status
for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl status kube-nginx |grep 'Active:'"
done
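At this point each master should have the proxy listening on 8443; a quick check in the same loop style as above:

for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "ss -lnt | grep 8443"
done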

2. keepalived deployment

keepalived only needs to be deployed on the masters. First install the package.

ansible master -m shell -a 'yum  install -y keepalived'

Create the config file and distribute it

cat ./keepalived.conf
global_defs {
    router_id 10.120.200.2
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 8443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 10.120.200.2
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.120.200.100
    }
}
ansible master -m copy -a "src=./keepalived.conf dest=/etc/keepalived/"
#Since this config was written for master01, the IP has to be changed on master02
ansible k8s-master02.k8s.com -m shell -a "sed -i 's#10.120.200.2#10.120.200.3#g'  /etc/keepalived/keepalived.conf"

Create the health-check script

cat check_port.sh
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
    PORT_PROCESS=`ss -lt|grep $CHK_PORT|wc -l`
    if [ $PORT_PROCESS -eq 0 ];then
        echo "Port $CHK_PORT Is Not Used,End."
        exit 1
    fi
else
    echo "Check Port Cant Be Empty!"
    exit 1
fi
#Distribute the file
ansible master -m copy -a "src=./check_port.sh dest=/etc/keepalived/"

Start the service

ansible master -m shell -a  'systemctl enable --now keepalived'
#It may not come up on the first try; if so, start it again
ansible master -m shell -a  'systemctl start keepalived'

Check the service

# ping the virtual IP
ping 10.120.200.250
Result:
PING 10.120.200.250 (10.120.200.250) 56(84) bytes of data.
64 bytes from 10.120.200.250: icmp_seq=1 ttl=64 time=0.593 ms
64 bytes from 10.120.200.250: icmp_seq=2 ttl=64 time=0.932 ms
^C
--- 10.120.200.250 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1051ms
rtt min/avg/max/mdev = 0.593/0.762/0.932/0.171 ms
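To see which master currently holds the VIP (it should show up on exactly one of them, on the interface named in keepalived.conf), something like this works:

# The VIP appears as a secondary address on the keepalived MASTER only
ansible master -m shell -a "ip addr show ens32"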

VII. kube-apiserver deployment

1. Unpack and distribute the files

tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server
#Copy the k8s binaries to all master nodes
ansible master -m copy -a "src=./bin  dest=/opt/k8s/"
ansible master -m shell -a  'chmod +x /opt/k8s/bin/*'
#Copy the required binaries to the node machines
ansible node -m copy -a "src=./bin/kubelet  dest=/opt/k8s/bin"
ansible node -m copy -a "src=./bin/kube-proxy  dest=/opt/k8s/bin"
ansible node -m shell -a  'chmod +x /opt/k8s/bin/*'

2. Create the Kubernetes certificate and private key

cat kubernetes-csr.json
{"CN": "kubernetes","hosts": ["127.0.0.1","10.120.200.2","10.120.200.3","10.120.200.4","10.120.200.10","10.254.0.1","kubernetes","kubernetes.default","kubernetes.default.svc","kubernetes.default.svc.cluster","kubernetes.default.svc.cluster.local."],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "TJ","L": "TJ","O": "k8s","OU": "4Paradigm"}]
}
ansible k8s-master01.k8s.com  -m copy -a "src=./kubernetes-csr.json   dest=/opt/k8s/work"
#Generate the private key and certificate
ansible k8s-master01.k8s.com -m shell -a "chdir=/opt/k8s/work /opt/k8s/bin/cfssl gencert -ca=/opt/k8s/work/ca.pem \-ca-key=/opt/k8s/work/ca-key.pem \-config=/opt/k8s/work/ca-config.json \-profile=kubernetes kubernetes-csr.json | /opt/k8s/bin/cfssljson -bare kubernetes"
#Distribute to the master nodes
for node_ip in ${MASTER_IPS[@]};   do     echo ">>> ${node_ip}"; ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work scp ./kubernetes-key.pem root@${node_ip}:/etc/kubernetes/cert"; done
for node_ip in ${MASTER_IPS[@]};   do     echo ">>> ${node_ip}"; ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work scp ./kubernetes.pem root@${node_ip}:/etc/kubernetes/cert"; done

3. Create the encryption config file

cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:- resources:- secretsproviders:- aescbc:keys:- name: key1secret: ${ENCRYPTION_KEY}- identity: {}
EOF
#Distribute the file
ansible master -m copy -a "src=./encryption-config.yaml  dest=/etc/kubernetes/"

4. Create the audit policy file

source /opt/k8s/bin/environment.sh
cat > audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:# The following requests were manually identified as high-volume and low-risk, so drop them.- level: Noneresources:- group: ""resources:- endpoints- services- services/statususers:- 'system:kube-proxy'verbs:- watch- level: Noneresources:- group: ""resources:- nodes- nodes/statususerGroups:- 'system:nodes'verbs:- get- level: Nonenamespaces:- kube-systemresources:- group: ""resources:- endpointsusers:- 'system:kube-controller-manager'- 'system:kube-scheduler'- 'system:serviceaccount:kube-system:endpoint-controller'verbs:- get- update- level: Noneresources:- group: ""resources:- namespaces- namespaces/status- namespaces/finalizeusers:- 'system:apiserver'verbs:- get# Don't log HPA fetching metrics.- level: Noneresources:- group: metrics.k8s.iousers:- 'system:kube-controller-manager'verbs:- get- list# Don't log these read-only URLs.- level: NonenonResourceURLs:- '/healthz*'- /version- '/swagger*'# Don't log events requests.- level: Noneresources:- group: ""resources:- events# node and pod status calls from nodes are high-volume and can be large, don't log responses for expected updates from nodes- level: RequestomitStages:- RequestReceivedresources:- group: ""resources:- nodes/status- pods/statususers:- kubelet- 'system:node-problem-detector'- 'system:serviceaccount:kube-system:node-problem-detector'verbs:- update- patch- level: RequestomitStages:- RequestReceivedresources:- group: ""resources:- nodes/status- pods/statususerGroups:- 'system:nodes'verbs:- update- patch# deletecollection calls can be large, don't log responses for expected namespace deletions- level: RequestomitStages:- RequestReceivedusers:- 'system:serviceaccount:kube-system:namespace-controller'verbs:- deletecollection# Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,# so only log at the Metadata level.- level: MetadataomitStages:- RequestReceivedresources:- group: ""resources:- secrets- configmaps- group: authentication.k8s.ioresources:- tokenreviews# Get repsonses can be large; skip them.- level: RequestomitStages:- RequestReceivedresources:- group: ""- group: admissionregistration.k8s.io- group: apiextensions.k8s.io- group: apiregistration.k8s.io- group: apps- group: authentication.k8s.io- group: authorization.k8s.io- group: autoscaling- group: batch- group: certificates.k8s.io- group: extensions- group: metrics.k8s.io- group: networking.k8s.io- group: policy- group: rbac.authorization.k8s.io- group: scheduling.k8s.io- group: settings.k8s.io- group: storage.k8s.ioverbs:- get- list- watch# Default level for known APIs- level: RequestResponseomitStages:- RequestReceivedresources:- group: ""- group: admissionregistration.k8s.io- group: apiextensions.k8s.io- group: apiregistration.k8s.io- group: apps- group: authentication.k8s.io- group: authorization.k8s.io- group: autoscaling- group: batch- group: certificates.k8s.io- group: extensions- group: metrics.k8s.io- group: networking.k8s.io- group: policy- group: rbac.authorization.k8s.io- group: scheduling.k8s.io- group: settings.k8s.io- group: storage.k8s.io# Default level for all other requests.- level: MetadataomitStages:- RequestReceived
EOF

Distribute the file

ansible master -m copy -a "src=./audit-policy.yaml  dest=/etc/kubernetes/"

5. Create the certificate signing request

 cat > proxy-client-csr.json <<EOF
{"CN": "aggregator","hosts": [],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "TJ","L": "TJ","O": "k8s","OU": "4Paradigm"}]
}
EOF
ansible k8s-master01.k8s.com  -m copy -a "src=./proxy-client-csr.json  dest=/opt/k8s/work"
#Generate the certificate and private key
ansible k8s-master01.k8s.com -m shell -a "chdir=/opt/k8s/work  /opt/k8s/bin/cfssl gencert -ca=/etc/kubernetes/cert/ca.pem -ca-key=/etc/kubernetes/cert/ca-key.pem -config=/etc/kubernetes/cert/ca-config.json -profile=kubernetes proxy-client-csr.json | /opt/k8s/bin/cfssljson -bare proxy-client"
#Distribute the certificates
for node_ip in ${MASTER_IPS[@]};   do     echo ">>> ${node_ip}"; ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work scp ./proxy-client-key.pem root@${node_ip}:/etc/kubernetes/cert"; done
for node_ip in ${MASTER_IPS[@]};   do     echo ">>> ${node_ip}"; ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work scp ./proxy-client.pem root@${node_ip}:/etc/kubernetes/cert"; done

6. Create the kube-apiserver systemd unit

Create the template file:

 cat > kube-apiserver.service.template <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=${K8S_DIR}/kube-apiserver
ExecStart=/opt/k8s/bin/kube-apiserver \\--advertise-address=##NODE_IP## \\--default-not-ready-toleration-seconds=360 \\--default-unreachable-toleration-seconds=360 \\--max-mutating-requests-inflight=2000 \\--max-requests-inflight=4000 \\--default-watch-cache-size=200 \\--delete-collection-workers=2 \\--encryption-provider-config=/etc/kubernetes/encryption-config.yaml \\--etcd-cafile=/etc/kubernetes/cert/ca.pem \\--etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \\--etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \\--etcd-servers=${ETCD_ENDPOINTS} \\--bind-address=##NODE_IP## \\--secure-port=6443 \\--tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \\--tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \\--insecure-port=0 \\--audit-log-maxage=15 \\--audit-log-maxbackup=3 \\--audit-log-maxsize=100 \\--audit-log-truncate-enabled \\--audit-log-path=${K8S_DIR}/kube-apiserver/audit.log \\--audit-policy-file=/etc/kubernetes/audit-policy.yaml \\--profiling \\--anonymous-auth=false \\--client-ca-file=/etc/kubernetes/cert/ca.pem \\--enable-bootstrap-token-auth \\--requestheader-allowed-names="aggregator" \\--requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\--requestheader-extra-headers-prefix="X-Remote-Extra-" \\--requestheader-group-headers=X-Remote-Group \\--requestheader-username-headers=X-Remote-User \\--service-account-key-file=/etc/kubernetes/cert/ca.pem \\--authorization-mode=Node,RBAC \\--runtime-config=api/all=true \\--enable-admission-plugins=NodeRestriction \\--allow-privileged=true \\--apiserver-count=3 \\--event-ttl=168h \\--kubelet-certificate-authority=/etc/kubernetes/cert/ca.pem \\--kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \\--kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \\--kubelet-https=true \\--kubelet-timeout=10s \\--proxy-client-cert-file=/etc/kubernetes/cert/proxy-client.pem \\--proxy-client-key-file=/etc/kubernetes/cert/proxy-client-key.pem \\--service-cluster-ip-range=${SERVICE_CIDR} \\--service-node-port-range=${NODE_PORT_RANGE} \\--logtostderr=true \\--v=2
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

Generate the per-node unit files and distribute them

for (( i=0; i < 3; i++ ));   do     sed -e "s/##NODE_NAME##/${MASTER_NAMES[i]}/" -e "s/##NODE_IP##/${MASTER_IPS[i]}/" kube-apiserver.service.template > kube-apiserver-${MASTER_IPS[i]}.service ;   done
ansible k8s-master01.k8s.com  -m copy -a "src=./kube-apiserver-10.120.200.2.service  dest=/etc/systemd/system/kube-apiserver.service"
ansible k8s-master02.k8s.com  -m copy -a "src=./kube-apiserver-10.120.200.3.service  dest=/etc/systemd/system/kube-apiserver.service"

Start and check the service

ansible master -m shell -a  'mkdir -p /data/k8s/k8s/kube-apiserver'
ansible master -m shell -a  'systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver'
#Check the running process
ansible master -m shell -a "systemctl status kube-apiserver |grep 'Active:'"
Result:
k8s-master02.k8s.com | CHANGED | rc=0 >>Active: active (running) since 二 2020-12-29 16:45:57 CST; 2m ago
k8s-master01.k8s.com | CHANGED | rc=0 >>Active: active (running) since 二 2020-12-29 16:44:19 CST; 2m ago

Print the data kube-apiserver has written into etcd

ansible k8s-master01.k8s.com -m shell -a "ETCDCTL_API=3 /opt/k8s/bin/etcdctl --endpoints=${ETCD_ENDPOINTS} --cacert=/opt/k8s/work/ca.pem --cert=/opt/k8s/work/etcd.pem --key=/opt/k8s/work/etcd-key.pem get /registry/ --prefix --keys-only"
#The output is long, so it is not reproduced here

Check the ports kube-apiserver is listening on

ansible master -m shell -a "netstat -lntup|grep kube"
#Result:
k8s-master02.k8s.com | CHANGED | rc=0 >>
tcp        0      0 10.120.200.3:6443       0.0.0.0:*               LISTEN      19745/kube-apiserve
k8s-master01.k8s.com | CHANGED | rc=0 >>
tcp        0      0 10.120.200.2:6443       0.0.0.0:*               LISTEN      48893/kube-apiserve 

Cluster checks

ansible k8s-master01.k8s.com -m shell -a "kubectl cluster-info"
#Result:
k8s-master01.k8s.com | CHANGED | rc=0 >>
Kubernetes master is running at https://10.120.200.100:8443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
ansible k8s-master01.k8s.com -m shell -a "kubectl get all --all-namespaces"
#Result:
k8s-master01.k8s.com | CHANGED | rc=0 >>
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.254.0.1   <none>        443/TCP   19h
ansible k8s-master01.k8s.com -m shell -a "kubectl get componentstatuses"
#Result:
k8s-master01.k8s.com | CHANGED | rc=0 >>
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-2               Healthy     {"health":"true"}
etcd-0               Healthy     {"health":"true"}
etcd-1               Healthy     {"health":"true"}
Warning: v1 ComponentStatus is deprecated in v1.19+

Grant kube-apiserver access to the kubelet API

ansible k8s-master01.k8s.com -m shell -a  'kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes'

VIII. Deploy kube-controller-manager

1. Create the certificate signing request

cat kube-controller-manager-csr.json
{"CN": "system:kube-controller-manager","key": {"algo": "rsa","size": 2048},"hosts": ["127.0.0.1","10.120.200.2","10.120.200.3"],"names": [{"C": "CN","ST": "TJ","L": "TJ","O": "system:kube-controller-manager","OU": "4Paradigm"}]
}
ansible k8s-master01.k8s.com  -m copy -a "src=./kube-controller-manager-csr.json  dest=/opt/k8s/work"
#Generate the certificate and private key
ansible k8s-master01.k8s.com -m shell -a "chdir=/opt/k8s/work  cfssl gencert -ca=/opt/k8s/work/ca.pem -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager"
#Distribute to the master nodes
for node_ip in ${MASTER_IPS[@]};   do     echo ">>> ${node_ip}"; ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work scp ./kube-controller-manager-key.pem root@${node_ip}:/etc/kubernetes/cert"; done
for node_ip in ${MASTER_IPS[@]};   do     echo ">>> ${node_ip}"; ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work scp ./kube-controller-manager.pem root@${node_ip}:/etc/kubernetes/cert"; done

2. Create and distribute the kubeconfig file

source /opt/k8s/bin/environment.sh
ansible k8s-master01.k8s.com -m shell -a "chdir=/opt/k8s/work  kubectl config set-cluster kubernetes --certificate-authority=/opt/k8s/work/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=kube-controller-manager.kubeconfig"
ansible k8s-master01.k8s.com -m shell -a "chdir=/opt/k8s/work  kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig"
ansible k8s-master01.k8s.com -m shell -a "chdir=/opt/k8s/work kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig"
ansible k8s-master01.k8s.com -m shell -a "chdir=/opt/k8s/work  kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig"
#Distribute to the master nodes
for node_ip in ${MASTER_IPS[@]};   do     echo ">>> ${node_ip}"; ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work scp ./kube-controller-manager.kubeconfig root@${node_ip}:/etc/kubernetes/"; done

3. Create the kube-controller-manager systemd unit

cat > kube-controller-manager.service.template <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
WorkingDirectory=${K8S_DIR}/kube-controller-manager
ExecStart=/opt/k8s/bin/kube-controller-manager \\--profiling \\--cluster-name=kubernetes \\--controllers=*,bootstrapsigner,tokencleaner \\--kube-api-qps=1000 \\--kube-api-burst=2000 \\--leader-elect \\--use-service-account-credentials\\--concurrent-service-syncs=2 \\--bind-address=0.0.0.0 \\--tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \\--tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \\--authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\--client-ca-file=/etc/kubernetes/cert/ca.pem \\--requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\--requestheader-extra-headers-prefix="X-Remote-Extra-" \\--requestheader-group-headers=X-Remote-Group \\--requestheader-username-headers=X-Remote-User \\--authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\--cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \\--cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \\--experimental-cluster-signing-duration=876000h \\--horizontal-pod-autoscaler-sync-period=10s \\--concurrent-deployment-syncs=10 \\--concurrent-gc-syncs=30 \\--node-cidr-mask-size=24 \\--service-cluster-ip-range=${SERVICE_CIDR} \\--pod-eviction-timeout=6m \\--terminated-pod-gc-threshold=10000 \\--root-ca-file=/etc/kubernetes/cert/ca.pem \\--service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \\--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\--logtostderr=true \\--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
#Fill in the template and distribute the unit files
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ )); do sed -e "s/##NODE_NAME##/${MASTER_NAMES[i]}/" -e "s/##NODE_IP##/${MASTER_IPS[i]}/" kube-controller-manager.service.template > kube-controller-manager-${MASTER_IPS[i]}.service; done
#Distribute to the nodes
ansible k8s-master01.k8s.com  -m copy -a "src=./kube-controller-manager-10.120.200.2.service   dest=/etc/systemd/system/kube-controller-manager.service "
ansible k8s-master02.k8s.com  -m copy -a "src=./kube-controller-manager-10.120.200.3.service   dest=/etc/systemd/system/kube-controller-manager.service "

4. Start the service

ansible master -m shell -a  'mkdir -p /data/k8s/k8s/kube-controller-manager'
ansible master -m shell -a  'systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager'
#Check the process
ansible master -m shell -a  'systemctl status kube-controller-manager|grep Active'
#Check the ports
ansible master -m shell -a  'netstat -lnpt | grep kube-cont'
##Result:
k8s-master02.k8s.com | CHANGED | rc=0 >>
tcp6       0      0 :::10252                :::*                    LISTEN      119404/kube-control
tcp6       0      0 :::10257                :::*                    LISTEN      119404/kube-control
k8s-master01.k8s.com | CHANGED | rc=0 >>
tcp6       0      0 :::10252                :::*                    LISTEN      18696/kube-controll
tcp6       0      0 :::10257                :::*                    LISTEN      18696/kube-controll 

5. kube-controller-manager permissions

The ClusterRole system:kube-controller-manager on its own has very limited permissions; it can only create resources such as Secrets and ServiceAccounts. The real controller permissions live in the per-controller ClusterRoles system:controller:XXX. By starting the controller manager with --use-service-account-credentials=true, each controller runs as its own ServiceAccount XXX-controller, and the ClusterRoleBinding system:controller:XXX grants that ServiceAccount the corresponding ClusterRole system:controller:XXX.
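One way to see this in action once the controller manager is up: the per-controller ServiceAccounts show up in kube-system (a quick look, nothing required for the install):

ansible k8s-master01.k8s.com -m shell -a "kubectl get serviceaccounts -n kube-system | grep controller"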

ansible k8s-master01.k8s.com -m shell -a " kubectl describe clusterrole system:kube-controller-manager"
##Result:
k8s-master01.k8s.com | CHANGED | rc=0 >>
Name:         system:kube-controller-manager
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:Resources                                  Non-Resource URLs  Resource Names             Verbs---------                                  -----------------  --------------             -----secrets                                    []                 []                         [create delete get update]serviceaccounts                            []                 []                         [create get update]events                                     []                 []                         [create patch update]events.events.k8s.io                       []                 []                         [create patch update]endpoints                                  []                 []                         [create]serviceaccounts/token                      []                 []                         [create]tokenreviews.authentication.k8s.io         []                 []                         [create]subjectaccessreviews.authorization.k8s.io  []                 []                         [create]leases.coordination.k8s.io                 []                 []                         [create]endpoints                                  []                 [kube-controller-manager]  [get update]leases.coordination.k8s.io                 []                 [kube-controller-manager]  [get update]configmaps                                 []                 []                         [get]namespaces                                 []                 []                         [get]*.*                                        []                 []                         [list watch]
 ansible k8s-master01.k8s.com -m shell -a " kubectl get clusterrole|grep controller"
k8s-master01.k8s.com | CHANGED | rc=0 >>
system:controller:attachdetach-controller                              2020-12-29T08:44:24Z
system:controller:certificate-controller                               2020-12-29T08:44:26Z
system:controller:clusterrole-aggregation-controller                   2020-12-29T08:44:24Z
system:controller:cronjob-controller                                   2020-12-29T08:44:24Z
system:controller:daemon-set-controller                                2020-12-29T08:44:24Z
system:controller:deployment-controller                                2020-12-29T08:44:24Z
system:controller:disruption-controller                                2020-12-29T08:44:24Z
system:controller:endpoint-controller                                  2020-12-29T08:44:24Z
system:controller:endpointslice-controller                             2020-12-29T08:44:25Z
system:controller:endpointslicemirroring-controller                    2020-12-29T08:44:25Z
system:controller:expand-controller                                    2020-12-29T08:44:25Z
system:controller:generic-garbage-collector                            2020-12-29T08:44:25Z
system:controller:horizontal-pod-autoscaler                            2020-12-29T08:44:25Z
system:controller:job-controller                                       2020-12-29T08:44:25Z
system:controller:namespace-controller                                 2020-12-29T08:44:25Z
system:controller:node-controller                                      2020-12-29T08:44:25Z
system:controller:persistent-volume-binder                             2020-12-29T08:44:25Z
system:controller:pod-garbage-collector                                2020-12-29T08:44:25Z
system:controller:pv-protection-controller                             2020-12-29T08:44:26Z
system:controller:pvc-protection-controller                            2020-12-29T08:44:26Z
system:controller:replicaset-controller                                2020-12-29T08:44:25Z
system:controller:replication-controller                               2020-12-29T08:44:25Z
system:controller:resourcequota-controller                             2020-12-29T08:44:26Z
system:controller:route-controller                                     2020-12-29T08:44:26Z
system:controller:service-account-controller                           2020-12-29T08:44:26Z
system:controller:service-controller                                   2020-12-29T08:44:26Z
system:controller:statefulset-controller                               2020-12-29T08:44:26Z
system:controller:ttl-controller                                       2020-12-29T08:44:26Z
system:kube-controller-manager                                         2020-12-29T08:44:23Z

Take system:controller:attachdetach-controller as an example:

ansible k8s-master01.k8s.com -m shell -a " kubectl describe clusterrole system:controller:attachdetach-controller"
k8s-master01.k8s.com | CHANGED | rc=0 >>
Name:         system:controller:attachdetach-controller
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:Resources                         Non-Resource URLs  Resource Names  Verbs---------                         -----------------  --------------  -----volumeattachments.storage.k8s.io  []                 []              [create delete get list watch]events                            []                 []              [create patch update]events.events.k8s.io              []                 []              [create patch update]nodes                             []                 []              [get list watch]csidrivers.storage.k8s.io         []                 []              [get list watch]csinodes.storage.k8s.io           []                 []              [get list watch]persistentvolumeclaims            []                 []              [list watch]persistentvolumes                 []                 []              [list watch]pods                              []                 []              [list watch]nodes/status                      []                 []              [patch update]

Check the current leader

ansible k8s-master01.k8s.com -m shell -a " kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yaml"
k8s-master01.k8s.com | CHANGED | rc=0 >>
apiVersion: v1
kind: Endpoints
metadata:annotations:control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master01_06a89fb9-a6c8-404a-a3fc-0988e7b8960c","leaseDurationSeconds":15,"acquireTime":"2020-12-30T06:31:02Z","renewTime":"2020-12-30T06:40:17Z","leaderTransitions":0}'creationTimestamp: "2020-12-30T06:31:03Z"managedFields:- apiVersion: v1fieldsType: FieldsV1fieldsV1:f:metadata:f:annotations:.: {}f:control-plane.alpha.kubernetes.io/leader: {}manager: kube-controller-manageroperation: Updatetime: "2020-12-30T06:40:17Z"name: kube-controller-managernamespace: kube-systemresourceVersion: "16869"selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manageruid: 84b77801-1b03-4d95-b5fa-cc604106171f

IX. Deploy kube-scheduler

1. Create the certificate signing request

cat kube-scheduler-csr.json
{"CN": "system:kube-scheduler","hosts": ["127.0.0.1","10.120.200.2","10.120.200.3"],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "TJ","L": "TJ","O": "system:kube-scheduler","OU": "4Paradigm"}]
}
#Send to master01
ansible k8s-master01.k8s.com  -m copy -a "src=./kube-scheduler-csr.json  dest=/opt/k8s/work"
#Generate the certificate and private key
ansible k8s-master01.k8s.com -m shell -a "chdir=/opt/k8s/work cfssl gencert -ca=/opt/k8s/work/ca.pem -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler"
#Distribute to the master nodes
for node_ip in ${MASTER_IPS[@]};   do     echo ">>> ${node_ip}"; ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work scp ./kube-scheduler-key.pem  root@${node_ip}:/etc/kubernetes/cert"; done
for node_ip in ${MASTER_IPS[@]};   do     echo ">>> ${node_ip}"; ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work scp ./kube-scheduler.pem  root@${node_ip}:/etc/kubernetes/cert"; done

2. Create and distribute the kubeconfig file

ansible k8s-master01.k8s.com -m shell -a "chdir=/opt/k8s/work kubectl config set-cluster kubernetes --certificate-authority=/opt/k8s/work/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=kube-scheduler.kubeconfig"
ansible k8s-master01.k8s.com -m shell -a "chdir=/opt/k8s/work kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig"
ansible k8s-master01.k8s.com -m shell -a "chdir=/opt/k8s/work kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig"
ansible k8s-master01.k8s.com -m shell -a "chdir=/opt/k8s/work kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig"
#Distribute the file
for node_ip in ${MASTER_IPS[@]};   do     echo ">>> ${node_ip}"; ansible k8s-master01.k8s.com -m shell  -a "chdir=/opt/k8s/work scp ./kube-scheduler.kubeconfig root@${node_ip}:/etc/kubernetes/"; done

3. Create the kube-scheduler config file

cat kube-scheduler.yaml.template
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/etc/kubernetes/kube-scheduler.kubeconfig"
  qps: 100
enableContentionProfiling: false
enableProfiling: true
healthzBindAddress: 127.0.0.1:10251
leaderElection:
  leaderElect: true
metricsBindAddress: ##NODE_IP##:10251

for (( i=0; i < 3; i++ ));   do     sed -e "s/##NODE_NAME##/${MASTER_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-scheduler.yaml.template > kube-scheduler-${MASTER_IPS[i]}.yaml; done
#Distribute
ansible k8s-master01.k8s.com  -m copy -a "src=./kube-scheduler-10.120.200.2.yaml    dest=/etc/kubernetes/kube-scheduler.yaml "
ansible k8s-master02.k8s.com  -m copy -a "src=./kube-scheduler-10.120.200.3.yaml    dest=/etc/kubernetes/kube-scheduler.yaml "

4. Create the systemd unit

 cat > kube-scheduler.service.template <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
WorkingDirectory=${K8S_DIR}/kube-scheduler
ExecStart=/opt/k8s/bin/kube-scheduler \\
  --config=/etc/kubernetes/kube-scheduler.yaml \\
  --bind-address=##NODE_IP## \\
  --secure-port=10259 \\
  --port=0 \\
  --tls-cert-file=/etc/kubernetes/cert/kube-scheduler.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kube-scheduler-key.pem \\
  --authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-allowed-names="" \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
EOF
for (( i=0; i < 3; i++ ));   do     sed -e "s/##NODE_NAME##/${MASTER_NAMES[i]}/" -e "s/##NODE_IP##/${MASTER_IPS[i]}/" kube-scheduler.service.template > kube-scheduler-${NODE_IPS[i]}.service ;   done
#Distribute
ansible k8s-master01.k8s.com  -m copy -a "src=./kube-scheduler-10.120.200.2.service    dest=/etc/systemd/system/kube-scheduler.service"
ansible k8s-master02.k8s.com  -m copy -a "src=./kube-scheduler-10.120.200.3.service    dest=/etc/systemd/system/kube-scheduler.service"

5. Start the service and verify

ansible master -m shell -a  'mkdir -p /data/k8s/k8s/kube-scheduler'
ansible master -m shell -a  'systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler'
ansible master -m shell -a  'systemctl status kube-scheduler|grep Active'
#Check the leader
kubectl get endpoints kube-scheduler --namespace=kube-system  -o yaml
#Result:
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master01_427509fc-39d9-4fd2-8fb4-3637735e0849","leaseDurationSeconds":15,"acquireTime":"2020-12-30T15:09:21Z","renewTime":"2020-12-31T02:46:43Z","leaderTransitions":1}'
  creationTimestamp: "2020-12-30T15:09:05Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:control-plane.alpha.kubernetes.io/leader: {}
    manager: kube-scheduler
    operation: Update
    time: "2020-12-31T02:46:43Z"
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "140866"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: 853fb510-a05c-418f-890f-6fa34f2bdf4d
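
With the service up, the healthz endpoint configured above (127.0.0.1:10251) can also be probed locally on each master. A minimal sketch, assuming curl is available on the master hosts:

# Probe the scheduler health endpoint on every master; each host should answer "ok"
ansible master -m shell -a 'curl -s http://127.0.0.1:10251/healthz'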

There was too much going on this week, so the document was on hold for a week. Picking it up again today and moving on to installing the node components. The light at the end of the tunnel is in sight.

十. Installing Docker

1. Install the Docker service

Since I run an internal yum repository, Docker can be installed directly with yum. Adapt the installation to your own environment; it is straightforward, so I will not go into much detail here.

ansible all -m shell -a  'yum install -y docker-ce-19.03.12-3.el7.x86_64.rpm docker-ce-cli-19.03.12-3.el7.x86_64.rpm  containerd.io-1.2.13-3.2.el7.x86_64.rpm'
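
Before moving on, a quick sanity check across all hosts confirms the packages really landed. A minimal sketch using the same `all` inventory group:

# Print the installed Docker client and containerd versions on every host
ansible all -m shell -a 'docker --version && containerd --version'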

2. Create the configuration files

cat daemon.json
{"exec-opts": ["native.cgroupdriver=systemd"],"registry-mirrors": ["https://aad0405c.m.daocloud.io"],"log-driver": "json-file","log-opts": {"max-size": "100m"},"storage-driver": "overlay2","insecure-registries": ["harbor.k8s.com"]
}ansible all -m copy -a "src=./daemon.json dest=/etc/docker/"
cat docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
#EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
#ExecStart=/usr/bin/dockerd  $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
EnvironmentFile=-/run/flannel/docker
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
ansible all -m copy -a "src=./docker.service dest=/usr/lib/systemd/system/"

3. Start the service and verify

ansible all -m shell -a "systemctl daemon-reload && systemctl enable docker && systemctl restart docker"#检查检查存活ansible all -m shell -a "systemctl status docker|grep Active"      # 查看docker0 网卡信息ansible all -m shell -a "ip addr show docker0" #查看存储驱动
ansible all -m shell -a "docker info|grep overlay2"

十一. Deploying the kubelet component

1. Create the kubelet bootstrap kubeconfig files

source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
do
  echo ">>> ${node_name}"
  # Create a bootstrap token for this node
  export BOOTSTRAP_TOKEN=$(kubeadm token create \
    --description kubelet-bootstrap-token \
    --groups system:bootstrappers:${node_name} \
    --kubeconfig ~/.kube/config)
  # Set cluster parameters
  kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/cert/ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
  # Set client authentication parameters
  kubectl config set-credentials kubelet-bootstrap \
    --token=${BOOTSTRAP_TOKEN} \
    --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
  # Set the context
  kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
  # Use the default context
  kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
done
# Distribute the files
for node_name in ${NODE_NAMES[@]}; do echo ">>> ${node_name}"; scp kubelet-bootstrap-${node_name}.kubeconfig root@${node_name}.k8s.com:/etc/kubernetes/bootstrap-kubelet.conf; done
# Distribute the files
for node_name in ${NODE_NAMES[@]}; do echo ">>> ${node_name}"; scp kubelet-bootstrap-${node_name}.kubeconfig root@${node_name}.k8s.com:/etc/kubernetes/kubelet-bootstrap.kubeconfig; done

2. Check the tokens kubeadm created for each node

kubeadm token list --kubeconfig ~/.kube/config
kubectl get secrets -n kube-system|grep bootstrap-token
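
Note that tokens created with kubeadm token create expire after 24 hours by default, so a node bootstrapped later may need a fresh one. An illustrative example for a single node (adjust the node name):

# Recreate an expired bootstrap token for one node (illustrative)
kubeadm token create \
  --description kubelet-bootstrap-token \
  --groups system:bootstrappers:k8s-node01 \
  --kubeconfig ~/.kube/config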

3. Create the kubelet configuration file

cat > kubelet-config.yaml.template <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "##NODE_IP##"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "##NODE_IP##"
clusterDomain: "${CLUSTER_DNS_DOMAIN}"
clusterDNS:- "${CLUSTER_DNS_SVC_IP}"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: systemd
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "${CLUSTER_CIDR}"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available:  "100Mi"
  nodefs.available:  "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  sed -e "s/##NODE_IP##/${node_ip}/" kubelet-config.yaml.template > kubelet-config-${node_ip}.yaml.template
  scp kubelet-config-${node_ip}.yaml.template root@${node_ip}:/etc/kubernetes/kubelet-config.yaml
done

4. Create and distribute the kubelet startup file

cat > kubelet.service.template <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
  --allow-privileged=true \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --cni-conf-dir=/etc/cni/net.d \\
  --container-runtime=docker \\
  --container-runtime-endpoint=unix:///var/run/dockershim.sock \\
  --root-dir=${K8S_DIR}/kubelet \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=##NODE_NAME## \\
  --pod-infra-container-image=harbor.k8s.com/k8s-base/pause-amd64:3.1 \\
  --image-pull-progress-deadline=15m \\
  --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
EOF
for node_name in ${NODE_NAMES[@]}
do
  echo ">>> ${node_name}"
  sed -e "s/##NODE_NAME##/${node_name}/" kubelet.service.template > kubelet-${node_name}.service
  scp kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service
done
# Grant the bootstrappers group permission to create CSRs
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
# Start the service
ansible all -m shell -a "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"

5. Approve CSR requests

## Automatically approve CSR requests; it may take a few minutes to take effect
cat > csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl apply -f csr-crb.yaml
## Manually approve pending CSR requests
kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
# Check that the CSRs are in Approved,Issued state
kubectl get csr
NAME        AGE     SIGNERNAME                      REQUESTOR                CONDITION
csr-4gthz   3h42m   kubernetes.io/kubelet-serving   system:node:k8s-node02   Approved,Issued
csr-4lpcn   5h30m   kubernetes.io/kubelet-serving   system:node:k8s-node02   Approved,Issued
csr-4n4d9   82m     kubernetes.io/kubelet-serving   system:node:k8s-node04   Approved,Issued
csr-4nzxs   4h44m   kubernetes.io/kubelet-serving   system:node:k8s-node02   Approved,Issued
# Check the nodes
kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
k8s-master01   Ready    <none>   3m    v1.19.2
k8s-master02   Ready    <none>   3m    v1.19.2
k8s-node01     Ready    <none>   3m    v1.19.2
k8s-node07     Ready    <none>   3m    v1.19.2
# Check the kubelet listening ports
netstat -lntup|grep kubelet
tcp        0      0 10.120.200.4:10248      0.0.0.0:*               LISTEN      72037/kubelet
tcp        0      0 10.120.200.4:10250      0.0.0.0:*               LISTEN      72037/kubelet
tcp        0      0 127.0.0.1:34957         0.0.0.0:*               LISTEN      72037/kubelet

6. Bearer token authentication and authorization

kubectl create sa kubelet-api-test
kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
echo ${TOKEN}
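
The token can then be used to call the kubelet API directly, which verifies that webhook authentication and authorization work end to end. A minimal sketch, assuming the node IP 10.120.200.4 seen in the earlier netstat output and that the kubelet serving certificate is signed by the cluster CA; adjust both for your environment:

# Query the kubelet metrics endpoint with the bearer token (illustrative check)
curl -s --cacert /etc/kubernetes/cert/ca.pem \
  -H "Authorization: Bearer ${TOKEN}" \
  https://10.120.200.4:10250/metrics | head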

十二. Deploying the kube-proxy component

1. Create the kube-proxy certificate signing request

cat > kube-proxy-csr.json <<EOF
{"CN": "system:kube-proxy","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "TJ","L": "TJ","O": "k8s","OU": "4Paradigm"}]
}
EOFansible k8s-master01.k8s.com -m copy -a 'src=./kube-proxy-csr.json dest=/opt/k8s/work/kube-proxy-csr.json'ansible k8s-master01.k8s.com -m shell -a 'chdir=/opt/k8s/work /opt/k8s/bin/cfssl gencert -ca=/opt/k8s/work/ca.pem \-ca-key=/opt/k8s/work/ca-key.pem \-config=/opt/k8s/work/ca-config.json \-profile=kubernetes  kube-proxy-csr.json | /opt/k8s/bin/cfssljson -bare kube-proxy'for node_name in ${NODE_NAMES[@]}; do ansible k8s-master01.k8s.com -m shell -a "chidr=/opt/k8s/work scp /opt/k8s/work/kube-proxy-key.pem root@${node_name}.k8s.com :/etc/kubernetes/cert/"; donefor node_name in ${NODE_NAMES[@]}; do ansible k8s-master01.k8s.com -m shell -a "chidr=/opt/k8s/work scp /opt/k8s/work/kube-proxy.pem root@${node_name}:/etc/kubernetes/cert/"; done

2. Create and distribute the kubeconfig file

source /opt/k8s/bin/environment.sh
ansible k8s-master01.k8s.com -m shell -a "chdir=/opt/k8s/work kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig"
ansible k8s-master01.k8s.com -m shell -a "chdir=/opt/k8s/work kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig"
ansible k8s-master01.k8s.com -m shell -a "chdir=/opt/k8s/work kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig"
ansible k8s-master01.k8s.com -m shell -a "chdir=/opt/k8s/work kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig"
# Distribute the kubeconfig to every node
for node_name in ${NODE_NAMES[@]}; do ansible k8s-master01.k8s.com -m shell -a "scp /opt/k8s/work/kube-proxy.kubeconfig root@${node_name}:/etc/kubernetes/"; done

3. Create the kube-proxy configuration file

cat > kube-proxy-config.yaml.template <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
  qps: 100
bindAddress: ##NODE_IP##
healthzBindAddress: ##NODE_IP##:10256
metricsBindAddress: ##NODE_IP##:10249
enableProfiling: true
clusterCIDR: ${CLUSTER_CIDR}
hostnameOverride: ##NODE_NAME##
mode: "ipvs"
portRange: ""
kubeProxyIPTablesConfiguration:
  masqueradeAll: false
kubeProxyIPVSConfiguration:
  scheduler: rr
  excludeCIDRs: []
EOF
source /opt/k8s/bin/environment.sh
for (( i=0; i < ${#NODE_NAMES[@]}; i++ )); do
  echo ">>> ${NODE_NAMES[i]}"
  sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-proxy-config.yaml.template > kube-proxy-config-${NODE_NAMES[i]}.yaml.template
  scp kube-proxy-config-${NODE_NAMES[i]}.yaml.template root@${NODE_NAMES[i]}:/etc/kubernetes/kube-proxy-config.yaml
done
# Create the startup file
cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=${K8S_DIR}/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy-config.yaml \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
ansible all -m copy -a 'src=./kube-proxy.service dest=/etc/systemd/system/'

4. Start the kube-proxy service

ansible all -m shell -a 'mkdir -p /data/k8s/k8s/kube-proxy'
ansible all -m shell -a 'modprobe ip_vs_rr'
ansible all -m shell -a 'systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy'
# Check the service status
ansible all -m shell -a 'systemctl status kube-proxy|grep Active'
k8s-node01.k8s.com | CHANGED | rc=0 >>
Active: active (running) since 二 2021-01-19 10:26:15 CST; 2m ago
k8s-master01.k8s.com | CHANGED | rc=0 >>
Active: active (running) since 二 2021-01-19 10:10:45 CST; 2m ago
k8s-node07.k8s.com | CHANGED | rc=0 >>
Active: active (running) since 二 2021-01-19 10:10:45 CST; 2m ago
k8s-master02.k8s.com | CHANGED | rc=0 >>
Active: active (running) since 二 2021-01-19 10:10:46 CST; 2m ago
# Check the listening ports
ansible all -m shell -a  'netstat -lnpt|grep kube-prox'
k8s-node01.k8s.com | CHANGED | rc=0 >>
tcp        0      0 10.120.200.4:10249      0.0.0.0:*               LISTEN      37747/kube-proxy
tcp        0      0 10.120.200.4:10256      0.0.0.0:*               LISTEN      37747/kube-proxy
k8s-master01.k8s.com | CHANGED | rc=0 >>
tcp        0      0 10.120.200.2:10249      0.0.0.0:*               LISTEN      95352/kube-proxy
tcp        0      0 10.120.200.2:10256      0.0.0.0:*               LISTEN      95352/kube-proxy
k8s-node07.k8s.com | CHANGED | rc=0 >>
tcp        0      0 10.120.200.10:10249     0.0.0.0:*               LISTEN      82438/kube-proxy
tcp        0      0 10.120.200.10:10256     0.0.0.0:*               LISTEN      82438/kube-proxy
k8s-master02.k8s.com | CHANGED | rc=0 >>
tcp        0      0 10.120.200.3:10249      0.0.0.0:*               LISTEN      20054/kube-proxy
tcp        0      0 10.120.200.3:10256      0.0.0.0:*               LISTEN      20054/kube-proxy
# Check the ipvs routing rules
ansible all -m shell -a  '/usr/sbin/ipvsadm -ln'
k8s-node01.k8s.com | CHANGED | rc=0 >>
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 10.120.200.2:6443            Masq    1      0          0
  -> 10.120.200.3:6443            Masq    1      0          0
k8s-master01.k8s.com | CHANGED | rc=0 >>
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 10.120.200.2:6443            Masq    1      0          0
  -> 10.120.200.3:6443            Masq    1      0          0
k8s-node07.k8s.com | CHANGED | rc=0 >>
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 10.120.200.2:6443            Masq    1      0          0
  -> 10.120.200.3:6443            Masq    1      0          0
k8s-master02.k8s.com | CHANGED | rc=0 >>
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 10.120.200.2:6443            Masq    1      0          0
  -> 10.120.200.3:6443            Masq    1      0          0
# Verify the cluster
kubectl get node
NAME           STATUS     ROLES    AGE   VERSION
k8s-master01   Ready      <none>   5d    v1.19.2
k8s-master02   Ready      <none>   5d    v1.19.2
k8s-node01     Ready      <none>   5d    v1.19.2
k8s-node07     Ready      <none>   5d    v1.19.2

十三. Testing cluster functionality

1. Create the yml file

cat nginx-ds.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: harbor.k8s.com/k8s-base/nginx:1.13.0-alpine
        ports:
        - containerPort: 80
# Apply the yml file
kubectl create -f nginx-ds.yml                       

2. Check the pods

kubectl get pod  -o wide
NAME             READY   STATUS    RESTARTS   AGE     IP             NODE           NOMINATED NODE   READINESS GATES
nginx-ds-7j5px   1/1     Running   0          3h26m   172.30.184.2   k8s-node01     <none>           <none>
nginx-ds-9jb9s   1/1     Running   0          3h26m   172.30.192.2   k8s-node07     <none>           <none>
nginx-ds-scxvg   1/1     Running   0          24m     172.30.208.3   k8s-master02   <none>           <none>
nginx-ds-snsjq   1/1     Running   0          3h26m   172.30.224.2   k8s-master01   <none>           <none>

3. Test connectivity

ping 172.30.184.2
PING 172.30.184.2 (172.30.184.2) 56(84) bytes of data.
64 bytes from 172.30.184.2: icmp_seq=1 ttl=63 time=0.745 ms
^C64 bytes from 172.30.184.2: icmp_seq=2 ttl=63 time=1.07 ms
^C
--- 172.30.184.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1029ms
rtt min/avg/max/mdev = 0.745/0.907/1.070/0.165 ms
ping 172.30.192.2
PING 172.30.192.2 (172.30.192.2) 56(84) bytes of data.
64 bytes from 172.30.192.2: icmp_seq=1 ttl=63 time=0.904 ms
64 bytes from 172.30.192.2: icmp_seq=2 ttl=63 time=0.966 ms
^C
--- 172.30.192.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.904/0.935/0.966/0.031 ms
ping 172.30.208.3
PING 172.30.208.3 (172.30.208.3) 56(84) bytes of data.
64 bytes from 172.30.208.3: icmp_seq=1 ttl=63 time=0.631 ms
64 bytes from 172.30.208.3: icmp_seq=2 ttl=63 time=0.530 ms
^C
--- 172.30.208.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1063ms
rtt min/avg/max/mdev = 0.530/0.580/0.631/0.055 ms
ping 172.30.224.2
PING 172.30.224.2 (172.30.224.2) 56(84) bytes of data.
64 bytes from 172.30.224.2: icmp_seq=1 ttl=64 time=0.150 ms
64 bytes from 172.30.224.2: icmp_seq=2 ttl=64 time=0.074 ms
^C
--- 172.30.224.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1030ms
rtt min/avg/max/mdev = 0.074/0.112/0.150/0.038 ms

4. Check the service

kubectl get svc |grep nginx-ds
nginx-ds     NodePort    10.254.49.54   <none>        80:9249/TCP   4h46m
curl -s 10.254.49.54:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>body {width: 35em;margin: 0 auto;font-family: Tahoma, Verdana, Arial, sans-serif;}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p><p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html>

You can also open the page in a browser.
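
The NodePort shown above (9249) should answer on any node address as well; an illustrative check against one of the node IPs from this deployment:

# Expect an HTTP 200 from the nginx-ds NodePort (illustrative)
curl -s -o /dev/null -w "%{http_code}\n" http://10.120.200.10:9249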

十四. Installing CoreDNS

1. Create the yaml file

cat coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 2
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: harbor.k8s.com/k8s-base/coredns:1.4.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: host-time
          mountPath: /etc/localtime
          readOnly: true
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
      - name: host-time
        hostPath:
          path: /etc/localtime
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

2. Create CoreDNS

kubectl create -f ./coredns.yaml
# Note the image: I have already pushed it to my harbor registry; adjust to your own situation
# Check the pods
kubectl get pod -n kube-system -l k8s-app=kube-dns -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP             NODE           NOMINATED NODE   READINESS GATES
coredns-5477df9596-9pvw2   1/1     Running   0          35m   172.30.192.2   k8s-node07     <none>           <none>
coredns-5477df9596-f98z4   1/1     Running   0          35m   172.30.224.2   k8s-master01   <none>           <none>

3. Test DNS functionality

I use busybox to test whether DNS resolution works. I do not recommend a recent busybox image, since nslookup in newer versions is buggy. A real-life lesson that cost me half a day.

# Create the yaml
cat busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: harbor.k8s.com/k8s-base/busybox:1.28.3
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
kubectl create -f busybox.yaml
# Check the pod
kubectl get pod
NAME             READY   STATUS    RESTARTS   AGE
busybox          1/1     Running   0          27m
## Use nslookup to check
kubectl exec -ti busybox -- nslookup kubernetes
Server:    10.254.0.2
Address 1: 10.254.0.2 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes
Address 1: 10.254.0.1 kubernetes.default.svc.cluster.local
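
Resolving the nginx-ds service created in the cluster test chapter exercises the same path for a regular service in the default namespace (illustrative):

# Resolve a normal Service through CoreDNS (illustrative)
kubectl exec -ti busybox -- nslookup nginx-ds.default.svc.cluster.local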

十五. Afterword

Finally finished. It took about a month from start to end, though I originally planned on one or two weeks. For one thing, work kept me too busy to write much; for another, to keep the quality up I only wrote each part after actually doing it. It took longer than planned, but if you can stand up a k8s cluster by following this document, the month of effort was not wasted.

So far this article only covers building the basic k8s framework. A real environment needs more than this: supporting components such as a web UI, ingress, Prometheus monitoring and so on, which will all be covered in follow-up articles. This is just the beginning and the best is yet to come, for example automated releases with Jenkins and GitLab. The green hills stay green and the rivers keep flowing. See you next time!
