1. Preparation

1.1 Calico cluster node information

All machines are 8-core / 8 GB virtual machines with 100 GB disks.

IP Hostname
10.31.88.1 tiny-calico-master-88-1.k8s.tcinternal
10.31.88.11 tiny-calico-worker-88-11.k8s.tcinternal
10.31.88.12 tiny-calico-worker-88-12.k8s.tcinternal
10.88.64.0/18 podSubnet
10.88.0.0/18 serviceSubnet

1.2 Check MAC addresses and product_uuid

All nodes in the same k8s cluster must have unique MAC addresses and product_uuid values; verify this before starting cluster initialization.

# Check the MAC address
ip link
ifconfig -a

# Check the product_uuid
sudo cat /sys/class/dmi/id/product_uuid

1.3 Configure passwordless SSH login (optional)

If the cluster nodes have multiple network interfaces, make sure every node can reach the others over the correct interface.
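On a multi-homed node, a quick way to confirm which interface and source address will be used to reach a given peer (using one of this cluster's node IPs as an example):

# Show the route, interface and source IP used to reach another node
ip route get 10.31.88.11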

# Generate a shared key under the root user and configure passwordless login with it
su root
ssh-keygen
cd /root/.ssh/
cat id_rsa.pub >> authorized_keys
chmod 600 authorized_keys

cat >> ~/.ssh/config <<EOF
Host tiny-calico-master-88-1.k8s.tcinternal
    HostName 10.31.88.1
    User root
    Port 22
    IdentityFile ~/.ssh/id_rsa
Host tiny-calico-worker-88-11.k8s.tcinternal
    HostName 10.31.88.11
    User root
    Port 22
    IdentityFile ~/.ssh/id_rsa
Host tiny-calico-worker-88-12.k8s.tcinternal
    HostName 10.31.88.12
    User root
    Port 22
    IdentityFile ~/.ssh/id_rsa
EOF

1.4 Update the hosts file

cat >> /etc/hosts <<EOF
10.31.88.1  tiny-calico-master-88-1 tiny-calico-master-88-1.k8s.tcinternal
10.31.88.11 tiny-calico-worker-88-11 tiny-calico-worker-88-11.k8s.tcinternal
10.31.88.12 tiny-calico-worker-88-12 tiny-calico-worker-88-12.k8s.tcinternal
EOF

1.5 Disable swap

# Turn off swap immediately
swapoff -a
# Comment out the swap entry in fstab so the swap partition is not mounted on boot
sed -i '/swap / s/^\(.*\)$/#\1/g' /etc/fstab
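A quick check that swap is really off, for example:

# Swap usage should be 0 and no swap devices should be listed
free -m
swapon -s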

1.6 Configure time synchronization

You can use either ntp or chrony here, whichever you prefer. For the upstream time source you can use Alibaba Cloud's ntp1.aliyun.com or the National Time Service Center's ntp.ntsc.ac.cn.

Sync with ntp

# Install the ntpdate tool with yum
yum install ntpdate -y

# Sync time from the National Time Service Center
ntpdate ntp.ntsc.ac.cn

# Finally, check the time
hwclock

Sync with chrony

# Install chrony with yum
yum install chrony -y

# Enable chronyd on boot, start it and check its status
systemctl enable chronyd.service
systemctl start chronyd.service
systemctl status chronyd.service

# You can also customize the time servers
vim /etc/chrony.conf

# Before the change
$ grep server /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst

# After the change
$ grep server /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
server ntp.ntsc.ac.cn iburst

# Restart the service so the configuration takes effect
systemctl restart chronyd.service

# Check the status of chrony's NTP sources
chronyc sourcestats -v
chronyc sources -v

1.7 Disable SELinux

# Disable it immediately with setenforce
setenforce 0

# Or modify /etc/selinux/config directly
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

1.8 Configure the firewall

Communication between cluster components and exposing services require many ports, so for convenience we simply disable the firewall.

# On CentOS 7, disable the default firewalld service with systemctl
systemctl disable firewalld.service
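Note that systemctl disable only prevents firewalld from starting at the next boot; if it is currently running you may also want to stop it right away, for example:

# Stop firewalld immediately and confirm it is no longer active
systemctl stop firewalld.service
systemctl is-active firewalld.service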

1.9 Configure netfilter parameters

Here we mainly need to make the kernel load the br_netfilter module and let iptables pass IPv4 and IPv6 bridged traffic, so that containers inside the cluster can communicate properly.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
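A quick check that the module is loaded and the sysctl values have taken effect:

# br_netfilter should be listed and both values should be 1
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables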

1.10 Disable IPv6 (optional)

Although recent k8s versions support dual-stack networking, this deployment does not involve any IPv6 traffic, so we disable IPv6 support.

# Add the IPv6 disable parameter directly to the kernel boot arguments
grubby --update-kernel=ALL --args=ipv6.disable=1
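The grubby change only takes effect after a reboot; afterwards you can confirm the parameter is present on the kernel command line, for example:

# After rebooting, the parameter should appear in the kernel command line
grep ipv6.disable /proc/cmdline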

1.11 Configure IPVS (optional)

IPVS is a component designed specifically for load-balancing scenarios. The IPVS implementation in kube-proxy improves scalability by reducing the use of iptables: instead of hooking PREROUTING in the iptables input chain, it creates a dummy interface called kube-ipvs0. As the number of load-balancing rules in the cluster grows, IPVS delivers more efficient forwarding than iptables.

Note that on kernel 4.19 and later, the nf_conntrack module replaces the old nf_conntrack_ipv4 module.

(Notes: use nf_conntrack instead of nf_conntrack_ipv4 for Linux kernel 4.19 and later)

# Make sure ipset and ipvsadm are installed before using IPVS mode
sudo yum install ipset ipvsadm -y

# Load the IPVS-related modules manually
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

# Load the IPVS-related modules automatically on boot
cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF

sudo sysctl --system

# It is best to reboot once and then confirm that the modules are loaded
$ lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145458  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4      15053  2
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
nf_conntrack          139264  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack

$ cut -f1 -d " "  /proc/modules | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh
ip_vs_wrr
ip_vs_rr
ip_vs
nf_conntrack_ipv4

2. Install the container runtime

2.1 Install Docker

The detailed official documentation can be found here. Since dockershim was removed in the just-released 1.24, pay attention to the choice of container runtime when installing version 1.24 or later. The version we install here is below 1.24, so we continue to use Docker.

For the detailed Docker installation you can refer to my earlier article; I won't repeat it here.

# Install the required dependencies and import Docker's official yum repository
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo  https://download.docker.com/linux/centos/docker-ce.repo

# Install the latest version of Docker directly
yum install docker-ce docker-ce-cli containerd.io

2.2 Configure the cgroup driver

CentOS 7 uses systemd to initialize the system and manage processes. The init process creates and uses a root control group (cgroup) and acts as the cgroup manager. Systemd is tightly integrated with cgroups and allocates a cgroup for every systemd unit. We could also configure the container runtime and the kubelet to use cgroupfs, but using cgroupfs alongside systemd means there are two different cgroup managers on the system, which tends to become unstable. It is therefore better to configure both the container runtime and the kubelet to use systemd as the cgroup driver, which makes the system more stable. For Docker this means setting the native.cgroupdriver=systemd option.

See the official documentation:

Container Runtimes | Kubernetes

And the Chinese version of the configuration guide:

容器运行时 | Kubernetes

sudo mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker

# Finally, confirm that the Cgroup Driver is systemd
$ docker info | grep systemd
 Cgroup Driver: systemd

2.3 About the kubelet cgroup driver

The official k8s documentation describes in detail how to set the kubelet's cgroup driver. Note in particular that starting with version 1.22, if the kubelet's cgroup driver is not set manually, kubeadm defaults it to systemd.

Note: In v1.22, if the user is not setting the cgroupDriver field under KubeletConfiguration, kubeadm will default it to systemd.

A fairly simple way to specify the kubelet's cgroup driver is to add the cgroupDriver field to kubeadm-config.yaml.

# kubeadm-config.yaml
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.21.0
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd

We can look at the configmaps directly to see the cluster's kubeadm-config after initialization.

$ kubectl describe configmaps kubeadm-config -n kube-system
Name:         kubeadm-config
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
ClusterConfiguration:
----
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.23.6
networking:
  dnsDomain: cali-cluster.tclocal
  serviceSubnet: 10.88.0.0/18
scheduler: {}

BinaryData
====

Events:  <none>

Of course, since the version we are installing is newer than 1.22.0 and we are already using systemd, there is no need to configure this again.

3. Install the kube trio (kubeadm, kubelet, kubectl)

The corresponding official documentation can be found here:

Installing kubeadm | Kubernetes

The kube trio consists of kubeadm, kubelet and kubectl. Their functions are as follows:

  • kubeadm: the command used to bootstrap the cluster.
  • kubelet: runs on every node in the cluster and is responsible for starting Pods and containers.
  • kubectl: the command-line tool used to communicate with the cluster.

Note that:

  • kubeadm does not manage kubelet or kubectl for us, and the same is true the other way around; the three are mutually independent and none of them manages the others;
  • the kubelet version must be less than or equal to the API server version, otherwise compatibility problems are likely;
  • kubectl does not need to be installed on every node in the cluster, and does not even have to be installed on a cluster node at all; it can be installed on your own local machine and, together with a kubeconfig file, used to manage the corresponding k8s cluster remotely;

Installation on CentOS 7 is quite simple; we just use the officially provided yum repository. Note that SELinux would normally need to be configured here, but we already disabled it earlier, so we skip that step.

# Import Google's official yum repository directly
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# If you cannot reach Google's repository, consider the Alibaba mirror instead
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Then simply install the three packages
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

# If a poor network causes gpgcheck failures that prevent the repo from being read, consider disabling repo_gpgcheck for this repo
sed -i 's/repo_gpgcheck=1/repo_gpgcheck=0/g' /etc/yum.repos.d/kubernetes.repo
# Or disable gpgcheck at install time
sudo yum install -y kubelet kubeadm kubectl --nogpgcheck --disableexcludes=kubernetes

# To install a specific version, use this command to list the available versions
sudo yum list --nogpgcheck kubelet kubeadm kubectl --showduplicates --disableexcludes=kubernetes
# Here, to keep using dockershim, we install 1.23.6, the release right before 1.24.0
sudo yum install -y kubelet-1.23.6-0 kubeadm-1.23.6-0 kubectl-1.23.6-0 --nogpgcheck --disableexcludes=kubernetes

# After installation, enable kubelet on boot
sudo systemctl enable --now kubelet
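A quick way to confirm that all three tools were installed at the expected version:

# Verify the installed versions
kubeadm version
kubelet --version
kubectl version --client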

4. Initialize the cluster

4.1 Write the configuration file

Once all of the above steps have been completed on every node, we can start creating the k8s cluster. Since this deployment does not involve high availability, we simply run the initialization on our target master node.

# First use kubeadm to list the main image versions
# Because we pinned the older 1.23.6 release earlier, the apiserver image version follows it
$ kubeadm config images list
I0506 11:24:17.061315   16055 version.go:255] remote version is much newer: v1.24.0; falling back to: stable-1.23
k8s.gcr.io/kube-apiserver:v1.23.6
k8s.gcr.io/kube-controller-manager:v1.23.6
k8s.gcr.io/kube-scheduler:v1.23.6
k8s.gcr.io/kube-proxy:v1.23.6
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6

# To make editing and management easier, export the init defaults to a configuration file
$ kubeadm config print init-defaults > kubeadm-calico.conf
  • Since networks inside China usually cannot reach Google's k8s.gcr.io registry, we change the imageRepository parameter in the configuration file to the Alibaba mirror
  • The kubernetesVersion field specifies the k8s version we want to install
  • localAPIEndpoint must be changed to the IP and port of our master node; this becomes the apiserver address of the initialized k8s cluster
  • The serviceSubnet and dnsDomain parameters can normally be left at their defaults; here I changed them to suit my own needs
  • The name field under nodeRegistration is changed to the hostname of the corresponding master node
  • A new configuration block is added to use IPVS; see the official documentation for details
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.31.88.1
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: tiny-calico-master-88-1.k8s.tcinternal
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.23.6
networking:
  dnsDomain: cali-cluster.tclocal
  serviceSubnet: 10.88.0.0/18
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

4.2 Initialize the cluster

If we now check the image list against the configuration file, we can see that the images have switched to the Alibaba Cloud mirror.

# Check the image versions to confirm the configuration file has taken effect
$ kubeadm config images list --config kubeadm-calico.conf
registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.6
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.6
registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.6
registry.aliyuncs.com/google_containers/kube-proxy:v1.23.6
registry.aliyuncs.com/google_containers/pause:3.6
registry.aliyuncs.com/google_containers/etcd:3.5.1-0
registry.aliyuncs.com/google_containers/coredns:v1.8.6

# Once everything looks right, pull the images
$ kubeadm config images pull --config kubeadm-calico.conf
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6

# Initialize the cluster
$ kubeadm init --config kubeadm-calico.conf
[init] Using Kubernetes version: v1.23.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
...a lot of output omitted here...

When we see the following output, the cluster has been initialized successfully.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.31.88.1:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:a4189d36d164a865be540d48fcd10ff13e2f90ed6e901201b6ea2baf96dae0ae

4.3 Configure kubeconfig

Right after a successful initialization we cannot query the cluster yet; we first need to configure kubeconfig so that kubectl can connect to the apiserver and read cluster information.

# For non-root users
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# For the root user, simply export the environment variable
export KUBECONFIG=/etc/kubernetes/admin.conf

# Enable kubectl auto-completion
echo "source <(kubectl completion bash)" >> ~/.bashrc

As mentioned earlier, kubectl does not have to be installed inside the cluster; any machine that can reach the apiserver can have kubectl installed and, once the kubeconfig is set up as above, use the kubectl command line to manage the cluster.
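For example, a minimal sketch of remote management, assuming admin.conf has been copied to a workstation (the path below is just a placeholder):

# On any machine that can reach 10.31.88.1:6443, point kubectl at the copied kubeconfig
kubectl --kubeconfig=/path/to/admin.conf get nodes
# Or export it for the whole shell session
export KUBECONFIG=/path/to/admin.conf
kubectl cluster-info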

With kubeconfig in place, we can run the usual commands to inspect the cluster.

$ kubectl cluster-info
Kubernetes control plane is running at https://10.31.88.1:6443
CoreDNS is running at https://10.31.88.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ kubectl get nodes -o wide
NAME                                     STATUS     ROLES                  AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
tiny-calico-master-88-1.k8s.tcinternal   NotReady   control-plane,master   4m15s   v1.23.6   10.31.88.1    <none>        CentOS Linux 7 (Core)   3.10.0-1160.62.1.el7.x86_64   docker://20.10.14

$ kubectl get pods -A -o wide
NAMESPACE     NAME                                                             READY   STATUS    RESTARTS   AGE     IP           NODE                                     NOMINATED NODE   READINESS GATES
kube-system   coredns-6d8c4cb4d-r8r9q                                          0/1     Pending   0          4m20s   <none>       <none>                                   <none>           <none>
kube-system   coredns-6d8c4cb4d-ztq6w                                          0/1     Pending   0          4m20s   <none>       <none>                                   <none>           <none>
kube-system   etcd-tiny-calico-master-88-1.k8s.tcinternal                      1/1     Running   0          4m25s   10.31.88.1   tiny-calico-master-88-1.k8s.tcinternal   <none>           <none>
kube-system   kube-apiserver-tiny-calico-master-88-1.k8s.tcinternal            1/1     Running   0          4m26s   10.31.88.1   tiny-calico-master-88-1.k8s.tcinternal   <none>           <none>
kube-system   kube-controller-manager-tiny-calico-master-88-1.k8s.tcinternal   1/1     Running   0          4m27s   10.31.88.1   tiny-calico-master-88-1.k8s.tcinternal   <none>           <none>
kube-system   kube-proxy-v6cg9                                                 1/1     Running   0          4m20s   10.31.88.1   tiny-calico-master-88-1.k8s.tcinternal   <none>           <none>
kube-system   kube-scheduler-tiny-calico-master-88-1.k8s.tcinternal            1/1     Running   0          4m25s   10.31.88.1   tiny-calico-master-88-1.k8s.tcinternal   <none>           <none>

4.4 Add worker nodes

Next we add the remaining two nodes as workers to run workloads. Simply run the command printed at the end of the successful initialization on each of the remaining nodes and they will join the cluster:

$ kubeadm join 10.31.88.1:6443 --token abcdef.0123456789abcdef \
>         --discovery-token-ca-cert-hash sha256:a4189d36d164a865be540d48fcd10ff13e2f90ed6e901201b6ea2baf96dae0ae
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

If you did not save the output from the successful initialization, that is not a problem; we can use kubeadm to list or generate tokens.

# List the existing tokens
$ kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
abcdef.0123456789abcdef   23h         2022-05-07T05:19:08Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

# If the token has expired, create a new one
$ kubeadm token create
e31cv1.lbtrzwp6mzon78ue
$ kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
abcdef.0123456789abcdef   23h         2022-05-07T05:19:08Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
e31cv1.lbtrzwp6mzon78ue   23h         2022-05-07T05:51:40Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

# If you cannot find the value for --discovery-token-ca-cert-hash, recompute it on the master with openssl
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
a4189d36d164a865be540d48fcd10ff13e2f90ed6e901201b6ea2baf96dae0ae
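Alternatively, kubeadm can generate a new token and print the complete join command in one step, which avoids assembling the token and hash by hand:

# Print a ready-to-use join command (this creates a new token)
kubeadm token create --print-join-command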

After the nodes have joined, listing the cluster nodes shows two more nodes, but they are all still NotReady; the next step is to deploy the CNI.

$ kubectl get nodes
NAME                                      STATUS     ROLES                  AGE    VERSION
tiny-calico-master-88-1.k8s.tcinternal    NotReady   control-plane,master   20m    v1.23.6
tiny-calico-worker-88-11.k8s.tcinternal   NotReady   <none>                 105s   v1.23.6
tiny-calico-worker-88-12.k8s.tcinternal   NotReady   <none>                 35s    v1.23.6

5. Install the CNI

5.1 Prepare the manifest file

Installing Calico is also fairly straightforward. The project offers several installation methods; here we install from the yaml file (custom manifests) and use etcd as the datastore.

# Download the official yaml template first, then modify the key fields one by one
curl https://projectcalico.docs.tigera.io/manifests/calico-etcd.yaml -O

In calico-etcd.yaml we need to modify a few parameters to fit our cluster:

  • CALICO_IPV4POOL_CIDR sets the pod network CIDR; here we use the previously planned 10.88.64.0/18. CALICO_IPV4POOL_BLOCK_SIZE sets the size of the per-node allocation blocks and defaults to 26.

    # The default IPv4 pool to create on startup if none exists. Pod IPs will be
    # chosen from this range. Changing this value after installation will have
    # no effect. This should fall within `--cluster-cidr`.
    - name: CALICO_IPV4POOL_CIDR
      value: "10.88.64.0/18"
    - name: CALICO_IPV4POOL_BLOCK_SIZE
      value: "26"
  • CALICO_IPV4POOL_IPIP controls whether IP-in-IP mode is enabled; the default is Always. Since all our nodes are on the same layer-2 network, either Never or CrossSubnet works here.

    Never disables IP-in-IP entirely, while CrossSubnet enables IP-in-IP only for traffic that crosses subnets.

    # Enable IPIP
    - name: CALICO_IPV4POOL_IPIP
      value: "Never"
  • The etcd_endpoints variable in the ConfigMap configures the etcd address and port. For security we enable TLS authentication here; if you do not want to deal with certificates you can skip TLS and simply leave the three certificate fields empty.

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: calico-config
      namespace: kube-system
    data:
      # Configure this with the location of your etcd cluster.
      # etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"
      # If you're using TLS enabled etcd uncomment the following.
      # You must also populate the Secret below with these files.
      # etcd_ca: ""   # "/calico-secrets/etcd-ca"
      # etcd_cert: "" # "/calico-secrets/etcd-cert"
      # etcd_key: ""  # "/calico-secrets/etcd-key"
      etcd_endpoints: "https://10.31.88.1:2379"
      etcd_ca: "/etc/kubernetes/pki/etcd/ca.crt"
      etcd_cert: "/etc/kubernetes/pki/etcd/server.crt"
      etcd_key: "/etc/kubernetes/pki/etcd/server.key"
  • In the Secret named calico-etcd-secrets, the data fields must contain the contents of the three certificate files above, converted to base64 with cat <file> | base64 -w 0 (see the sketch after this list).

    ---
    # Source: calico/templates/calico-etcd-secrets.yaml
    # The following contains k8s Secrets for use with a TLS enabled etcd cluster.
    # For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
    apiVersion: v1
    kind: Secret
    type: Opaque
    metadata:
      name: calico-etcd-secrets
      namespace: kube-system
    data:
      # Populate the following with etcd TLS configuration if desired, but leave blank if
      # not using TLS for etcd.
      # The keys below should be uncommented and the values populated with the base64
      # encoded contents of each file that would be associated with the TLS data.
      # Example command for encoding a file contents: cat <file> | base64 -w 0
      etcd-key: LS0tLS1CRUdJTi......tLS0tCg==
      etcd-cert: LS0tLS1CRUdJT......tLS0tLQo=
      etcd-ca: LS0tLS1CRUdJTiB......FLS0tLS0K
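As a sketch of how the three base64 values can be produced, assuming the kubeadm-generated etcd certificates referenced in the ConfigMap above:

# Base64-encode the etcd certificate files for the calico-etcd-secrets Secret
cat /etc/kubernetes/pki/etcd/ca.crt     | base64 -w 0   # value for etcd-ca
cat /etc/kubernetes/pki/etcd/server.crt | base64 -w 0   # value for etcd-cert
cat /etc/kubernetes/pki/etcd/server.key | base64 -w 0   # value for etcd-key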

5.2 Deploy Calico

Once the modifications are done, we can deploy it directly.

$ kubectl apply -f calico-etcd.yaml
secret/calico-etcd-secrets created
configmap/calico-config created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created

# Check that the pods are running properly
$ kubectl get pods -A
NAMESPACE     NAME                                                             READY   STATUS    RESTARTS        AGE
kube-system   calico-kube-controllers-5c4bd49f9b-6b2gr                         1/1     Running   5 (3m18s ago)   6m18s
kube-system   calico-node-bgsfs                                                1/1     Running   5 (2m55s ago)   6m18s
kube-system   calico-node-tr88g                                                1/1     Running   5 (3m19s ago)   6m18s
kube-system   calico-node-w59pc                                                1/1     Running   5 (2m36s ago)   6m18s
kube-system   coredns-6d8c4cb4d-r8r9q                                          1/1     Running   0               3h8m
kube-system   coredns-6d8c4cb4d-ztq6w                                          1/1     Running   0               3h8m
kube-system   etcd-tiny-calico-master-88-1.k8s.tcinternal                      1/1     Running   0               3h8m
kube-system   kube-apiserver-tiny-calico-master-88-1.k8s.tcinternal            1/1     Running   0               3h8m
kube-system   kube-controller-manager-tiny-calico-master-88-1.k8s.tcinternal   1/1     Running   0               3h8m
kube-system   kube-proxy-n65sb                                                 1/1     Running   0               169m
kube-system   kube-proxy-qmxhp                                                 1/1     Running   0               168m
kube-system   kube-proxy-v6cg9                                                 1/1     Running   0               3h8m
kube-system   kube-scheduler-tiny-calico-master-88-1.k8s.tcinternal            1/1     Running   0               3h8m

# Check the calico-kube-controllers pod logs for errors
$ kubectl logs -f calico-kube-controllers-5c4bd49f9b-6b2gr -n kube-system

5.3 Install calicoctl as a pod

calicoctl is the command-line tool for inspecting and managing Calico, roughly Calico's equivalent of kubectl. Since we chose etcd as Calico's datastore earlier, the simplest option here is to deploy calicoctl as a pod inside the k8s cluster.

  • The calicoctl version should match the deployed Calico version; here both are v3.22.2
  • calicoctl's etcd configuration should also match the deployed Calico; since we enabled TLS for etcd when deploying Calico, we have to modify the yaml file here to enable TLS as well
# To make later management easier, download the manifest locally before deploying it
$ wget https://projectcalico.docs.tigera.io/manifests/calicoctl-etcd.yaml

$ cat calicoctl-etcd.yaml
# Calico Version v3.22.2
# https://projectcalico.docs.tigera.io/releases#v3.22.2
# This manifest includes the following component versions:
#   calico/ctl:v3.22.2

apiVersion: v1
kind: Pod
metadata:
  name: calicoctl
  namespace: kube-system
spec:
  nodeSelector:
    kubernetes.io/os: linux
  hostNetwork: true
  containers:
  - name: calicoctl
    image: calico/ctl:v3.22.2
    command:
    - /calicoctl
    args:
    - version
    - --poll=1m
    env:
    - name: ETCD_ENDPOINTS
      valueFrom:
        configMapKeyRef:
          name: calico-config
          key: etcd_endpoints
    # If you're using TLS enabled etcd uncomment the following.
    # Location of the CA certificate for etcd.
    - name: ETCD_CA_CERT_FILE
      valueFrom:
        configMapKeyRef:
          name: calico-config
          key: etcd_ca
    # Location of the client key for etcd.
    - name: ETCD_KEY_FILE
      valueFrom:
        configMapKeyRef:
          name: calico-config
          key: etcd_key
    # Location of the client certificate for etcd.
    - name: ETCD_CERT_FILE
      valueFrom:
        configMapKeyRef:
          name: calico-config
          key: etcd_cert
    volumeMounts:
    - mountPath: /calico-secrets
      name: etcd-certs
  volumes:
    # If you're using TLS enabled etcd uncomment the following.
    - name: etcd-certs
      secret:
        secretName: calico-etcd-secrets

After making the changes we can deploy it and start using it.

$ kubectl apply -f calicoctl-etcd.yaml
pod/calicoctl created

# After creation, check that calicoctl is running
$ kubectl get pods -A | grep calicoctl
kube-system   calicoctl                                                        1/1     Running   0             9s

# Verify that it works
$ kubectl exec -ti -n kube-system calicoctl -- /calicoctl get nodes
NAME
tiny-calico-master-88-1.k8s.tcinternal
tiny-calico-worker-88-11.k8s.tcinternal
tiny-calico-worker-88-12.k8s.tcinternal

$ kubectl exec -ti -n kube-system calicoctl -- /calicoctl get profiles -o wide
NAME                                                 LABELS
projectcalico-default-allow
kns.default                                          pcns.kubernetes.io/metadata.name=default,pcns.projectcalico.org/name=default
kns.kube-node-lease                                  pcns.kubernetes.io/metadata.name=kube-node-lease,pcns.projectcalico.org/name=kube-node-lease
kns.kube-public                                      pcns.kubernetes.io/metadata.name=kube-public,pcns.projectcalico.org/name=kube-public
kns.kube-system                                      pcns.kubernetes.io/metadata.name=kube-system,pcns.projectcalico.org/name=kube-system
...a lot of output omitted here...

# Check the IPAM allocation
$ calicoctl ipam show
+----------+---------------+-----------+------------+--------------+
| GROUPING |     CIDR      | IPS TOTAL | IPS IN USE |   IPS FREE   |
+----------+---------------+-----------+------------+--------------+
| IP Pool  | 10.88.64.0/18 |     16384 | 2 (0%)     | 16382 (100%) |
+----------+---------------+-----------+------------+--------------+

# For convenience, set an alias in bashrc
cat >> ~/.bashrc <<EOF
alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"
EOF

For the full calicoctl command reference, see the official documentation.

5.4 Install calicoctl as a binary

Deploying calicoctl as a pod is simple, but it has one drawback: the calicoctl node commands do not work, because they need access to parts of the host filesystem. So here we also install a binary calicoctl.

Note that if you run calicoctl in a container, calicoctl node ... commands will not work (they need access to parts of the host filesystem).

# Simply download the binary and it is ready to use
$ cd /usr/local/bin/
$ curl -L https://github.com/projectcalico/calico/releases/download/v3.22.2/calicoctl-linux-amd64 -o calicoctl
$ chmod +x ./calicoctl

The binary calicoctl reads its configuration file first and only falls back to environment variables when no configuration file is found. Here we configure /etc/calico/calicoctl.cfg directly; note that the etcd certificates are simply the same ones used when deploying Calico earlier.

# Create the calicoctl configuration file
$ mkdir /etc/calico
$ cat /etc/calico/calicoctl.cfg
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: etcdv3
  etcdEndpoints: "https://10.31.88.1:2379"
  etcdCACert: |
    -----BEGIN CERTIFICATE-----
    MIIC9TCCAd2gAwIBAgIBADANBgkqhkiG9w0BAQsFADASMRAwDgYDVQQDEwdldGNkLWNhMB4XDTIyMDUwNjA1MTg1OVoXDTMyMDUwMzA1MTg1OVowEjEQMA4GA1UEAxMHZXRjZC1jYTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANFFqq4Mk3DE6UW581xnZPFrHqQWlGr/KptEywKH56Bp24OAnDIAkSz7KAMrJzL+OiVsj9YJV59F9qH/YzU+bppctDnfk1yCuavkcXgLSd9O6EBhM2LkGtF9AdWMnFw9ui2jNhFC/QXjzCvq0I1c9o9gulbFmSHwIw2GLQd7ogO+PpfLsubRscJdKkCUWVFV0mb8opccmXoFvXynRX0VW3wpN+v66bD+HTdMSNK1JljfBngh9LAkibjUx7bMrHvu/GOalNCSWrtGlss/hhWkzwV7Y7AIXgvxxcmDdfswe5lUYLvW2CP4e+tXfB3i2wg10fErc8z63lixv9BWkIIalScCAwEAAaNWMFQwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFH49PpnJYxze8aq0PVwgpY4Fo6djMBIGA1UdEQQLMAmCB2V0Y2QtY2EwDQYJKoZIhvcNAQELBQADggEBAAGL6KwN80YEK6gZcL+7RI9bkMKk7UWWV48154CgN8w9GKvNTm4l0tZKvsWCnR61hiJtLQcG0S8HYHAvL1DBjOXw11bNilLyvaVM+wqOOIxPsXLU//F46z3V9z1uV0v/yLLlg320c0wtG+OLZZIn8O+yUhtOHM09K0JSAF2/KhtNxhrc0owCTOzS+DKsb0w1SzQmS0t/tflyLfc3oJZ/2V4Tqd72j7iIcDBa36lGqtUBf8MXu+Xza0cdhy/f19AqkeM2fe+/DrbzR4zDVmZ7l4dqYGLbKHYoXaLn8bSToYQq4dlA/oAlyyH0ekB5v0DyYiHwlqgZgiu4qcR3Gw8azVk=
    -----END CERTIFICATE-----
  etcdCert: |
    -----BEGIN CERTIFICATE-----
    MIIDgzCCAmugAwIBAgIIePiBSOdMGwcwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UEAxMHZXRjZC1jYTAeFw0yMjA1MDYwNTE4NTlaFw0yMzA1MDYwNTE4NTlaMDExLzAtBgNVBAMTJnRpbnktY2FsaWNvLW1hc3Rlci04OC0xLms4cy50Y2ludGVybmFsMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAqZM/jBrdXLR3ctee7LVJhGSA4usg/JQXGyOAd52OkkOLYwn3fvwqeo0Z0cX0q4mqaF0cnrPYc4eExX/3fJpF3FxyD6vdpEZ/FrnzCAkibEYtK/UVhTKuV7n/VdbjFPGl8CpppuGVs6o+4NFZxffW7em08m/FK/7SDkV2qXCyG94kOaUCeDEgdBKE3cPCZQ4maFuwXi08bYs2CiTfbfa4dsT53yzaoQVX9BaBqE9IGmsHDFuxp1X8gkJXs+7wwHQX39o1oXmci6T4IVxVHA5GRbTvpCDG5Wye7QqKgnxO1KRF42FKs1Nif7UJ0iR35Ydpa7cat7Fr0M7l+rZLCDTJgwIDAQABo4G9MIG6MA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBR+PT6ZyWMc3vGqtD1cIKWOBaOnYzBaBgNVHREEUzBRgglsb2NhbGhvc3SCJnRpbnktY2FsaWNvLW1hc3Rlci04OC0xLms4cy50Y2ludGVybmFshwQKH1gBhwR/AAABhxAAAAAAAAAAAAAAAAAAAAABMA0GCSqGSIb3DQEBCwUAA4IBAQC+pyH14/+US5Svz04Vi8QIduY/DVx1HOQqhfrIZKOZCH2iKU7fZ4o9QpQZh7D9B8hgpXM6dNuFpd98c0MVPr+LesShu4BHVjHlgPvUWEVB2XD5x51HqnMV2OkhMKooyAUIzI0P0YKN29SFEyJGD1XDu4UtqvBADqf7COvAuqj4VbRgF/iQwNstjqZ47rSzvyp6rIwqFoHRP+Zi+8KL1qmozGjI3+H+TZFMGv3b5DRx2pmfY+kGVLO5bjl3zxylRPjCDHaRlQUWiOYSWS8OHYRCBZuSLvW4tht0JjWjUAh4hF8+3lyNrfx8moz7tfm5SG2q01pO1vjkhrhxhINAwaac
    -----END CERTIFICATE-----
  etcdKey: |
    -----BEGIN RSA PRIVATE KEY-----
    MIIEowIBAAKCAQEAqZM/jBrdXLR3ctee7LVJhGSA4usg/JQXGyOAd52OkkOLYwn3fvwqeo0Z0cX0q4mqaF0cnrPYc4eExX/3fJpF3FxyD6vdpEZ/FrnzCAkibEYtK/UVhTKuV7n/VdbjFPGl8CpppuGVs6o+4NFZxffW7em08m/FK/7SDkV2qXCyG94kOaUCeDEgdBKE3cPCZQ4maFuwXi08bYs2CiTfbfa4dsT53yzaoQVX9BaBqE9IGmsHDFuxp1X8gkJXs+7wwHQX39o1oXmci6T4IVxVHA5GRbTvpCDG5Wye7QqKgnxO1KRF42FKs1Nif7UJ0iR35Ydpa7cat7Fr0M7l+rZLCDTJgwIDAQABAoIBAE1gMw7q8zbp4dc1K/82eWU/ts/UGikmKaTofiYWboeu6ls2oQgAaCGjYLSnbw0Ws/sLAZQo3AtbOuojifoBKv9x71nXQjtDL5pfHtX71QkyvEniev9cMNE2vZudgeB8owsDT1ImfPiOJkLPQ/dhL2E/0qEM/xskGxUH/S0zjxHHfPZZsYODhkVPWc6Z+XEDll48fRCFn4/48FTN9GbRvo7dv34EHmNYA20K4DMHbZUdrPqSZpKWzAPJXnDlgZbpvUeAYOJxqZHQtCm1zbSOyM1Ql6K0Ayro0L5GAzap+0yGuk79OWiPnEsdPneVsATKG7dT7RZIL/INrOqQ0wjUmQECgYEA02OHdT1K5Au6wtiTqKD99WweltnvFd4C/Z3dobEj8M8qN6uiKCcaPievWahnxAlJEah3RiOgtarwA+0E/Jgsw99Qutp5BR/XdD3llTNczkPkg/RkWpve2f/4DlZQrxuIem7UNLl+5BacfmF691DQQoX2RoIkvQxYJGTUNXvrSUkCgYEAzVyzmvN+dvSwzAlm0gkfVP5Ez3DFESUrWd0FR2v1HR6qHQy/dkgkkic6zRGCJtGeT5V7N0kbVSHsz+wi6aQkFy0Sp0TbgZzjPhSwNtk+2JsBRvMp0CYczgrfyvWuAQ3gbXGcN8IkcZSSOv8TuigCnnYf2Xaz8LM50AivScnb6GsCgYEAyq4ScgnLpa3NawbnRPbfqRH6nl7lC01sBqn3mBHVSQ4JB4msF92uHsxEJ639mAvjIGgrvHdqnuT/7nOypVJvEXsr14ykHpKyLQUv/Idbw3V7RD3ufqYW3WS8/VorUEoQ6HsdQlRc4ur/L3ndwgWdOTtir6YW/aA5XuPCSGnBZekCgYB6VtlgW+Jg91BDnO41/d0+guN3ONUNa7kxpau5aqTxHg11lNySmFPBBcHP3LhOa94FxyVKQDEaPEWZcDE0QuaFMALGxwyFYHM3zpdTdYQtAdp26/Fi4PGUBYJgpI9ubVffmyjXRr7zMvESWFbmNWOqBvDeWgrEP+EW/7V9HdX11QKBgE1czchlibgQ/bhAl8BatKRr1X/UHvblWhmyApudOfFeGOILR6u/lWvYSS+Rg0y8nnZ4hTRSXbd/sSEsUJcSmoBc1TivWzl32eVuqe9CcrUZY0JSLtoj1KiPadRcCZtVDETXbW326Hvgz+MnqrIgzx+Zgy4tNtoAAbTv0q83j45I
    -----END RSA PRIVATE KEY-----
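If you would rather not keep a configuration file, the binary calicoctl can also be configured through environment variables (it falls back to them when no config file is found); a minimal sketch, assuming the same etcd endpoint and kubeadm-generated certificate paths used above:

# Drive the binary calicoctl via environment variables instead of calicoctl.cfg
export DATASTORE_TYPE=etcdv3
export ETCD_ENDPOINTS=https://10.31.88.1:2379
export ETCD_CA_CERT_FILE=/etc/kubernetes/pki/etcd/ca.crt
export ETCD_CERT_FILE=/etc/kubernetes/pki/etcd/server.crt
export ETCD_KEY_FILE=/etc/kubernetes/pki/etcd/server.key
calicoctl get nodes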

After the configuration is in place, let's check the result.

$ calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 10.31.88.11  | node-to-node mesh | up    | 08:26:30 | Established |
| 10.31.88.12  | node-to-node mesh | up    | 08:26:30 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

$ calicoctl get nodes
NAME
tiny-calico-master-88-1.k8s.tcinternal
tiny-calico-worker-88-11.k8s.tcinternal
tiny-calico-worker-88-12.k8s.tcinternal

$ calicoctl ipam show
+----------+---------------+-----------+------------+--------------+
| GROUPING |     CIDR      | IPS TOTAL | IPS IN USE |   IPS FREE   |
+----------+---------------+-----------+------------+--------------+
| IP Pool  | 10.88.64.0/18 |     16384 | 2 (0%)     | 16382 (100%) |
+----------+---------------+-----------+------------+--------------+

6. Deploy a test workload

With the cluster deployed, we deploy an nginx instance in the k8s cluster to verify that everything works. First we create a namespace called nginx-quic, then create a deployment called nginx-quic-deployment in that namespace to run the pods, and finally create a service to expose them; here we use a NodePort to expose the port, which makes testing easier.

$ cat nginx-quic.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-quic

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-quic-deployment
  namespace: nginx-quic
spec:
  selector:
    matchLabels:
      app: nginx-quic
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx-quic
    spec:
      containers:
      - name: nginx-quic
        image: tinychen777/nginx-quic:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: nginx-quic-service
  namespace: nginx-quic
spec:
  selector:
    app: nginx-quic
  ports:
  - protocol: TCP
    port: 8080 # match for service access port
    targetPort: 80 # match for pod access port
    nodePort: 30088 # match for external access port
  type: NodePort

After deployment we check the status.

# Apply it directly
$ kubectl apply -f nginx-quic.yaml
namespace/nginx-quic created
deployment.apps/nginx-quic-deployment created
service/nginx-quic-service created

# Check the state of the deployment
$ kubectl get deployment -o wide -n nginx-quic
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                          SELECTOR
nginx-quic-deployment   4/4     4            4           55s   nginx-quic   tinychen777/nginx-quic:latest   app=nginx-quic

# Check the state of the service
$ kubectl get service -o wide -n nginx-quic
NAME                 TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE   SELECTOR
nginx-quic-service   NodePort   10.88.52.168   <none>        8080:30088/TCP   66s   app=nginx-quic

# Check the state of the pods
$ kubectl get pods -o wide -n nginx-quic
NAME                                     READY   STATUS    RESTARTS   AGE   IP             NODE                                      NOMINATED NODE   READINESS GATES
nginx-quic-deployment-7457f4d579-24q9z   1/1     Running   0          75s   10.88.120.72   tiny-calico-worker-88-12.k8s.tcinternal   <none>           <none>
nginx-quic-deployment-7457f4d579-4svv9   1/1     Running   0          75s   10.88.84.68    tiny-calico-worker-88-11.k8s.tcinternal   <none>           <none>
nginx-quic-deployment-7457f4d579-btrjj   1/1     Running   0          75s   10.88.120.71   tiny-calico-worker-88-12.k8s.tcinternal   <none>           <none>
nginx-quic-deployment-7457f4d579-lvh6x   1/1     Running   0          75s   10.88.84.69    tiny-calico-worker-88-11.k8s.tcinternal   <none>           <none>

# Check the IPVS rules
$ ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.17.0.1:30088 rr
  -> 10.88.84.68:80               Masq    1      0          0
  -> 10.88.84.69:80               Masq    1      0          0
  -> 10.88.120.71:80              Masq    1      0          0
  -> 10.88.120.72:80              Masq    1      0          0
TCP  10.31.88.1:30088 rr
  -> 10.88.84.68:80               Masq    1      0          0
  -> 10.88.84.69:80               Masq    1      0          0
  -> 10.88.120.71:80              Masq    1      0          0
  -> 10.88.120.72:80              Masq    1      0          0
TCP  10.88.52.168:8080 rr
  -> 10.88.84.68:80               Masq    1      0          0
  -> 10.88.84.69:80               Masq    1      0          0
  -> 10.88.120.71:80              Masq    1      0          0
  -> 10.88.120.72:80              Masq    1      0          0

Finally we run some tests. By default, this nginx-quic image returns the client IP and port as seen by the nginx container.

# First, test from inside the cluster
# Access a pod directly
$ curl 10.88.84.68:80
10.31.88.1:34612
# Access the service's ClusterIP directly; the request is forwarded to a pod
$ curl 10.88.52.168:8080
10.31.88.1:58978
# Access the NodePort directly; the request is forwarded to a pod without going through the ClusterIP
$ curl 10.31.88.1:30088
10.31.88.1:56595

# Next, test from outside the cluster
# Access the NodePort on each of the three nodes; the request is forwarded to a pod without going through the ClusterIP
# Because externalTrafficPolicy defaults to Cluster, nginx sees the IP of the node we hit rather than the client IP
$ curl 10.31.88.1:30088
10.31.88.1:27851
$ curl 10.31.88.11:30088
10.31.88.11:16540
$ curl 10.31.88.12:30088
10.31.88.12:5767
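If nginx needs to see the real client IP on NodePort access, one option is to switch the service's externalTrafficPolicy to Local; note that traffic is then only forwarded to pods running on the node that received the request. A minimal sketch:

# Switch the service to Local so the client source IP is preserved
kubectl patch svc nginx-quic-service -n nginx-quic -p '{"spec":{"externalTrafficPolicy":"Local"}}'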
