Table of Contents

  • Offline Installation of Kubernetes
    • 1. Environment Preparation
    • 2. Configure the FTP Service
    • 3. Install Docker
    • 4. Deploy Kubernetes
    • 5. Kubernetes Optimization
    • 6. Configure kube-proxy IPVS
    • 7. Regenerate the Cluster Join Token

Offline Installation of Kubernetes


Software package download (extraction code: y8zq)

IP Address   OS        Role
10.0.0.30    CentOS 7  master node; Harbor registry
10.0.0.40    CentOS 7  worker node
10.0.0.50    CentOS 7  worker node

1. Environment Preparation

Disable the firewall and clear its rules, disable SELinux, turn off the swap partition, and configure the /etc/hosts name mappings.

Perform the following operations on all three nodes.

Upgrade the system kernel.

Enable IP forwarding.
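
Since these preparation steps repeat on every node, they can also be fanned out from one host over ssh. A minimal sketch, assuming passwordless root ssh to all three nodes is already configured (this guide does not otherwise set that up):

#Hypothetical fan-out of the firewall/SELinux/swap steps from one host
for host in 10.0.0.30 10.0.0.40 10.0.0.50; do
  ssh root@"$host" 'systemctl stop firewalld && systemctl disable firewalld;
                    setenforce 0; sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config;
                    swapoff -a && sed -i.bak "/swap/s/^/#/" /etc/fstab'
done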

$ systemctl stop firewalld
$ systemctl disable firewalld
$ firewall-cmd --state
not running
$ iptables -F
$ iptables -X
$ iptables -Z
$ /usr/sbin/iptables-save
$ sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
$ setenforce 0
$ swapoff -a
$ sed -i.bak '/swap/s/^/#/' /etc/fstab
$ cat >>/etc/hosts<<EOF
10.0.0.30 xnode1
10.0.0.40 xnode2
10.0.0.50 xnode3
EOF
#Upgrade the system kernel:
#Import the public key of the elrepo repository
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
#Install the elrepo yum repository
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
#Enable the elrepo-kernel repository and install the latest mainline kernel
yum --enablerepo=elrepo-kernel install kernel-ml -y
#List the available kernels and their boot order
$ sudo awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
0 : CentOS Linux (5.19.1-1.el7.elrepo.x86_64) 7 (Core)
1 : CentOS Linux (3.10.0-327.el7.x86_64) 7 (Core)
2 : CentOS Linux (0-rescue-1924e95064844f2ca7d9aad6ae89d2a3) 7 (Core)
#Regenerate the grub configuration; the first kernel in the menu becomes the default
$ grub2-set-default 0
$ grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.19.1-1.el7.elrepo.x86_64
Found initrd image: /boot/initramfs-5.19.1-1.el7.elrepo.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-327.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-327.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-1924e95064844f2ca7d9aad6ae89d2a3
Found initrd image: /boot/initramfs-0-rescue-1924e95064844f2ca7d9aad6ae89d2a3.img
done
#Reboot and verify
$ reboot
$ uname -r
5.19.1-1.el7.elrepo.x86_64
#Remove the old kernel packages
yum remove kernel-tools kernel -y
#Enable IP forwarding
cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ modprobe br_netfilter
$ sysctl -p
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
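
Note that modprobe br_netfilter does not survive a reboot; the bridge-nf sysctls fail to apply without the module. A minimal sketch of persisting the module load with systemd-modules-load (the file name is an arbitrary choice):

cat > /etc/modules-load.d/br_netfilter.conf << EOF
# Load br_netfilter at boot so the bridge-nf sysctls keep applying
br_netfilter
EOF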

Configure IP addresses on the three hosts:

#Run the following on the xnode1 master node
[root@xnode1 ~]# nmcli connection modify eno16777736 connection.autoconnect yes ipv4.addresses 10.0.0.30/24 ipv4.gateway 10.0.0.2 ipv4.dns 8.8.8.8 ipv4.method manual
[root@xnode1 ~]# nmcli connection down eno16777736 && nmcli connection up eno16777736 && nmcli connection reload eno16777736
[root@xnode1 ~]# ifconfig | grep inet | awk NR==12'{print$2}'
10.0.0.30
[root@xnode1 ~]# ping www.baidu.com -c 4
PING www.a.shifen.com (14.215.177.38) 56(84) bytes of data.
64 bytes from 14.215.177.38: icmp_seq=1 ttl=128 time=27.3 ms
64 bytes from 14.215.177.38: icmp_seq=2 ttl=128 time=27.6 ms
64 bytes from 14.215.177.38: icmp_seq=3 ttl=128 time=27.2 ms
64 bytes from 14.215.177.38: icmp_seq=4 ttl=128 time=27.4 ms
--- www.a.shifen.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3008ms
rtt min/avg/max/mdev = 27.233/27.443/27.662/0.193 ms
#Run the following on the xnode2 worker node
[root@xnode2 ~]# nmcli connection modify eno16777736 connection.autoconnect yes ipv4.addresses 10.0.0.40/24 ipv4.gateway 10.0.0.2 ipv4.dns 8.8.8.8 ipv4.method manual
[root@xnode2 ~]# nmcli connection down eno16777736 && nmcli connection up eno16777736 && nmcli connection reload eno16777736
[root@xnode2 ~]# ifconfig | grep inet | awk NR==12'{print$2}'
10.0.0.40
[root@xnode2 ~]# ping www.baidu.com -c 4
PING www.a.shifen.com (14.215.177.38) 56(84) bytes of data.
64 bytes from 14.215.177.38: icmp_seq=1 ttl=128 time=27.3 ms
64 bytes from 14.215.177.38: icmp_seq=2 ttl=128 time=27.6 ms
64 bytes from 14.215.177.38: icmp_seq=3 ttl=128 time=27.2 ms
64 bytes from 14.215.177.38: icmp_seq=4 ttl=128 time=27.4 ms
--- www.a.shifen.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3008ms
rtt min/avg/max/mdev = 27.233/27.443/27.662/0.193 ms
#Run the following on the xnode3 worker node
[root@xnode3 ~]# nmcli connection modify eno16777736 connection.autoconnect yes ipv4.addresses 10.0.0.50/24 ipv4.gateway 10.0.0.2 ipv4.dns 8.8.8.8 ipv4.method manual
[root@xnode3 ~]# nmcli connection down eno16777736 && nmcli connection up eno16777736 && nmcli connection reload eno16777736
[root@xnode3 ~]# ifconfig | grep inet | awk NR==12'{print$2}'
10.0.0.50
[root@xnode3 ~]# ping www.baidu.com -c 4
PING www.a.shifen.com (14.215.177.38) 56(84) bytes of data.
64 bytes from 14.215.177.38: icmp_seq=1 ttl=128 time=27.3 ms
64 bytes from 14.215.177.38: icmp_seq=2 ttl=128 time=27.6 ms
64 bytes from 14.215.177.38: icmp_seq=3 ttl=128 time=27.2 ms
64 bytes from 14.215.177.38: icmp_seq=4 ttl=128 time=27.4 ms
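
The ifconfig | awk NR==12 pipeline above depends on the exact line ordering of ifconfig output, which varies with the number of interfaces. A steadier check of the same thing, assuming the same interface name:

#Print the IPv4 address of the configured interface directly
ip -4 addr show eno16777736 | awk '/inet /{print $2}'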

2. Configure the FTP Service

[root@xnode1 ~]# yum install vsftpd.x86_64 -y
[root@xnode1 ~]# cp -rp /etc/vsftpd/vsftpd.conf{,.bak}
[root@xnode1 ~]# sed -i '1i\anon_root=/' /etc/vsftpd/vsftpd.conf #Set the anonymous FTP root so the other nodes can reach the repositories
[root@xnode1 ~]# systemctl start vsftpd && systemctl enable vsftpd #Start the vsftpd service and enable it at boot
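
Before moving on, it is worth confirming from a worker node that the anonymous share actually exposes the directories; a quick check, assuming curl is installed:

#List the shared /opt directory over FTP from xnode2
[root@xnode2 ~]# curl --list-only ftp://10.0.0.30/opt/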

3. Install Docker

Configure the Docker and Kubernetes repositories.

Perform the following operations on all three nodes.

①. Upload the Docker repository files to the /opt directory

②. Create the local Docker yum repository

③. Configure the Aliyun registry mirror and deploy docker-compose

[root@xnode1 ~]# ll /opt/Docker/Docker/
total 28
drwxr-xr-x 2 root root 20480 Aug 16 17:39 base
drwxr-xr-x 2 root root  4096 Aug 16 17:39 repodata
[root@xnode1 ~]# cat >>/etc/yum.repos.d/local.repo<<EOF
[centos]
name=centos
baseurl=file:///tmp/centos
gpgcheck=0
enabled=1
[Docker]
name=Docker
baseurl=file:///opt/Docker/Docker/
gpgcheck=0
enabled=1
[Kubernetes]
name=Kubernetes
baseurl=file:///root/kubernetes/Kubernetes/
gpgcheck=0
enabled=1
EOF
#Mount the installation media for the local centos repository and make the mount persistent across reboots
[root@xnode1 ~]# mkdir -p /tmp/centos
[root@xnode1 ~]# mount /dev/cdrom /tmp/centos/
[root@xnode1 ~]# echo /dev/cdrom /tmp/centos/ iso9660 defaults 0 0 >> /etc/fstab
[root@xnode1 ~]# yum clean all
[root@xnode1 ~]# yum makecache fast
[root@xnode1 ~]# yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * elrepo: hkg.mirror.rackspace.com
 * epel: hkg.mirror.rackspace.com
repo id                    repo name                                                              status
Docker                     Docker                                                                    341
centos                     centos                                                                  3,723
elrepo                     ELRepo.org Community Enterprise Linux Repository - el7                    150
epel/x86_64                Extra Packages for Enterprise Linux 7 - x86_64                         13,758
kubernetes                 kubernetes                                                                341
#Install Docker: install the dependency packages, then Docker itself
[root@xnode1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@xnode1 ~]# yum install docker-ce-18.09.6 docker-ce-cli-18.09.6 containerd.io -y
#Configure the Docker registry mirror and cgroup driver
[root@xnode1 ~]# mkdir -p /etc/docker/
[root@xnode1 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://5twf62k1.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@xnode1 ~]# systemctl daemon-reload
[root@xnode1 ~]# systemctl start docker.service
[root@xnode1 ~]# systemctl enable docker.service
[root@xnode1 ~]# chmod +x /opt/Docker/compose/docker-compose
[root@xnode1 ~]# mv /opt/Docker/compose/docker-compose /usr/local/bin/
[root@xnode1 ~]# docker-compose --version
docker-compose version 1.24.1, build 4667896b
#xnode2 node: configure the docker and ftp repositories
[root@xnode2 ~]# mkdir -p /tmp/centos
[root@xnode2 ~]# mount /dev/cdrom /tmp/centos/
[root@xnode2 ~]# cat >>/etc/yum.repos.d/local.repo<<EOF
[Docker]
name=Docker
baseurl=ftp://10.0.0.30/opt/Docker/Docker/
gpgcheck=0
enabled=1
[Centos]
name=Centos
baseurl=ftp://10.0.0.30/tmp/centos/
gpgcheck=0
enabled=1
[Kubernetes]
name=Kubernetes
baseurl=ftp://10.0.0.30/root/kubernetes/Kubernetes/
gpgcheck=0
enabled=1
EOF
#Clear the cache and reload the available yum repositories
[root@xnode2 ~]# yum clean all
[root@xnode2 ~]# yum makecache fast
[root@xnode2 ~]# yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * elrepo: mirror.rackspace.com
repo id                repo name                                                             status
Centos                 Centos                                                                4,070
Docker                 Docker                                                                  463
elrepo                 ELRepo.org Community Enterprise Linux Repository - el7                  150
repolist: 4,683
#Install docker on the xnode2 node and configure the Aliyun registry mirror
[root@xnode2 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@xnode2 ~]# yum install docker-ce -y
[root@xnode2 opt]# chmod +x docker-compose
[root@xnode2 opt]# mv docker-compose /usr/local/bin/
[root@xnode2 opt]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://5twf62k1.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@xnode2 ~]# systemctl daemon-reload
[root@xnode2 ~]# systemctl start docker.service && systemctl enable docker.service
[root@xnode2 ~]# docker info | grep Cgroup
Cgroup Driver: systemd
#xnode3 is configured the same way as xnode2, so the steps are not repeated here
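
A small offline-friendly verification that the daemon is running and the cgroup driver matches on every node (no image pull involved); the loop assumes root ssh access between the nodes:

#Check the docker server version and cgroup driver on all three nodes
for host in 10.0.0.30 10.0.0.40 10.0.0.50; do
  echo "== $host =="
  ssh root@"$host" 'docker version --format "{{.Server.Version}}"; docker info 2>/dev/null | grep -i cgroup'
done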

4. Deploy Kubernetes

Prepare the deployment environment:

①. Configure chrony time synchronization

②. Create the /etc/sysctl.d/k8s.conf file on all nodes

③. Configure IPVS

#Configure chrony time synchronization
[root@xnode1 ~]# yum install chrony -y
#On the xnode1 master node: comment out the default NTP servers, point at upstream public NTP servers, and allow the other nodes to sync time from this host
[root@xnode1 ~]# sed -i 's/^server/#&/' /etc/chrony.conf
[root@xnode1 ~]# cat >> /etc/chrony.conf << EOF
server 0.asia.pool.ntp.org iburst
server 1.asia.pool.ntp.org iburst
server 2.asia.pool.ntp.org iburst
server 3.asia.pool.ntp.org iburst
allow all
EOF
[root@xnode1 ~]# systemctl start chronyd.service && systemctl enable chronyd
#On the xnode2 and xnode3 worker nodes: point at the internal master node as the upstream NTP server, then start the service and enable it at boot
sed -i 's/^server/#&/' /etc/chrony.conf
echo server 10.0.0.30 iburst >> /etc/chrony.conf
systemctl start chronyd.service && systemctl enable chronyd
#Verify
[root@xnode1 ~]# chronyc sources
210 Number of sources = 4
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^? 103-153-176-123.as131657>     0   6     0     -     +0ns[   +0ns] +/-    0ns
^+ 202.28.116.236                1   6   177     2   +157us[ +157us] +/-  195ms
^* time.cloudflare.com           3   6   177     3  +6250us[+5670us] +/-  109ms
^- 125-228-203-179.hinet-ip>     2   6    77    64    +29ms[  +28ms] +/-   57ms
[root@xnode2 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* xnode1                        4   6    37    27  -2446ns[-1655us] +/-  114ms
[root@xnode3 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^? xnode1                        2   6     1     2  -6092us[-6092us] +/-   42ms
#Perform the following on xnode1, xnode2, and xnode3
#Create the k8s.conf file
cat << EOF | tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
#Configure ipvs
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack   #on kernels >= 4.19 (such as the 5.19 kernel installed above), nf_conntrack_ipv4 was merged into nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
#Install the ipset packages
yum install ipset ipvsadm -y
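
To confirm the modules actually loaded, especially after the kernel upgrade in section 1, list them:

#The ip_vs_* and nf_conntrack modules should all be present
lsmod | grep -e ip_vs -e nf_conntrack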

Install Kubernetes:

①. Install the Kubernetes packages; perform on all nodes

②. Initialize Kubernetes; perform on the master node

③. Configure the kubectl tool and install the Kubernetes network add-on; perform on the master node

④. Join the xnode2 and xnode3 nodes to the Kubernetes cluster; perform on xnode2 and xnode3

⑤. Install the Kubernetes dashboard; perform on the master node

yum install -y kubelet-1.14.1 kubeadm-1.14.1 kubectl-1.14.1
systemctl enable kubelet.service && systemctl start kubelet.service
#Initialize kubernetes
[root@xnode1 ~]# kubeadm init --apiserver-advertise-address 10.0.0.30 \
--kubernetes-version="v1.14.1" --pod-network-cidr=10.244.0.0/16 \
--image-repository=registry.aliyuncs.com/google_containers
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.30:6443 --token yqctmc.j325mz4wzhwq3nfl \
--discovery-token-ca-cert-hash sha256:3fd5ab2000b407f01ef601461990796f84cc8a6bc5f5c99b461f7d3eda0fcc1b
[root@xnode1 ~]# mkdir -p $HOME/.kube
[root@xnode1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@xnode1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
#Check the cluster status
[root@xnode1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
#Configure the kubernetes network
[root@xnode1 ~]# wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
[root@xnode1 ~]# kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
#Check the pod status in kube-system
[root@xnode1 ~]# kubectl get pod -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-8686dcc4fd-f9pqv         1/1     Running   0          12m
coredns-8686dcc4fd-mtcmn         1/1     Running   0          12m
etcd-xnode1                      1/1     Running   0          12m
kube-apiserver-xnode1            1/1     Running   0          11m
kube-controller-manager-xnode1   1/1     Running   0          11m
kube-flannel-ds-amd64-frh8l      1/1     Running   0          77s
kube-proxy-q7z99                 1/1     Running   0          12m
kube-scheduler-xnode1            1/1     Running   0          12m
#Join the xnode2 and xnode3 nodes to the kubernetes cluster
kubeadm join 10.0.0.30:6443 --token yqctmc.j325mz4wzhwq3nfl \
--discovery-token-ca-cert-hash sha256:3fd5ab2000b407f01ef601461990796f84cc8a6bc5f5c99b461f7d3eda0fcc1b
[root@xnode1 ~]# kubectl get node
NAME     STATUS     ROLES    AGE   VERSION
xnode1   Ready      master   16m   v1.14.1
xnode2   NotReady   <none>   22s   v1.14.1
xnode3   NotReady   <none>   22s   v1.14.1
#Install the kubernetes dashboard
[root@xnode1 ~]# kubectl apply -f \
https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
#Expose the dashboard on a NodePort
[root@xnode1 ~]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
# Change type: ClusterIP to type: NodePort
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort   ==> modified
#Check the assigned port number
[root@xnode1 ~]# kubectl get svc -A |grep kubernetes-dashboard
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.111.84.203   <none>        8000/TCP                 2m56s
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.99.20.86     <none>        443:32429/TCP            2m56s
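
With the NodePort known (32429 in this run), the dashboard is reachable from a browser at https://<any-node-ip>:32429, since a NodePort listens on every node. A quick reachability check, assuming curl (-k skips verification of the dashboard's self-signed certificate):

[root@xnode1 ~]# curl -k -I https://10.0.0.30:32429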
#Create an access account
[root@xnode1 ~]# cat >>dash.yaml<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
[root@xnode1 ~]# kubectl apply -f dash.yaml
serviceaccount/admin-user unchanged
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
#Retrieve the access token
[root@xnode1 ~]# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXA5NjRyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5NjIxOTAwYy0xZGUxLTExZWQtODFjNy0wMDBjMjkyY2U5YTUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.D6Tq2IK2mxMkwFQ03m16VfpGfjXmtdMUdRpKmW-wsZvQqRxNXsTC1iaA4-dwqoIpRnUmBzILevZ6AtTO911zC5F2KsMSdLcuV0zHeFc1MqR14CD63r_HweGZEHU1668OLrV_1RhTVj2htPj7osA1HzQ3pev7c5I2Ro6fhVRVmnnS3Js8jdaAZMjV38e6uCuzrqOxPF4l-eBXJmtvsf4-XMioGRbkt7GOVrxzSTxBTCLm3vlPowFVRgY71qqtorr36gnaL4n7RJGS0WjOYuBKdywpyg_k5EjhqSJx3rrnvnhuUHiebfMSC6gDhpWv0HGaXYP81wWsBJo0cYwODyanpg
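
Note that this secret-based lookup is specific to older clusters such as the v1.14 one built here; on v1.24 and later, ServiceAccount token secrets are no longer auto-created and the token would instead be minted on demand:

#On Kubernetes v1.24+ only; not needed for the v1.14 cluster built here
kubectl -n kubernetes-dashboard create token admin-user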

5. Kubernetes Optimization

Configure kubectl command completion:

[root@xnode1 ~]# source /usr/share/bash-completion/bash_completion
[root@xnode1 ~]# source <(kubectl completion bash)
[root@xnode1 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
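
The lines above only cover the current user; an optional extension that persists completion system-wide and wires up a short k alias (the alias name is an arbitrary choice):

[root@xnode1 ~]# kubectl completion bash > /etc/bash_completion.d/kubectl
[root@xnode1 ~]# echo "alias k=kubectl" >> ~/.bashrc
[root@xnode1 ~]# echo "complete -o default -F __start_kubectl k" >> ~/.bashrc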

Run kubectl commands on the xnode2 node:

[root@xnode1 ~]# scp -r ~/.kube/ 10.0.0.40:~/
[root@xnode2 ~]# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-8686dcc4fd-f9pqv         1/1     Running   1          3h48m
coredns-8686dcc4fd-mtcmn         1/1     Running   1          3h48m
etcd-xnode1                      1/1     Running   1          3h47m
kube-apiserver-xnode1            1/1     Running   1          3h47m
kube-controller-manager-xnode1   1/1     Running   1          3h47m
kube-flannel-ds-amd64-67htn      1/1     Running   2          3h32m
kube-flannel-ds-amd64-frh8l      1/1     Running   1          3h36m
kube-proxy-q7z99                 1/1     Running   1          3h48m
kube-proxy-vw9wf                 1/1     Running   1          3h32m
kube-scheduler-xnode1            1/1     Running   1          3h47m

6. Configure kube-proxy IPVS

Enable IPVS:

①. Modify the kube-proxy configuration

②. Restart kube-proxy

[root@xnode1 ~]# kubectl edit cm kube-proxy -n kube-system
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"               ==> set mode to ipvs
    nodePortAddresses: null
    oomScoreAdj: -999
    portRange: ""
    resourceContainer: /kube-proxy
    udpIdleTimeout: 250ms
#Restart kube-proxy by deleting its pods; the DaemonSet recreates them with the new configuration
[root@xnode1 ~]# kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-q7z99" deleted
pod "kube-proxy-vw9wf" deleted

Verify IPVS:

①. Since kube-proxy's configuration has been modified through the ConfigMap, any nodes added later will use IPVS mode directly. Check the logs here to confirm, and see the ipvsadm check after the log output.

#Inspect the kube-proxy pod in detail
[root@xnode1 ~]# kubectl describe pod -n kube-system kube-proxy
Name:               kube-proxy-ptlm8
Namespace:          kube-system
Priority:           2000001000
PriorityClassName:  system-node-critical
Node:               xnode1/10.0.0.30
Start Time:         Wed, 17 Aug 2022 03:45:04 -0400
Labels:             controller-revision-hash=5f46cbf776
                    k8s-app=kube-proxy
                    pod-template-generation=1
Annotations:        <none>
Status:             Running
IP:                 10.0.0.30
Controlled By:      DaemonSet/kube-proxy
Containers:
  kube-proxy:
    Container ID:  docker://b7fc4a3b55063900c62415803b30097342e29a2dadd26277c7aac238cb49de55
    Image:         registry.aliyuncs.com/google_containers/kube-proxy:v1.14.1
    Image ID:      docker-pullable://registry.aliyuncs.com/google_containers/kube-proxy@sha256:44af2833c6cbd9a7fc2e9d2f5244a39dfd2e31ad91bf9d4b7d810678db738ee9
    Port:          <none>
    Host Port:     <none>
#Check the log of the kube-proxy-ptlm8 pod
[root@xnode1 ~]# kubectl logs kube-proxy-ptlm8 -n kube-system
I0817 07:45:05.902030       1 server_others.go:177] Using ipvs Proxier.     ==> IPVS is now in use
W0817 07:45:05.902743       1 proxier.go:381] IPVS scheduler not specified, use rr by default
I0817 07:45:05.902982       1 server.go:555] Version: v1.14.1
I0817 07:45:05.922512       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0817 07:45:05.923301       1 config.go:102] Starting endpoints config controller
I0817 07:45:05.923338       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0817 07:45:05.923354       1 config.go:202] Starting service config controller
I0817 07:45:05.923416       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0817 07:45:06.023978       1 controller_utils.go:1034] Caches are synced for endpoints config controller
I0817 07:45:06.024736       1 controller_utils.go:1034] Caches are synced for service config controller
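
Beyond the logs, the kernel-level IPVS table can be inspected directly with the ipvsadm tool installed earlier; the ClusterIP of each service should show up as a virtual server using the rr scheduler noted in the log above:

#Dump the IPVS virtual servers and their real-server backends
[root@xnode1 ~]# ipvsadm -Ln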

7. Regenerate the Cluster Join Token

[root@xnode1 ~]# kubeadm token create --print-join-command
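
This runs on the master, since kubeadm token create reads the admin credentials in /etc/kubernetes. The bootstrap token printed by kubeadm init expires after 24 hours by default, which is the usual reason a fresh one is needed. The command prints a ready-to-run join command; the shape of the output (token and hash here are placeholders, not real values):

kubeadm join 10.0.0.30:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<ca-cert-hash>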
