Resolving a K8s Node NotReady State

Cause of the outage: I wanted to expose a Service through an externalIP and, at the same time, delete the old Pods. After the delete command ran, the nodes became unavailable.

Reproducing the mistaken operations

  1. Create a Service with an externalIP (a reconstructed manifest sketch follows this list)
  2. Scale the existing deployments/demo down to 0 replicas (this step turned out to be a big problem)
  3. Delete the existing Pod (the command hung outright, and afterwards all the worker nodes dropped off)
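
The services-externalip-demo.yaml file itself is not shown in the original post; reconstructed from the kubectl describe output below, it would have looked roughly like this. Notably, the externalIP 192.168.15.154 is the master node's own address, as the kubelet log and ping output later confirm:

apiVersion: v1
kind: Service
metadata:
  name: demo-externalip-service
  labels:
    app: demo-service
spec:
  type: ClusterIP
  selector:
    app: demo
  ports:
  - name: http
    port: 80
    targetPort: 80
  externalIPs:
  - 192.168.15.154    # caution: this is also the master's own IP
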
[root@k8s-master01 ~]# kubectl apply -f services-externalip-demo.yaml
service/demo-externalip-service created
[root@k8s-master01 ~]# kubectl get svc
NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP      PORT(S)   AGE
demo-externalip-service   ClusterIP   10.97.55.241   192.168.15.154   80/TCP    4s
kubernetes                ClusterIP   10.96.0.1      <none>           443/TCP   128d
[root@k8s-master01 ~]# kubectl describe po demo-6666947f9f-ggd4h
Name:           demo-6666947f9f-ggd4h
Namespace:      default
Priority:       0
Node:           k8s-node01/192.168.15.153
Start Time:     Fri, 02 Apr 2021 21:44:42 +0800
Labels:         app=demo
                pod-template-hash=6666947f9f
Annotations:    <none>
Status:         Running
IP:             10.244.3.9
Controlled By:  ReplicaSet/demo-6666947f9f
Containers:
  nginx:
    Container ID:   docker://b18e31dcb300803ce7e611eaecfa45a70b857b403e2dd3ff92db5a341e3306bb
    Image:          nginx:latest
    Image ID:       docker-pullable://nginx@sha256:10b8cc432d56da8b61b070f4c7d2543a9ed17c2b23010b43af434fd40e2ca4aa
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 02 Apr 2021 21:45:22 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-srskw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-srskw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-srskw
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From                 Message
  ----     ------            ----                 ----                 -------
  Warning  FailedScheduling  5h (x13 over 5h15m)  default-scheduler    0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
  Warning  FailedScheduling  19m                  default-scheduler    0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
  Normal   Scheduled         19m                  default-scheduler    Successfully assigned default/demo-6666947f9f-ggd4h to k8s-node01
  Normal   Pulling           <invalid>            kubelet, k8s-node01  Pulling image "nginx:latest"
  Normal   Pulled            <invalid>            kubelet, k8s-node01  Successfully pulled image "nginx:latest"
  Normal   Created           <invalid>            kubelet, k8s-node01  Created container nginx
  Normal   Started           <invalid>            kubelet, k8s-node01  Started container nginx
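
The FailedScheduling events blame untolerated taints on all three nodes. A quick way to list every node's taints is a jsonpath query along these lines (a sketch; output formatting may vary slightly across kubectl versions):

[root@k8s-master01 ~]# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].key}{"\n"}{end}'
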
[root@k8s-master01 ~]# kubectl describe -f services-externalip-demo.yaml
Name:              demo-externalip-service
Namespace:         default
Labels:            app=demo-service
Annotations:       kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"demo-service"},"name":"demo-externalip-service","namespa...
Selector:          app=demo
Type:              ClusterIP
IP:                10.97.55.241
External IPs:      192.168.15.154
Port:              http  80/TCP
TargetPort:        80/TCP
Endpoints:
Session Affinity:  None
Events:            <none>
[root@k8s-master01 ~]# kubectl get deployments
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
demo   0/1     1            0           5h16m
[root@k8s-master01 ~]# kubectl scale deployments/demo --replicas=0
deployment.extensions/demo scaled
[root@k8s-master01 ~]# kubectl get po
NAME                    READY   STATUS        RESTARTS   AGE
demo-6666947f9f-ggd4h   1/1     Terminating   0          5h16m
[root@k8s-master01 ~]# kubectl delete po demo-6666947f9f-ggd4h
pod "demo-6666947f9f-ggd4h" deleted
^C
[root@k8s-master01 ~]# kubectl get po
NAME                    READY   STATUS        RESTARTS   AGE
demo-6666947f9f-ggd4h   1/1     Terminating   0          5h19m
[root@k8s-master01 ~]# kubectl get deployments
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
demo   0/0     0            0           5h19m
[root@k8s-master01 ~]# kubectl create deploy demo --image=nginx:latest
Error from server (AlreadyExists): deployments.apps "demo" already exists
[root@k8s-master01 ~]# kubectl scale deployments/demo --replicas=3
deployment.extensions/demo scaled
[root@k8s-master01 ~]# kubectl get po
NAME                    READY   STATUS        RESTARTS   AGE
demo-6666947f9f-ggd4h   1/1     Terminating   0          5h20m
demo-6666947f9f-m42q2   0/1     Pending       0          3s
demo-6666947f9f-t58r7   0/1     Pending       0          3s
demo-6666947f9f-xcjzs   0/1     Pending       0          3s
[root@k8s-master01 ~]# kubectl describe po demo-6666947f9f-m42q2
Name:           demo-6666947f9f-m42q2
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=demo
                pod-template-hash=6666947f9f
Annotations:    <none>
Status:         Pending
IP:
Controlled By:  ReplicaSet/demo-6666947f9f
Containers:
  nginx:
    Image:        nginx:latest
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-srskw (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-srskw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-srskw
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  16s (x2 over 16s)  default-scheduler  0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS     ROLES    AGE    VERSION
k8s-master01   Ready      master   128d   v1.15.1
k8s-node01     NotReady   <none>   123d   v1.15.1
k8s-node02     NotReady   <none>   123d   v1.15.1
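
As an aside, a Pod stuck in Terminating on an unreachable node can usually be removed from the API with a force delete. This only clears the API object and does not guarantee the container actually stopped, so use it with care:

[root@k8s-master01 ~]# kubectl delete po demo-6666947f9f-ggd4h --grace-period=0 --force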

Fix attempt 1: reboot all the servers (failed)

[root@k8s-master01 ~]# poweroff
(connection dropped)
(reconnected)
Last login: Thu Apr  1 21:32:56 2021 from 192.168.15.1
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE    VERSION
k8s-master01   Ready      master   128d   v1.15.1
k8s-node01     NotReady   <none>   123d   v1.15.1
k8s-node02     NotReady   <none>   123d   v1.15.1
[root@k8s-master01 ~]# kubectl describe nodes k8s-node01
Name:               k8s-node01
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-node01
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"96:6b:71:dc:33:3d"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.15.153
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 29 Nov 2020 00:31:58 +0800
Taints:             node.kubernetes.io/unreachable:NoExecute
                    node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----             ------    -----------------                 ------------------                ------              -------
  MemoryPressure   Unknown   Fri, 02 Apr 2021 22:12:36 +0800   Thu, 01 Apr 2021 22:13:25 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure     Unknown   Fri, 02 Apr 2021 22:12:36 +0800   Thu, 01 Apr 2021 22:13:25 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure      Unknown   Fri, 02 Apr 2021 22:12:36 +0800   Thu, 01 Apr 2021 22:13:25 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready            Unknown   Fri, 02 Apr 2021 22:12:36 +0800   Thu, 01 Apr 2021 22:13:25 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
Addresses:
  InternalIP:  192.168.15.153
  Hostname:    k8s-node01
Capacity:
  cpu:                2
  ephemeral-storage:  100610052Ki
  hugepages-2Mi:      0
  memory:             4028688Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  92722223770
  hugepages-2Mi:      0
  memory:             3926288Ki
  pods:               110
System Info:
  Machine ID:                 87ff2ee4182e421680e90c865344076c
  System UUID:                48D64D56-CD82-7CD9-7265-00C117529BB5
  Boot ID:                    9059ab9f-2607-4fe1-880d-f5ccb9f8e784
  Kernel Version:             4.4.244-1.el7.elrepo.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.13
  Kubelet Version:            v1.15.1
  Kube-Proxy Version:         v1.15.1
PodCIDR:                     10.244.3.0/24
Non-terminated Pods:         (5 in total)
  Namespace                  Name                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                     ------------  ----------  ---------------  -------------  ---
  default                    demo-6666947f9f-m42q2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
  default                    demo-6666947f9f-t58r7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
  default                    demo-6666947f9f-xcjzs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
  kube-system                kube-flannel-ds-rk4t5    100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      123d
  kube-system                kube-proxy-fbtln         0 (0%)        0 (0%)      0 (0%)           0 (0%)         123d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                100m (5%)  100m (5%)
  memory             50Mi (1%)  50Mi (1%)
  ephemeral-storage  0 (0%)     0 (0%)
Events:
  Type     Reason                   Age                            From                    Message
  ----     ------                   ----                           ----                    -------
  Normal   Starting                 <invalid>                      kubelet, k8s-node01     Starting kubelet.
  Normal   NodeAllocatableEnforced  <invalid>                      kubelet, k8s-node01     Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  <invalid> (x2 over <invalid>)  kubelet, k8s-node01     Node k8s-node01 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    <invalid> (x2 over <invalid>)  kubelet, k8s-node01     Node k8s-node01 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     <invalid> (x2 over <invalid>)  kubelet, k8s-node01     Node k8s-node01 status is now: NodeHasSufficientPID
  Warning  Rebooted                 <invalid>                      kubelet, k8s-node01     Node k8s-node01 has been rebooted, boot id: 36b21c39-94a9-437f-bf8a-191eb5181dbe
  Normal   NodeReady                <invalid>                      kubelet, k8s-node01     Node k8s-node01 status is now: NodeReady
  Normal   Starting                 <invalid>                      kube-proxy, k8s-node01  Starting kube-proxy.
  Normal   Starting                 <invalid>                      kubelet, k8s-node01     Starting kubelet.
  Normal   NodeAllocatableEnforced  <invalid>                      kubelet, k8s-node01     Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  <invalid> (x2 over <invalid>)  kubelet, k8s-node01     Node k8s-node01 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    <invalid> (x2 over <invalid>)  kubelet, k8s-node01     Node k8s-node01 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     <invalid> (x2 over <invalid>)  kubelet, k8s-node01     Node k8s-node01 status is now: NodeHasSufficientPID
  Warning  Rebooted                 <invalid>                      kubelet, k8s-node01     Node k8s-node01 has been rebooted, boot id: 9059ab9f-2607-4fe1-880d-f5ccb9f8e784
  Normal   NodeReady                <invalid>                      kubelet, k8s-node01     Node k8s-node01 status is now: NodeReady
  Normal   Starting                 <invalid>                      kube-proxy, k8s-node01  Starting kube-proxy.
[root@k8s-master01 ~]# kubectl describe nodes k8s-node02
Name:               k8s-node02
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-node02
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"fe:73:ec:96:1c:45"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.15.152
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 28 Nov 2020 23:23:32 +0800
Taints:             node.kubernetes.io/unreachable:NoExecute
                    node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----             ------    -----------------                 ------------------                ------              -------
  MemoryPressure   Unknown   Fri, 02 Apr 2021 22:13:53 +0800   Thu, 01 Apr 2021 22:14:35 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure     Unknown   Fri, 02 Apr 2021 22:13:53 +0800   Thu, 01 Apr 2021 22:14:35 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure      Unknown   Fri, 02 Apr 2021 22:13:53 +0800   Thu, 01 Apr 2021 22:14:35 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready            Unknown   Fri, 02 Apr 2021 22:13:53 +0800   Thu, 01 Apr 2021 22:14:35 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
Addresses:
  InternalIP:  192.168.15.152
  Hostname:    k8s-node02
Capacity:
  cpu:                2
  ephemeral-storage:  100610052Ki
  hugepages-2Mi:      0
  memory:             4028688Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  92722223770
  hugepages-2Mi:      0
  memory:             3926288Ki
  pods:               110
System Info:
  Machine ID:                 cab995456bd34aab927d7b5cb22daf5c
  System UUID:                29144D56-00D2-A845-03B6-DBC78819D1F1
  Boot ID:                    0c9f8d85-10ad-4d7f-8c80-e752a420a7e5
  Kernel Version:             4.4.244-1.el7.elrepo.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.13
  Kubelet Version:            v1.15.1
  Kube-Proxy Version:         v1.15.1
PodCIDR:                     10.244.2.0/24
Non-terminated Pods:         (2 in total)
  Namespace                  Name                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                     ------------  ----------  ---------------  -------------  ---
  kube-system                kube-flannel-ds-lpdbl    100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      123d
  kube-system                kube-proxy-tkwvb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         123d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                100m (5%)  100m (5%)
  memory             50Mi (1%)  50Mi (1%)
  ephemeral-storage  0 (0%)     0 (0%)
Events:
  Type     Reason                   Age                            From                    Message
  ----     ------                   ----                           ----                    -------
  Normal   NodeAllocatableEnforced  <invalid>                      kubelet, k8s-node02     Updated Node Allocatable limit across pods
  Normal   Starting                 <invalid>                      kubelet, k8s-node02     Starting kubelet.
  Warning  Rebooted                 <invalid>                      kubelet, k8s-node02     Node k8s-node02 has been rebooted, boot id: 0768fc77-bb23-4fe9-ae2e-ee987dc16b54
  Normal   NodeReady                <invalid>                      kubelet, k8s-node02     Node k8s-node02 status is now: NodeReady
  Normal   NodeHasNoDiskPressure    <invalid> (x2 over <invalid>)  kubelet, k8s-node02     Node k8s-node02 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     <invalid> (x2 over <invalid>)  kubelet, k8s-node02     Node k8s-node02 status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  <invalid> (x2 over <invalid>)  kubelet, k8s-node02     Node k8s-node02 status is now: NodeHasSufficientMemory
  Normal   Starting                 <invalid>                      kube-proxy, k8s-node02  Starting kube-proxy.
  Normal   NodeAllocatableEnforced  <invalid>                      kubelet, k8s-node02     Updated Node Allocatable limit across pods
  Normal   Starting                 <invalid>                      kubelet, k8s-node02     Starting kubelet.
  Normal   NodeHasSufficientMemory  <invalid> (x2 over <invalid>)  kubelet, k8s-node02     Node k8s-node02 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    <invalid> (x2 over <invalid>)  kubelet, k8s-node02     Node k8s-node02 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     <invalid> (x2 over <invalid>)  kubelet, k8s-node02     Node k8s-node02 status is now: NodeHasSufficientPID
  Warning  Rebooted                 <invalid>                      kubelet, k8s-node02     Node k8s-node02 has been rebooted, boot id: 0c9f8d85-10ad-4d7f-8c80-e752a420a7e5
  Normal   NodeReady                <invalid>                      kubelet, k8s-node02     Node k8s-node02 status is now: NodeReady
  Normal   Starting                 <invalid>                      kube-proxy, k8s-node02  Starting kube-proxy.

Fix attempt 2: delete the externalIP Service and restart kubelet

I next suspected that the Service itself had been created incorrectly.

[root@k8s-master01 ~]# kubectl get svc
NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP      PORT(S)   AGE
demo-externalip-service   ClusterIP   10.97.55.241   192.168.15.154   80/TCP    50m
kubernetes                ClusterIP   10.96.0.1      <none>           443/TCP   129d
[root@k8s-master01 ~]# kubectl delete -f services-externalip-demo.yaml
service "demo-externalip-service" deleted
[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   129d
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE    VERSION
k8s-master01   Ready      master   129d   v1.15.1
k8s-node01     NotReady   <none>   123d   v1.15.1
k8s-node02     NotReady   <none>   123d   v1.15.1
[root@k8s-master01 ~]# systemctl restart kubelet
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE    VERSION
k8s-master01   Ready      master   129d   v1.15.1
k8s-node01     NotReady   <none>   123d   v1.15.1
k8s-node02     NotReady   <none>   123d   v1.15.1
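
Restarting kubelet on the master does not affect the workers' node heartbeats; if anything, the restart would belong on the worker nodes (a sketch, assuming SSH access to the nodes still works) — though the log below shows their kubelets were in fact running and merely unable to reach the API server:

[root@k8s-node01 ~]# systemctl restart kubelet
[root@k8s-node02 ~]# systemctl restart kubelet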

Next I looked at the kubelet logs on the worker nodes and found they could not reach port 6443 on the master:
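
On a systemd-managed node the kubelet log can be followed with something like the following (the exact command used was not recorded in the original):

[root@k8s-node02 ~]# journalctl -u kubelet -f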

Apr 02 22:59:06 k8s-node02 kubelet[46204]: E0402 22:59:06.527365   46204 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://192.168.15.154:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.15.154:6443: connect: connection refused

On the master, port 6443 was up and the API server was listening:

[root@k8s-master01 ~]# netstat -lntup|grep 6443
tcp6       0      0 :::6443                 :::*                    LISTEN      2017/kube-apiserver
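
Worth noting: the Service's externalIP is the same 192.168.15.154 the workers use to reach the API server. One hedged check would be to see what rules kube-proxy had programmed for that address (for this Service they should only match port 80, not 6443, but the check rules the theory in or out):

[root@k8s-master01 ~]# iptables-save | grep 192.168.15.154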

I then turned to the Pods in the kube-system namespace:

[root@k8s-master01 ~]# kubectl get po -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-gskbg               1/1     Running   14         129d
coredns-5c98db65d4-kgrls               1/1     Running   14         129d
etcd-k8s-master01                      1/1     Running   14         129d
kube-apiserver-k8s-master01            1/1     Running   18         129d
kube-controller-manager-k8s-master01   1/1     Running   22         129d
kube-flannel-ds-25ch5                  1/1     Running   8          128d
kube-flannel-ds-lpdbl                  0/1     Error     2          123d
kube-flannel-ds-rk4t5                  0/1     Error     2          123d
kube-proxy-6ksnl                       1/1     Running   14         129d
kube-proxy-fbtln                       0/1     Error     2          123d
kube-proxy-tkwvb                       0/1     Error     2          123d
kube-scheduler-k8s-master01            1/1     Running   26         129d

Describing the Error-state flannel Pods showed nothing conclusive in their events:

 Normal  SandboxChanged  <invalid>  kubelet, k8s-node02  Pod sandbox changed, it will be killed and re-created.
 Normal  Pulled          <invalid>  kubelet, k8s-node02  Container image "jmgao1983/flannel:latest" already present on machine
 Normal  Created         <invalid>  kubelet, k8s-node02  Created container install-cni
 Normal  Started         <invalid>  kubelet, k8s-node02  Started container install-cni
 Normal  Pulled          <invalid>  kubelet, k8s-node02  Container image "jmgao1983/flannel:latest" already present on machine
 Normal  Created         <invalid>  kubelet, k8s-node02  Created container kube-flannel
 Normal  Started         <invalid>  kubelet, k8s-node02  Started container kube-flannel
 Normal  SandboxChanged  <invalid>  kubelet, k8s-node02  Pod sandbox changed, it will be killed and re-created.
 Normal  Pulled          <invalid>  kubelet, k8s-node02  Container image "jmgao1983/flannel:latest" already present on machine
 Normal  Created         <invalid>  kubelet, k8s-node02  Created container install-cni

Fetching the Pods' logs reported that port 10250 on the worker nodes was unreachable, even though the kubelet was in fact listening on 10250 on those nodes:

[root@k8s-master01 ~]# kubectl logs -f kube-flannel-ds-lpdbl -n kube-system
Error from server: Get https://192.168.15.152:10250/containerLogs/kube-system/kube-flannel-ds-lpdbl/kube-flannel?follow=true: dial tcp 192.168.15.152:10250: connect: no route to host
[root@k8s-master01 ~]# kubectl logs -f kube-proxy-fbtln -n kube-system
Error from server: Get https://192.168.15.153:10250/containerLogs/kube-system/kube-proxy-fbtln/kube-proxy?follow=true: dial tcp 192.168.15.153:10250: connect: no route to host
[root@k8s-master01 ~]# netstat -lntup|grep 10250
tcp6       0      0 :::10250                :::*                    LISTEN      70516/kubelet

The kubelet listening on 10250 on the worker nodes:

[root@k8s-node01 ~]# netstat -lntup|grep 10250
tcp6       0      0 :::10250                :::*                    LISTEN      46643/kubelet
[root@k8s-node02 ~]# netstat -lntup|grep 10250
tcp6       0      0 :::10250                :::*                    LISTEN      46204/kubelet
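
A quick connectivity probe of the kubelet port from the master (a sketch, assuming curl is installed; 10250 serves TLS and requires auth, so even a 401 response would prove the network path works):

[root@k8s-master01 ~]# curl -k https://192.168.15.153:10250/healthz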

That finally focused my attention on "connect: no route to host".

Pinging the master from the worker nodes worked:

[root@k8s-node01 ~]# ping 192.168.15.154
PING 192.168.15.154 (192.168.15.154) 56(84) bytes of data.
64 bytes from 192.168.15.154: icmp_seq=1 ttl=64 time=0.080 ms
64 bytes from 192.168.15.154: icmp_seq=2 ttl=64 time=0.041 ms
64 bytes from 192.168.15.154: icmp_seq=3 ttl=64 time=0.064 ms
[root@k8s-node02 ~]# ping 192.168.15.154
PING 192.168.15.154 (192.168.15.154) 56(84) bytes of data.
64 bytes from 192.168.15.154: icmp_seq=1 ttl=64 time=0.047 ms
64 bytes from 192.168.15.154: icmp_seq=2 ttl=64 time=0.040 ms

But connecting from the master to the worker nodes failed:

[root@k8s-master01 ~]# telnet 192.168.15.153 10250
Trying 192.168.15.153...
telnet: connect to address 192.168.15.153: No route to host
[root@k8s-master01 ~]# ping 192.168.15.153
PING 192.168.15.153 (192.168.15.153) 56(84) bytes of data.
From 192.168.15.154 icmp_seq=1 Destination Host Unreachable
From 192.168.15.154 icmp_seq=2 Destination Host Unreachable
From 192.168.15.154 icmp_seq=3 Destination Host Unreachable
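
"Destination Host Unreachable" for a host on the same /24 usually means ARP resolution is failing rather than routing. Checking the neighbor table would have been a natural next step (a sketch; a FAILED or INCOMPLETE entry here would confirm the ARP failure):

[root@k8s-master01 ~]# ip neigh show 192.168.15.153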

I then checked the network configuration on all three machines and found nothing wrong (it had worked fine before); the firewall was disabled and the routing table looked normal:

[root@k8s-master01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
[root@k8s-master01 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@k8s-master01 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.15.2    0.0.0.0         UG    100    0        0 ens33
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.244.2.0      10.244.2.0      255.255.255.0   UG    0      0        0 flannel.1
10.244.3.0      10.244.3.0      255.255.255.0   UG    0      0        0 flannel.1
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.15.0    0.0.0.0         255.255.255.0   U     100    0        0 ens33

The final finding: the master could not ping the worker nodes, while the workers could ping the master just fine.

After rebooting the master and one of the worker nodes, ping worked again and the worker re-registered automatically:

[root@k8s-master01 ~]# ping www.baidu.com
PING www.a.shifen.com (14.215.177.38) 56(84) bytes of data.
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=1 ttl=128 time=33.2 ms
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=2 ttl=128 time=34.1 ms
^C
--- www.a.shifen.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 33.253/33.701/34.149/0.448 ms
[root@k8s-master01 ~]# ping 192.168.15.153
PING 192.168.15.153 (192.168.15.153) 56(84) bytes of data.
64 bytes from 192.168.15.153: icmp_seq=1 ttl=64 time=0.351 ms
64 bytes from 192.168.15.153: icmp_seq=2 ttl=64 time=0.309 ms
^C
--- 192.168.15.153 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.309/0.330/0.351/0.021 ms
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE    VERSION
k8s-master01   Ready      master   129d   v1.15.1
k8s-node01     Ready      <none>   123d   v1.15.1

Summary: the first reboot probably failed to fix things because it never fully completed. Check whether every service under kube-system has started successfully; for any that has not, use kubectl logs -f to read its logs and work out where the problem lies. As a rule of thumb, when kube-proxy is broken the root cause is usually network connectivity.
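
A minimal post-recovery check along the lines of the summary above, as a sketch (assumes bash and kubectl on the master; it lists every kube-system Pod that is not Running and tails its log):

[root@k8s-master01 ~]# kubectl get nodes
[root@k8s-master01 ~]# for p in $(kubectl get po -n kube-system --field-selector=status.phase!=Running -o name); do echo "== $p =="; kubectl logs --tail=20 -n kube-system "$p"; done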
