k8s dial tcp connect: no route to host
Out of nowhere, a pod container failed to start: Back-off restarting failed container.
The pods of a StatefulSet zookeeper cluster stayed stuck in Pending; the PVs are backed by NFS.
[root@hadoop03 k8s]# kubectl get pods
NAME READY STATUS RESTARTS AGE
zk-0 0/1 Pending 0 16s
[root@hadoop03 k8s]# kubectl describe pod zk-0
Name: zk-0
Namespace: default
Priority: 0
Node: <none>
Labels:         app=zk
                controller-revision-hash=zk-84975cc754
                statefulset.kubernetes.io/pod-name=zk-0
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/zk
Containers:
  kubernetes-zookeeper:
    Image:       kubebiz/zookeeper
    Ports:       2181/TCP, 2888/TCP, 3888/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Command:
      sh
      -c
      start-zookeeper --servers=3 --data_dir=/var/lib/zookeeper/data --data_log_dir=/var/lib/zookeeper/data/log --conf_dir=/opt/zookeeper/conf --client_port=2181 --election_port=3888 --server_port=2888 --tick_time=2000 --init_limit=10 --sync_limit=5 --heap=512M --max_client_cnxns=60 --snap_retain_count=3 --purge_interval=12 --max_session_timeout=40000 --min_session_timeout=4000 --log_level=INFO
    Requests:
      cpu:     100m
      memory:  512Mi
    Liveness:     exec [sh -c zookeeper-ready 2181] delay=10s timeout=5s period=10s #success=1 #failure=3
    Readiness:    exec [sh -c zookeeper-ready 2181] delay=10s timeout=5s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/lib/zookeeper from datadir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8xqxv (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-zk-0
    ReadOnly:   false
  kube-api-access-8xqxv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ---                ----               -------
  Warning  FailedScheduling  24s (x2 over 25s)  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
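When a pod is Pending on "unbound immediate PersistentVolumeClaims", the useful next step is to follow the chain PVC → StorageClass → provisioner pod. A quick triage sequence, using the names from this cluster (run on the master; no output shown since it depends on cluster state):

```shell
# Is the claim itself Pending, and what does its event log say?
kubectl get pvc datadir-zk-0
kubectl describe pvc datadir-zk-0

# Does the StorageClass the claim references actually exist?
kubectl get storageclass managed-nfs-storage

# Is the dynamic provisioner that serves this class healthy?
kubectl get pod -n nfs-client
```

Here the last command is what exposed the real problem: the NFS provisioner pod was crash-looping, so no PV could ever be created for the claim.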
zookeeper.yaml
[root@hadoop03 k8s]# cat zookeeper.yaml
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      #affinity:
      #  podAffinity:
      #    requiredDuringSchedulingIgnoredDuringExecution:
      #      - labelSelector:
      #          matchExpressions:
      #            - key: "app"
      #              operator: In
      #              values:
      #                - zk
      #        topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: "kubebiz/zookeeper"
        resources:
          requests:
            memory: "512Mi"
            cpu: "0.1"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"  ### binds the PVC to the StorageClass with this metadata name (PVC is created in the same namespace)
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
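One thing worth noting: `volume.beta.kubernetes.io/storage-class` is the legacy beta annotation. On current Kubernetes the `spec.storageClassName` field does the same binding and is the recommended form. An equivalent `volumeClaimTemplates` fragment would look like this (a sketch; everything else in the manifest stays unchanged):

```yaml
volumeClaimTemplates:
- metadata:
    name: datadir
  spec:
    storageClassName: managed-nfs-storage   # replaces the beta annotation above
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 1Gi
```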
The StorageClass, its matching NFS provisioner, and a test PVC:
[root@hadoop03 NFS]# cat nfs-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs    ### must match PROVISIONER_NAME in the provisioner Deployment below
parameters:
  archiveOnDelete: "false"
######### NFS provisioner #########
[root@hadoop03 NFS]# cat nfs-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: nfs-client
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs    ### must match the StorageClass provisioner field
        - name: NFS_SERVER
          value: 192.168.153.103
        - name: NFS_PATH
          value: /nfs/data
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.153.103
          path: /nfs/data
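The Deployment references `serviceAccountName: nfs-client-provisioner`, which is not shown above. Without a ServiceAccount and RBAC the provisioner cannot create PVs even when networking works. A minimal sketch of what it typically needs, with names assumed to follow the upstream nfs-client-provisioner example (adjust namespace/names to your setup):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nfs-client
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nfs-client-provisioner-runner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: nfs-client
```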
[root@hadoop03 NFS]#
######### pvc #########
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-sc-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"  ### the StorageClass metadata name
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
##################
[root@hadoop03 k8s]# kubectl get pod -n nfs-client
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-764f44f754-gww47 0/1 CrashLoopBackOff 7 (2m22s ago) 17h
test-pod 0/1 Completed 0 17h
[root@hadoop03 k8s]# kubectl describe pod nfs-client-provisioner-764f44f754-htndd -n nfs-client
Name: nfs-client-provisioner-764f44f754-htndd
Namespace: nfs-client
Priority: 0
Node: hadoop03/192.168.153.103
Start Time: Fri, 19 Nov 2021 09:56:49 +0800
Labels:       app=nfs-client-provisioner
              pod-template-hash=764f44f754
Annotations: <none>
Status: Running
IP: 10.244.0.8
IPs:
  IP:  10.244.0.8
Controlled By: ReplicaSet/nfs-client-provisioner-764f44f754
Containers:
  nfs-client-provisioner:
    Container ID:   docker://7fe4e55942d2f104c56ce092800eb7270b23f6f237f7b3a29ad7e974cd756bbd
    Image:          quay.io/external_storage/nfs-client-provisioner:latest
    Image ID:       docker-pullable://quay.io/external_storage/nfs-client-provisioner@sha256:022ea0b0d69834b652a4c53655d78642ae23f0324309097be874fb58d09d2919
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Fri, 19 Nov 2021 09:57:29 +0800
      Finished:     Fri, 19 Nov 2021 09:57:33 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Fri, 19 Nov 2021 09:57:09 +0800
      Finished:     Fri, 19 Nov 2021 09:57:12 +0800
    Ready:          False
    Restart Count:  2
    Environment:
      PROVISIONER_NAME:  fuseim.pri/ifs
      NFS_SERVER:        192.168.153.103
      NFS_PATH:          /nfs/data
    Mounts:
      /persistentvolumes from nfs-client-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vkwdg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  nfs-client-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.153.103
    Path:      /nfs/data
    ReadOnly:  false
  kube-api-access-vkwdg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ---                ----               -------
  Normal   Scheduled  51s                default-scheduler  Successfully assigned nfs-client/nfs-client-provisioner-764f44f754-htndd to hadoop03
  Normal   Pulled     43s                kubelet            Successfully pulled image "quay.io/external_storage/nfs-client-provisioner:latest" in 6.395868709s
  Normal   Pulled     33s                kubelet            Successfully pulled image "quay.io/external_storage/nfs-client-provisioner:latest" in 4.951636594s
  Normal   Pulling    16s (x3 over 49s)  kubelet            Pulling image "quay.io/external_storage/nfs-client-provisioner:latest"
  Normal   Pulled     12s                kubelet            Successfully pulled image "quay.io/external_storage/nfs-client-provisioner:latest" in 4.671464188s
  Normal   Created    11s (x3 over 42s)  kubelet            Created container nfs-client-provisioner
  Normal   Started    11s (x3 over 42s)  kubelet            Started container nfs-client-provisioner
  Warning  BackOff    6s (x2 over 27s)   kubelet            Back-off restarting failed container
#######################
[root@hadoop03 k8s]# kubectl logs nfs-client-provisioner-764f44f754-htndd -n nfs-client
F1119 01:58:09.702588 1 provisioner.go:180] Error getting server version: Get https://10.1.0.1:443/version?timeout=32s: dial tcp 10.1.0.1:443: connect: no route to host
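The address 10.1.0.1 is the ClusterIP of the default `kubernetes` Service, which pods use to reach the API server via kube-proxy's NAT rules. "no route to host" against that VIP therefore points at stale or broken iptables rules on the node, not at the API server itself, and that is exactly what the flush-and-restart below repairs. Before flushing, the diagnosis can be confirmed on the affected node with something like the following (the VIP and chain name assume this cluster and kube-proxy's iptables mode):

```shell
# Confirm which ClusterIP the in-cluster API endpoint uses
kubectl get svc kubernetes -n default

# From the node, probe the VIP directly (TLS errors are fine; "no route
# to host" is the failure we are looking for)
curl -k https://10.1.0.1:443/version

# Inspect the NAT rules kube-proxy programmed for that VIP
iptables -t nat -L KUBE-SERVICES -n | grep 10.1.0.1
```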
[root@hadoop03 k8s]#
Fix:
[root@hadoop03 k8s]# service docker stop
Redirecting to /bin/systemctl stop docker.service
Warning: Stopping docker.service, but it can still be activated by:docker.socket
[root@hadoop03 k8s]# systemctl stop docker.socket
[root@hadoop03 k8s]# service kubelet stop
Redirecting to /bin/systemctl stop kubelet.service
[root@hadoop03 k8s]# iptables --flush
[root@hadoop03 k8s]# iptables -tnat --flush
[root@hadoop03 k8s]# service docker start
Redirecting to /bin/systemctl start docker.service
[root@hadoop03 k8s]# service kubelet start
Redirecting to /bin/systemctl start kubelet.service
[root@hadoop03 k8s]#
[root@hadoop03 k8s]# kubectl get pod -n nfs-client
NAME                                      READY   STATUS      RESTARTS      AGE
nfs-client-provisioner-764f44f754-htndd 1/1 Running 8 (2m7s ago) 14m
test-pod                                  0/1     Completed   0             17h
[root@hadoop03 NFS]# kubectl get pod
NAME READY STATUS RESTARTS AGE
zk-0 1/1 Running 0 54m
zk-1 1/1 Running 0 21m
zk-2 1/1 Running 0 19m
[root@hadoop03 NFS]#
[root@hadoop03 NFS]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
datadir-zk-0 Bound pvc-39361e04-7a61-4a6e-863f-018f73ed1557 1Gi RWO managed-nfs-storage 54m
datadir-zk-1 Bound pvc-daaf2112-cbdd-4da5-90d3-662cace5b0ab 1Gi RWO managed-nfs-storage 21m
datadir-zk-2 Bound pvc-4964f707-e4de-4db0-8d4e-45a5b0c29b60 1Gi RWO managed-nfs-storage 20m
test-sc-pvc Bound pvc-d6d17ca7-8b05-4068-bcdc-0f4443196274 1Gi RWX managed-nfs-storage 47m
[root@hadoop03 NFS]#
[root@hadoop03 NFS]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-39361e04-7a61-4a6e-863f-018f73ed1557 1Gi RWO Delete Bound default/datadir-zk-0 managed-nfs-storage 22m
pvc-4964f707-e4de-4db0-8d4e-45a5b0c29b60 1Gi RWO Delete Bound default/datadir-zk-2 managed-nfs-storage 19m
pvc-a8158cfe-5950-4ef5-b90c-0941a8fa082c 1Mi RWX Delete Bound nfs-client/test-claim managed-nfs-storage 17h
pvc-d6d17ca7-8b05-4068-bcdc-0f4443196274 1Gi RWX Delete Bound default/test-sc-pvc managed-nfs-storage 22m
pvc-daaf2112-cbdd-4da5-90d3-662cace5b0ab 1Gi RWO Delete Bound default/datadir-zk-1 managed-nfs-storage 21m
Follow-up
k8s master HA
Run the following on every master node:
systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -tnat --flush
systemctl start kubelet
systemctl start docker
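The steps above can be collected into one script. Flushing iptables wipes all of kube-proxy's service NAT rules, so the services must be stopped first and restarted afterwards so the rules are rebuilt from scratch. A minimal sketch (assumes systemd and kube-proxy in iptables mode; run as root on each master):

```shell
#!/bin/sh
# Stop the components that keep (re)writing iptables rules
systemctl stop kubelet
systemctl stop docker

# Wipe the filter and NAT tables; kube-proxy regenerates them on restart
iptables --flush
iptables -t nat --flush

# Bring the node back; kube-proxy reprograms the service VIP rules
systemctl start kubelet
systemctl start docker
```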
Reference: https://github.com/kubernetes/kubeadm/issues/193#issuecomment
Further reading: https://tencentcloudcontainerteam.github.io/2019/12/15/no-route-to-host/