Kubernetes CKA Certification Ops Engineer Notes - Kubernetes Troubleshooting

  • 1. Application Troubleshooting
  • 2. Master and Worker Node Troubleshooting
  • 3. Service Access Troubleshooting

1. Application Troubleshooting

# Specify the resource TYPE (Pod, Deployment, etc.), followed by the name
kubectl describe TYPE/NAME
# View logs; if a Pod has two containers, use -c to specify the container name
kubectl logs TYPE/NAME [-c CONTAINER]
# Exec into the container to inspect it
kubectl exec POD [-c CONTAINER] -- COMMAND [args...]
[root@k8s-master ~]# kubectl get pods
NAME                                     READY   STATUS             RESTARTS   AGE
client1                                  1/1     Running            5          2d23h
client2                                  0/1     ImagePullBackOff   4          2d23h
configmap-demo-pod                       0/1     ImagePullBackOff   3          4d10h
my-pod2                                  1/1     Running            11         4d17h
nfs-client-provisioner-58d675cd5-dx7n4   0/1     ImagePullBackOff   6          4d11h
pod-taint                                1/1     Running            9          10d
secret-demo-pod                          1/1     Running            4          4d9h
sh                                       1/1     Running            6          4d10h
test-76846b5956-gftn9                    1/1     Running            2          4d10h
test-76846b5956-r7s9k                    1/1     Running            2          4d10h
test-76846b5956-trpbn                    1/1     Running            2          4d10h
test2-78c4694588-87b9r                   1/1     Running            5          4d12h
web-0                                    1/1     Running            4          4d11h
web-1                                    0/1     ImagePullBackOff   3          4d11h
web-2                                    0/1     ImagePullBackOff   3          4d11h
web-96d5df5c8-vc9kf                      1/1     Running            3          3d
[root@k8s-master ~]# kubectl describe pod web-96d5df5c8-vc9kf
Name:         web-96d5df5c8-vc9kf
Namespace:    default
Priority:     0
Node:         k8s-node2/10.0.0.63
Start Time:   Wed, 22 Dec 2021 22:11:51 +0800
Labels:       app=web
              pod-template-hash=96d5df5c8
Annotations:  cni.projectcalico.org/podIP: 10.244.169.158/32
              cni.projectcalico.org/podIPs: 10.244.169.158/32
Status:       Running
IP:           10.244.169.158
IPs:
  IP:           10.244.169.158
Controlled By:  ReplicaSet/web-96d5df5c8
Containers:
  nginx:
    Container ID:   docker://f3243ba267e377896e3c5de8a2909d9dd12ed3b2a3fbd80b0094711e5a3f8c81
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:366e9f1ddebdb844044c2fafd13b75271a9f620819370f8971220c2b330a9254
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 25 Dec 2021 22:09:45 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Fri, 24 Dec 2021 15:08:58 +0800
      Finished:     Sat, 25 Dec 2021 22:02:39 +0800
    Ready:          True
    Restart Count:  3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8grtj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-8grtj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8grtj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age    From     Message
  ----    ------          ----   ----     -------
  Normal  SandboxChanged  10m    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulling         9m38s  kubelet  Pulling image "nginx"
  Normal  SandboxChanged  4m5s   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulling         3m53s  kubelet  Pulling image "nginx"
  Normal  Pulled          3m37s  kubelet  Successfully pulled image "nginx" in 16.296403014s
  Normal  Created         3m36s  kubelet  Created container nginx
  Normal  Started         3m36s  kubelet  Started container nginx
[root@k8s-master ~]# kubectl logs web-96d5df5c8-vc9kf
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2021/12/25 14:09:46 [notice] 1#1: using the "epoll" event method
2021/12/25 14:09:46 [notice] 1#1: nginx/1.21.4
2021/12/25 14:09:46 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2021/12/25 14:09:46 [notice] 1#1: OS: Linux 3.10.0-1160.45.1.el7.x86_64
2021/12/25 14:09:46 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2021/12/25 14:09:46 [notice] 1#1: start worker processes
2021/12/25 14:09:46 [notice] 1#1: start worker process 31
2021/12/25 14:09:46 [notice] 1#1: start worker process 32
[root@k8s-master ~]# kubectl exec -it web-96d5df5c8-vc9kf bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@web-96d5df5c8-vc9kf:/# exit
exit
[root@k8s-master ~]# kubectl exec -it web-96d5df5c8-vc9kf -- bash
root@web-96d5df5c8-vc9kf:/#
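
The listing above also shows several Pods stuck in ImagePullBackOff (client2, configmap-demo-pod, web-1, web-2). A minimal triage sketch for that state, using the same commands and the Pod names from the listing:

# Events usually explain why the pull keeps failing (wrong image name/tag, registry auth, network)
kubectl describe pod client2 | tail -n 20
# Confirm which image the Pod is actually trying to pull
kubectl get pod client2 -o jsonpath='{.spec.containers[*].image}'
# kubectl logs / kubectl exec need a started container, so they cannot help until the image pull succeeds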

2. Master and Worker Node Troubleshooting

Master node (control plane) components:

  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler

Worker node components:

  • kubelet
  • kube-proxy

Kubernetes cluster architecture diagram

First, determine how the cluster was deployed:
1. kubeadm
Except for the kubelet, all components run as static Pods.

[root@k8s-master ~]# ls /etc/kubernetes/manifests/
etcd.yaml            kube-controller-manager.yaml
kube-apiserver.yaml  kube-scheduler.yaml
[root@k8s-master ~]# cat /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: cgroupfs
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97769f7c7-c4cg5   1/1     Running   3          30h
calico-node-4pwdc                         1/1     Running   16         33d
calico-node-9r6zd                         1/1     Running   16         33d
calico-node-vqzdj                         1/1     Running   17         33d
client1                                   1/1     Running   5          2d23h
coredns-6d56c8448f-gcgrh                  1/1     Running   16         33d
coredns-6d56c8448f-mdl7c                  1/1     Running   2          30h
etcd-k8s-master                           1/1     Running   3          30h
filebeat-5pwh7                            1/1     Running   11         10d
filebeat-pt848                            1/1     Running   11         10d
kube-apiserver-k8s-master                 1/1     Running   3          30h
kube-controller-manager-k8s-master        1/1     Running   3          30h
kube-proxy-87lbj                          1/1     Running   3          30h
kube-proxy-mcdnv                          1/1     Running   2          30h
kube-proxy-mchc9                          1/1     Running   2          30h
kube-scheduler-k8s-master                 1/1     Running   3          30h
metrics-server-84f9866fdf-rz676           1/1     Running   15         4d15h
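
Because the kubeadm control-plane components are static Pods, the kubelet recreates them from the manifests under /etc/kubernetes/manifests. A rough sketch of how they might be inspected or restarted, assuming Docker is the container runtime (as the kubelet logs later in this note suggest); <container-id> is a placeholder:

# Read a control-plane component's log through the API server
kubectl logs kube-controller-manager-k8s-master -n kube-system
# If the apiserver itself is down and kubectl no longer responds, fall back to the runtime on the master
docker ps | grep kube-apiserver
docker logs <container-id>
# Restart a static Pod by moving its manifest out of the directory and back
mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
sleep 20
mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/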

2. Binary deployment
All components are managed by systemd (see the sketch below).
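
Since every component runs as a systemd unit in a binary deployment, status and logs come from systemctl and journalctl. A minimal sketch, assuming the units are named after the components (kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy):

# Check whether the unit is running and why it may have failed
systemctl status kube-apiserver -l
# Read its recent logs, or follow them live
journalctl -u kube-apiserver --since "1 hour ago"
journalctl -u kube-apiserver -f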

Common problems:

  • Network unreachable (see the connectivity sketch after this list)

    • ping the node
    • telnet to the node's port
  • Startup failure, usually a configuration-file or dependent-service problem
  • Platform incompatibility
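A quick connectivity sketch for the "network unreachable" case, using the node IPs of this cluster (10.0.0.61-63) and assuming telnet is installed; 6443 is the apiserver port and 10250 the kubelet port:

# Can the master reach a worker node at all?
ping -c 3 10.0.0.62
# Are the key ports listening and reachable?
telnet 10.0.0.61 6443
telnet 10.0.0.62 10250
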
# For a kubeadm deployment, view the kube-apiserver-k8s-master Pod logs directly
[root@k8s-master ~]# kubectl logs kube-apiserver-k8s-master -n kube-system
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I1225 14:12:18.558307       1 server.go:625] external host was not specified, using 10.0.0.61
I1225 14:12:18.558513       1 server.go:163] Version: v1.19.3
I1225 14:12:19.123232       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1225 14:12:19.123294       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1225 14:12:19.124128       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1225 14:12:19.124167       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1225 14:12:19.126549       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.126601       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.139669       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.139693       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.146921       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.146944       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.147383       1 client.go:360] parsed scheme: "passthrough"
I1225 14:12:19.147669       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:12:19.147718       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:12:19.194115       1 master.go:271] Using reconciler: lease
I1225 14:12:19.194533       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.194550       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.221352       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.221377       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.230469       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.230511       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.240139       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.240181       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.255518       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.255555       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.265105       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.265191       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.275038       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.275076       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.285281       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.285336       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.302076       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.302102       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.314415       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.314679       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.327616       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.327671       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.338580       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.338901       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.354401       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.354487       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.363624       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.363651       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.376090       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.376133       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.386480       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.386534       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.394978       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.395030       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.404842       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.404888       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.559645       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.559692       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.576723       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.576767       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.588265       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.588284       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.596125       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.596145       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.608161       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.608212       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.619144       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.619196       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.626852       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.626895       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.644521       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.644550       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.658031       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.658090       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.669971       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.670265       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.692800       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.692836       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.708784       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.708826       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.734898       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.735032       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.755957       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.755982       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.772847       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.772872       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.788862       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.788886       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.803723       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.803754       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.818516       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.818551       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.826818       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.826857       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.837298       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.837339       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.844194       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.844217       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.857209       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.857597       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.867066       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.867181       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.877262       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.877302       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.889062       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.889099       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.896457       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.902303       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.910393       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.910423       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.927814       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.927861       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.940076       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.940098       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.952012       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.952115       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.961099       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.961123       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.975537       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.975585       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.988067       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.988145       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:19.995939       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:19.995965       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:20.018436       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:20.018502       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:20.109379       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:20.109398       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:20.121750       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:20.121777       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:20.138751       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:20.138786       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:20.148112       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:20.151713       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:20.161554       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:20.161578       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:20.175335       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:20.175359       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:20.193425       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:20.194080       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:20.262691       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:20.262740       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:20.277204       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:20.277249       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:20.299607       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:20.299713       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:20.315284       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:20.315481       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:20.328823       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:20.328848       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:20.345828       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:20.345871       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:20.361304       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:20.361328       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
W1225 14:12:20.640827       1 genericapiserver.go:412] Skipping API batch/v2alpha1 because it has no resources.
W1225 14:12:20.659984       1 genericapiserver.go:412] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W1225 14:12:20.685600       1 genericapiserver.go:412] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1225 14:12:20.717635       1 genericapiserver.go:412] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1225 14:12:20.722620       1 genericapiserver.go:412] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1225 14:12:20.746581       1 genericapiserver.go:412] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1225 14:12:20.774071       1 genericapiserver.go:412] Skipping API apps/v1beta2 because it has no resources.
W1225 14:12:20.774104       1 genericapiserver.go:412] Skipping API apps/v1beta1 because it has no resources.
I1225 14:12:20.794493       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1225 14:12:20.794524       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1225 14:12:20.801886       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:20.801939       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:20.810029       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:20.810055       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:23.548796       1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt
I1225 14:12:23.548865       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
I1225 14:12:23.549360       1 dynamic_serving_content.go:130] Starting serving-cert::/etc/kubernetes/pki/apiserver.crt::/etc/kubernetes/pki/apiserver.key
I1225 14:12:23.549780       1 secure_serving.go:197] Serving securely on [::]:6443
I1225 14:12:23.549835       1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/etc/kubernetes/pki/front-proxy-client.crt::/etc/kubernetes/pki/front-proxy-client.key
I1225 14:12:23.549858       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1225 14:12:23.552336       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1225 14:12:23.552372       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I1225 14:12:23.553014       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I1225 14:12:23.553087       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1225 14:12:23.553110       1 controller.go:83] Starting OpenAPI AggregationController
I1225 14:12:23.553250       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
I1225 14:12:23.553295       1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt
I1225 14:12:23.561604       1 available_controller.go:404] Starting AvailableConditionController
I1225 14:12:23.561627       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1225 14:12:23.561671       1 autoregister_controller.go:141] Starting autoregister controller
I1225 14:12:23.561678       1 cache.go:32] Waiting for caches to sync for autoregister controller
I1225 14:12:23.561791       1 customresource_discovery_controller.go:209] Starting DiscoveryController
E1225 14:12:23.666220       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/10.0.0.61, ResourceVersion: 0, AdditionalErrorMsg:
I1225 14:12:23.954656       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I1225 14:12:23.995142       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1225 14:12:23.995162       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I1225 14:12:23.995170       1 shared_informer.go:247] Caches are synced for crd-autoregister
I1225 14:12:23.995261       1 controller.go:86] Starting OpenAPI controller
I1225 14:12:24.019280       1 naming_controller.go:291] Starting NamingConditionController
I1225 14:12:24.019448       1 establishing_controller.go:76] Starting EstablishingController
I1225 14:12:24.019750       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I1225 14:12:24.021435       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1225 14:12:24.021505       1 crd_finalizer.go:266] Starting CRDFinalizer
I1225 14:12:24.084663       1 cache.go:39] Caches are synced for AvailableConditionController controller
I1225 14:12:24.089038       1 cache.go:39] Caches are synced for autoregister controller
I1225 14:12:24.155442       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1225 14:12:24.299909       1 trace.go:205] Trace[1198217794]: "Get" url:/api/v1/namespaces/ingress-nginx/secrets/nginx-ingress-serviceaccount-token-vh69r,user-agent:kube-apiserver/v1.19.3 (linux/amd64) kubernetes/1e11e4a,client:::1 (25-Dec-2021 14:12:23.797) (total time: 502ms):
Trace[1198217794]: ---"About to write a response" 502ms (14:12:00.299)
Trace[1198217794]: [502.112729ms] [502.112729ms] END
I1225 14:12:24.318639       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1225 14:12:24.356155       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:24.356216       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:24.389194       1 trace.go:205] Trace[414373803]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s-node2,user-agent:kubelet/v1.19.0 (linux/amd64) kubernetes/e199641,client:10.0.0.63 (25-Dec-2021 14:12:23.849) (total time: 539ms):
Trace[414373803]: ---"About to write a response" 539ms (14:12:00.389)
Trace[414373803]: [539.865826ms] [539.865826ms] END
I1225 14:12:24.389582       1 trace.go:205] Trace[346194256]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s-node1,user-agent:kubelet/v1.19.3 (linux/amd64) kubernetes/1e11e4a,client:10.0.0.62 (25-Dec-2021 14:12:23.761) (total time: 627ms):
Trace[346194256]: ---"About to write a response" 627ms (14:12:00.389)
Trace[346194256]: [627.763742ms] [627.763742ms] END
I1225 14:12:24.393405       1 trace.go:205] Trace[538299640]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-k8s-master,user-agent:kubelet/v1.19.3 (linux/amd64) kubernetes/1e11e4a,client:10.0.0.61 (25-Dec-2021 14:12:23.845) (total time: 547ms):
Trace[538299640]: ---"About to write a response" 510ms (14:12:00.356)
Trace[538299640]: [547.414287ms] [547.414287ms] END
I1225 14:12:24.512082       1 trace.go:205] Trace[82502510]: "Create" url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.19.3 (linux/amd64) kubernetes/1e11e4a,client:10.0.0.61 (25-Dec-2021 14:12:23.846) (total time: 665ms):
Trace[82502510]: ---"Object stored in database" 665ms (14:12:00.511)
Trace[82502510]: [665.364934ms] [665.364934ms] END
I1225 14:12:24.516643       1 trace.go:205] Trace[1819760371]: "GuaranteedUpdate etcd3" type:*core.Event (25-Dec-2021 14:12:23.818) (total time: 698ms):
Trace[1819760371]: ---"Transaction prepared" 459ms (14:12:00.277)
Trace[1819760371]: ---"Transaction committed" 238ms (14:12:00.516)
Trace[1819760371]: [698.586941ms] [698.586941ms] END
I1225 14:12:24.523401       1 trace.go:205] Trace[1243567460]: "Patch" url:/api/v1/namespaces/default/events/configmap-demo-pod.16c404be8eee341b,user-agent:kubelet/v1.19.3 (linux/amd64) kubernetes/1e11e4a,client:10.0.0.62 (25-Dec-2021 14:12:23.795) (total time: 721ms):
Trace[1243567460]: ---"Object stored in database" 696ms (14:12:00.516)
Trace[1243567460]: [721.328661ms] [721.328661ms] END
I1225 14:12:24.629195       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:24.635269       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:24.851524       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:24.851565       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:24.925875       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:24.925902       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:24.941178       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I1225 14:12:25.002392       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:25.002477       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:25.093867       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:25.093912       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
E1225 14:12:25.245098       1 customresource_handler.go:668] error building openapi models for hostendpoints.crd.projectcalico.org: ERROR $root.definitions.org.projectcalico.crd.v1.HostEndpoint.properties.spec.properties.ports.items.<array>.properties.protocol has invalid property: anyOf
I1225 14:12:25.245607       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:25.245627       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:25.277321       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
E1225 14:12:25.283228       1 customresource_handler.go:668] error building openapi models for felixconfigurations.crd.projectcalico.org: ERROR $root.definitions.org.projectcalico.crd.v1.FelixConfiguration.properties.spec.properties.kubeNodePortRanges.items.<array> has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.FelixConfiguration.properties.spec.properties.natPortRange has invalid property: anyOf
I1225 14:12:25.284239       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:25.284261       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
E1225 14:12:25.329108       1 customresource_handler.go:668] error building openapi models for globalnetworkpolicies.crd.projectcalico.org: ERROR $root.definitions.org.projectcalico.crd.v1.GlobalNetworkPolicy.properties.spec.properties.egress.items.<array>.properties.destination.properties.notPorts.items.<array> has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.GlobalNetworkPolicy.properties.spec.properties.egress.items.<array>.properties.destination.properties.ports.items.<array> has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.GlobalNetworkPolicy.properties.spec.properties.egress.items.<array>.properties.notProtocol has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.GlobalNetworkPolicy.properties.spec.properties.egress.items.<array>.properties.protocol has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.GlobalNetworkPolicy.properties.spec.properties.egress.items.<array>.properties.source.properties.notPorts.items.<array> has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.GlobalNetworkPolicy.properties.spec.properties.egress.items.<array>.properties.source.properties.ports.items.<array> has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.GlobalNetworkPolicy.properties.spec.properties.ingress.items.<array>.properties.destination.properties.notPorts.items.<array> has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.GlobalNetworkPolicy.properties.spec.properties.ingress.items.<array>.properties.destination.properties.ports.items.<array> has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.GlobalNetworkPolicy.properties.spec.properties.ingress.items.<array>.properties.notProtocol has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.GlobalNetworkPolicy.properties.spec.properties.ingress.items.<array>.properties.protocol has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.GlobalNetworkPolicy.properties.spec.properties.ingress.items.<array>.properties.source.properties.notPorts.items.<array> has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.GlobalNetworkPolicy.properties.spec.properties.ingress.items.<array>.properties.source.properties.ports.items.<array> has invalid property: anyOf
I1225 14:12:25.330596       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:25.330710       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:25.357189       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:25.357217       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:25.392966       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:25.392992       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
E1225 14:12:25.438707       1 customresource_handler.go:668] error building openapi models for networkpolicies.crd.projectcalico.org: ERROR $root.definitions.org.projectcalico.crd.v1.NetworkPolicy.properties.spec.properties.egress.items.<array>.properties.destination.properties.notPorts.items.<array> has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.NetworkPolicy.properties.spec.properties.egress.items.<array>.properties.destination.properties.ports.items.<array> has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.NetworkPolicy.properties.spec.properties.egress.items.<array>.properties.notProtocol has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.NetworkPolicy.properties.spec.properties.egress.items.<array>.properties.protocol has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.NetworkPolicy.properties.spec.properties.egress.items.<array>.properties.source.properties.notPorts.items.<array> has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.NetworkPolicy.properties.spec.properties.egress.items.<array>.properties.source.properties.ports.items.<array> has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.NetworkPolicy.properties.spec.properties.ingress.items.<array>.properties.destination.properties.notPorts.items.<array> has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.NetworkPolicy.properties.spec.properties.ingress.items.<array>.properties.destination.properties.ports.items.<array> has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.NetworkPolicy.properties.spec.properties.ingress.items.<array>.properties.notProtocol has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.NetworkPolicy.properties.spec.properties.ingress.items.<array>.properties.protocol has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.NetworkPolicy.properties.spec.properties.ingress.items.<array>.properties.source.properties.notPorts.items.<array> has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.NetworkPolicy.properties.spec.properties.ingress.items.<array>.properties.source.properties.ports.items.<array> has invalid property: anyOf
I1225 14:12:25.439540       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:25.439593       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:25.448117       1 trace.go:205] Trace[1794566532]: "Get" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit,user-agent:kube-apiserver/v1.19.3 (linux/amd64) kubernetes/1e11e4a,client:::1 (25-Dec-2021 14:12:24.878) (total time: 569ms):
Trace[1794566532]: ---"About to write a response" 569ms (14:12:00.447)
Trace[1794566532]: [569.28003ms] [569.28003ms] END
I1225 14:12:25.654884       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:25.654910       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:26.441116       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:26.441157       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:12:26.579163       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
E1225 14:12:29.106613       1 available_controller.go:437] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.249.20:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.249.20:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1225 14:12:34.125834       1 available_controller.go:437] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.249.20:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.249.20:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded
E1225 14:12:39.126932       1 available_controller.go:437] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.249.20:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.249.20:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1225 14:12:41.508946       1 controller.go:606] quota admission added evaluator for: endpoints
I1225 14:12:54.169433       1 client.go:360] parsed scheme: "passthrough"
I1225 14:12:54.169468       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:12:54.169476       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:12:56.191213       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1225 14:12:56.631645       1 client.go:360] parsed scheme: "endpoint"
I1225 14:12:56.631729       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1225 14:13:36.699461       1 client.go:360] parsed scheme: "passthrough"
I1225 14:13:36.699504       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:13:36.699512       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:14:12.928690       1 client.go:360] parsed scheme: "passthrough"
I1225 14:14:12.928831       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:14:12.928859       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:14:51.609220       1 client.go:360] parsed scheme: "passthrough"
I1225 14:14:51.609377       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:14:51.609409       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:15:30.414981       1 client.go:360] parsed scheme: "passthrough"
I1225 14:15:30.415048       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:15:30.415057       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:16:13.416069       1 client.go:360] parsed scheme: "passthrough"
I1225 14:16:13.416140       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:16:13.416158       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:16:53.202182       1 client.go:360] parsed scheme: "passthrough"
I1225 14:16:53.202277       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:16:53.202288       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:17:27.709485       1 client.go:360] parsed scheme: "passthrough"
I1225 14:17:27.709530       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:17:27.709542       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:18:10.159300       1 client.go:360] parsed scheme: "passthrough"
I1225 14:18:10.159338       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:18:10.159345       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:18:40.716569       1 client.go:360] parsed scheme: "passthrough"
I1225 14:18:40.716701       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:18:40.716722       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:19:24.247113       1 client.go:360] parsed scheme: "passthrough"
I1225 14:19:24.247185       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:19:24.247219       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:20:08.987275       1 client.go:360] parsed scheme: "passthrough"
I1225 14:20:08.987543       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:20:08.987583       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:20:44.858512       1 client.go:360] parsed scheme: "passthrough"
I1225 14:20:44.858557       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:20:44.858569       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:21:21.613762       1 client.go:360] parsed scheme: "passthrough"
I1225 14:21:21.613892       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:21:21.614077       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:21:52.143822       1 client.go:360] parsed scheme: "passthrough"
I1225 14:21:52.143911       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:21:52.143929       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:22:27.359651       1 client.go:360] parsed scheme: "passthrough"
I1225 14:22:27.359762       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:22:27.359787       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:23:11.063713       1 client.go:360] parsed scheme: "passthrough"
I1225 14:23:11.063746       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:23:11.063754       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:23:42.744602       1 client.go:360] parsed scheme: "passthrough"
I1225 14:23:42.744670       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:23:42.744688       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:24:15.053047       1 client.go:360] parsed scheme: "passthrough"
I1225 14:24:15.053141       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:24:15.053167       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:24:57.281040       1 client.go:360] parsed scheme: "passthrough"
I1225 14:24:57.286666       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:24:57.286712       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:25:38.863844       1 client.go:360] parsed scheme: "passthrough"
I1225 14:25:38.863903       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:25:38.863912       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:26:18.572451       1 client.go:360] parsed scheme: "passthrough"
I1225 14:26:18.572482       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:26:18.572489       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:26:53.678319       1 client.go:360] parsed scheme: "passthrough"
I1225 14:26:53.678531       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:26:53.678573       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:27:36.433874       1 client.go:360] parsed scheme: "passthrough"
I1225 14:27:36.434093       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:27:36.434125       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:28:18.084057       1 client.go:360] parsed scheme: "passthrough"
I1225 14:28:18.084239       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:28:18.084255       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:28:50.563060       1 client.go:360] parsed scheme: "passthrough"
I1225 14:28:50.563113       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:28:50.563124       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:29:21.855603       1 client.go:360] parsed scheme: "passthrough"
I1225 14:29:21.855751       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:29:21.856461       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1225 14:29:52.347034       1 client.go:360] parsed scheme: "passthrough"
I1225 14:29:52.347112       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1225 14:29:52.347130       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
# For a binary deployment, use journalctl -u kube-apiserver
[root@k8s-master ~]# journalctl -u kube-apiserver
-- No entries --
[root@k8s-master ~]# journalctl -u kubelet
-- Logs begin at Sat 2021-12-25 22:12:00 CST, end at Sat 2021-12-25 22:40:02 CST. --
Dec 25 22:12:07 k8s-master systemd[1]: Started kubelet: The Kubernetes Node Agent.
Dec 25 22:12:08 k8s-master kubelet[1419]: I1225 22:12:08.729022    1419 server.go:411] Version: v1.19.3
Dec 25 22:12:08 k8s-master kubelet[1419]: I1225 22:12:08.729772    1419 server.go:831] Client rotation is on, will bootstrap in background
Dec 25 22:12:08 k8s-master kubelet[1419]: I1225 22:12:08.739425    1419 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 25 22:12:08 k8s-master kubelet[1419]: I1225 22:12:08.745546    1419 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
Dec 25 22:12:10 k8s-master kubelet[1419]: W1225 22:12:10.157376    1419 nvidia.go:61] NVIDIA GPU metrics will not be available: no NVIDIA devices found
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.208464    1419 server.go:640] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.209254    1419 container_manager_linux.go:276] container manager verified user specified cgroup-root exists: []
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.209278    1419 container_manager_linux.go:281] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.212786    1419 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.212810    1419 container_manager_linux.go:311] [topologymanager] Initializing Topology Manager with none policy
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.212815    1419 container_manager_linux.go:316] Creating device plugin manager: true
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.222255    1419 client.go:77] Connecting to docker on unix:///var/run/docker.sock
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.222336    1419 client.go:94] Start docker client with request timeout=2m0s
Dec 25 22:12:10 k8s-master kubelet[1419]: W1225 22:12:10.261344    1419 docker_service.go:565] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.261377    1419 docker_service.go:241] Hairpin mode set to "hairpin-veth"
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.560569    1419 docker_service.go:256] Docker cri networking managed by cni
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.591354    1419 docker_service.go:261] Docker Info: &{ID:LZWZ:7SPV:BJT7:3OAX:HPZJ:2U5R:3D3E:SXVB:A5PX:PJX3:3IHG:OEDN Contain
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.591426    1419 docker_service.go:274] Setting cgroupDriver to cgroupfs
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.634518    1419 remote_runtime.go:59] parsed scheme: ""
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.634540    1419 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.635527    1419 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <ni
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.635572    1419 clientconn.go:948] ClientConn switching balancer to "pick_first"
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.635629    1419 remote_image.go:50] parsed scheme: ""
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.635637    1419 remote_image.go:50] scheme "" not registered, fallback to default scheme
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.635651    1419 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <ni
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.635657    1419 clientconn.go:948] ClientConn switching balancer to "pick_first"
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.635731    1419 kubelet.go:261] Adding pod path: /etc/kubernetes/manifests
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.635762    1419 kubelet.go:273] Watching apiserver
Dec 25 22:12:10 k8s-master kubelet[1419]: E1225 22:12:10.678016    1419 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to l
Dec 25 22:12:10 k8s-master kubelet[1419]: E1225 22:12:10.678147    1419 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1
Dec 25 22:12:10 k8s-master kubelet[1419]: E1225 22:12:10.678221    1419 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1
Dec 25 22:12:10 k8s-master kubelet[1419]: E1225 22:12:10.977478    1419 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Dep
Dec 25 22:12:10 k8s-master kubelet[1419]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Dec 25 22:12:11 k8s-master kubelet[1419]: I1225 22:12:11.040815    1419 kuberuntime_manager.go:214] Container runtime docker initialized, version: 20.10.11, apiVersion: 1.41.0
Dec 25 22:12:11 k8s-master kubelet[1419]: I1225 22:12:11.042898    1419 server.go:1147] Started kubelet
Dec 25 22:12:11 k8s-master kubelet[1419]: E1225 22:12:11.044515    1419 kubelet.go:1218] Image garbage collection failed once. Stats initialization may not have completed yet: fail
Dec 25 22:12:11 k8s-master kubelet[1419]: I1225 22:12:11.053448    1419 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Dec 25 22:12:11 k8s-master kubelet[1419]: E1225 22:12:11.056752    1419 event.go:273] Unable to write event: 'Post "https://10.0.0.61:6443/api/v1/namespaces/default/events": dial t
Dec 25 22:12:11 k8s-master kubelet[1419]: I1225 22:12:11.062250    1419 volume_manager.go:265] Starting Kubelet Volume Manager
Dec 25 22:12:11 k8s-master kubelet[1419]: I1225 22:12:11.062322    1419 server.go:152] Starting to listen on 0.0.0.0:10250
Dec 25 22:12:11 k8s-master kubelet[1419]: I1225 22:12:11.071183    1419 server.go:424] Adding debug handlers to kubelet server.
Dec 25 22:12:11 k8s-master kubelet[1419]: I1225 22:12:11.079514    1419 desired_state_of_world_populator.go:139] Desired state populator starts to run
Dec 25 22:12:11 k8s-master kubelet[1419]: I1225 22:12:11.088034    1419 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: db716200328937af6f50e1cd3c23d1391
Dec 25 22:12:11 k8s-master kubelet[1419]: E1225 22:12:11.102900    1419 controller.go:136] failed to ensure node lease exists, will retry in 200ms, error: Get "https://10.0.0.61:64
Dec 25 22:12:11 k8s-master kubelet[1419]: E1225 22:12:11.103298    1419 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *
Dec 25 22:12:11 k8s-master kubelet[1419]: E1225 22:12:11.179554    1419 kubelet.go:2183] node "k8s-master" not found
Dec 25 22:12:11 k8s-master kubelet[1419]: I1225 22:12:11.186533    1419 client.go:87] parsed scheme: "unix"
[root@k8s-master ~]# journalctl -u kubelet > a.txt
[root@k8s-master ~]# more a.txt
-- Logs begin at Sat 2021-12-25 22:12:00 CST, end at Sat 2021-12-25 22:40:02 CST. --
Dec 25 22:12:07 k8s-master systemd[1]: Started kubelet: The Kubernetes Node Agent.
Dec 25 22:12:08 k8s-master kubelet[1419]: I1225 22:12:08.729022    1419 server.go:411] Version: v1.19.3
Dec 25 22:12:08 k8s-master kubelet[1419]: I1225 22:12:08.729772    1419 server.go:831] Client rotation is on, will bootstrap in background
Dec 25 22:12:08 k8s-master kubelet[1419]: I1225 22:12:08.739425    1419 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 25 22:12:08 k8s-master kubelet[1419]: I1225 22:12:08.745546    1419 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
Dec 25 22:12:10 k8s-master kubelet[1419]: W1225 22:12:10.157376    1419 nvidia.go:61] NVIDIA GPU metrics will not be available: no NVIDIA devices found
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.208464    1419 server.go:640] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.209254    1419 container_manager_linux.go:276] container manager verified user specified cgroup-root exists: []
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.209278    1419 container_manager_linux.go:281] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false No
deAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionT
hresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quan
tity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Si
gnal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUMa
nagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.212786    1419 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.212810    1419 container_manager_linux.go:311] [topologymanager] Initializing Topology Manager with none policy
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.212815    1419 container_manager_linux.go:316] Creating device plugin manager: true
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.222255    1419 client.go:77] Connecting to docker on unix:///var/run/docker.sock
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.222336    1419 client.go:94] Start docker client with request timeout=2m0s
Dec 25 22:12:10 k8s-master kubelet[1419]: W1225 22:12:10.261344    1419 docker_service.go:565] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to
"hairpin-veth"
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.261377    1419 docker_service.go:241] Hairpin mode set to "hairpin-veth"
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.560569    1419 docker_service.go:256] Docker cri networking managed by cni
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.591354    1419 docker_service.go:261] Docker Info: &{ID:LZWZ:7SPV:BJT7:3OAX:HPZJ:2U5R:3D3E:SXVB:A5PX:PJX3:3IHG:OEDN Contain
ers:27 ContainersRunning:0 ContainersPaused:0 ContainersStopped:27 Images:16 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true
] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-filelocal logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:tru
e IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2021-12-25T22:12:10.561900318+08:00 LoggingDrive
r:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0-1160.45.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:0xc00067cc40 NCPU:2 MemTotal:1907732480 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:k8s-ma
ster Labels:[] ExperimentalBuild:false ServerVersion:20.10.11 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[]} io.containerd.runtime.v1.linux:
{Path:runc Args:[]} runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:
0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd
23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLic
ense: Warnings:[]}
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.591426    1419 docker_service.go:274] Setting cgroupDriver to cgroupfs
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.634518    1419 remote_runtime.go:59] parsed scheme: ""
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.634540    1419 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.635527    1419 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <ni
l> <nil>}
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.635572    1419 clientconn.go:948] ClientConn switching balancer to "pick_first"
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.635629    1419 remote_image.go:50] parsed scheme: ""
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.635637    1419 remote_image.go:50] scheme "" not registered, fallback to default scheme
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.635651    1419 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <ni
l> <nil>}
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.635657    1419 clientconn.go:948] ClientConn switching balancer to "pick_first"
Dec 25 22:12:10 k8s-master kubelet[1419]: I1225 22:12:10.635731    1419 kubelet.go:261] Adding pod path: /etc/kubernetes/manifests
[root@k8s-master ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
[root@k8s-master ~]# /usr/bin/kubelet
I1225 22:44:25.980513   28854 server.go:411] Version: v1.19.3
W1225 22:44:25.980849   28854 server.go:553] standalone mode, no API client
W1225 22:44:25.980983   28854 container_manager_linux.go:951] CPUAccounting not enabled for pid: 28854
W1225 22:44:25.980991   28854 container_manager_linux.go:954] MemoryAccounting not enabled for pid: 28854
W1225 22:44:26.082976   28854 nvidia.go:61] NVIDIA GPU metrics will not be available: no NVIDIA devices found
W1225 22:44:26.127990   28854 server.go:468] No api server defined - no events will be sent to API server.
I1225 22:44:26.128031   28854 server.go:640] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
I1225 22:44:26.128393   28854 container_manager_linux.go:276] container manager verified user specified cgroup-root exists: []
I1225 22:44:26.128413   28854 container_manager_linux.go:281] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
I1225 22:44:26.128818   28854 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
I1225 22:44:26.128827   28854 container_manager_linux.go:311] [topologymanager] Initializing Topology Manager with none policy
I1225 22:44:26.128833   28854 container_manager_linux.go:316] Creating device plugin manager: true
I1225 22:44:26.129807   28854 client.go:77] Connecting to docker on unix:///var/run/docker.sock
I1225 22:44:26.129830   28854 client.go:94] Start docker client with request timeout=2m0s
W1225 22:44:26.183754   28854 docker_service.go:565] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I1225 22:44:26.183783   28854 docker_service.go:241] Hairpin mode set to "hairpin-veth"
I1225 22:44:26.247433   28854 docker_service.go:256] Docker cri networking managed by kubernetes.io/no-op
......
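Running /usr/bin/kubelet by hand like this is only for confirming the binary starts; in day-to-day troubleshooting it is usually enough to inspect and restart the systemd unit. A minimal sketch using standard systemd/journalctl options:
# check whether kubelet is active and why it may have exited
systemctl status kubelet
# follow the most recent kubelet log entries
journalctl -u kubelet --since "10 minutes ago" -f
# restart after fixing the configuration
systemctl restart kubelet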

3. Troubleshooting Service Access Issues

Service workflow diagram (figure)

When a Service cannot be reached, it usually comes down to one of the following possibilities:

  1. Is the Service associated with any Pods?
# Check that the label selector is correct and actually matches Pods, and that the Pods have been created (a label-check sketch follows the Pod listing below)
[root@k8s-master ~]# kubectl get ep
NAME             ENDPOINTS                                           AGE
fuseim.pri-ifs   <none>                                              4d12h
kubernetes       10.0.0.61:6443                                      33d
my-dep           <none>                                              30d
my-service       10.244.36.119:80,10.244.36.122:80,10.244.36.98:80   24d
nginx            10.244.36.119:80,10.244.36.122:80,10.244.36.98:80   4d11h
[root@k8s-master ~]# kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
client1                                  1/1     Running   5          3d
client2                                  1/1     Running   5          3d
configmap-demo-pod                       1/1     Running   4          4d10h
my-pod2                                  1/1     Running   11         4d17h
nfs-client-provisioner-58d675cd5-dx7n4   1/1     Running   7          4d12h
pod-taint                                1/1     Running   9          10d
secret-demo-pod                          1/1     Running   4          4d9h
sh                                       1/1     Running   6          4d11h
test-76846b5956-gftn9                    1/1     Running   2          4d11h
test-76846b5956-r7s9k                    1/1     Running   2          4d11h
test-76846b5956-trpbn                    1/1     Running   2          4d11h
test2-78c4694588-87b9r                   1/1     Running   5          4d12h
web-0                                    1/1     Running   4          4d11h
web-1                                    1/1     Running   4          4d11h
web-2                                    1/1     Running   4          4d11h
web-96d5df5c8-vc9kf                      1/1     Running   3          3d
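A quick way to confirm the selector actually matches Pod labels, using the nginx Service from this example (the label app=nginx comes from the Service YAML shown in the next step); a minimal sketch:
# show the Service's selector
kubectl describe svc nginx | grep -i selector
# list the Pods carrying that label; if nothing is returned, the endpoints stay empty
kubectl get pods -l app=nginx --show-labels
# the endpoints object should then list the Pod IPs
kubectl get ep nginx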
  2. Is the targetPort specified by the Service correct?
# Check that the targetPort in the Service matches the port the container actually listens on (a jsonpath comparison is sketched after the YAML below)
[root@k8s-master ~]# kubectl edit svc nginx
Edit cancelled, no changes made.
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:annotations:kubectl.kubernetes.io/last-applied-configuration: |{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx","namespace":"default"},"spec":{"clusterIP":"None","ports":[{"name":"web","port":80}],"selector":{"app":"nginx"}}}creationTimestamp: "2021-12-21T02:56:44Z"labels:app: nginxname: nginxnamespace: defaultresourceVersion: "2334070"selfLink: /api/v1/namespaces/default/services/nginxuid: 5f07839a-04e4-4214-bbbe-d69357de10d4
spec:clusterIP: Noneports:- name: webport: 80protocol: TCPtargetPort: 80selector:app: nginxsessionAffinity: Nonetype: ClusterIP
status:loadBalancer: {}
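Instead of opening the editor, the targetPort and the container port can be compared directly with jsonpath; a minimal sketch (web-0 is one of the Pods backing this Service, and the second command only prints a value if the Pod spec declares containerPort):
# targetPort configured on the Service
kubectl get svc nginx -o jsonpath='{.spec.ports[0].targetPort}{"\n"}'
# port declared by the container, if any
kubectl get pod web-0 -o jsonpath='{.spec.containers[0].ports[0].containerPort}{"\n"}'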
  3. Are the Pods themselves working?
# Get the Pod IP and curl it to see whether it returns the expected content (an in-cluster test Pod is sketched after the output below)
[root@k8s-master ~]# kubectl get pods -o wide
NAME                                     READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
client1                                  1/1     Running   5          3d      10.244.36.99     k8s-node1   <none>           <none>
client2                                  1/1     Running   5          3d      10.244.36.92     k8s-node1   <none>           <none>
configmap-demo-pod                       1/1     Running   4          4d10h   10.244.36.101    k8s-node1   <none>           <none>
my-pod2                                  1/1     Running   11         4d17h   10.244.169.130   k8s-node2   <none>           <none>
nfs-client-provisioner-58d675cd5-dx7n4   1/1     Running   7          4d12h   10.244.36.116    k8s-node1   <none>           <none>
pod-taint                                1/1     Running   9          10d     10.244.169.132   k8s-node2   <none>           <none>
secret-demo-pod                          1/1     Running   4          4d9h    10.244.36.118    k8s-node1   <none>           <none>
sh                                       1/1     Running   6          4d11h   10.244.36.114    k8s-node1   <none>           <none>
test-76846b5956-gftn9                    1/1     Running   2          4d11h   10.244.36.111    k8s-node1   <none>           <none>
test-76846b5956-r7s9k                    1/1     Running   2          4d11h   10.244.36.100    k8s-node1   <none>           <none>
test-76846b5956-trpbn                    1/1     Running   2          4d11h   10.244.169.185   k8s-node2   <none>           <none>
test2-78c4694588-87b9r                   1/1     Running   5          4d13h   10.244.36.123    k8s-node1   <none>           <none>
web-0                                    1/1     Running   4          4d12h   10.244.36.122    k8s-node1   <none>           <none>
web-1                                    1/1     Running   4          4d12h   10.244.36.119    k8s-node1   <none>           <none>
web-2                                    1/1     Running   4          4d12h   10.244.36.98     k8s-node1   <none>           <none>
web-96d5df5c8-vc9kf                      1/1     Running   3          3d      10.244.169.158   k8s-node2   <none>           <none>
[root@k8s-master ~]# curl 10.244.169.158
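If the node you are on cannot reach the Pod network directly, the same check can be run from a throwaway Pod inside the cluster; a minimal sketch (the busybox image and tag are an assumption):
# start a temporary Pod and fetch the page from the target Pod IP
kubectl run curl-test --rm -it --image=busybox:1.28 --restart=Never -- wget -qO- 10.244.169.158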
  4. Does the Service resolve through DNS?
# Check whether the coredns Pods are running normally (a DNS lookup sketch follows the listing below)
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97769f7c7-c4cg5   1/1     Running   3          31h
calico-node-4pwdc                         1/1     Running   16         33d
calico-node-9r6zd                         1/1     Running   16         33d
calico-node-vqzdj                         1/1     Running   17         33d
client1                                   1/1     Running   5          3d
coredns-6d56c8448f-gcgrh                  1/1     Running   16         33d
coredns-6d56c8448f-mdl7c                  1/1     Running   2          31h
etcd-k8s-master                           1/1     Running   3          31h
filebeat-5pwh7                            1/1     Running   11         10d
filebeat-pt848                            1/1     Running   11         10d
kube-apiserver-k8s-master                 1/1     Running   3          31h
kube-controller-manager-k8s-master        1/1     Running   3          31h
kube-proxy-87lbj                          1/1     Running   3          31h
kube-proxy-mcdnv                          1/1     Running   2          31h
kube-proxy-mchc9                          1/1     Running   2          31h
kube-scheduler-k8s-master                 1/1     Running   3          31h
metrics-server-84f9866fdf-rz676           1/1     Running   15         4d16h
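DNS can be verified end to end by resolving the Service name from a test Pod; a minimal sketch (the busybox image/tag and the k8s-app=kube-dns label are the usual kubeadm defaults, assumed here):
# resolve the Service name through the cluster DNS
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup nginx.default.svc.cluster.local
# if resolution fails, check the coredns logs
kubectl logs -n kube-system -l k8s-app=kube-dns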
  5. Is kube-proxy working?
# For kubeadm deployments, check that the kube-proxy Pods are running (a log-check sketch follows the listing below)
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97769f7c7-c4cg5   1/1     Running   3          31h
calico-node-4pwdc                         1/1     Running   16         33d
calico-node-9r6zd                         1/1     Running   16         33d
calico-node-vqzdj                         1/1     Running   17         33d
client1                                   1/1     Running   5          3d
coredns-6d56c8448f-gcgrh                  1/1     Running   16         33d
coredns-6d56c8448f-mdl7c                  1/1     Running   2          31h
etcd-k8s-master                           1/1     Running   3          31h
filebeat-5pwh7                            1/1     Running   11         10d
filebeat-pt848                            1/1     Running   11         10d
kube-apiserver-k8s-master                 1/1     Running   3          31h
kube-controller-manager-k8s-master        1/1     Running   3          31h
kube-proxy-87lbj                          1/1     Running   3          31h
kube-proxy-mcdnv                          1/1     Running   2          31h
kube-proxy-mchc9                          1/1     Running   2          31h
kube-scheduler-k8s-master                 1/1     Running   3          31h
metrics-server-84f9866fdf-rz676           1/1     Running   15         4d16h
# For binary deployments, check the kube-proxy service via systemd instead
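Besides confirming the Pods are Running, the kube-proxy logs show whether it is actually syncing Service rules; a minimal sketch (Pod name taken from the listing above):
# kubeadm deployments: look for rule-sync errors
kubectl logs -n kube-system kube-proxy-87lbj
# binary deployments: check the systemd service instead
systemctl status kube-proxy
journalctl -u kube-proxy --since "10 minutes ago"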
  6. Is kube-proxy writing the iptables rules correctly?
# Use iptables-save | grep <service-name> to check whether the corresponding rules have been created (a proxy-mode check is sketched after the output below)
[root@k8s-master ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
cali-INPUT  all  --  0.0.0.0/0            0.0.0.0/0            /* cali:Cz_u1IQiXIMmKD4c */
KUBE-FIREWALL  all  --  0.0.0.0/0            0.0.0.0/0

Chain FORWARD (policy DROP)
target     prot opt source               destination
cali-FORWARD  all  --  0.0.0.0/0            0.0.0.0/0            /* cali:wUHhoiAYhphO9Mso */
KUBE-FORWARD  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
DOCKER-USER  all  --  0.0.0.0/0            0.0.0.0/0
DOCKER-ISOLATION-STAGE-1  all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
cali-OUTPUT  all  --  0.0.0.0/0            0.0.0.0/0            /* cali:tVnHkvAo15HuiPy0 */
KUBE-FIREWALL  all  --  0.0.0.0/0            0.0.0.0/0

Chain DOCKER (1 references)
target     prot opt source               destination

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination
DOCKER-ISOLATION-STAGE-2  all  --  0.0.0.0/0            0.0.0.0/0
RETURN     all  --  0.0.0.0/0            0.0.0.0/0

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination
DROP       all  --  0.0.0.0/0            0.0.0.0/0
RETURN     all  --  0.0.0.0/0            0.0.0.0/0

Chain DOCKER-USER (1 references)
target     prot opt source               destination
RETURN     all  --  0.0.0.0/0            0.0.0.0/0

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination
DROP       all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP       all  -- !127.0.0.0/8          127.0.0.0/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-KUBELET-CANARY (0 references)
target     prot opt source               destination

Chain cali-FORWARD (1 references)
target     prot opt source               destination
MARK       all  --  0.0.0.0/0            0.0.0.0/0            /* cali:vjrMJCRpqwy5oRoX */ MARK and 0xfff1ffff
cali-from-hep-forward  all  --  0.0.0.0/0            0.0.0.0/0            /* cali:A_sPAO0mcxbT9mOV */ mark match 0x0/0x10000
cali-from-wl-dispatch  all  --  0.0.0.0/0            0.0.0.0/0            /* cali:8ZoYfO5HKXWbB3pk */
cali-to-wl-dispatch  all  --  0.0.0.0/0            0.0.0.0/0            /* cali:jdEuaPBe14V2hutn */
cali-to-hep-forward  all  --  0.0.0.0/0            0.0.0.0/0            /* cali:12bc6HljsMKsmfr- */
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* cali:MH9kMp5aNICL-Olv */ /* Policy explicitly accepted packet. */ mark match 0x10000/0x10000

Chain cali-INPUT (1 references)
target     prot opt source               destination
ACCEPT     4    --  0.0.0.0/0            0.0.0.0/0            /* cali:PajejrV4aFdkZojI */ /* Allow IPIP packets from Calico hosts */ match-set cali40all-hosts-net src ADDRTYPE match dst-type LOCAL
DROP       4    --  0.0.0.0/0            0.0.0.0/0            /* cali:_wjq-Yrma8Ly1Svo */ /* Drop IPIP packets from non-Calico hosts */
MARK       all  --  0.0.0.0/0            0.0.0.0/0            /* cali:ss8lEMQsXi-s6qYT */ MARK and 0xfffff
cali-forward-check  all  --  0.0.0.0/0            0.0.0.0/0            /* cali:PgIW-V0nEjwPhF_8 */
RETURN     all  --  0.0.0.0/0            0.0.0.0/0            /* cali:QMJlDwlS0OjHyfMN */ mark match ! 0x0/0xfff00000
cali-wl-to-host  all  --  0.0.0.0/0            0.0.0.0/0           [goto]  /* cali:nDRe73txrna-aZjG */
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* cali:iX2AYvqGXaVqwkro */ mark match 0x10000/0x10000
MARK       all  --  0.0.0.0/0            0.0.0.0/0            /* cali:bhpnxD5IRtBP8KW0 */ MARK and 0xfff0ffff
cali-from-host-endpoint  all  --  0.0.0.0/0            0.0.0.0/0            /* cali:H5_bccAbHV0sooVy */
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* cali:inBL01YlfurT0dbI */ /* Host endpoint policy accepted packet. */ mark match 0x10000/0x10000

Chain cali-OUTPUT (1 references)
target     prot opt source               destination
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* cali:Mq1_rAdXXH3YkrzW */ mark match 0x10000/0x10000
cali-forward-endpoint-mark  all  --  0.0.0.0/0            0.0.0.0/0           [goto]  /* cali:5Z67OUUpTOM7Xa1a */ mark match ! 0x0/0xfff00000
RETURN     all  --  0.0.0.0/0            0.0.0.0/0            /* cali:M2Wf0OehNdig8MHR */
ACCEPT     4    --  0.0.0.0/0            0.0.0.0/0            /* cali:AJBkLho_0Qd8LNr3 */ /* Allow IPIP packets to other Calico hosts */ match-set cali40all-hosts-net dst ADDRTYPE match src-type LOCAL
MARK       all  --  0.0.0.0/0            0.0.0.0/0            /* cali:iz2RWXlXJDUfsLpe */ MARK and 0xfff0ffff
cali-to-host-endpoint  all  --  0.0.0.0/0            0.0.0.0/0            /* cali:hXojbnLundZDgZyw */
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* cali:wankpMDC2Cy1KfBv */ /* Host endpoint policy accepted packet. */ mark match 0x10000/0x10000

Chain cali-forward-check (1 references)
target     prot opt source               destination
RETURN     all  --  0.0.0.0/0            0.0.0.0/0            /* cali:Pbldlb4FaULvpdD8 */ ctstate RELATED,ESTABLISHED
cali-set-endpoint-mark  tcp  --  0.0.0.0/0            0.0.0.0/0           [goto]  /* cali:ZD-6UxuUtGW-xtzg */ /* To kubernetes NodePort service */ multiport dports 30000:32767 match-set cali40this-host dst
cali-set-endpoint-mark  udp  --  0.0.0.0/0            0.0.0.0/0           [goto]  /* cali:CbPfUajQ2bFVnDq4 */ /* To kubernetes NodePort service */ multiport dports 30000:32767 match-set cali40this-host dst
cali-set-endpoint-mark  all  --  0.0.0.0/0            0.0.0.0/0            /* cali:jmhU0ODogX-Zfe5g */ /* To kubernetes service */ ! match-set cali40this-host dst

Chain cali-forward-endpoint-mark (1 references)
target     prot opt source               destination
cali-from-endpoint-mark  all  --  0.0.0.0/0            0.0.0.0/0            /* cali:O0SmFDrnm7KggWqW */ mark match ! 0x100000/0xfff00000
cali-to-wl-dispatch  all  --  0.0.0.0/0            0.0.0.0/0            /* cali:aFl0WFKRxDqj8oA6 */
cali-to-hep-forward  all  --  0.0.0.0/0            0.0.0.0/0            /* cali:AZKVrO3i_8cLai5f */
MARK       all  --  0.0.0.0/0            0.0.0.0/0            /* cali:96HaP1sFtb-NYoYA */ MARK and 0xfffff
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* cali:VxO6hyNWz62YEtul */ /* Policy explicitly accepted packet. */ mark match 0x10000/0x10000

Chain cali-from-endpoint-mark (1 references)
target     prot opt source               destination
DROP       all  --  0.0.0.0/0            0.0.0.0/0            /* cali:9dpftzl-pNycbr37 */ /* Unknown interface */

Chain cali-from-hep-forward (1 references)
target     prot opt source               destination

Chain cali-from-host-endpoint (1 references)
target     prot opt source               destination

Chain cali-from-wl-dispatch (2 references)
target     prot opt source               destination
DROP       all  --  0.0.0.0/0            0.0.0.0/0            /* cali:zTj6P0TIgYvgz-md */ /* Unknown interface */

Chain cali-set-endpoint-mark (3 references)
target     prot opt source               destination
DROP       all  --  0.0.0.0/0            0.0.0.0/0            /* cali:MN61lcxFj1yWuYBo */ /* Unknown endpoint */
MARK       all  --  0.0.0.0/0            0.0.0.0/0            /* cali:nKOjq8N2yzfmS3jk */ /* Non-Cali endpoint mark */ MARK xset 0x100000/0xfff00000

Chain cali-to-hep-forward (2 references)
target     prot opt source               destination

Chain cali-to-host-endpoint (1 references)
target     prot opt source               destination

Chain cali-to-wl-dispatch (2 references)
target     prot opt source               destination
DROP       all  --  0.0.0.0/0            0.0.0.0/0            /* cali:7KNphB1nNHw80nIO */ /* Unknown interface */

Chain cali-wl-to-host (1 references)
target     prot opt source               destination
cali-from-wl-dispatch  all  --  0.0.0.0/0            0.0.0.0/0            /* cali:Ee9Sbo10IpVujdIY */
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* cali:nSZbcOoG1xPONxb8 */ /* Configured DefaultEndpointToHostAction */
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        33d
my-dep       NodePort    10.111.199.51   <none>        80:31734/TCP   30d
my-service   NodePort    10.100.228.0    <none>        80:32433/TCP   24d
nginx        ClusterIP   None            <none>        80/TCP         4d12h
[root@k8s-master ~]# iptables-save |grep nginx
[root@k8s-master ~]# iptables-save |grep my-dep
[root@k8s-master ~]# iptables-save |grep my-service
[root@k8s-master ~]# iptables-save |grep kubernetes
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-forward-check -p tcp -m comment --comment "cali:ZD-6UxuUtGW-xtzg" -m comment --comment "To kubernetes NodePort service" -m multiport --dports 30000:32767 -m set --match-set cali40this-host dst -g cali-set-endpoint-mark
-A cali-forward-check -p udp -m comment --comment "cali:CbPfUajQ2bFVnDq4" -m comment --comment "To kubernetes NodePort service" -m multiport --dports 30000:32767 -m set --match-set cali40this-host dst -g cali-set-endpoint-mark
-A cali-forward-check -m comment --comment "cali:jmhU0ODogX-Zfe5g" -m comment --comment "To kubernetes service" -m set ! --match-set cali40this-host dst -j cali-set-endpoint-mark
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
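In this output the headless nginx Service is expected to have no rules (clusterIP: None creates nothing to proxy), but my-dep and my-service normally should appear. If kube-proxy runs in ipvs mode the rules live in ipvs rather than iptables; a minimal sketch for telling the two apart (the exact log wording is an assumption, and ipvsadm must be installed):
# confirm which proxy mode kube-proxy started with
kubectl logs -n kube-system kube-proxy-87lbj | grep -i proxier
# ipvs mode: Service rules are listed here instead of in iptables
ipvsadm -Ln
# iptables mode: look for the per-Service KUBE-SVC-* chains
iptables-save | grep KUBE-SVC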
  7. Is the CNI network plugin working?
    The CNI plugin provides the flat Pod network, forwarding, and communication between Pods on different nodes
    Again, get the Pod IP and curl it to see whether it returns the expected content (a cross-node test is sketched below)
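A simple cross-node check is to pick two Pods scheduled on different nodes (see the earlier wide listing) and make sure one can reach the other; a minimal sketch (assumes the my-pod2 image contains ping):
# my-pod2 runs on k8s-node2; 10.244.36.122 belongs to web-0 on k8s-node1
kubectl exec my-pod2 -- ping -c 3 10.244.36.122
# also confirm the calico node agents are healthy on every node
kubectl get pods -n kube-system -o wide | grep calico-node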

Summary (a quick health-check sketch follows this list):

  1. If kube-apiserver is stopped, kubectl commands no longer work
  2. If kube-controller-manager is stopped, a Deployment can still be created, but it will not create any Pods
  3. If kube-scheduler is stopped, Pods cannot be scheduled and stay Pending
  4. If kubelet is stopped, the Node shows NotReady
  5. If kube-proxy is stopped, Service traffic is no longer forwarded
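Before drilling into any single component, a quick overall health check helps narrow down which of the cases above you are in; a minimal sketch:
# control-plane and add-on Pods (kubeadm clusters)
kubectl get pods -n kube-system
# node readiness; NotReady usually points at kubelet or the container runtime
kubectl get nodes
# legacy component status view (deprecated, but still available in v1.19)
kubectl get cs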
