Kubernetes CKA Certified Operations Engineer Notes: Kubernetes Application Lifecycle Management

  • 1. Workflow of creating a Pod
  • 2. Main Pod attributes that affect scheduling
  • 3. Impact of resource limits on Pod scheduling
  • 4. nodeSelector & nodeAffinity
  • 5. Taints & Tolerations
  • 6. nodeName
  • 7. DaemonSet controller
  • 8. Analyzing scheduling failures

1. Workflow of Creating a Pod

Kubernetes uses a controller architecture built on the list-watch mechanism, which decouples the interactions between components.

kubectl run nginx --image=nginx
1. kubectl submits the pod-creation request to the apiserver.
2. The apiserver writes the request data to etcd.
3. The scheduler, watching the apiserver, sees that a new pod needs scheduling and picks a suitable node according to its scheduling algorithm.
4. The scheduler marks the pod with its decision, e.g. pod=k8s-node1.
5. The apiserver receives the scheduling result and writes it to etcd.
6. The kubelet on k8s-node1 sees the event and fetches the pod's details from the apiserver.
7. The kubelet calls the docker API to create the containers the pod needs.
8. The kubelet reports the pod's status back to the apiserver.
9. The apiserver writes the status to etcd.
kubectl get pods
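The decoupling that list-watch buys can be sketched as a toy model (not real Kubernetes code; the class and handler names are made up for illustration): components never call each other directly, each one watches the apiserver's store and reacts only to events it cares about.

```python
# Toy model of the list-watch pattern: the "apiserver" holds state
# (standing in for etcd) and notifies registered watchers on every update.

class APIServer:
    def __init__(self):
        self.store = {}          # stands in for etcd
        self.watchers = []

    def watch(self, handler):
        self.watchers.append(handler)

    def update(self, name, obj):
        self.store[name] = obj
        for handler in self.watchers:
            handler(name, dict(obj))   # each watcher gets its own copy

api = APIServer()

def scheduler(name, pod):
    # The scheduler only acts on pods that have no node assigned yet.
    if pod.get("nodeName") is None:
        pod["nodeName"] = "k8s-node1"  # pretend scheduling decision
        api.update(name, pod)

def kubelet(name, pod):
    # The kubelet only acts on pods bound to "its" node.
    if pod.get("nodeName") == "k8s-node1" and pod.get("phase") != "Running":
        pod["phase"] = "Running"       # pretend container creation
        api.update(name, pod)

api.watch(scheduler)
api.watch(kubelet)
api.update("nginx", {"image": "nginx", "nodeName": None, "phase": "Pending"})
print(api.store["nginx"])
# → {'image': 'nginx', 'nodeName': 'k8s-node1', 'phase': 'Running'}
```

Note how neither handler knows the other exists; each one only reads from and writes back to the apiserver, which is the decoupling the step list above describes.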

[root@k8s-master ~]# kubectl run --help
Create and run a particular image in a pod.

Examples:
  # Start a nginx pod.
  kubectl run nginx --image=nginx

  # Start a hazelcast pod and let the container expose port 5701.
  kubectl run hazelcast --image=hazelcast/hazelcast --port=5701

  # Start a hazelcast pod and set environment variables "DNS_DOMAIN=cluster" and "POD_NAMESPACE=default" in the container.
  kubectl run hazelcast --image=hazelcast/hazelcast --env="DNS_DOMAIN=cluster" --env="POD_NAMESPACE=default"

  # Start a hazelcast pod and set labels "app=hazelcast" and "env=prod" in the container.
  kubectl run hazelcast --image=hazelcast/hazelcast --labels="app=hazelcast,env=prod"

  # Dry run. Print the corresponding API objects without creating them.
  kubectl run nginx --image=nginx --dry-run=client

  # Start a nginx pod, but overload the spec with a partial set of values parsed from JSON.
  kubectl run nginx --image=nginx --overrides='{ "apiVersion": "v1", "spec": { ... } }'

  # Start a busybox pod and keep it in the foreground, don't restart it if it exits.
  kubectl run -i -t busybox --image=busybox --restart=Never

  # Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command.
  kubectl run nginx --image=nginx -- <arg1> <arg2> ... <argN>

  # Start the nginx pod using a different command and custom arguments.
  kubectl run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>

Options:
      --allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
      --attach=false: If true, wait for the Pod to start running, and then attach to the Pod as if 'kubectl attach ...' were called. Default false, unless '-i/--stdin' is set, in which case the default is true. With '--restart=Never' the exit code of the container process is returned.
      --cascade=true: If true, cascade the deletion of the resources managed by this resource (e.g. Pods created by a ReplicationController). Default true.
      --command=false: If true and extra arguments are present, use them as the 'command' field in the container, rather than the 'args' field which is the default.
      --dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.
      --env=[]: Environment variables to set in the container.
      --expose=false: If true, service is created for the container(s) which are run
      --field-manager='kubectl-run': Name of the manager used to track field ownership.
  -f, --filename=[]: to use to replace the resource.
      --force=false: If true, immediately remove resources from API and bypass graceful deletion. Note that immediate deletion of some resources may result in inconsistency or data loss and requires confirmation.
      --grace-period=-1: Period of time in seconds given to the resource to terminate gracefully. Ignored if negative. Set to 1 for immediate shutdown. Can only be set to 0 when --force is true (force deletion).
      --hostport=-1: The host port mapping for the container port. To demonstrate a single-machine container.
      --image='': The image for the container to run.
      --image-pull-policy='': The image pull policy for the container. If left empty, this value will not be specified by the client and defaulted by the server
  -k, --kustomize='': Process a kustomization directory. This flag can't be used together with -f or -R.
  -l, --labels='': Comma separated labels to apply to the pod(s). Will override previous values.
      --leave-stdin-open=false: If the pod is started in interactive mode or with stdin, leave stdin open after the first attach completes. By default, stdin will be closed after the first attach completes.
      --limits='': The resource requirement limits for this container. For example, 'cpu=200m,memory=512Mi'. Note that server side components may assign limits depending on the server configuration, such as limit ranges.
  -o, --output='': Output format. One of: json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file.
      --overrides='': An inline JSON override for the generated object. If this is non-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field.
      --pod-running-timeout=1m0s: The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running
      --port='': The port that this container exposes.
      --privileged=false: If true, run the container in privileged mode.
      --quiet=false: If true, suppress prompt messages.
      --record=false: Record current kubectl command in the resource annotation. If set to false, do not record the command. If set to true, record the command. If not set, default to updating the existing annotation value only if one already exists.
  -R, --recursive=false: Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.
      --requests='': The resource requirement requests for this container. For example, 'cpu=100m,memory=256Mi'. Note that server side components may assign requests depending on the server configuration, such as limit ranges.
      --restart='Always': The restart policy for this Pod. Legal values [Always, OnFailure, Never].
      --rm=false: If true, delete resources created in this command for attached containers.
      --save-config=false: If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.
      --serviceaccount='': Service account to set in the pod spec.
  -i, --stdin=false: Keep stdin open on the container(s) in the pod, even if nothing is attached.
      --template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
      --timeout=0s: The length of time to wait before giving up on a delete, zero means determine a timeout from the size of the object
  -t, --tty=false: Allocated a TTY for each container in the pod.
      --wait=false: If true, wait for resources to be gone before returning. This waits for finalizers.

Usage:
  kubectl run NAME --image=image [--env="key=value"] [--port=port] [--dry-run=server|client] [--overrides=inline-json] [--command] -- [COMMAND] [args...] [options]

Use "kubectl options" for a list of global command-line options (applies to all commands).
[root@k8s-master ~]# kubectl run nginx --image=nginx
pod/nginx created
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
nginx                               0/1     ContainerCreating   0          8s
web2                                1/1     Running             4          15d
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx                               1/1     Running   0          78s
web2                                1/1     Running   4          15d
[root@k8s-master ~]# kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
web                1/1     1            1           22d
[root@k8s-master ~]# kubectl describe pod nginx
Name:         nginx
Namespace:    default
Priority:     0
Node:         k8s-node2/10.0.0.63
Start Time:   Tue, 14 Dec 2021 22:24:34 +0800
Labels:       run=nginx
Annotations:  cni.projectcalico.org/podIP: 10.244.169.191/32
              cni.projectcalico.org/podIPs: 10.244.169.191/32
Status:       Running
IP:           10.244.169.191
IPs:
  IP:  10.244.169.191
Containers:
  nginx:
    Container ID:   docker://185b057460d60afd4965e7f8ae79e715cdca49c4dacd40eabd5a1c0d2c111dcf
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:9522864dd661dcadfd9958f9e0de192a1fdda2c162a35668ab6ac42b465f0603
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 14 Dec 2021 22:25:49 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8grtj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-8grtj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8grtj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From                Message
  ----    ------     ----   ----                -------
  Normal  Scheduled  2m13s  default-scheduler   Successfully assigned default/nginx to k8s-node2
  Normal  Pulling    2m11s  kubelet, k8s-node2  Pulling image "nginx"
  Normal  Pulled     58s    kubelet, k8s-node2  Successfully pulled image "nginx" in 1m12.991781419s
  Normal  Created    58s    kubelet, k8s-node2  Created container nginx
  Normal  Started    58s    kubelet, k8s-node2  Started container nginx

2. Main Pod Attributes That Affect Scheduling

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default
spec:
...
      containers:
      - image: lizhenliang/java-demo
        name: java-demo
        imagePullPolicy: Always
        livenessProbe:
          initialDelaySeconds: 30
          periodSeconds: 20
          tcpSocket:
            port: 8080
        resources: {}                      # basis for resource-based scheduling
      restartPolicy: Always
      schedulerName: default-scheduler     # which scheduler handles this Pod
      nodeName: ""
      nodeSelector: {}
      affinity: {}
      tolerations: []

3. Impact of Resource Limits on Pod Scheduling

Container resource limits:

  • resources.limits.cpu
  • resources.limits.memory

Minimum resources the container needs; used as the basis for resource allocation when the container is scheduled:

  • resources.requests.cpu
  • resources.requests.memory
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

K8s uses the request values to find a Node with enough free resources to schedule the Pod onto.
CPU units: either the m suffix or a decimal, e.g. 0.5 = 500m, 1 = 1000m.
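As a quick sanity check of the unit rule above, here is a tiny hypothetical helper (`to_millicores` is made up for illustration, not part of any Kubernetes API) that normalizes CPU quantities to millicores:

```python
def to_millicores(cpu):
    """Convert a CPU quantity like '250m', '0.5', or 1 to millicores."""
    cpu = str(cpu)
    if cpu.endswith("m"):          # already in millicores, e.g. "250m"
        return int(cpu[:-1])
    return int(float(cpu) * 1000)  # whole/fractional cores, e.g. "0.5"

print(to_millicores("250m"))  # 250
print(to_millicores("0.5"))   # 500
print(to_millicores(1))       # 1000
```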

[root@k8s-master ~]# cat dep.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web3
    image: nginx
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
[root@k8s-master ~]# kubectl apply -f dep.yaml
pod/web created
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
nginx                               1/1     Running             0          61m
web                                 0/1     ContainerCreating   0          12s
web2                                1/1     Running             4          15d
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx                               1/1     Running   0          61m
web                                 1/1     Running   0          20s
web2                                1/1     Running   4          15d
[root@k8s-master ~]# kubectl describe pod web
Name:         web
Namespace:    default
Priority:     0
Node:         k8s-node2/10.0.0.63
Start Time:   Tue, 14 Dec 2021 23:25:55 +0800
Labels:       <none>
Annotations:  cni.projectcalico.org/podIP: 10.244.169.132/32
              cni.projectcalico.org/podIPs: 10.244.169.132/32
Status:       Running
IP:           10.244.169.132
IPs:
  IP:  10.244.169.132
Containers:
  web3:
    Container ID:   docker://660aea8148685226b80d1f0e1ba7705919704d80c11d3209f9f3ae022cdee65b
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:9522864dd661dcadfd9958f9e0de192a1fdda2c162a35668ab6ac42b465f0603
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 14 Dec 2021 23:26:12 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:        250m
      memory:     64Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8grtj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-8grtj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8grtj
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From                Message
  ----    ------     ----  ----                -------
  Normal  Scheduled  44s   default-scheduler   Successfully assigned default/web to k8s-node2
  Normal  Pulling    43s   kubelet, k8s-node2  Pulling image "nginx"
  Normal  Pulled     27s   kubelet, k8s-node2  Successfully pulled image "nginx" in 15.795143763s
  Normal  Created    27s   kubelet, k8s-node2  Created container web3
  Normal  Started    27s   kubelet, k8s-node2  Started container web3
[root@k8s-master ~]# kubectl describe node l8s-node2
Error from server (NotFound): nodes "l8s-node2" not found
[root@k8s-master ~]# kubectl describe node k8s-node2
Name:               k8s-node2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-node2
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 172.16.1.63/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 10.244.169.128
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 21 Nov 2021 23:23:27 +0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  k8s-node2
  AcquireTime:     <unset>
  RenewTime:       Tue, 14 Dec 2021 23:26:58 +0800
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Tue, 14 Dec 2021 22:24:09 +0800   Tue, 14 Dec 2021 22:24:09 +0800   CalicoIsUp                   Calico is running on this node
  MemoryPressure       False   Tue, 14 Dec 2021 23:26:44 +0800   Sun, 21 Nov 2021 23:23:27 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Tue, 14 Dec 2021 23:26:44 +0800   Sun, 21 Nov 2021 23:23:27 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Tue, 14 Dec 2021 23:26:44 +0800   Sun, 21 Nov 2021 23:23:27 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Tue, 14 Dec 2021 23:26:44 +0800   Wed, 01 Dec 2021 10:48:31 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  10.0.0.63
  Hostname:    k8s-node2
Capacity:
  cpu:                2
  ephemeral-storage:  30185064Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             1863020Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  27818554937
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             1760620Ki
  pods:               110
System Info:
  Machine ID:                 c20304a03ec54a0fa8aab6469d0a16dc
  System UUID:                46874D56-AB2E-7867-1BD5-C67713201686
  Boot ID:                    593535db-8a33-4979-8ff6-11aa4e524832
  Kernel Version:             3.10.0-1160.45.1.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.11
  Kubelet Version:            v1.19.0
  Kube-Proxy Version:         v1.19.0
PodCIDR:                      10.244.2.0/24
PodCIDRs:                     10.244.2.0/24
Non-terminated Pods:          (12 in total)
  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
  default                     my-dep-5f8dfc8c78-j9fqp                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19d
  default                     nginx                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         62m
  default                     nginx-deployment-7f78c49b8f-ftzb7             500m (25%)    0 (0%)      0 (0%)           0 (0%)         4d17h
  default                     web                                           250m (12%)    500m (25%)  64Mi (3%)        128Mi (7%)     68s
  default                     web2                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15d
  kube-system                 calico-node-9r6zd                             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22d
  kube-system                 coredns-6d56c8448f-gcgrh                      100m (5%)     0 (0%)      70Mi (4%)        170Mi (9%)     23d
  kube-system                 kube-proxy-5qpgc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         23d
  kubernetes-dashboard        dashboard-metrics-scraper-7b59f7d4df-jxb4b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22d
  kubernetes-dashboard        kubernetes-dashboard-5dbf55bd9d-zpr7t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22d
  test                        my-dep-5f8dfc8c78-58sdk                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19d
  test                        my-dep-5f8dfc8c78-965w7                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                1100m (55%)  500m (25%)
  memory             134Mi (7%)   298Mi (17%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:              <none>

4. nodeSelector & nodeAffinity

nodeSelector: schedules a Pod onto Nodes whose labels match; if no node has a matching label, scheduling fails.
Purpose:

  • Exact matching of node labels
  • Pinning a Pod to a specific node

Label a node:
kubectl label nodes [node] key=value
Example: kubectl label nodes k8s-node1 disktype=ssd

apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  nodeSelector:
    disktype: "ssd"
  containers:
  - name: nginx
    image: nginx:1.19
[root@k8s-master ~]# kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml
[root@k8s-master ~]# vi pod.yaml
[root@k8s-master ~]# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: pod-ssd
spec:
  nodeSelector:
    disktype: "ssd"
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
[root@k8s-master ~]# kubectl apply -f pod.yaml
pod/pod-ssd created
[root@k8s-master ~]# kubectl get pod
NAME                                READY   STATUS              RESTARTS   AGE
nginx                               1/1     Running             0          7h10m
pod-ssd                             0/1     ContainerCreating   0          10s
web                                 1/1     Running             0          6h9m
web2                                1/1     Running             4          15d
[root@k8s-master ~]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
nginx                               1/1     Running   0          7h10m   10.244.169.191   k8s-node2   <none>           <none>
pod-ssd                             1/1     Running   0          18s     10.244.169.133   k8s-node2   <none>           <none>
web                                 1/1     Running   0          6h9m    10.244.169.132   k8s-node2   <none>           <none>
web2                                1/1     Running   4          15d     10.244.169.187   k8s-node2   <none>           <none>
[root@k8s-master ~]# kubectl label node k8s-node2 disktype-
node/k8s-node2 labeled
[root@k8s-master ~]# kubectl get node --show-labels
NAME         STATUS   ROLES    AGE   VERSION   LABELS
k8s-master   Ready    master   23d   v1.19.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node1    Ready    <none>   23d   v1.19.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux
k8s-node2    Ready    <none>   23d   v1.19.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux
[root@k8s-master ~]# vi pod.yaml
[root@k8s-master ~]# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: pod-ssd2
spec:
  nodeSelector:
    disktype: "ssd"
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
[root@k8s-master ~]# kubectl apply -f pod.yaml
pod/pod-ssd2 created
[root@k8s-master ~]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx                               1/1     Running   0          7h13m
pod-ssd                             1/1     Running   0          3m16s
pod-ssd2                            0/1     Pending   0          6s
web                                 1/1     Running   0          6h12m
web2                                1/1     Running   4          15d

nodeAffinity: node affinity works like nodeSelector in that it constrains which nodes a Pod may be scheduled onto based on node labels.
Compared with nodeSelector:

  • Richer matching logic, not just exact string equality
  • Scheduling is split into soft and hard policies rather than a single hard requirement
    • Hard (required): must be satisfied
    • Soft (preferred): best effort, not guaranteed

Operators: In, NotIn, Exists, DoesNotExist, Gt, Lt
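The manifests in this section only exercise In; as an illustrative sketch of the other operators (the label keys and values here, such as cpu-cores, are made up for illustration):

```yaml
# Exists matches any node that has the label key at all, no value needed;
# Gt treats the label value as a number and requires it to be greater.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: disktype
          operator: Exists      # node merely has a "disktype" label
        - key: cpu-cores
          operator: Gt          # label value, as a number, must be > 4
          values:
          - "4"
```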

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpu
            operator: In
            values:
            - nvidia-tesla
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: group
            operator: In
            values:
            - ai
  containers:
  - name: web
    image: nginx
[root@k8s-master ~]# vi nodeaffinity.yaml
[root@k8s-master ~]# cat nodeaffinity.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx1
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: with-node-affinity
    image: nginx
[root@k8s-master ~]# kubectl apply -f nodeaffinity.yaml
pod/nginx1 created
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx                               1/1     Running   0          7h25m
nginx1                              0/1     Pending   0          15s
pod-ssd                             1/1     Running   0          14m
pod-ssd2                            0/1     Pending   0          11m
web                                 1/1     Running   0          6h23m
web2                                1/1     Running   4          15d
[root@k8s-master ~]# vi nodeaffinity2.yaml
[root@k8s-master ~]# cat nodeaffinity2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx2
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: with-node-affinity
    image: nginx
[root@k8s-master ~]# kubectl apply -f nodeaffinity2.yaml
pod/nginx2 created
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
nginx                               1/1     Running             0          7h27m
nginx1                              0/1     Pending             0          2m28s
nginx2                              0/1     ContainerCreating   0          3s
pod-ssd                             1/1     Running             0          17m
pod-ssd2                            0/1     Pending             0          13m
web                                 1/1     Running             0          6h26m
web2                                1/1     Running             4          15d
[root@k8s-master ~]# kubectl label node k8s-node2 disktype=ssd
node/k8s-node2 labeled
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
nginx                               1/1     Running             0          7h30m
nginx1                              0/1     ContainerCreating   0          5m11s
nginx2                              1/1     Running             0          2m46s
pod-ssd                             1/1     Running             0          19m
pod-ssd2                            0/1     ContainerCreating   0          16m
web                                 1/1     Running             0          6h28m
web2                                1/1     Running             4          15d

5. Taints & Tolerations

Taints: keep Pods from being scheduled onto particular Nodes
Use cases:

  • Dedicated nodes, e.g. nodes with special hardware
  • Taint-based eviction

Add a taint:
kubectl taint node [node] key=value:[effect]
where [effect] can be:

  • NoSchedule: Pods will definitely not be scheduled here.
  • PreferNoSchedule: the scheduler tries not to schedule Pods here.
  • NoExecute: new Pods are not scheduled, and Pods already running on the Node are evicted.

Remove a taint:
kubectl taint node [node] key:[effect]-

[root@k8s-master ~]# kubectl taint node k8s-node2 disktype=ssd:NoSchedule
node/k8s-node2 tainted
[root@k8s-master ~]# kubectl describe node k8s-node2|grep Taint
Taints:             disktype=ssd:NoSchedule
[root@k8s-master ~]# kubectl describe node k8s-node1|grep Taint
Taints:             <none>
[root@k8s-master ~]# kubectl run nginx3 --image=nginx
pod/nginx3 created
[root@k8s-master ~]# kubectl get pods -o wide
NAME                                READY   STATUS              RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
nginx                               1/1     Running             0          7h40m   10.244.169.191   k8s-node2   <none>           <none>
nginx1                              1/1     Running             0          15m     10.244.169.129   k8s-node2   <none>           <none>
nginx2                              1/1     Running             0          13m     10.244.169.135   k8s-node2   <none>           <none>
nginx3                              0/1     ContainerCreating   0          18s     <none>           k8s-node1   <none>           <none>
pod-ssd                             1/1     Running             0          30m     10.244.169.133   k8s-node2   <none>           <none>
pod-ssd2                            1/1     Running             0          27m     10.244.169.134   k8s-node2   <none>           <none>
web                                 1/1     Running             0          6h39m   10.244.169.132   k8s-node2   <none>           <none>
web2                                1/1     Running             4          15d     10.244.169.187   k8s-node2   <none>           <none>
[root@k8s-master ~]# kubectl taint node k8s-node1 gpu=yes:NoSchedule
node/k8s-node1 tainted
[root@k8s-master ~]# kubectl describe node k8s-node1|grep Taint
Taints:             gpu=yes:NoSchedule
[root@k8s-master ~]# kubectl run nginx4 --image=nginx
pod/nginx4 created
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx                               1/1     Running   0          7h42m
nginx1                              1/1     Running   0          17m
nginx2                              1/1     Running   0          14m
nginx3                              1/1     Running   0          111s
nginx4                              0/1     Pending   0          7s
pod-ssd                             1/1     Running   0          31m
pod-ssd2                            1/1     Running   0          28m
web                                 1/1     Running   0          6h40m
web2                                1/1     Running   4          15d
[root@k8s-master ~]# kubectl describe node k8s-master|grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule

Tolerations: allow a Pod to be scheduled onto Nodes that carry matching Taints

apiVersion: v1
kind: Pod
metadata:
  name: pod-taints
spec:
  containers:
  - name: pod-taints
    image: busybox:latest
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
[root@k8s-master ~]# vi taint.yaml
[root@k8s-master ~]# cat taint.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: pod-taint
spec:
  tolerations:
  - key: "disktype"
    value: "ssd"
    effect: "NoSchedule"
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
[root@k8s-master ~]# kubectl apply -f taint.yaml
pod/pod-taint created
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
nginx                               1/1     Running             0          7h46m
nginx1                              1/1     Running             0          22m
nginx2                              1/1     Running             0          19m
nginx3                              1/1     Running             0          6m37s
nginx4                              0/1     Pending             0          4m53s
pod-ssd                             1/1     Running             0          36m
pod-ssd2                            1/1     Running             0          33m
pod-taint                           0/1     ContainerCreating   0          6s
web                                 1/1     Running             0          6h45m
web2                                1/1     Running             4          15d
[root@k8s-master ~]# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
nginx                               1/1     Running   0          7h47m   10.244.169.191   k8s-node2   <none>           <none>
nginx1                              1/1     Running   0          22m     10.244.169.129   k8s-node2   <none>           <none>
nginx2                              1/1     Running   0          20m     10.244.169.135   k8s-node2   <none>           <none>
nginx3                              1/1     Running   0          7m      10.244.36.122    k8s-node1   <none>           <none>
nginx4                              0/1     Pending   0          5m16s   <none>           <none>      <none>           <none>
pod-ssd                             1/1     Running   0          37m     10.244.169.133   k8s-node2   <none>           <none>
pod-ssd2                            1/1     Running   0          33m     10.244.169.134   k8s-node2   <none>           <none>
pod-taint                           1/1     Running   0          29s     10.244.169.131   k8s-node2   <none>           <none>
web                                 1/1     Running   0          6h45m   10.244.169.132   k8s-node2   <none>           <none>
web2                                1/1     Running   4          15d     10.244.169.187   k8s-node2   <none>           <none>
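The tolerations above use the Equal operator with an explicit value. A variant worth knowing (a sketch, not taken from the transcript above; the key "gpu" is illustrative): with the Exists operator no value is given, so the toleration matches any taint with that key.

```yaml
tolerations:
- key: "gpu"
  operator: "Exists"      # matches gpu=yes:NoSchedule, gpu=no:NoSchedule, etc.
  effect: "NoSchedule"
```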

6. nodeName

nodeName: names a node directly; the Pod is placed on that Node without going through the scheduler.

apiVersion: v1
kind: Pod
metadata:
  name: pod-example
  labels:
    app: nginx
spec:
  nodeName: k8s-node2
  containers:
  - name: nginx
    image: nginx:1.15
[root@k8s-master ~]# kubectl run nginx5 --image=nginx
pod/nginx5 created
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx                               1/1     Running   0          7h52m
nginx1                              1/1     Running   0          27m
nginx2                              1/1     Running   0          24m
nginx3                              1/1     Running   0          11m
nginx4                              0/1     Pending   0          10m
nginx5                              0/1     Pending   0          9s
pod-ssd                             1/1     Running   0          41m
pod-ssd2                            1/1     Running   0          38m
pod-taint                           1/1     Running   0          5m16s
web                                 1/1     Running   0          6h50m
web2                                1/1     Running   4          15d
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx                               1/1     Running   0          7h52m
nginx1                              1/1     Running   0          27m
nginx2                              1/1     Running   0          24m
nginx3                              1/1     Running   0          11m
nginx4                              0/1     Pending   0          10m
nginx5                              0/1     Pending   0          17s
pod-ssd                             1/1     Running   0          41m
pod-ssd2                            1/1     Running   0          38m
pod-taint                           1/1     Running   0          5m24s
web                                 1/1     Running   0          6h50m
web2                                1/1     Running   4          15d
[root@k8s-master ~]# cp pod.yaml nodename.yaml
[root@k8s-master ~]# kubectl apply -f nodename.yaml
pod/nginx6 created
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
nginx                               1/1     Running             0          7h53m
nginx1                              1/1     Running             0          28m
nginx2                              1/1     Running             0          25m
nginx3                              1/1     Running             0          12m
nginx4                              0/1     Pending             0          11m
nginx5                              0/1     Pending             0          73s
nginx6                              0/1     ContainerCreating   0          8s
pod-ssd                             1/1     Running             0          42m
pod-ssd2                            1/1     Running             0          39m
pod-taint                           1/1     Running             0          6m20s
web                                 1/1     Running             0          6h51m
web2                                1/1     Running             4          15d
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
nginx                               1/1     Running             0          7h53m
nginx1                              1/1     Running             0          28m
nginx2                              1/1     Running             0          25m
nginx3                              1/1     Running             0          12m
nginx4                              0/1     Pending             0          11m
nginx5                              0/1     Pending             0          79s
nginx6                              0/1     ContainerCreating   0          14s
pod-ssd                             1/1     Running             0          42m
pod-ssd2                            1/1     Running             0          39m
pod-taint                           1/1     Running             0          6m26s
web                                 1/1     Running             0          6h51m
web2                                1/1     Running             4          15d
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx                               1/1     Running   0          7h53m
nginx1                              1/1     Running   0          28m
nginx2                              1/1     Running   0          26m
nginx3                              1/1     Running   0          13m
nginx4                              0/1     Pending   0          11m
nginx5                              0/1     Pending   0          91s
nginx6                              1/1     Running   0          26s
pod-ssd                             1/1     Running   0          43m
pod-ssd2                            1/1     Running   0          40m
pod-taint                           1/1     Running   0          6m38s

7. DaemonSet Controller

What a DaemonSet does:

  • Runs exactly one copy of a Pod on every Node
  • Automatically starts a Pod on each newly joined Node as well

Typical use cases: network plugins, monitoring agents, log agents (in the kube-system listing below, calico-node and kube-proxy are both DaemonSets — one Pod per node).

[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97769f7c7-z6npb   1/1     Running   5          23d
calico-node-4pwdc                         1/1     Running   5          23d
calico-node-9r6zd                         1/1     Running   5          23d
calico-node-vqzdj                         1/1     Running   5          23d
coredns-6d56c8448f-gcgrh                  1/1     Running   5          23d
coredns-6d56c8448f-tbsmv                  1/1     Running   5          23d
etcd-k8s-master                           1/1     Running   6          23d
kube-apiserver-k8s-master                 1/1     Running   12         23d
kube-controller-manager-k8s-master        1/1     Running   14         22d
kube-proxy-5qpgc                          1/1     Running   5          23d
kube-proxy-q2xfq                          1/1     Running   5          23d
kube-proxy-tvzpd                          1/1     Running   5          23d
kube-scheduler-k8s-master                 1/1     Running   16         22d
metrics-server-84f9866fdf-kt2nb           1/1     Running   5          16d

Example: deploy a log collection agent

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: filebeat
  template:
    metadata:
      labels:
        name: filebeat
    spec:
      containers:
      - name: log
        image: elastic/filebeat:7.3.2
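In the transcript that follows, filebeat Pods appear only on the two worker nodes: the master keeps its node-role.kubernetes.io/master:NoSchedule taint, which this DaemonSet does not tolerate. A minimal sketch of the tolerations that would put a filebeat Pod on the master as well (added under spec.template.spec):

```yaml
# Added under spec.template.spec of the filebeat DaemonSet above
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists       # match the taint regardless of its value
  effect: NoSchedule
```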
[root@k8s-master ~]# vi daemonset.yaml
[root@k8s-master ~]# kubectl apply -f daemonset.yaml
daemonset.apps/filebeat created
[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97769f7c7-z6npb   1/1     Running   5          23d
calico-node-4pwdc                         1/1     Running   5          23d
calico-node-9r6zd                         1/1     Running   5          23d
calico-node-vqzdj                         1/1     Running   5          23d
coredns-6d56c8448f-gcgrh                  1/1     Running   5          23d
coredns-6d56c8448f-tbsmv                  1/1     Running   5          23d
etcd-k8s-master                           1/1     Running   6          23d
kube-apiserver-k8s-master                 1/1     Running   12         23d
kube-controller-manager-k8s-master        1/1     Running   14         22d
kube-proxy-5qpgc                          1/1     Running   5          23d
kube-proxy-q2xfq                          1/1     Running   5          23d
kube-proxy-tvzpd                          1/1     Running   5          23d
kube-scheduler-k8s-master                 1/1     Running   16         22d
metrics-server-84f9866fdf-kt2nb           1/1     Running   5          16d
[root@k8s-master ~]# kubectl taint node k8s-node2 disktype-
node/k8s-node2 untainted
[root@k8s-master ~]# kubectl taint node k8s-node1 gpu-
node/k8s-node1 untainted
[root@k8s-master ~]# kubectl describe node |grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             <none>
Taints:             <none>
[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97769f7c7-z6npb   1/1     Running   5          23d
calico-node-4pwdc                         1/1     Running   5          23d
calico-node-9r6zd                         1/1     Running   5          23d
calico-node-vqzdj                         1/1     Running   5          23d
coredns-6d56c8448f-gcgrh                  1/1     Running   5          23d
coredns-6d56c8448f-tbsmv                  1/1     Running   5          23d
etcd-k8s-master                           1/1     Running   6          23d
filebeat-5pwh7                            1/1     Running   0          82s
filebeat-pt848                            1/1     Running   0          2m12s
kube-apiserver-k8s-master                 1/1     Running   12         23d
kube-controller-manager-k8s-master        1/1     Running   14         22d
kube-proxy-5qpgc                          1/1     Running   5          23d
kube-proxy-q2xfq                          1/1     Running   5          23d
kube-proxy-tvzpd                          1/1     Running   5          23d
kube-scheduler-k8s-master                 1/1     Running   16         22d
metrics-server-84f9866fdf-kt2nb           1/1     Running   5          16d
[root@k8s-master ~]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE    IP               NODE        NOMINATED NODE   READINESS GATES
nginx                               1/1     Running   0          8h     10.244.169.191   k8s-node2   <none>           <none>
nginx1                              1/1     Running   0          41m    10.244.169.129   k8s-node2   <none>           <none>
nginx2                              1/1     Running   0          39m    10.244.169.135   k8s-node2   <none>           <none>
nginx3                              1/1     Running   0          26m    10.244.36.122    k8s-node1   <none>           <none>
nginx4                              1/1     Running   0          24m    10.244.169.138   k8s-node2   <none>           <none>
nginx5                              1/1     Running   0          14m    10.244.169.137   k8s-node2   <none>           <none>
nginx6                              1/1     Running   0          13m    10.244.169.136   k8s-node2   <none>           <none>
pod-ssd                             1/1     Running   0          56m    10.244.169.133   k8s-node2   <none>           <none>
pod-ssd2                            1/1     Running   0          53m    10.244.169.134   k8s-node2   <none>           <none>
pod-taint                           1/1     Running   0          20m    10.244.169.131   k8s-node2   <none>           <none>
web                                 1/1     Running   0          7h5m   10.244.169.132   k8s-node2   <none>           <none>
web2                                1/1     Running   4          15d    10.244.169.187   k8s-node2   <none>           <none>
[root@k8s-master ~]# kubectl get pod -n kube-system -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
calico-kube-controllers-97769f7c7-z6npb   1/1     Running   5          23d     10.244.235.198   k8s-master   <none>           <none>
calico-node-4pwdc                         1/1     Running   5          23d     10.0.0.62        k8s-node1    <none>           <none>
calico-node-9r6zd                         1/1     Running   5          23d     10.0.0.63        k8s-node2    <none>           <none>
calico-node-vqzdj                         1/1     Running   5          23d     10.0.0.61        k8s-master   <none>           <none>
coredns-6d56c8448f-gcgrh                  1/1     Running   5          23d     10.244.169.184   k8s-node2    <none>           <none>
coredns-6d56c8448f-tbsmv                  1/1     Running   5          23d     10.244.36.120    k8s-node1    <none>           <none>
etcd-k8s-master                           1/1     Running   6          23d     10.0.0.61        k8s-master   <none>           <none>
filebeat-5pwh7                            1/1     Running   0          3m10s   10.244.36.123    k8s-node1    <none>           <none>
filebeat-pt848                            1/1     Running   0          4m      10.244.169.130   k8s-node2    <none>           <none>
kube-apiserver-k8s-master                 1/1     Running   12         23d     10.0.0.61        k8s-master   <none>           <none>
kube-controller-manager-k8s-master        1/1     Running   14         22d     10.0.0.61        k8s-master   <none>           <none>
kube-proxy-5qpgc                          1/1     Running   5          23d     10.0.0.63        k8s-node2    <none>           <none>
kube-proxy-q2xfq                          1/1     Running   5          23d     10.0.0.62        k8s-node1    <none>           <none>
kube-proxy-tvzpd                          1/1     Running   5          23d     10.0.0.61        k8s-master   <none>           <none>
kube-scheduler-k8s-master                 1/1     Running   16         22d     10.0.0.61        k8s-master   <none>           <none>
metrics-server-84f9866fdf-kt2nb           1/1     Running   5          16d     10.244.36.118    k8s-node1    <none>           <none>

8. Analyzing Scheduling Failures

Check where a Pod was scheduled:
kubectl get pod -o wide
Check why scheduling failed (the Events section of):
kubectl describe pod <pod-name>

Common causes:

  • Not enough CPU/memory on the nodes
  • A node taint with no matching toleration
  • No node matches the Pod's label selector
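Each of these causes surfaces as a recognizable phrase in the FailedScheduling event that kubectl describe pod shows. The message below is a made-up but typical example (not output from this cluster), used only to show what to grep for:

```shell
# Hypothetical FailedScheduling event message, modeled on the scheduler's format
msg='0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod did not tolerate, 2 Insufficient cpu.'

# Map phrases in the message back to the causes listed above
echo "$msg" | grep -q 'Insufficient cpu' && echo 'cause: node CPU/memory exhausted'
echo "$msg" | grep -q 'did not tolerate' && echo 'cause: taint without a toleration'
```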

Homework:
1. Create a Pod and schedule it onto a node carrying a given label
• Pod name: web
• Image: nginx
• Node label: disk=ssd

# vi web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  nodeSelector:
    disk: ssd
  containers:
  - name: nginx
    image: nginx
# kubectl apply -f web.yaml

2. Ensure one Pod runs on every node, taints notwithstanding
• Pod name: nginx
• Image: nginx

# vi nginx.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      tolerations:
      - operator: Exists    # tolerate every taint, so tainted nodes get a Pod too
      containers:
      - name: nginx
        image: nginx
# kubectl apply -f nginx.yaml

3. Count the nodes that are Ready (and not tainted NoSchedule) and write the result to the given file

kubectl describe node $(kubectl get nodes | grep -w Ready | awk '{print $1}') | grep Taint | grep -vc NoSchedule > /opt/node.txt
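The pipeline can be traced stage by stage against canned output; the node names, states, and taints below are invented for illustration:

```shell
# Fake `kubectl get nodes` output for a hypothetical three-node cluster
nodes='NAME         STATUS     ROLES    AGE   VERSION
k8s-master   Ready      master   23d   v1.19.0
k8s-node1    Ready      <none>   23d   v1.19.0
k8s-node2    NotReady   <none>   23d   v1.19.0'

# Stage 1: grep -w matches Ready but not NotReady; awk extracts the node names
ready=$(echo "$nodes" | grep -w Ready | awk '{print $1}')
echo "$ready"

# Fake `kubectl describe node` Taints lines for the two Ready nodes
taints='Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             <none>'

# Stage 2: count the Ready nodes whose Taints line does not mention NoSchedule
echo "$taints" | grep Taint | grep -vc NoSchedule
```

Without -w, a plain `grep Ready` would also match NotReady lines and overcount.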
