1 Scheduling overview

The scheduler watches, via the Kubernetes watch mechanism, for newly created pods that have no node assigned, and selects a suitable node for each of them. It runs in the kube-system namespace:

[root@server2 ~]# kubectl get pod -n kube-system


2 Factors that influence Kubernetes scheduling

2.1 nodeName (node-level)

[root@server2 ~]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: server3              # schedule directly onto server3
[root@server2 ~]# kubectl apply -f pod.yaml   # the pod lands straight on server3
[root@server2 ~]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: server13             # deliberately name a node that does not exist
[root@server2 ~]# kubectl get pod   # the pod never runs
NAME    READY   STATUS    RESTARTS   AGE
nginx   0/1     Pending   0          12s
Drawback: nodeName bypasses the scheduler completely and goes straight to the named node; if that node does not exist, the pod stays Pending.

2.2 nodeSelector (node-level)

[root@server2 ~]# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disktype: ssd                # required node label
[root@server2 ~]# kubectl apply -f pod.yaml   # create
pod/nginx created
[root@server2 ~]# kubectl get pod   # the pod stays Pending: no node carries a matching label yet
NAME    READY   STATUS    RESTARTS   AGE
nginx   0/1     Pending   0          103s
[root@server2 ~]# kubectl label nodes server4 disktype=ssd   # label server4 with disktype=ssd
node/server4 labeled
[root@server2 ~]# kubectl get pod   # now the pod is Running
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          5m19s
[root@server2 ~]# kubectl get node --show-labels   # confirm the node label was added

[root@server2 ~]# kubectl label nodes server4 disktype-   # a trailing "-" removes the label
node/server4 unlabeled
[root@server2 ~]# kubectl get pod   # the pod keeps running, because it has already been scheduled there
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          10m
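Note that every label listed under nodeSelector must be present on a node for it to qualify; the entries are ANDed. A sketch with a second, hypothetical label:

```yaml
# Hypothetical sketch: a node must carry BOTH labels to receive this pod
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disktype: ssd
    gpu: "true"      # assumed extra label; label values are strings, hence the quotes
```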

2.3 Affinity and anti-affinity (node-level)


Official scheduling documentation

[root@server2 ~]# kubectl delete -f pod.yaml   # remove the previous pod
pod "nginx" deleted
[root@server2 ~]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:                                       # node affinity
      requiredDuringSchedulingIgnoredDuringExecution:   # hard rule: must be satisfied
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype      # label key to match
            operator: In       # operator
            values:
            - ssd              # required value
[root@server2 ~]# kubectl apply -f pod.yaml   # create
pod/nginx created
[root@server2 ~]# kubectl get pod   # Pending: no node has the label yet
NAME    READY   STATUS    RESTARTS   AGE
nginx   0/1     Pending   0          19s
[root@server2 ~]# kubectl label nodes server3 disktype=ssd   # label server3
node/server3 labeled
[root@server2 ~]# kubectl get pod   # now Running
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          2m57s
[root@server2 ~]# kubectl label nodes server4 disktype=ssd   # label server4 as well
node/server4 labeled
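Besides In, matchExpressions accepts the operators NotIn, Exists, DoesNotExist, Gt and Lt. A sketch that excludes nodes instead of selecting them (the hdd value is hypothetical):

```yaml
# Hypothetical sketch: require that the node does NOT carry disktype=hdd
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: disktype
          operator: NotIn     # also available: Exists, DoesNotExist, Gt, Lt
          values:
          - hdd
```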

Adding a preferred (soft) rule

[root@server2 ~]# kubectl delete -f pod.yaml
pod "nginx" deleted
[root@server2 ~]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
      preferredDuringSchedulingIgnoredDuringExecution:   # soft rule: a preference
      - weight: 1
        preference:
          matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - server4            # prefer server4
[root@server2 ~]# kubectl apply -f pod.yaml
pod/nginx created
[root@server2 ~]# kubectl describe pod nginx   # both server3 and server4 carry disktype=ssd, but the preference for server4 wins, so the pod is scheduled there

[root@server2 ~]# vim pod.yaml

[root@server2 ~]# kubectl delete -f pod.yaml
pod "nginx" deleted
[root@server2 ~]# kubectl apply -f pod.yaml   # recreate
pod/nginx created

2.4 Pod affinity and anti-affinity

[root@server2 ~]# kubectl run nginx --image=nginx   # start a reference pod
pod/nginx created
[root@server2 ~]# kubectl get pod --show-labels   # the pod carries the label run=nginx
NAME    READY   STATUS    RESTARTS   AGE   LABELS
nginx   1/1     Running   0          13m   run=nginx
[root@server2 ~]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    podAffinity:                  # pod affinity
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: run              # demo must run alongside a pod labeled run=nginx
            operator: In
            values:
            - nginx
        topologyKey: kubernetes.io/hostname   # topology domain: here the node hostname, but it could also span zones or clusters
[root@server2 ~]# kubectl apply -f pod.yaml   # create
pod/demo created
[root@server2 ~]# kubectl get pod -o wide   # demo and nginx both end up on server4
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
demo    1/1     Running   0          68s   10.244.2.17   server4   <none>           <none>
nginx   1/1     Running   0          20m   10.244.2.16   server4   <none>           <none>

Pod anti-affinity

[root@server2 ~]# kubectl delete -f pod.yaml   # remove the previous pod
pod "demo" deleted
[root@server2 ~]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    podAntiAffinity:              # anti-affinity: keep demo away from run=nginx pods
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: run
            operator: In
            values:
            - nginx
        topologyKey: kubernetes.io/hostname
[root@server2 ~]# kubectl apply -f pod.yaml
pod/demo created
[root@server2 ~]# kubectl get pod -o wide   # demo and nginx now run on different nodes
NAME    READY   STATUS    RESTARTS      AGE   IP            NODE      NOMINATED NODE   READINESS GATES
demo    1/1     Running   0             13s   10.244.1.26   server3   <none>           <none>
nginx   1/1     Running   1 (45m ago)   11h   10.244.2.18   server4   <none>           <none>
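A common application of pod anti-affinity is spreading the replicas of one Deployment across nodes for availability; the pods repel their own label. A hypothetical sketch (the name web and its labels are assumptions):

```yaml
# Hypothetical sketch: no two replicas of this Deployment may share a node
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: web        # repel pods carrying our own label
            topologyKey: kubernetes.io/hostname
```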

2.5 Taints

Clean up the environment before the experiment:
[root@server2 ~]# kubectl delete -f pod.yaml
pod "demo" deleted
[root@server2 ~]# kubectl get all
NAME        READY   STATUS    RESTARTS        AGE
pod/nginx   1/1     Running   1 (4h30m ago)   15h
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    4d23h
service/mysql        ClusterIP   None            <none>        3306/TCP   4d15h
service/mysql-read   ClusterIP   10.101.37.222   <none>        3306/TCP   4d15h
NAME                     READY   AGE
statefulset.apps/mysql   0/0     39h
[root@server2 ~]# kubectl delete statefulsets.apps mysql   # delete the StatefulSet controller
[root@server2 ~]# kubectl delete svc mysql        # delete the service
[root@server2 ~]# kubectl delete svc mysql-read   # delete the service
[root@server2 ~]# kubectl delete pvc --all        # delete all PVCs
Test:
[root@server2 ~]# kubectl get pod   # only one pod is running now
NAME    READY   STATUS    RESTARTS        AGE
nginx   1/1     Running   1 (4h36m ago)   15h
[root@server2 ~]# kubectl get pod -o wide   # it is scheduled on server4
NAME    READY   STATUS    RESTARTS        AGE   IP            NODE      NOMINATED NODE   READINESS GATES
nginx   1/1     Running   1 (4h36m ago)   15h   10.244.2.18   server4   <none>           <none>
[root@server2 ~]# kubectl taint node server4 key1=v1:NoExecute   # taint server4; pods without a toleration for key1=v1 are evicted immediately
node/server4 tainted
[root@server2 ~]# kubectl get pod   # the pod has been evicted
No resources found in default namespace.
[root@server2 ~]# kubectl describe nodes | grep Taints   # list the taints on all nodes
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             <none>
Taints:             key1=v1:NoExecute
[root@server2 ~]# kubectl describe nodes server4 | grep Taints   # taints on server4 only
Taints:             key1=v1:NoExecute
[root@server2 ~]# kubectl taint node server4 key1:NoExecute-   # remove the taint from server4
node/server4 untainted
[root@server2 ~]# vim pod.yaml
[root@server2 ~]# kubectl apply -f pod.yaml
[root@server2 ~]# kubectl get pod -o wide   # one pod on server3, one on server4
NAME                          READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
web-server-85b98978db-565cs   1/1     Running   0          8s    10.244.1.27   server3   <none>           <none>
web-server-85b98978db-hrwbm   1/1     Running   0          8s    10.244.2.20   server4   <none>           <none>
[root@server2 ~]# kubectl taint node server4 key1=v1:NoSchedule   # NoSchedule: pods without a matching toleration are no longer scheduled here, but running pods are not evicted
node/server4 tainted
[root@server2 ~]# vim pod.yaml

[root@server2 ~]# kubectl apply -f pod.yaml   # create
deployment.apps/web-server configured
[root@server2 ~]# kubectl get pod -o wide   # the new replicas all land on server3
NAME                          READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
web-server-85b98978db-2zw8q   1/1     Running   0          39s   10.244.1.28   server3   <none>           <none>
web-server-85b98978db-565cs   1/1     Running   0          38m   10.244.1.27   server3   <none>           <none>
web-server-85b98978db-btcst   1/1     Running   0          39s   10.244.1.29   server3   <none>           <none>
web-server-85b98978db-hrwbm   1/1     Running   0          38m   10.244.2.20   server4   <none>           <none>

Adding tolerations

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      tolerations:              # add a toleration
      - operator: "Exists"      # no key given, so every key and value matches
        effect: "NoSchedule"    # tolerate the NoSchedule effect
[root@server2 ~]# kubectl apply -f pod.yaml   # create
deployment.apps/web-server configured
[root@server2 ~]# kubectl get pod -o wide   # the taints on server2 and server4 are now tolerated, so pods are scheduled on both
NAME                          READY   STATUS    RESTARTS   AGE     IP            NODE      NOMINATED NODE   READINESS GATES
web-server-7fdd6fbffc-7l4r5   1/1     Running   0          7m5s    10.244.2.22   server4   <none>           <none>
web-server-7fdd6fbffc-92m56   1/1     Running   0          7m14s   10.244.2.21   server4   <none>           <none>
web-server-7fdd6fbffc-nhbdk   1/1     Running   0          7m6s    10.244.0.22   server2   <none>           <none>
web-server-7fdd6fbffc-xzs7t   1/1     Running   0          7m13s   10.244.1.30   server3   <none>           <none>
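To tolerate one specific taint rather than everything, spell out key, value and effect with the Equal operator. A sketch matching the key1=v1 taint used above:

```yaml
# Tolerate only the exact taint key1=v1:NoSchedule
tolerations:
- key: "key1"
  operator: "Equal"
  value: "v1"
  effect: "NoSchedule"
```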
[root@server2 ~]# kubectl taint node server3 key1=v1:NoExecute   # evict everything from server3
node/server3 tainted
[root@server2 ~]# kubectl get pod -o wide   # the pods on server3 were evicted to server2 and server4, because the toleration only covers NoSchedule
NAME                          READY   STATUS    RESTARTS      AGE   IP            NODE      NOMINATED NODE   READINESS GATES
web-server-7fdd6fbffc-7l4r5   1/1     Running   1 (52m ago)   16h   10.244.2.24   server4   <none>           <none>
web-server-7fdd6fbffc-92m56   1/1     Running   1 (52m ago)   16h   10.244.2.23   server4   <none>           <none>
web-server-7fdd6fbffc-nhbdk   1/1     Running   1 (52m ago)   16h   10.244.0.24   server2   <none>           <none>
web-server-7fdd6fbffc-tcl79   1/1     Running   0             62s   10.244.2.25   server4   <none>           <none>
[root@server2 ~]# vim pod.yaml

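The edit made to pod.yaml here is not shown. A toleration that would produce the result below, pods allowed on server2, server3 and server4 despite their NoSchedule and NoExecute taints, is a bare Exists with no effect, which matches every taint:

```yaml
# Sketch of the assumed edit: with neither key nor effect given,
# this toleration matches all taints on all nodes
tolerations:
- operator: "Exists"
```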
[root@server2 ~]# kubectl apply -f pod.yaml   # apply
deployment.apps/web-server configured
[root@server2 ~]# kubectl get pod -o wide   # pods can now be scheduled on server2, server3 and server4
NAME                          READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
web-server-5d7f8b7699-28zjb   1/1     Running   0          36s   10.244.2.27   server4   <none>           <none>
web-server-5d7f8b7699-5wscd   1/1     Running   0          46s   10.244.2.26   server4   <none>           <none>
web-server-5d7f8b7699-9qsgj   1/1     Running   0          46s   10.244.1.32   server3   <none>           <none>
web-server-5d7f8b7699-kcc6g   1/1     Running   0          46s   10.244.1.33   server3   <none>           <none>
web-server-5d7f8b7699-mjs2q   1/1     Running   0          30s   10.244.0.29   server2   <none>           <none>
web-server-5d7f8b7699-ngxrx   1/1     Running   0          34s   10.244.0.28   server2   <none>           <none>
[root@server2 ~]# kubectl delete -f pod.yaml   # clean up
deployment.apps "web-server" deleted
[root@server2 ~]# kubectl taint node server3 key1:NoExecute-   # remove the taint from server3
node/server3 untainted
[root@server2 ~]# kubectl taint node server4 key1:NoSchedule-   # remove the taint from server4
node/server4 untainted

3 Commands that affect pod scheduling

3.1 cordon - stop scheduling

[root@server2 ~]# kubectl cordon server3   # stop scheduling onto server3
node/server3 cordoned
[root@server2 ~]# kubectl get nodes   # list the nodes
NAME      STATUS                     ROLES                  AGE     VERSION
server2   Ready                      control-plane,master   5d18h   v1.23.5
server3   Ready,SchedulingDisabled   <none>                 5d18h   v1.23.5    # scheduling on server3 is now disabled
server4   Ready                      <none>                 5d18h   v1.23.5
[root@server2 ~]# vim pod.yaml

[root@server2 ~]# kubectl apply -f pod.yaml   # create
deployment.apps/web-server configured
[root@server2 ~]# kubectl get pod -o wide   # server3 receives no new pods
NAME                          READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
web-server-85b98978db-cjww9   1/1     Running   0          27s   10.244.2.33   server4   <none>           <none>
web-server-85b98978db-gsn2s   1/1     Running   0          36s   10.244.2.31   server4   <none>           <none>
web-server-85b98978db-nbrn9   1/1     Running   0          28s   10.244.2.32   server4   <none>           <none>
web-server-85b98978db-zm44t   1/1     Running   0          36s   10.244.2.30   server4   <none>           <none>

3.2 drain - evict pods and stop scheduling

[root@server2 ~]# kubectl uncordon server3   # first re-enable scheduling on server3
node/server3 uncordoned
[root@server2 ~]# kubectl drain server4   # drain
cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/kube-flannel-ds-24kvl, kube-system/kube-proxy-jvgb2      # DaemonSet-managed pods block the drain
[root@server2 ~]# kubectl drain server4 --ignore-daemonsets   # drain again, ignoring DaemonSets
[root@server2 ~]# kubectl get pod -o wide   # everything was evicted from server4 and now runs on server3
NAME                          READY   STATUS    RESTARTS   AGE     IP            NODE      NOMINATED NODE   READINESS GATES
web-server-85b98978db-2pdtp   1/1     Running   0          2m58s   10.244.1.38   server3   <none>           <none>
web-server-85b98978db-csrfp   1/1     Running   0          2m58s   10.244.1.35   server3   <none>           <none>
web-server-85b98978db-dht2r   1/1     Running   0          2m58s   10.244.1.36   server3   <none>           <none>
web-server-85b98978db-gwhsw   1/1     Running   0          2m58s   10.244.1.37   server3   <none>           <none>
[root@server2 ~]# kubectl get  node
NAME      STATUS                     ROLES                  AGE     VERSION
server2   Ready                      control-plane,master   5d18h   v1.23.5
server3   Ready                      <none>                 5d18h   v1.23.5
server4   Ready,SchedulingDisabled   <none>                 5d18h   v1.23.5   # pods evicted and scheduling disabled

3.3 delete - remove a node

Note: before deleting a node, drain the pods on it first, then delete it.
[root@server2 ~]# kubectl delete nodes server4   # delete the node
node "server4" deleted
[root@server2 ~]# kubectl get nodes   # server4 is gone
NAME      STATUS   ROLES                  AGE     VERSION
server2   Ready    control-plane,master   5d19h   v1.23.5
server3   Ready    <none>                 5d19h   v1.23.5
To re-add server4:
[root@server4 ~]# systemctl restart kubelet   # restart kubelet on server4
[root@server2 ~]# kubectl get nodes   # server4 registers itself again
NAME      STATUS   ROLES                  AGE     VERSION
server2   Ready    control-plane,master   5d19h   v1.23.5
server3   Ready    <none>                 5d19h   v1.23.5
server4   Ready    <none>                 12s     v1.23.5
