Kubernetes Scheduler: Assigning Pods to Nodes (node/pod affinity, pinning to a node)
1. Motivation
You can constrain a Pod so that it can only run on particular node(s), or so that it prefers to run on particular nodes. There are several ways to do this, and the recommended approaches all use label selectors to make the selection. Generally such constraints are unnecessary, because the scheduler automatically does a reasonable placement (for example, spreading Pods across nodes rather than placing them on nodes with insufficient free resources), but in some circumstances you may want more control over which node a Pod lands on: for example, to ensure that a Pod ends up on a machine with an SSD attached, or to co-locate Pods from two different services that communicate heavily in the same availability zone.
2. nodeSelector
nodeSelector is the simplest recommended form of node selection constraint. It is a field of PodSpec and specifies a map of key-value pairs.
For a Pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it may have other labels as well). The most common usage is a single key-value pair.
Adding a label to a node
Use kubectl label nodes <node-name> <label-key>=<label-value> to add a label to the node of your choice.
Use kubectl get nodes --show-labels to see the labels a node currently has. Two related label operations are sketched below.
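The walkthrough below also changes and resets node labels repeatedly. As a quick reference (a sketch with the same placeholder names as above): --overwrite replaces an existing value, and a trailing hyphen after the key removes the label entirely.
kubectl label nodes <node-name> <label-key>=<new-value> --overwrite   # replace an existing label value
kubectl label nodes <node-name> <label-key>-                          # remove the label from the node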
Creating a Pod that runs on nodes with a specific label
[root@k8smaster node]# more nodeselector.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-pod
spec:
  containers:
  - name: nodeselector-pod-ctn
    image: 192.168.23.100:5000/tomcat:v2
    imagePullPolicy: IfNotPresent
  restartPolicy: Never
  nodeSelector:
    disktype: ssd
[root@k8smaster node]# kubectl create -f nodeselector.yaml
pod/nodeselector-pod created
[root@k8smaster node]# kubectl get pods -o wide   # stuck in Pending because no node carries the required label
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nodeselector-pod 0/1 Pending 0 2m26s <none> <none> <none> <none>
[root@k8smaster node]# kubectl describe pod nodeselector-pod
Name:            nodeselector-pod
Namespace:       default
Priority:        0
Node:            <none>
Labels:          <none>
Annotations:     <none>
Status:          Pending
IP:
Containers:
  nodeselector-pod-ctn:
    Image:        192.168.23.100:5000/tomcat:v2
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vt7pl (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-vt7pl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-vt7pl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  disktype=ssd
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  22s (x2 over 104s)  default-scheduler  0/3 nodes are available: 3 node(s) didn't match node selector.
[root@k8smaster node]#
[root@k8smaster node]# kubectl label nodes k8snode01 disktype=ssd   # label the node
node/k8snode01 labeled
[root@k8smaster node]# kubectl get nodes --show-labels   # show node labels
NAME STATUS ROLES AGE VERSION LABELS
k8smaster Ready master 8d v1.15.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8smaster,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8snode01 Ready <none> 8d v1.15.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8snode01,kubernetes.io/os=linux
k8snode02 Ready <none> 8d v1.15.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8snode02,kubernetes.io/os=linux
[root@k8smaster node]#
[root@k8smaster node]# kubectl get pod -o wide   # the pod is now running on the designated node
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nodeselector-pod 1/1 Running 0 11m 10.244.1.28 k8snode01 <none> <none>
[root@k8smaster node]#
[root@k8smaster node]# kubectl label nodes k8snode01 disktype=ssd2 --overwrite=true   # changing a node label does not evict the pod
node/k8snode01 labeled
[root@k8smaster node]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8smaster Ready master 8d v1.15.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8smaster,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8snode01 Ready <none> 8d v1.15.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd2,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8snode01,kubernetes.io/os=linux
k8snode02 Ready <none> 8d v1.15.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8snode02,kubernetes.io/os=linux
[root@k8smaster node]#
[root@k8smaster node]# kubectl get pod -o wide   # the label no longer matches, but the running pod is unaffected
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nodeselector-pod 1/1 Running 0 35m 10.244.1.28 k8snode01 <none> <none>
[root@k8smaster node]#
3. Node Affinity
Node affinity is conceptually similar to nodeSelector: it lets you constrain which nodes a Pod can be scheduled on, based on labels on the node. There are currently two types of node affinity, requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution. You can think of them as "hard" and "soft": the former specifies rules that *must* be met for a Pod to be scheduled onto a node (just like nodeSelector, but with a more expressive syntax), while the latter specifies *preferences* that the scheduler will try to enforce but cannot guarantee. The IgnoredDuringExecution part of the names means that, similar to how nodeSelector works, if the labels on a node change at runtime such that the affinity rules on a Pod are no longer met, the Pod continues to run on that node. A planned variant, requiredDuringSchedulingRequiredDuringExecution, would behave like requiredDuringSchedulingIgnoredDuringExecution, except that it would evict Pods from nodes that cease to satisfy the Pods' node affinity requirements.
Node affinity is specified through the nodeAffinity field under the affinity field of PodSpec.
Node affinity supports the following operators: In (the label's value is in a given list), NotIn (the label's value is not in a given list), Exists (the label key exists), DoesNotExist (the label key does not exist), Gt (the label's value is greater than a given value), and Lt (the label's value is less than a given value). You can use NotIn and DoesNotExist to achieve node anti-affinity behavior, or use node taints to repel Pods from specific nodes. A couple of operator examples are sketched below.
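For illustration, a minimal fragment of a nodeSelectorTerms entry using operators other than In; the gpu-count and region labels here are hypothetical and do not exist in the cluster used in this article:
  nodeSelectorTerms:
  - matchExpressions:
    - key: gpu-count        # hypothetical label; Gt compares the value as an integer
      operator: Gt
      values:
      - "4"
    - key: region           # hypothetical label; Exists requires only the key, no values
      operator: Exists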
If you specify both nodeSelector and nodeAffinity, *both* must be satisfied for the Pod to be scheduled onto a candidate node.
If you specify multiple nodeSelectorTerms associated with a nodeAffinity type, the Pod can be scheduled onto a node if *one* of the nodeSelectorTerms is satisfied.
If you specify multiple matchExpressions associated with a single nodeSelectorTerms entry, the Pod can be scheduled onto a node only if *all* of the matchExpressions are satisfied.
If you modify or remove the labels of the node a Pod is scheduled on, the Pod will not be removed. In other words, affinity selection only takes effect at the time the Pod is scheduled.
The weight field in preferredDuringSchedulingIgnoredDuringExecution ranges from 1 to 100. For each node that meets all of the scheduling requirements (resource requests, RequiredDuringScheduling affinity expressions, and so on), the scheduler iterates over the elements of this field and adds the "weight" to a sum if the node matches the corresponding matchExpressions. This score is then combined with the scores of the node's other priority functions; the node(s) with the highest total score are the most preferred. These combination and weighting rules are illustrated by the sketch below.
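Putting these rules together, a minimal sketch (the zone label and its values are hypothetical and not used elsewhere in this article): the Pod requires a node matching either nodeSelectorTerms entry, and among qualifying nodes prefers zone-a (weight 80) over zone-b (weight 20).
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:          # terms are ORed: either entry may match
        - matchExpressions:         # expressions inside one term are ANDed
          - key: disktype
            operator: In
            values:
            - ssd
          - key: kubernetes.io/os
            operator: In
            values:
            - linux
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 80                  # higher weight, stronger preference
        preference:
          matchExpressions:
          - key: zone               # hypothetical label
            operator: In
            values:
            - zone-a
      - weight: 20
        preference:
          matchExpressions:
          - key: zone
            operator: In
            values:
            - zone-b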
Creating a Pod that must not run on nodes labeled disktype=ssd [hard requirement]
[root@k8smaster node]# kubectl label nodes k8snode01 disktype=ssd --overwrite   # reset the label to disktype=ssd
node/k8snode01 labeled
[root@k8smaster node]# kubectl label nodes k8snode02 disktype=ssd --overwrite   # reset the label to disktype=ssd
node/k8snode02 labeled
[root@k8smaster node]# more noderequire.yaml
apiVersion: v1
kind: Pod
metadata:
  name: noderequire-pod
spec:
  containers:
  - name: noderequire-pod-ctn
    image: 192.168.23.100:5000/tomcat:v2
    imagePullPolicy: IfNotPresent
  restartPolicy: Never
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: NotIn
            values:
            - ssd
[root@k8smaster node]# kubectl create -f noderequire.yaml
pod/noderequire-pod created
[root@k8smaster node]# kubectl get pod -o wide   # no node satisfies the rule, so the pod stays Pending
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
noderequire-pod 0/1 Pending 0 68s <none> <none> <none> <none>
[root@k8smaster node]# kubectl describe pod noderequire-pod
Name:            noderequire-pod
Namespace:       default
Priority:        0
Node:            <none>
Labels:          <none>
Annotations:     <none>
Status:          Pending
IP:
Containers:
  noderequire-pod-ctn:
    Image:        192.168.23.100:5000/tomcat:v2
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vt7pl (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-vt7pl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-vt7pl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age  From               Message
  ----     ------            ---- ----               -------
  Warning  FailedScheduling  31s  default-scheduler  0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 node(s) didn't match node selector.
[root@k8smaster node]# kubectl label nodes k8snode02 disktype=ssd2 --overwrite   # reset the label to disktype=ssd2
node/k8snode02 labeled
[root@k8smaster node]# kubectl get pod -o wide   # the rule is now satisfied and the pod runs
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
noderequire-pod 1/1 Running 0 2m18s 10.244.2.30 k8snode02 <none> <none>
[root@k8smaster node]# kubectl label nodes k8snode02 disktype=ssd --overwrite   # reset the label to disktype=ssd
node/k8snode02 labeled
[root@k8smaster node]# kubectl get pod -o wide   # the label no longer satisfies the rule, but the running pod is unaffected
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
noderequire-pod 1/1 Running 0 2m37s 10.244.2.30 k8snode02 <none> <none>
[root@k8smaster node]#
Creating a Pod that prefers to run on nodes labeled disktype=ssd [soft preference]
[root@k8smaster node]# kubectl label nodes k8snode02 disktype=ssd2 --overwrite   # reset the label to disktype=ssd2
node/k8snode02 labeled
[root@k8smaster node]# kubectl label nodes k8snode01 disktype=ssd2 --overwrite   # reset the label to disktype=ssd2
node/k8snode01 labeled
[root@k8smaster node]# more nodeprefer.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nodeprefer-pod
spec:
  containers:
  - name: nodeprefer-pod-ctn
    image: 192.168.23.100:5000/tomcat:v2
    imagePullPolicy: IfNotPresent
  restartPolicy: Never
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
[root@k8smaster node]# kubectl create -f nodeprefer.yaml
pod/nodeprefer-pod created
[root@k8smaster node]# kubectl get pod -o wide   # no node matches the preference, yet the pod still runs normally
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nodeprefer-pod 1/1 Running 0 18s 10.244.2.31 k8snode02 <none> <none>
[root@k8smaster node]#
4. Pod Affinity
Inter-pod affinity and anti-affinity let you constrain which nodes a Pod can be scheduled on *based on the labels of Pods already running on those nodes*, rather than based on labels on the nodes themselves. The rules take the form "this Pod should (or, for anti-affinity, should not) run on node X if node X is already running one or more Pods that satisfy rule Y". Y is expressed as a LabelSelector with an optional associated list of namespaces; unlike nodes, Pods are namespaced (and so are their labels), so a label selector over Pod labels must specify which namespaces the selector applies to. Conceptually, X is a topology domain such as a node, a rack, a cloud provider region, a cloud provider zone, and so on. It is expressed via topologyKey, which is the key of a node label the system uses to denote such a topology domain.
Note:
Inter-pod affinity and anti-affinity require substantial processing, which can significantly slow down scheduling in large clusters. They are not recommended in clusters larger than several hundred nodes.
Pod anti-affinity requires nodes to be consistently labeled: every node in the cluster must have an appropriate label matching topologyKey. If some or all nodes are missing the specified topologyKey label, it can lead to unintended behavior. A quick way to check the labeling is shown below.
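For example, to verify that every node carries the topologyKey label used later in this article (disktype, in this case), the -L flag prints the label as an extra column; an empty cell marks a node that would not participate in matching:
kubectl get nodes -L disktype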
As with node affinity, there are currently two types of pod affinity and anti-affinity, requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution, denoting "hard" and "soft" requirements respectively; see the description in the node affinity section above. An example of requiredDuringSchedulingIgnoredDuringExecution affinity would be "co-locate the Pods of service A and service B in the same zone, since they communicate heavily with each other", while an example of preferredDuringSchedulingIgnoredDuringExecution anti-affinity would be "spread the Pods of this service across zones" (a hard requirement would not make sense here, since you likely have more Pods than zones).
Inter-pod affinity is specified through the podAffinity field under the affinity field of PodSpec, and inter-pod anti-affinity through the podAntiAffinity field under the affinity field of PodSpec.
The legal operators for pod affinity and anti-affinity are In, NotIn, Exists, and DoesNotExist. Both rule types are sketched below.
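A minimal sketch combining both forms described above; the app values service-a/service-b and the zone label key are assumptions for illustration (the zone label key also varies across Kubernetes versions), not objects from this article's cluster:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard: land in a zone already running service-a
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - service-a
        topologyKey: failure-domain.beta.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:  # soft: prefer nodes not already running service-b
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - service-b
          topologyKey: kubernetes.io/hostname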
Create mypod2, which depends on mypod1 [hard requirement]
[root@k8smaster pod]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8smaster Ready master 8d v1.15.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8smaster,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8snode01 Ready <none> 8d v1.15.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd2,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8snode01,kubernetes.io/os=linux
k8snode02 Ready <none> 8d v1.15.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8snode02,kubernetes.io/os=linux
[root@k8smaster pod]# more pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod1
  labels:
    app: mypod1
spec:
  containers:
  - name: mypod1-ctn
    image: 192.168.23.100:5000/tomcat:v2
    imagePullPolicy: IfNotPresent
  restartPolicy: Never
  nodeSelector:
    disktype: ssd
[root@k8smaster pod]# more pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod2
  labels:
    app: mypod2
spec:
  containers:
  - name: mypod2-ctn
    image: 192.168.23.100:5000/tomcat:v2
    imagePullPolicy: IfNotPresent
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - mypod1
        topologyKey: disktype
  restartPolicy: Never
[root@k8smaster pod]# kubectl create -f pod2.yaml
pod/mypod2 created
[root@k8smaster pod]# kubectl get pod --show-labels -o wide   # no pod labeled app: mypod1 is running, so mypod2 stays Pending
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
mypod2 0/1 Pending 0 21s <none> <none> <none> <none> app=mypod2
[root@k8smaster pod]# kubectl describe pod mypod2   # check why it is Pending
Name:            mypod2
Namespace:       default
Priority:        0
Node:            <none>
Labels:          app=mypod2
Annotations:     <none>
Status:          Pending
IP:
Containers:
  mypod2-ctn:
    Image:        192.168.23.100:5000/tomcat:v2
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vt7pl (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-vt7pl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-vt7pl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age  From               Message
  ----     ------            ---- ----               -------
  Warning  FailedScheduling  38s  default-scheduler  0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 node(s) didn't match pod affinity rules, 2 node(s) didn't match pod affinity/anti-affinity.
[root@k8smaster pod]# kubectl create -f pod1.yaml
pod/mypod1 created
[root@k8smaster pod]# kubectl get pod --show-labels -o wide   # once a pod labeled app: mypod1 is running, mypod2 can run normally
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
mypod1 1/1 Running 0 13s 10.244.2.35 k8snode02 <none> <none> app=mypod1
mypod2 1/1 Running 0 67s 10.244.2.34 k8snode02 <none> <none> app=mypod2
[root@k8smaster pod]#
Create mypod3, which depends on mypod2 [soft preference]
[root@k8smaster pod]# more pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod3
  labels:
    app: mypod3
spec:
  containers:
  - name: mypod3-ctn
    image: 192.168.23.100:5000/tomcat:v2
    imagePullPolicy: IfNotPresent
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - mypod2
          topologyKey: disktype
  restartPolicy: Never
[root@k8smaster pod]# kubectl create -f pod3.yaml
pod/mypod3 created
[root@k8smaster pod]# kubectl get pod --show-labels -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
mypod1 1/1 Running 0 10m 10.244.2.35 k8snode02 <none> <none> app=mypod1
mypod2 1/1 Running 0 11m 10.244.2.34 k8snode02 <none> <none> app=mypod2
mypod3 1/1 Running 0 22s 10.244.2.36 k8snode02 <none> <none> app=mypod3
[root@k8smaster pod]# kubectl delete pod mypod2
pod "mypod2" deleted
[root@k8smaster pod]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8smaster Ready master 8d v1.15.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8smaster,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8snode01 Ready <none> 8d v1.15.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd2,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8snode01,kubernetes.io/os=linux
k8snode02 Ready <none> 8d v1.15.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8snode02,kubernetes.io/os=linux
[root@k8smaster pod]#
[root@k8smaster pod]# more pod2.yaml   # mypod2 is now pinned to nodes labeled disktype: ssd2
apiVersion: v1
kind: Pod
metadata:
  name: mypod2
  labels:
    app: mypod2
spec:
  containers:
  - name: mypod2-ctn
    image: 192.168.23.100:5000/tomcat:v2
    imagePullPolicy: IfNotPresent
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - mypod1
        topologyKey: disktype
  restartPolicy: Never
  nodeSelector:
    disktype: ssd2
[root@k8smaster pod]#
[root@k8smaster pod]# kubectl create -f pod2.yaml
pod/mypod2 created
[root@k8smaster pod]# kubectl get pod --show-labels -o wide   # mypod2 is pinned to disktype: ssd2, but mypod1 (app=mypod1) is in a different topology domain, so mypod2 stays Pending
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
mypod1 1/1 Running 0 12m 10.244.2.35 k8snode02 <none> <none> app=mypod1
mypod2 0/1 Pending 0 6s <none> <none> <none> <none> app=mypod2
mypod3 1/1 Running 0 2m56s 10.244.2.36 k8snode02 <none> <none> app=mypod3
[root@k8smaster pod]# kubectl describe pod mypod2   # check the reason
Name:            mypod2
Namespace:       default
Priority:        0
Node:            <none>
Labels:          app=mypod2
Annotations:     <none>
Status:          Pending
IP:
Containers:
  mypod2-ctn:
    Image:        192.168.23.100:5000/tomcat:v2
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vt7pl (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-vt7pl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-vt7pl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  disktype=ssd2
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age  From               Message
  ----     ------            ---- ----               -------
  Warning  FailedScheduling  24s  default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod affinity rules, 1 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't match node selector.
[root@k8smaster pod]#
5. nodeName
nodeName is the simplest form of node selection constraint, but because of its limitations it is typically not used. nodeName is a field of PodSpec. If it is non-empty, the scheduler ignores the Pod, and the kubelet running on the named node tries to run it. Thus, if nodeName is specified in the PodSpec, it takes precedence over the node selection methods above.
Some limitations of using nodeName to select nodes:
If the named node does not exist, the Pod will not be run, and in some cases it may be automatically deleted.
If the named node does not have the resources to accommodate the Pod, the Pod will fail, and its reason will indicate why, e.g. OutOfmemory or OutOfcpu.
Node names in cloud environments are not always predictable or stable.
A scheduler-friendly alternative for pinning is sketched after this list.
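As an aside, a Pod can be pinned to a specific node without bypassing the scheduler by requiring the node's kubernetes.io/hostname label (visible in the --show-labels output earlier) via node affinity; this is a sketch, not the pattern used in the walkthrough below:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname   # every node carries this label
            operator: In
            values:
            - k8snode01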
Create a Pod that runs on k8snode01
[root@k8smaster pod]# more pod4.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod4
  labels:
    app: mypod1
spec:
  containers:
  - name: mypod4-ctn
    image: 192.168.23.100:5000/tomcat:v2
    imagePullPolicy: IfNotPresent
  nodeName: k8snode01
  restartPolicy: Never
[root@k8smaster pod]# kubectl create -f pod4.yaml
pod/mypod4 created
[root@k8smaster pod]# kubectl get pod --show-labels -o wide   # after mypod4 starts, mypod2 also starts running, because mypod4 on k8snode01 carries the label app=mypod1
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
mypod1 1/1 Running 0 16m 10.244.2.35 k8snode02 <none> <none> app=mypod1
mypod2 1/1 Running 0 3m55s 10.244.1.31 k8snode01 <none> <none> app=mypod2
mypod3 1/1 Running 0 6m45s 10.244.2.36 k8snode02 <none> <none> app=mypod3
mypod4 1/1 Running 0 13s 10.244.1.30 k8snode01 <none> <none> app=mypod1
[root@k8smaster pod]#
Reference
Official Kubernetes documentation on assigning Pods to nodes: https://kubernetes.io/zh/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity