1. Kubernetes Resource Limits

Reference: https://kubernetes.io/zh/docs/
Kubernetes allocates resources to Pods through two settings: request and limit.

  • request (resource request): the minimum amount of resources a node must have available for the Pod to be scheduled and run there.
  • limit (resource limit): the upper bound on resources the Pod may consume while running; for example, if memory usage grows, this is the most memory it can use.

Resource types:

  • CPU, measured in cores.
  • memory, measured in bytes.
  • requests: the minimum resources a node must have for the Kubernetes scheduler to place the Pod there.
  • limits: the maximum resources the Pod may use once it is running.
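The two settings live under each container's resources field. A minimal sketch (the Pod name and values are illustrative, not from the examples below):

```yaml
# The scheduler only places this Pod on a node with at least 250m CPU and
# 128Mi memory unreserved (requests); at runtime the container is capped
# at 500m CPU and 256Mi memory (limits).
apiVersion: v1
kind: Pod
metadata:
  name: resources-demo        # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.16.1
    resources:
      requests:
        cpu: 250m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
```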

1.1 Memory and CPU Limits

Memory units:

  • K, M, G, T, P, E   # decimal units, based on powers of 1000.
  • Ki, Mi, Gi, Ti, Pi, Ei   # binary units, based on powers of 1024.
root@master1:~/yaml/resource-limit/limit-case# vim case1-pod-memory-limit.yml
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: limit-test-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: limit-test-pod
  template:
    metadata:
      labels:
        app: limit-test-pod
    spec:
      containers:
      - name: limit-test-container
        image: lorel/docker-stress-ng   # stress-test image
        resources:
          limits:                       # hard limit
            memory: "512Mi"
            cpu: 500m
          requests:                     # soft request
            memory: "100Mi"
        args: ["--vm", "2", "--vm-bytes", "256M"]  # start 2 stress workers
root@master1:~/yaml/resource-limit/limit-case# kubectl apply -f case1-pod-memory-limit.yml

The container can use at most 512Mi of memory.

1.2 LimitRange

By default, containers run with unbounded compute resources on a Kubernetes cluster. Using resource quotas, cluster administrators can restrict resource consumption and creation on a per-namespace basis. Within a namespace, a Pod or Container can consume as much CPU and memory as the namespace's resource quota allows. There is a concern that one Pod or Container could monopolize all available resources. A LimitRange is a policy object that constrains resource allocations (to Pods or Containers) in a namespace.

A LimitRange object provides constraints that can:

  • Enforce minimum and maximum compute resource usage per Pod or Container in a namespace.
  • Enforce minimum and maximum storage requests per PersistentVolumeClaim in a namespace.
  • Enforce a ratio between the request and limit values for a resource in a namespace.
  • Set default request/limit values for compute resources in a namespace and automatically inject them into Containers at runtime.
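The last point means a container submitted without any resources stanza is mutated at admission. A hedged sketch of what the API server would inject, assuming the namespace defaults are 500m CPU / 512Mi memory (a fragment, not a full manifest):

```yaml
# After admission, a container that declared no resources carries
# the namespace defaults from the LimitRange:
resources:
  limits:
    cpu: 500m          # from the LimitRange .default
    memory: 512Mi
  requests:
    cpu: 500m          # from the LimitRange .defaultRequest
    memory: 512Mi
```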

1.2.1 Create a RequestRatio (limit/request) Constraint

root@master1:~/yaml/resource-limit/limit-case# vim case3-LimitRange.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-test
  namespace: test
spec:
  limits:
  - type: Container                # constrained resource type
    max:
      cpu: "2"                     # max CPU per container
      memory: "2Gi"                # max memory per container
    min:
      cpu: "500m"                  # min CPU per container
      memory: "512Mi"              # min memory per container
    default:
      cpu: "500m"                  # default CPU limit per container
      memory: "512Mi"              # default memory limit per container
    defaultRequest:
      cpu: "500m"                  # default CPU request per container
      memory: "512Mi"              # default memory request per container
    maxLimitRequestRatio:
      cpu: 2                       # CPU limit/request ratio at most 2
      memory: 2                    # memory limit/request ratio at most 2
  - type: Pod
    max:
      cpu: "4"                     # max CPU per Pod
      memory: "4Gi"                # max memory per Pod
  - type: PersistentVolumeClaim
    max:
      storage: 50Gi                # max requests.storage per PVC
    min:
      storage: 30Gi                # min requests.storage per PVC
root@master1:~/yaml/resource-limit/limit-case# kubectl apply -f case3-LimitRange.yaml
limitrange/limitrange-test created
# view the container and pod limits
root@master1:~/yaml/resource-limit/limit-case# kubectl describe limitranges limitrange-test -n test
Name:                  limitrange-test
Namespace:             test
Type                   Resource  Min    Max   Default Request  Default Limit  Max Limit/Request Ratio
----                   --------  ---    ---   ---------------  -------------  -----------------------
Container              cpu       500m   2     500m             500m           2
Container              memory    512Mi  2Gi   512Mi            512Mi          2
Pod                    cpu       -      4     -                -              -
Pod                    memory    -      4Gi   -                -              -
PersistentVolumeClaim  storage   30Gi   50Gi  -                -              -
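With this LimitRange in place, a container requesting less CPU than the 500m Container minimum is rejected at admission time. A hypothetical example (the Pod name and values are illustrative):

```yaml
# This Pod is rejected when applied to the test namespace:
# its 200m CPU request is below the LimitRange Container min of 500m.
apiVersion: v1
kind: Pod
metadata:
  name: under-min-demo       # hypothetical name
  namespace: test
spec:
  containers:
  - name: app
    image: nginx:1.16.1
    resources:
      requests:
        cpu: 200m            # below the 500m minimum, so admission fails
      limits:
        cpu: 400m
```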

1.2.2 Verify the RequestRatio Constraint

root@master1:~/yaml/resource-limit/limit-case# vim case4-pod-RequestRatio-limit.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: test-wordpress-deployment-label
  name: test-wordpress-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-wordpress-selector
  template:
    metadata:
      labels:
        app: test-wordpress-selector
    spec:
      containers:
      - name: test-wordpress-nginx-container
        image: nginx:1.16.1
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 2
            memory: 1Gi
          requests:
            cpu: 2
            memory: 0.5Gi
      - name: test-wordpress-php-container
        image: php:5.6-fpm-alpine
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            #cpu: 1
            cpu: 2             # the limit/request ratio is now 4
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 512Mi
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: test-wordpress-service-label
  name: test-wordpress-service
  namespace: test
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30063
  selector:
    app: test-wordpress-selector
root@master1:~/yaml/resource-limit/limit-case# kubectl apply -f case4-pod-RequestRatio-limit.yaml
deployment.apps/test-wordpress-deployment created
service/test-wordpress-service created
# after applying, you will find the Deployment never becomes ready
root@master1:~/yaml/resource-limit/limit-case# kubectl get deploy -n test
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
test-wordpress-deployment   0/1     0            0           116s
# kubectl describe does not reveal the error either
root@master1:~/yaml/resource-limit/limit-case# kubectl describe deployments test-wordpress-deployment -n test
---
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  2m44s  deployment-controller  Scaled up replica set test-wordpress-deployment-5886648dd7 to 1
# use -o json to inspect the full status in JSON format
root@master1:~/yaml/resource-limit/limit-case# kubectl get deployments test-wordpress-deployment -n test -o json
---
{
    "lastTransitionTime": "2021-12-15T05:37:11Z",
    "lastUpdateTime": "2021-12-15T05:37:11Z",
    "message": "pods ... [cpu max limit to request ratio per Container is 2, but provided ratio is 4.000000]",
    "reason": "FailedCreate",
    "status": "True",
    "type": "ReplicaFailure"
}
# the configured maxLimitRequestRatio is 2, but this Pod's cpu limit/request ratio is 4, so the Pod cannot be created

1.2.3 Adjust the CPU Limit and Re-verify the RequestRatio Constraint

root@master1:~/yaml/resource-limit/limit-case# vim case4-pod-RequestRatio-limit.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: test-wordpress-deployment-label
  name: test-wordpress-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-wordpress-selector
  template:
    metadata:
      labels:
        app: test-wordpress-selector
    spec:
      containers:
      - name: test-wordpress-nginx-container
        image: nginx:1.16.1
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 2
            memory: 1Gi
          requests:
            cpu: 2
            memory: 0.5Gi
      - name: test-wordpress-php-container
        image: php:5.6-fpm-alpine
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1             # lowered to 1, so the ratio is now exactly 2
            #cpu: 2
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 512Mi
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: test-wordpress-service-label
  name: test-wordpress-service
  namespace: test
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30063
  selector:
    app: test-wordpress-selector
root@master1:~/yaml/resource-limit/limit-case# kubectl get pod -n test
NAME                                         READY   STATUS    RESTARTS   AGE
test-wordpress-deployment-69f88fd994-fgkbw   2/2     Running   0          77s

1.3 Resource Quotas (ResourceQuota)

A resource quota, defined by a ResourceQuota object, constrains aggregate resource consumption per namespace. It can limit the total number of objects of a given type in a namespace, as well as the total amount of compute resources that Pods in that namespace may consume.

Resource quotas work as follows:

  • Different teams work in different namespaces. Currently this is voluntary; future versions may make it mandatory through ACLs (Access Control Lists).
  • The cluster administrator creates one or more ResourceQuota objects for each namespace.
  • When users create resources (Pods, Services, etc.) in a namespace, the Kubernetes quota system tracks cluster resource usage to ensure it does not exceed the hard limits defined in the ResourceQuota.
  • If a resource creation or update request violates a quota constraint, the request fails (HTTP 403 FORBIDDEN) with a message explaining the constraint that would have been violated.
  • If quota is enabled in a namespace for compute resources (such as cpu and memory), users must set requests and limits for those resources; otherwise the quota system rejects Pod creation. Hint: use the LimitRanger admission controller to set defaults for Pods that declare no compute resource requirements.

Resource types supported by the quota mechanism:

Resource name      Description
limits.cpu         Across all non-terminated Pods, the sum of CPU limits cannot exceed this value.
limits.memory      Across all non-terminated Pods, the sum of memory limits cannot exceed this value.
requests.cpu       Across all non-terminated Pods, the sum of CPU requests cannot exceed this value.
requests.memory    Across all non-terminated Pods, the sum of memory requests cannot exceed this value.
hugepages-<size>   Across all non-terminated Pods, the total number of huge page requests of the given size cannot exceed this value.
cpu                Same as requests.cpu.
memory             Same as requests.memory.
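Besides plain CPU and memory, a quota can also cap huge pages and extended resources. A brief sketch (the object name and values are illustrative):

```yaml
# Caps the namespace's total huge-page and GPU requests.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: extended-quota-demo      # hypothetical name
  namespace: test
spec:
  hard:
    hugepages-2Mi: 100Mi             # total 2Mi huge pages requested
    requests.nvidia.com/gpu: 4       # total GPUs requested
```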

1.3.1 Create a Resource Quota

root@master1:~/yaml/resource-limit/limit-case# vim case6-ResourceQuota-magedu.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-test
  namespace: test
spec:
  hard:
    requests.cpu: "8"
    limits.cpu: "8"
    requests.memory: 4Gi
    limits.memory: 4Gi
    requests.nvidia.com/gpu: 4
    pods: "2"
    services: "6"
root@master1:~/yaml/resource-limit/limit-case# kubectl apply -f case6-ResourceQuota-magedu.yaml
resourcequota/quota-test created
root@master1:~/yaml/resource-limit/limit-case# kubectl get ResourceQuota -n test
NAME         AGE   REQUEST                                                                                             LIMIT
quota-test   10s   pods: 1/2, requests.cpu: 0/8, requests.memory: 0/4Gi, requests.nvidia.com/gpu: 0/4, services: 0/6   limits.cpu: 0/8, limits.memory: 0/4Gi

1.3.2 Verify the Resource Quota

root@master1:~/yaml/resource-limit/limit-case# vim case7-namespace-pod-limit-test.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: test-nginx-deployment-label
  name: test-nginx-deployment
  namespace: test
spec:
  replicas: 5              # five replicas, but the quota caps Pods at 2
  selector:
    matchLabels:
      app: test-nginx-selector
  template:
    metadata:
      labels:
        app: test-nginx-selector
    spec:
      containers:
      - name: test-nginx-container
        image: nginx:1.16.1
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 512Mi
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: test-nginx-service-label
  name: test-nginx-service
  namespace: test
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    #nodePort: 30033
  selector:
    app: test-nginx-selector
# apply it
root@master1:~/yaml/resource-limit/limit-case# kubectl apply -f case7-namespace-pod-limit-test.yaml
# only one new Pod is created; with the pre-existing Pod the namespace now holds 2 Pods,
# so the other 4 replicas cannot be created: the quota caps Pods at a maximum of 2
root@master1:~/yaml/resource-limit/limit-case# kubectl get pod -n test
NAME                                     READY   STATUS    RESTARTS   AGE
net-test4                                1/1     Running   12         25d
test-nginx-deployment-788f64656f-74vvq   1/1     Running   0          2m43s
root@master1:~/yaml/resource-limit/limit-case# kubectl get deployments test-nginx-deployment -n test
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
test-nginx-deployment   1/5     1            1           2m25s
root@master1:~/yaml/resource-limit/limit-case# kubectl get deployments test-nginx-deployment -n test -o json
{
    "lastTransitionTime": "2021-12-15T06:42:42Z",
    "lastUpdateTime": "2021-12-15T06:42:42Z",
    "message": "pods \"test-nginx-deployment-788f64656f-sspzd\" is forbidden: exceeded quota: quota-test, requested: pods=1, used: pods=2, limited: pods=2",
    "reason": "FailedCreate",
    "status": "True",
    "type": "ReplicaFailure"
}

2. Multi-Account Access with RBAC

2.1 Authorization Modules

  • Node - a special-purpose authorizer that grants permissions to kubelets based on the Pods they are scheduled to run. See Node authorization for more on this mode.
  • ABAC - attribute-based access control defines an access control paradigm in which access rights are granted to users through policies that combine attributes. Policies can use any type of attribute (user attributes, resource attributes, object attributes, environment attributes, etc.). See ABAC mode for more.
  • RBAC - role-based access control is a method of regulating access to computer or network resources based on the roles of individual users within an enterprise. In this context, a permission is the ability of a single user to perform a specific task, such as viewing, creating, or modifying a file. See RBAC mode for more.
    Once enabled, RBAC uses the rbac.authorization.k8s.io API group to drive authorization decisions, allowing administrators to dynamically configure permission policies through the Kubernetes API.
    To enable RBAC, start the API server with --authorization-mode=RBAC.
  • Webhook - a WebHook is an HTTP callback: an HTTP POST that fires when something happens, i.e. a simple event notification via HTTP POST. A web application implementing WebHooks posts a message to a URL when certain events occur. See Webhook mode for more.

2.2 RBAC Authorization

2.2.1 Create a ServiceAccount in the Target Namespace

root@master1:~# kubectl create serviceaccount user1  -n test
serviceaccount/user1 created
root@master1:~# kubectl get serviceaccounts -n test
NAME      SECRETS   AGE
default   1         28d
user1     1         22s

2.2.2 Create the Role Rules

root@master1:~/yaml/role# vim user1-role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: test
  name: user1-role
rules:
- apiGroups: ["*"]
  resources: ["pods/exec"]
  #verbs: ["*"]
  ##RO-Role
  verbs: ["get", "list", "watch", "create"]
- apiGroups: ["*"]
  resources: ["pods"]
  #verbs: ["*"]
  ##RO-Role
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]    # group name only; "apps/v1" (group/version) would not match
  resources: ["deployments"]
  #verbs: ["get","list","watch","create","update","patch","delete"]
  ##RO-Role
  verbs: ["get", "watch", "list"]
root@master1:~/yaml/role# kubectl apply -f user1-role.yaml
role.rbac.authorization.k8s.io/user1-role created
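If user1 needed the same read-only permissions across all namespaces rather than just test, the namespace-scoped Role would become a ClusterRole (bound with a ClusterRoleBinding). A minimal sketch with a hypothetical name:

```yaml
# Cluster-wide read-only access to pods and deployments.
# ClusterRoles are not namespaced, so there is no metadata.namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: user1-clusterrole      # hypothetical name
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]
```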

2.2.3 Bind the Role to the Account

root@master1:~/yaml/role# vim user1-role-bind.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: role-bind-user1
  namespace: test
subjects:
- kind: ServiceAccount
  name: user1            # the account to bind
  namespace: test
roleRef:
  kind: Role
  name: user1-role       # the Role to bind
  apiGroup: rbac.authorization.k8s.io
root@master1:~/yaml/role# kubectl apply -f user1-role-bind.yaml
rolebinding.rbac.authorization.k8s.io/role-bind-user1 created

2.2.4 Get the Token Name

root@master1:~/yaml/role# kubectl get secrets -n test|grep user1
user1-token-8vmzf     kubernetes.io/service-account-token   3      14m
root@master1:~/yaml/role# kubectl describe secrets user1-token-8vmzf -n test
---
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlhHcE5HMGdpaHRrVU81dzBjdHEyTHVoMXpBalhkbVFEYmNwck5IbHZoeW8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJ0ZXN0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InVzZXIxLXRva2VuLTh2bXpmIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6InVzZXIxIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjIzMDhmZGQtYWUxZC00ZjY5LTlmZDItMTVmZDk3MDg4NjdjIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OnRlc3Q6dXNlcjEifQ.L_HSzSM31HryB6eUEglqEWOusYzDUNm_1jCoklV7IxGA4vGCpSeK7q0xKSfCC3AlJX13PqBoFTcU3czwUHHn3JzWjdZS3APKINDUOx_IlOzp_16nZDHRvTALqvhHoU8NKvKzgTifogRm1i9qP7p_u1ZioJWWgU33nOX0gsA4jyqcRoTSFELhRlOqDsGkc8Y-6bKppxM6eSA_dxCFcRhCHtATUexkTqfQh4OrerDUf0FSp6lgmBvr7dTvcoOb3BwcbtEg4DA2LSoefiLi0yIospX8a09NdOMxdxrO7ejjdlfdKpOnpRokj0ilc6nHlu4eHa3MKUFNvPrEj37iXRJUPQ

2.2.5 Test It

Log in to the dashboard to test.

The account's permissions are get, watch, and list, so attempting a delete produces a warning.

The token expires after 15 minutes, so for convenient logins, generate a kube-config file to log in with instead of fetching a fresh token every time.

2.2.6 Log In with a kube-config File

2.2.6.1 Create the CSR File

root@master1:~/yaml/role/yaml-case# vim user1-csr.json
{
  "CN": "China",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

2.2.6.2 Issue the Certificate

root@master1:~/yaml/role/yaml-case# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem  -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/etc/kubeasz/clusters/k8s-01/ssl/ca-config.json -profile=kubernetes user1-csr.json | cfssljson -bare  user1

2.2.6.3 Generate the Regular User's kubeconfig File

root@master1:~/yaml/role/yaml-case# kubectl config set-cluster cluster1 --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://10.0.0.188:6443 --kubeconfig=user1.kubeconfig
Cluster "cluster1" set.

2.2.6.4 Set the Client Authentication Parameters

root@master1:~/yaml/role/yaml-case# cp *.pem /etc/kubernetes/ssl/
root@master1:~/yaml/role/yaml-case# kubectl config set-credentials user1 \
> --client-certificate=/etc/kubernetes/ssl/user1.pem \
> --client-key=/etc/kubernetes/ssl/user1-key.pem \
> --embed-certs=true \
> --kubeconfig=user1.kubeconfig
User "user1" set.
root@master1:~/yaml/role/yaml-case# cat user1.kubeconfig
apiVersion: v1
clusters:
- cluster:certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR1RENDQXFDZ0F3SUJBZ0lVUzVra1ZlUGFXWUZWRWlaWUlTdERvWXJTSW9nd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SUJjTk1qRXhNVEUyTVRVek9EQXdXaGdQTWpFeU1URXdNak14TlRNNE1EQmFNR0V4Q3pBSkJnTlYKQkFZVEFrTk9NUkV3RHdZRFZRUUlFd2hJWVc1bldtaHZkVEVMTUFrR0ExVUVCeE1DV0ZNeEREQUtCZ05WQkFvVApBMnM0Y3pFUE1BMEdBMVVFQ3hNR1UzbHpkR1Z0TVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTBxMGlPL3N0cmpMblJJczMvTWdLYlFLOERjMGEKbDhYaFBNV25GUW9KNmZPN1UzMXdzMEY1akx0L1E3TkVrc0xJSWg1aVlnZ2l6NCtjZ1J0TE1BUXE2dVlmZThLcQpNMUl1Q3lXZ210bkx1Q3QzOGsvWDVkVXRrWjFFSHR3Z1ZwdGhJblhudHEzZkYwdVQ0Mjk0S211S2FHS0U5VW1KCk9SL0pJQ0JNL0d0Yngva3UwR3ordUdZVzZOQ292QzRIdDVjdW52a0s1aDVWMGhTQ2dDSjBKVlQrVHllVHBFb2MKQXdCVWE0UW1MclFRWTFQRzdZK3lhZlZPbGhyM01zY2U5Y1QwWDZMZFY0SzRISGM2Vm5YaytGdm9HLy85M2Z2KwpMS0owbWVZcHhZTGdOT2dWSUNLOXB1b3Z3c2R4N2FwWTViVkVjdzlObEdGdkFsbVkvTDU1NkUyYnV3SURBUUFCCm8yWXdaREFPQmdOVkhROEJBZjhFQkFNQ0FRWXdFZ1lEVlIwVEFRSC9CQWd3QmdFQi93SUJBakFkQmdOVkhRNEUKRmdRVS9FaU85RjJicm90VHYyOUsvaUZxV1dUaHYxUXdId1lEVlIwakJCZ3dGb0FVL0VpTzlGMmJyb3RUdjI5SwovaUZxV1dUaHYxUXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQWlhSDcwNW93MVFNODdZQTFRN2xIYmM4alpJClRVOGw4b2tJWFhZWWtPdTBSb0xyOHg0a1V4TUR5MFAvTzNEVFFDak14YVl2VndnMUtKMU0vQWxKQVR1NVFRcG4KZ0NzZ2NCVUcwU0VJdC96Tkl1ejNTYkZxTkJEc0lkckVHSVptNVJNYmRXQXFpUk5sNXhpZzEvcWVBS1JEaTNESApLVWdWdGJ4Z3VsK2NId1F1UXhLZXZoeDNBWFp3cWNtME1wZ2dmQ2hWMGNQY2tZaGJDU205dGlQdnlXOWJ4djdHCi8yMUhHTjVzaENYRTBlVnE0dWxGUW9jSkhJTmJOVnVkUHRIVG50UmJ4dUtLd1B0RERKV1prZXhkaTJIN25NOXoKVjhOMGlyOU4rUDNsM1oxNSsvVDRyTHBMeTE1eHlJQmhyOENIMDk0eTVIdXB2eWhMd3orb05vUXdrWlE9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0Kserver: https://10.0.0.188:6443name: cluster1
contexts: null
current-context: ""
kind: Config
preferences: {}
users:
- name: user1user:client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQwRENDQXJpZ0F3SUJBZ0lVY2Y1OUl1REhvM3VhOE5rR0lIbFN5Mm1rZDA4d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SUJjTk1qRXhNakUzTURVd01UQXdXaGdQTWpBM01URXlNRFV3TlRBeE1EQmFNR0F4Q3pBSkJnTlYKQkFZVEFrTk9NUkF3RGdZRFZRUUlFd2RDWldsS2FXNW5NUkF3RGdZRFZRUUhFd2RDWldsS2FXNW5NUXd3Q2dZRApWUVFLRXdOck9ITXhEekFOQmdOVkJBc1RCbE41YzNSbGJURU9NQXdHQTFVRUF4TUZRMmhwYm1Fd2dnRWlNQTBHCkNTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFDb2RrT3dPZE1GVHk4Q1d3cGN4YW9xNWRxdFlKSysKSjh2VllTZlNlbEFJZEFHajlDVWFETXpZOFhhcXNjV09vZjFIb1lONUFpUlJZbFlQQ0V3bGNtalQ2c0pJTGRpNwpIeS9id0dXZExBTVdxTWtLcW00NHh1Q3ZrYStrb2JrZWU4bVpCL1k1eW1BMkRKUk5CclRjcHp6Z2ZnSGJQWlZqClVpK2w5SHN0bjdMN0pmcExic3pSZ2YxOERveldtMkJWNjhLQXo5WjEwSlgwRnNhL3N5TnQ4VlkwSURSTlYrQnIKY1dtbEo0Z0R1QW5qYTJ1bnZxV0tJeUZpY1FOU2dyWjI2bVpJbkFuZ2pEOTl5SmU2WFhrWWVGd0o0bTJ4MGVidQp1cWw3TjNuMFdoNUNZVHE0YjExTTJJUU5HRGlqWHZvMVNQTlhJdFkzNHA2U2dYOS93NkRmVnN1dEFnTUJBQUdqCmZ6QjlNQTRHQTFVZER3RUIvd1FFQXdJRm9EQWRCZ05WSFNVRUZqQVVCZ2dyQmdFRkJRY0RBUVlJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFkQmdOVkhRNEVGZ1FVNGI1QnkvRk8zN3dYZ0wwc0ZuaHlMT0xMMXZrdwpId1lEVlIwakJCZ3dGb0FVL0VpTzlGMmJyb3RUdjI5Sy9pRnFXV1RodjFRd0RRWUpLb1pJaHZjTkFRRUxCUUFECmdnRUJBRzBYbW5sK1ZDcllJQnRrV0JwTjh0RDJ0eUJtcmgrcVQrVVN4Q2RYNTJubWM0aUF0d1V0S3NIN0ZIYk0Kd1hadjBMTmtMUHkxOFgvTmkwWDNFYUxoOGRHOTJPRlJNeTVxR1crZGd6eUg1dkNxL01IY0lRcHpvdEFJdlRPWgpEZ2hLTFhsbGRrS0UrbURtbjN2VWtpbHZVY1Zub0RPRGltQXFMdUVzaFFNeHJ2b1dPeWNrQVpobjBzWWFXVGdPCi9za2Nzd0o1djNodGlSdFBsZ3RRZnRkcHFrVnJ5MnN4dFNza01mMGFpZmt0OEJPcC9iblYySDBUMSt0aEpsT1EKL1pSTEFwejJtUXBHekdXOUxUbEFveTZ1Ym83VU82UEJ0N2ZXd29MN3dLNUhwSUlWOVlLb1hJUjFIM0szUWEvZApKeXB4bFB0enlSbkJpQW8veTMwRC83amQyNmM9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0Kclient-key-data: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBcUhaRHNEblRCVTh2QWxzS1hNV3FLdVhhcldDU3ZpZkwxV0VuMG5wUUNIUUJvL1FsCkdnek0yUEYycXJIRmpxSDlSNkdEZVFJa1VXSldEd2hNSlhKbzArckNTQzNZdXg4djI4QmxuU3dERnFqSkNxcHUKT01iZ3I1R3ZwS0c1SG52Sm1RZjJPY3BnTmd5VVRRYTAzS2M4NEg0QjJ6MlZZMUl2cGZSN0xaK3kreVg2UzI3TQowWUg5ZkE2TTFwdGdWZXZDZ00vV2RkQ1Y5QmJHdjdNamJmRldOQ0EwVFZmZ2EzRnBwU2VJQTdnSjQydHJwNzZsCmlpTWhZbkVEVW9LMmR1cG1TSndKNEl3L2ZjaVh1bDE1R0hoY0NlSnRzZEhtN3JxcGV6ZDU5Rm9lUW1FNnVHOWQKVE5pRURSZzRvMTc2TlVqelZ5TFdOK0tla29GL2Y4T2czMWJMclFJREFRQUJBb0lCQUJPMGtjSmhZUyt6elhubgpFRlU5d2VQMnN4ZW92a0dFQWpIWmhZRDNVYmxMYUkyM0YwZnV5MTl0RDBaME9QbXdOU0pWNEQwZFpRWW9ESTBCCm1YYWY1V2MwaExsUXM1TmYySWRLQUJqY2R4Z0ZjazdQRk1tTGFlamZqNzRnTkxrK0haeks4NkJhN2Rva3FveEEKQnBQdzlBd0djVTBsN1AyTE5ZdWlCMjZVeWFqYTNjKzZoczljSTlmRE81dTBpNTNkbmdDeEUrdlA5SCtUcmd5MAowR3Y4Q0FOaWQ5eXdsSlFVZjZydGZUVTVoa1ozZjVqdlBZWGpLME5OK1ZrVjZHV3lDdXA3cXRkK3g0QzZWbE1lCkJLQnZLNGhFL0pYNTQ1Y3A2eGE3UlUzQzZRcXQzelBCaDNtYzZnbE1FL3NyR0tJZi96bmI2VjZpL01YSmZZM3AKU1dCY0Mva0NnWUVBMG9PejhJMG1od2JNN1hrN2FzZ1NhSzlQdTdtd0UwMjFudmxCL3k2RVNxcEZ3bG15MFl6MApRczlHak12emlZTzFaeEYreVBIV3J2eUJlWlNieHdxZmxlaHF2V1diUHRPdWtkQnFMa2hYK3FaVldGZlZJMDNVClRCWC9mL2dHZUk2TDNIeDNZbEJXQ0s1Y1FXL01hOG40WlBCb3JaRUZyWU1lMTZ5WkVGUG4zaDhDZ1lFQXpOeDkKWmFlU3VTN1RUeDdiRXFqOW54RlFaYVJKcEVLWGQ0SDBlZ3RRM0VXVVY2cEwybXNLNHBqTHE5bHdHUHhXWC9CaQp4dG5jbXNvVHR6QjRGWEJhQlhSNkRjczcxczM0N1h6T2dCaFU0SktlbnNVWm9aa2FIZzdxZ0RJbXEvNktDT1FlCkUxTjBpT3ZKblZSVVlnZnBseGF0T2tRQkdBbm1ZbUtJYlJlQ0JMTUNnWUVBaTdQallpd0orV25GN1lLYXI4NSsKaVFKdXc0SURHNHhpajFHVFBxbThHV0RPVXAvOFQ1eGZMVWNvNXA4aXk0dWdndm5WVGIxUVgyZ3E5R2h1eUxTQQpHNWZWM2tManQ5bjY2OEdIOVpjRTY4NGVyVFg4dUNVYVVqUDNEeEdtR2JOZmxiN3o2MGF0RWEzRWc1aVI3S1pvCk5YUmx3Mm1PZnd1WkdEL3VoQ3Rxb0xrQ2dZQkQ4MWE4b3lxdHRmUnRLQVR1V1pOV2NiM0RHUTA4S01KbzUzZ2EKQ3lyVkJWZEJCTUdJUHowckVCZHVkdjhScXBGVDNUNUdTdms3ZG8rM2thSWpLbE1Sd0NMRDlJZHlwbnRNK3JyYwpEallKRDFrQnZNclZxUnphbjRQMDVhMmlHeG5aL1NCa3RLZlF5clRqTkplUXRLTXNkRjhkRm5WdWJjbzNGQXZBCmM2Mnl0UUtCZ0NsZTRCZXVwWUtqaFJKTlYwM2pnMjZOQ1IvcUk2
WTFXdEFzVzc5S2swbnQwSHAxUUE0cnY0eEoKRnA0VmtjQXJYYU50amJwU1JYRThneXhnQW80VkFBamNSekhCNVBPYmtPbVdBZG8xaUtFTVlZQlBGM3JzcDdUQgo4cDA2dFljUzRJSHFBQzFBOUdzanRETnJPRGFTeUFxM1pjRU1ieDFGTEJPUGJhOC8vR0tKCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

2.2.6.5 Set the Context Parameters (contexts distinguish multiple clusters)

root@master1:~/yaml/role/yaml-case# kubectl config set-context cluster1 \
> --cluster=cluster1 \
> --user=user1 \
> --namespace=test \
> --kubeconfig=user1.kubeconfig
Context "cluster1" created.

2.2.6.6 Set the Default Context

root@master1:~/yaml/role/yaml-case# kubectl config use-context cluster1 --kubeconfig=user1.kubeconfig
Switched to context "cluster1".

2.2.6.7 Get the Token

root@master1:~/yaml/role/yaml-case# kubectl describe secrets user1-token-8vmzf -n test
---
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlhHcE5HMGdpaHRrVU81dzBjdHEyTHVoMXpBalhkbVFEYmNwck5IbHZoeW8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJ0ZXN0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InVzZXIxLXRva2VuLTh2bXpmIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6InVzZXIxIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjIzMDhmZGQtYWUxZC00ZjY5LTlmZDItMTVmZDk3MDg4NjdjIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OnRlc3Q6dXNlcjEifQ.L_HSzSM31HryB6eUEglqEWOusYzDUNm_1jCoklV7IxGA4vGCpSeK7q0xKSfCC3AlJX13PqBoFTcU3czwUHHn3JzWjdZS3APKINDUOx_IlOzp_16nZDHRvTALqvhHoU8NKvKzgTifogRm1i9qP7p_u1ZioJWWgU33nOX0gsA4jyqcRoTSFELhRlOqDsGkc8Y-6bKppxM6eSA_dxCFcRhCHtATUexkTqfQh4OrerDUf0FSp6lgmBvr7dTvcoOb3BwcbtEg4DA2LSoefiLi0yIospX8a09NdOMxdxrO7ejjdlfdKpOnpRokj0ilc6nHlu4eHa3MKUFNvPrEj37iXRJUPQ

2.2.6.8 Write the Token into the User's kube-config File

root@master1:~/yaml/role/yaml-case# vim user1.kubeconfig
apiVersion: v1
clusters:
- cluster:certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR1RENDQXFDZ0F3SUJBZ0lVUzVra1ZlUGFXWUZWRWlaWUlTdERvWXJTSW9nd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SUJjTk1qRXhNVEUyTVRVek9EQXdXaGdQTWpFeU1URXdNak14TlRNNE1EQmFNR0V4Q3pBSkJnTlYKQkFZVEFrTk9NUkV3RHdZRFZRUUlFd2hJWVc1bldtaHZkVEVMTUFrR0ExVUVCeE1DV0ZNeEREQUtCZ05WQkFvVApBMnM0Y3pFUE1BMEdBMVVFQ3hNR1UzbHpkR1Z0TVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTBxMGlPL3N0cmpMblJJczMvTWdLYlFLOERjMGEKbDhYaFBNV25GUW9KNmZPN1UzMXdzMEY1akx0L1E3TkVrc0xJSWg1aVlnZ2l6NCtjZ1J0TE1BUXE2dVlmZThLcQpNMUl1Q3lXZ210bkx1Q3QzOGsvWDVkVXRrWjFFSHR3Z1ZwdGhJblhudHEzZkYwdVQ0Mjk0S211S2FHS0U5VW1KCk9SL0pJQ0JNL0d0Yngva3UwR3ordUdZVzZOQ292QzRIdDVjdW52a0s1aDVWMGhTQ2dDSjBKVlQrVHllVHBFb2MKQXdCVWE0UW1MclFRWTFQRzdZK3lhZlZPbGhyM01zY2U5Y1QwWDZMZFY0SzRISGM2Vm5YaytGdm9HLy85M2Z2KwpMS0owbWVZcHhZTGdOT2dWSUNLOXB1b3Z3c2R4N2FwWTViVkVjdzlObEdGdkFsbVkvTDU1NkUyYnV3SURBUUFCCm8yWXdaREFPQmdOVkhROEJBZjhFQkFNQ0FRWXdFZ1lEVlIwVEFRSC9CQWd3QmdFQi93SUJBakFkQmdOVkhRNEUKRmdRVS9FaU85RjJicm90VHYyOUsvaUZxV1dUaHYxUXdId1lEVlIwakJCZ3dGb0FVL0VpTzlGMmJyb3RUdjI5SwovaUZxV1dUaHYxUXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQWlhSDcwNW93MVFNODdZQTFRN2xIYmM4alpJClRVOGw4b2tJWFhZWWtPdTBSb0xyOHg0a1V4TUR5MFAvTzNEVFFDak14YVl2VndnMUtKMU0vQWxKQVR1NVFRcG4KZ0NzZ2NCVUcwU0VJdC96Tkl1ejNTYkZxTkJEc0lkckVHSVptNVJNYmRXQXFpUk5sNXhpZzEvcWVBS1JEaTNESApLVWdWdGJ4Z3VsK2NId1F1UXhLZXZoeDNBWFp3cWNtME1wZ2dmQ2hWMGNQY2tZaGJDU205dGlQdnlXOWJ4djdHCi8yMUhHTjVzaENYRTBlVnE0dWxGUW9jSkhJTmJOVnVkUHRIVG50UmJ4dUtLd1B0RERKV1prZXhkaTJIN25NOXoKVjhOMGlyOU4rUDNsM1oxNSsvVDRyTHBMeTE1eHlJQmhyOENIMDk0eTVIdXB2eWhMd3orb05vUXdrWlE9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0Kserver: https://10.0.0.188:6443name: cluster1
contexts:
- context:cluster: cluster1namespace: testuser: user1name: cluster1
current-context: cluster1
kind: Config
preferences: {}
users:
- name: user1user:client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQwRENDQXJpZ0F3SUJBZ0lVY2Y1OUl1REhvM3VhOE5rR0lIbFN5Mm1rZDA4d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SUJjTk1qRXhNakUzTURVd01UQXdXaGdQTWpBM01URXlNRFV3TlRBeE1EQmFNR0F4Q3pBSkJnTlYKQkFZVEFrTk9NUkF3RGdZRFZRUUlFd2RDWldsS2FXNW5NUkF3RGdZRFZRUUhFd2RDWldsS2FXNW5NUXd3Q2dZRApWUVFLRXdOck9ITXhEekFOQmdOVkJBc1RCbE41YzNSbGJURU9NQXdHQTFVRUF4TUZRMmhwYm1Fd2dnRWlNQTBHCkNTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFDb2RrT3dPZE1GVHk4Q1d3cGN4YW9xNWRxdFlKSysKSjh2VllTZlNlbEFJZEFHajlDVWFETXpZOFhhcXNjV09vZjFIb1lONUFpUlJZbFlQQ0V3bGNtalQ2c0pJTGRpNwpIeS9id0dXZExBTVdxTWtLcW00NHh1Q3ZrYStrb2JrZWU4bVpCL1k1eW1BMkRKUk5CclRjcHp6Z2ZnSGJQWlZqClVpK2w5SHN0bjdMN0pmcExic3pSZ2YxOERveldtMkJWNjhLQXo5WjEwSlgwRnNhL3N5TnQ4VlkwSURSTlYrQnIKY1dtbEo0Z0R1QW5qYTJ1bnZxV0tJeUZpY1FOU2dyWjI2bVpJbkFuZ2pEOTl5SmU2WFhrWWVGd0o0bTJ4MGVidQp1cWw3TjNuMFdoNUNZVHE0YjExTTJJUU5HRGlqWHZvMVNQTlhJdFkzNHA2U2dYOS93NkRmVnN1dEFnTUJBQUdqCmZ6QjlNQTRHQTFVZER3RUIvd1FFQXdJRm9EQWRCZ05WSFNVRUZqQVVCZ2dyQmdFRkJRY0RBUVlJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFkQmdOVkhRNEVGZ1FVNGI1QnkvRk8zN3dYZ0wwc0ZuaHlMT0xMMXZrdwpId1lEVlIwakJCZ3dGb0FVL0VpTzlGMmJyb3RUdjI5Sy9pRnFXV1RodjFRd0RRWUpLb1pJaHZjTkFRRUxCUUFECmdnRUJBRzBYbW5sK1ZDcllJQnRrV0JwTjh0RDJ0eUJtcmgrcVQrVVN4Q2RYNTJubWM0aUF0d1V0S3NIN0ZIYk0Kd1hadjBMTmtMUHkxOFgvTmkwWDNFYUxoOGRHOTJPRlJNeTVxR1crZGd6eUg1dkNxL01IY0lRcHpvdEFJdlRPWgpEZ2hLTFhsbGRrS0UrbURtbjN2VWtpbHZVY1Zub0RPRGltQXFMdUVzaFFNeHJ2b1dPeWNrQVpobjBzWWFXVGdPCi9za2Nzd0o1djNodGlSdFBsZ3RRZnRkcHFrVnJ5MnN4dFNza01mMGFpZmt0OEJPcC9iblYySDBUMSt0aEpsT1EKL1pSTEFwejJtUXBHekdXOUxUbEFveTZ1Ym83VU82UEJ0N2ZXd29MN3dLNUhwSUlWOVlLb1hJUjFIM0szUWEvZApKeXB4bFB0enlSbkJpQW8veTMwRC83amQyNmM9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0Kclient-key-data: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBcUhaRHNEblRCVTh2QWxzS1hNV3FLdVhhcldDU3ZpZkwxV0VuMG5wUUNIUUJvL1FsCkdnek0yUEYycXJIRmpxSDlSNkdEZVFJa1VXSldEd2hNSlhKbzArckNTQzNZdXg4djI4QmxuU3dERnFqSkNxcHUKT01iZ3I1R3ZwS0c1SG52Sm1RZjJPY3BnTmd5VVRRYTAzS2M4NEg0QjJ6MlZZMUl2cGZSN0xaK3kreVg2UzI3TQowWUg5ZkE2TTFwdGdWZXZDZ00vV2RkQ1Y5QmJHdjdNamJmRldOQ0EwVFZmZ2EzRnBwU2VJQTdnSjQydHJwNzZsCmlpTWhZbkVEVW9LMmR1cG1TSndKNEl3L2ZjaVh1bDE1R0hoY0NlSnRzZEhtN3JxcGV6ZDU5Rm9lUW1FNnVHOWQKVE5pRURSZzRvMTc2TlVqelZ5TFdOK0tla29GL2Y4T2czMWJMclFJREFRQUJBb0lCQUJPMGtjSmhZUyt6elhubgpFRlU5d2VQMnN4ZW92a0dFQWpIWmhZRDNVYmxMYUkyM0YwZnV5MTl0RDBaME9QbXdOU0pWNEQwZFpRWW9ESTBCCm1YYWY1V2MwaExsUXM1TmYySWRLQUJqY2R4Z0ZjazdQRk1tTGFlamZqNzRnTkxrK0haeks4NkJhN2Rva3FveEEKQnBQdzlBd0djVTBsN1AyTE5ZdWlCMjZVeWFqYTNjKzZoczljSTlmRE81dTBpNTNkbmdDeEUrdlA5SCtUcmd5MAowR3Y4Q0FOaWQ5eXdsSlFVZjZydGZUVTVoa1ozZjVqdlBZWGpLME5OK1ZrVjZHV3lDdXA3cXRkK3g0QzZWbE1lCkJLQnZLNGhFL0pYNTQ1Y3A2eGE3UlUzQzZRcXQzelBCaDNtYzZnbE1FL3NyR0tJZi96bmI2VjZpL01YSmZZM3AKU1dCY0Mva0NnWUVBMG9PejhJMG1od2JNN1hrN2FzZ1NhSzlQdTdtd0UwMjFudmxCL3k2RVNxcEZ3bG15MFl6MApRczlHak12emlZTzFaeEYreVBIV3J2eUJlWlNieHdxZmxlaHF2V1diUHRPdWtkQnFMa2hYK3FaVldGZlZJMDNVClRCWC9mL2dHZUk2TDNIeDNZbEJXQ0s1Y1FXL01hOG40WlBCb3JaRUZyWU1lMTZ5WkVGUG4zaDhDZ1lFQXpOeDkKWmFlU3VTN1RUeDdiRXFqOW54RlFaYVJKcEVLWGQ0SDBlZ3RRM0VXVVY2cEwybXNLNHBqTHE5bHdHUHhXWC9CaQp4dG5jbXNvVHR6QjRGWEJhQlhSNkRjczcxczM0N1h6T2dCaFU0SktlbnNVWm9aa2FIZzdxZ0RJbXEvNktDT1FlCkUxTjBpT3ZKblZSVVlnZnBseGF0T2tRQkdBbm1ZbUtJYlJlQ0JMTUNnWUVBaTdQallpd0orV25GN1lLYXI4NSsKaVFKdXc0SURHNHhpajFHVFBxbThHV0RPVXAvOFQ1eGZMVWNvNXA4aXk0dWdndm5WVGIxUVgyZ3E5R2h1eUxTQQpHNWZWM2tManQ5bjY2OEdIOVpjRTY4NGVyVFg4dUNVYVVqUDNEeEdtR2JOZmxiN3o2MGF0RWEzRWc1aVI3S1pvCk5YUmx3Mm1PZnd1WkdEL3VoQ3Rxb0xrQ2dZQkQ4MWE4b3lxdHRmUnRLQVR1V1pOV2NiM0RHUTA4S01KbzUzZ2EKQ3lyVkJWZEJCTUdJUHowckVCZHVkdjhScXBGVDNUNUdTdms3ZG8rM2thSWpLbE1Sd0NMRDlJZHlwbnRNK3JyYwpEallKRDFrQnZNclZxUnphbjRQMDVhMmlHeG5aL1NCa3RLZlF5clRqTkplUXRLTXNkRjhkRm5WdWJjbzNGQXZBCmM2Mnl0UUtCZ0NsZTRCZXVwWUtqaFJKTlYwM2pnMjZOQ1IvcUk2
WTFXdEFzVzc5S2swbnQwSHAxUUE0cnY0eEoKRnA0VmtjQXJYYU50amJwU1JYRThneXhnQW80VkFBamNSekhCNVBPYmtPbVdBZG8xaUtFTVlZQlBGM3JzcDdUQgo4cDA2dFljUzRJSHFBQzFBOUdzanRETnJPRGFTeUFxM1pjRU1ieDFGTEJPUGJhOC8vR0tKCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==#添加tokentoken: eyJhbGciOiJSUzI1NiIsImtpZCI6IlhHcE5HMGdpaHRrVU81dzBjdHEyTHVoMXpBalhkbVFEYmNwck5IbHZoeW8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJ0ZXN0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InVzZXIxLXRva2VuLTh2bXpmIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6InVzZXIxIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjIzMDhmZGQtYWUxZC00ZjY5LTlmZDItMTVmZDk3MDg4NjdjIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OnRlc3Q6dXNlcjEifQ.L_HSzSM31HryB6eUEglqEWOusYzDUNm_1jCoklV7IxGA4vGCpSeK7q0xKSfCC3AlJX13PqBoFTcU3czwUHHn3JzWjdZS3APKINDUOx_IlOzp_16nZDHRvTALqvhHoU8NKvKzgTifogRm1i9qP7p_u1ZioJWWgU33nOX0gsA4jyqcRoTSFELhRlOqDsGkc8Y-6bKppxM6eSA_dxCFcRhCHtATUexkTqfQh4OrerDUf0FSp6lgmBvr7dTvcoOb3BwcbtEg4DA2LSoefiLi0yIospX8a09NdOMxdxrO7ejjdlfdKpOnpRokj0ilc6nHlu4eHa3MKUFNvPrEj37iXRJUPQ

2.2.6.9 Test the Login

