Environment: CentOS 7.9, Kubernetes 1.23.1

I. Autoscaling

1. Install Metrics Server

 wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml

Edit the YAML file:

On line 140, change the image to: image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.6.1
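
If kubectl top later fails with kubelet certificate errors (common on lab clusters whose kubelet serving certificates are not signed by the cluster CA), the metrics-server container args in components.yaml can also be given the insecure-TLS flag; treat this as an optional tweak:

        - --kubelet-insecure-tls        # append to the existing args list; skips kubelet cert verification (test environments only)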

kubectl apply -f components.yaml
kubectl get pods -n kube-system
kubectl top pods
kubectl top node

2. HPA

Deploy the podinfo application:

vim podinfo-dep.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
spec:
  selector:
    matchLabels:
      app: podinfo
  replicas: 2
  template:
    metadata:
      labels:
        app: podinfo
      annotations:
        prometheus.io/scrape: "true"
    spec:
      containers:
      - name: podinfod
        image: stefanprodan/podinfo:0.0.1
        imagePullPolicy: Always
        command:
        - ./podinfo
        - -port=9898
        - -logtostderr=true
        - -v=2
        volumeMounts:
        - name: metadata
          mountPath: /etc/podinfod/metadata
          readOnly: true
        ports:
        - containerPort: 9898
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /readyz
            port: 9898
          initialDelaySeconds: 1
          periodSeconds: 2
          failureThreshold: 1
        livenessProbe:
          httpGet:
            path: /healthz
            port: 9898
          initialDelaySeconds: 1
          periodSeconds: 3
          failureThreshold: 2
        resources:
          requests:
            memory: "32Mi"
            cpu: "1m"
          limits:
            memory: "256Mi"
            cpu: "100m"
      volumes:
      - name: metadata
        downwardAPI:
          items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations

kubectl apply -f podinfo-dep.yaml

vim podinfo-svc.yaml

---
apiVersion: v1
kind: Service
metadata:
  name: podinfo
  labels:
    app: podinfo
spec:
  type: NodePort
  ports:
  - port: 9898
    targetPort: 9898
    nodePort: 31198
    protocol: TCP
  selector:
    app: podinfo

kubectl apply -f podinfo-svc.yaml

Deploy the HPA:

vim podhpa.yaml

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
  namespace: default
spec:
  maxReplicas: 10
  minReplicas: 1
  metrics:
  - resource:
      name: cpu
      target:
        averageUtilization: 50
        type: Utilization
    type: Resource
  - resource:
      name: memory
      target:
        averageValue: 100Mi
        type: AverageValue
    type: Resource
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo

kubectl apply -f podhpa.yaml
kubectl get hpa

yum install -y httpd

ab -n 1000 -c 100 http://192.168.3.115:31198

Check that the Pods have been scaled out automatically.
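
While the load test runs, the scale-out can be watched live:

kubectl get hpa podinfo -w
kubectl get pods -l app=podinfo -w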

3. VPA

Vertical Pod Autoscaler (VPA): vertical Pod autoscaling. Instead of changing the replica count, it adjusts the CPU/memory requests of the Pod's containers. The VPA install script relies on a newer OpenSSL than CentOS 7 ships with, so OpenSSL 1.1.1 is built from source first:

yum install gcc gcc-c++ perl -y
wget https://www.openssl.org/source/openssl-1.1.1k.tar.gz --no-check-certificate && tar zxf openssl-1.1.1k.tar.gz && cd openssl-1.1.1k
./config
make && make install
mv /usr/local/bin/openssl /usr/local/bin/openssl.bak
mv apps/openssl /usr/local/bin
ln -s /usr/local/lib64/libssl.so.1.1 /usr/lib64/libssl.so.1.1
ln -s /usr/local/lib64/libcrypto.so.1.1 /usr/lib64/libcrypto.so.1.1
cd
git clone https://github.com/kubernetes/autoscaler.git
cd /root/autoscaler/vertical-pod-autoscaler/hack
./vpa-up.sh        # install the VPA components (recommender, updater, admission-controller)
kubectl get pods -n kube-system

Create a Deployment:

vim myvpa.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: vpa-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: 200m
            memory: 300Mi
---
apiVersion: v1
kind: Service
metadata:
  name: vpa-nginx-svc
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx

kubectl apply -f myvpa.yaml
kubectl get pods
kubectl get svc

Create the VPA:

vim vpa-nginx.yaml

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  name: nginx-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: vpa-nginx
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
    - containerName: "nginx"
      minAllowed:
        cpu: "100m"
        memory: "50Mi"
      maxAllowed:
        cpu: "2000m"
        memory: "2600Mi"

kubectl apply -f vpa-nginx.yaml
kubectl get vpa
kubectl  describe  vpa nginx-vpa

kubectl describe vpa nginx-vpa |tail -n 20
ab -c 100 -n 10000000 http://10.1.1.33/
kubectl describe vpa nginx-vpa |tail -n 20

II. Health Checks

liveness: liveness probe. Checks whether the container is still alive; on failure the kubelet kills the container and restarts it according to the Pod's restartPolicy.

readiness: readiness probe. Checks whether the Pod can serve traffic; on failure the Pod keeps waiting and is removed from the Service endpoints (it is not restarted).

Three handler types:

ExecAction: run a specified command inside the container

TCPSocketAction: check a TCP port of the container

HTTPGetAction: perform an HTTP GET request against the container

1. ExecAction

vim liveness-exec.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 300
    image: busybox
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5

kubectl apply -f liveness-exec.yaml
kubectl get pod  -w
kubectl describe pod liveness-exec

initialDelaySeconds: 5        start probing 5 seconds after the container starts

periodSeconds: 5              probe every 5 seconds

timeoutSeconds: 1             the probe must get a response within 1 second, otherwise that attempt counts as a failure

successThreshold: 1           one consecutive success is enough to mark the probe as passing

failureThreshold: 3           after 3 consecutive failures the container is restarted

After about 30 seconds /tmp/healthy is removed, the probe starts failing, and the container is restarted.
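
For reference, a liveness probe with all five tuning fields written out might look like this (values are illustrative):

livenessProbe:
  exec:
    command: ["cat", "/tmp/healthy"]
  initialDelaySeconds: 5
  periodSeconds: 5
  timeoutSeconds: 1
  successThreshold: 1        # must be 1 for liveness probes
  failureThreshold: 3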

2. HTTP probe

vim liveness-http.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: mirrorgooglecontainers/liveness
    args:
    - /server
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
        httpHeaders:
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 5
      periodSeconds: 5

kubectl apply -f liveness-http.yaml
kubectl get pod -w
kubectl describe pod liveness-http

3. TCP probe

vim liveness-tcp.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: httpd
  name: liveness-tcp
spec:
  containers:
  - name: httpd
    image: httpd
    imagePullPolicy: IfNotPresent
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 10

kubectl apply -f liveness-tcp.yaml
kubectl get pods -o wide

Change the httpd listening port from 80 to 8080 inside the container:

kubectl exec  -it liveness-tcp -- bash
cd conf/
ls
cat -n httpd.conf | grep 80
sed -i "52c Listen 8080" httpd.conf
sed -i "241c ServerName www.example.com:8080" httpd.conf
cat -n httpd.conf | grep 80
httpd -k restart

kubectl describe pod liveness-tcp

The TCP probe on port 80 now fails, so the container is restarted; after the restart the configuration is back to port 80 and the Pod is reachable again.
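
The restart can also be confirmed from the container's restart count:

kubectl get pod liveness-tcp -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'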

4. Readiness probe

vim readiness.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: readiness
spec:
  replicas: 3
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        readinessProbe:
          exec:
            command:
            - cat
            - /usr/local/apache2/htdocs/index.html
          initialDelaySeconds: 10
          periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: readiness-svc
spec:
  selector:
    app: httpd
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80

kubectl apply -f readiness.yaml
kubectl get pod

Delete index.html in one of the Pods:

kubectl exec -it readiness-5c86bbb5bb-5gx4r -- bash
cd htdocs/
ls
rm index.html
ls
exit

The Pod is not recreated; it is only marked NotReady and removed from the Service endpoints. To repair it, recreate index.html in that Pod:

kubectl exec -it readiness-5c86bbb5bb-5gx4r -- bash
cd htdocs/
ls
touch index.html
ls
exit
kubectl get pod
kubectl get endpoints

III. QoS

Pods fall into one of three QoS classes:

Guaranteed: every container in the Pod has both CPU and memory requests and limits, and they are equal (request = limit). Highest priority.

Burstable: at least one container has a CPU or memory request (request < limit). Medium priority.

BestEffort: no container sets any CPU or memory request or limit. Lowest priority.

kubectl create namespace qos-ns
vim gua-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: guapod
  name: gua-pod
  namespace: qos-ns
spec:
  containers:
  - name: httpd
    image: httpd
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        memory: "200Mi"
        cpu: "700m"
      requests:
        memory: "200Mi"
        cpu: "700m"

vim bur-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: burpod
  name: bur-pod
  namespace: qos-ns
spec:
  containers:
  - name: httpd
    image: httpd
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"

vim bes-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: bespod
  name: bes-pod
  namespace: qos-ns
spec:
  containers:
  - name: httpd
    image: httpd
    imagePullPolicy: IfNotPresent

kubectl apply -f gua-pod.yaml
kubectl apply -f bur-pod.yaml
kubectl apply -f bes-pod.yaml
kubectl get pod -n qos-ns
kubectl describe pod -n qos-ns bes-pod
kubectl describe pod -n qos-ns bur-pod
kubectl describe pod -n qos-ns gua-pod
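
The assigned QoS class is also recorded in each Pod's status and can be read directly:

kubectl get pod -n qos-ns gua-pod -o jsonpath='{.status.qosClass}{"\n"}'    # Guaranteed
kubectl get pod -n qos-ns bur-pod -o jsonpath='{.status.qosClass}{"\n"}'    # Burstable
kubectl get pod -n qos-ns bes-pod -o jsonpath='{.status.qosClass}{"\n"}'    # BestEffort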

When a node runs short of resources, BestEffort Pods are killed first, then Burstable, and Guaranteed last.

When a container uses more memory than its limit, it is OOMKilled:

vim myqospod.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: myqos
  name: qos-pod
  namespace: qos-ns
spec:
  containers:
  - name: stress
    image: polinux/stress
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        memory: "100Mi"
      requests:
        memory: "50Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]

kubectl apply -f myqospod.yaml
kubectl get pod -n qos-ns
kubectl describe pod -n qos-ns qos-pod

Check the node's resources:

kubectl get nodes node1 -o yaml
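
The capacity and allocatable figures can also be pulled out directly:

kubectl get node node1 -o jsonpath='{.status.capacity}{"\n"}{.status.allocatable}{"\n"}'
kubectl describe node node1 | grep -A 8 Allocatable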

When a Pod requests more resources than any node can provide, it stays Pending:

vim myqospendpod.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: myqospend
  name: qospend-pod
  namespace: qos-ns
spec:
  containers:
  - name: stress
    image: polinux/stress
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        memory: "10G"
      requests:
        memory: "10G"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "5G", "--vm-hang", "1"]

kubectl apply -f myqospendpod.yaml
kubectl get pod -n qos-ns
kubectl describe pod -n qos-ns qospend-pod

IV. Resource Management

1. Eviction

The kubelet triggers eviction (resource reclaim) when any of the default thresholds is crossed:

memory.available < 100Mi

nodefs.available < 10%

nodefs.inodesFree < 5%

imagefs.available < 15%

Eviction has two modes:

Soft: a grace period is configured per threshold, giving Pods time to exit; the kubelet starts eviction only after the threshold has stayed exceeded for that grace period.

Hard: as soon as the threshold is crossed, eviction is triggered immediately and the Pod is killed without a grace period.
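
A sketch of how these thresholds are expressed in the kubelet configuration (KubeletConfiguration, kubelet.config.k8s.io/v1beta1; the soft values below are illustrative, not defaults):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:                         # hard mode: evict as soon as a threshold is crossed
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft:                         # soft mode: evict only after the grace period has elapsed
  memory.available: "200Mi"
evictionSoftGracePeriod:
  memory.available: "1m30s"
evictionMaxPodGracePeriod: 30         # max pod termination grace period (seconds) for soft evictions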

Pod priority:

vim mypriority.yaml

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 10000
globalDefault: false
description: "This priority class should be used for xyz service pods only"

vim myhighpod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: highnginx
  labels:
    env: highpri
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  priorityClassName: high-priority

kubectl apply -f mypriority.yaml
kubectl apply -f myhighpod.yaml
kubectl describe pod highnginx
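
The effect can be verified on the Pod itself; admission fills in spec.priority from the PriorityClass value:

kubectl get priorityclass
kubectl get pod highnginx -o jsonpath='{.spec.priorityClassName}{" "}{.spec.priority}{"\n"}'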

2. Affinity and Anti-Affinity

1) nodeName

vim mynodename.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nodenamenginx
  labels:
    env: nodenamenginx
spec:
  nodeName: node1
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent

kubectl apply -f mynodename.yaml
kubectl get pod -o wide

2) nodeSelector

kubectl label nodes node1 disk=ssd
vim mynodeselector.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nodeselectornginx
  labels:
    env: nodeselectornginx
spec:
  nodeSelector:
    disk: ssd
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent

kubectl apply -f mynodeselector.yaml
kubectl get pod -o wide

3) nodeAffinity

vim nodeaffinity.yaml

apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
  labels:
    app: myapp1
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disk
            operator: In
            values:
            - ssd
  containers:
  - name: with-node-affinity
    image: nginx

kubectl apply -f nodeaffinity.yaml
kubectl get pod -owide

4) podAntiAffinity

vim mypodaffinity.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-affinity
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - myapp1
        topologyKey: kubernetes.io/hostname
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent

kubectl apply -f mypodaffinity.yaml
kubectl get pod -o wide

The two Pods are scheduled onto different nodes.

5) podAffinity

vim mytomcataffinity.yaml

apiVersion: v1
kind: Pod
metadata:
  name: tomcat-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - myapp1
        topologyKey: kubernetes.io/hostname
  containers:
  - name: tomcat
    image: tomcat
    imagePullPolicy: IfNotPresent

kubectl apply -f mytomcataffinity.yaml
kubectl get pod -o wide

This Pod is scheduled onto the same node as the Pod labeled app=myapp1.

3. Taints and Tolerations

NoSchedule: affects scheduling only. A Pod can be placed on the node only if it tolerates the taint; Pods already running on the node are not affected.

NoExecute: affects both scheduling and Pods already running on the node; existing Pods that do not tolerate the taint are evicted.

PreferNoSchedule: a soft version of NoSchedule; the scheduler tries to avoid the node but may still use it.

kubectl taint node node1 type=production:NoSchedule
kubectl describe nodes node1 | grep -i  Taints

vim mytaint.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: taint-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: taint-test
  template:
    metadata:
      labels:
        app: taint-test
    spec:
      containers:
      - image: nginx
        name: nginx
        imagePullPolicy: IfNotPresent

kubectl apply -f mytaint.yaml
kubectl get pods -o wide

Both Pods are scheduled onto node2, since node1 carries the NoSchedule taint.

Edit the YAML file and add tolerations:

vim mytaint.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: taint-test
spec:
  replicas: 5
  selector:
    matchLabels:
      app: taint-test
  template:
    metadata:
      labels:
        app: taint-test
    spec:
      containers:
      - image: nginx
        name: nginx
        imagePullPolicy: IfNotPresent
      tolerations:
      - key: "type"
        operator: "Equal"
        value: "production"
        effect: "NoSchedule"

kubectl apply -f mytaint.yaml
kubectl get pods -o wide

Now Pods can be scheduled onto node1 as well.

Remove the taint:

kubectl taint node node1 type-
kubectl delete deployments.apps taint-test
vim mytaint1.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: taintexcute
spec:
  replicas: 5
  selector:
    matchLabels:
      app: taint-test
  template:
    metadata:
      labels:
        app: taint-test
    spec:
      containers:
      - image: nginx
        name: nginx
        imagePullPolicy: IfNotPresent

kubectl apply -f mytaint1.yaml
kubectl get pods -o wide

Taint node1 again, this time with NoExecute:

kubectl taint node node1 type=production:NoExecute
kubectl get pods -o wide

All Pods that were running on node1 have been evicted.

vim mytaint1.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: taintexcute
spec:
  replicas: 8
  selector:
    matchLabels:
      app: taint-test
  template:
    metadata:
      labels:
        app: taint-test
    spec:
      containers:
      - image: nginx
        name: nginx
        imagePullPolicy: IfNotPresent
      tolerations:
      - key: "type"
        operator: "Equal"
        value: "production"
        effect: "NoExecute"

kubectl apply -f mytaint1.yaml
kubectl get pods -o wide
