Learning Kubernetes: Helm
Helm is a package manager for Kubernetes, comparable to yum on Linux. In Helm 2 the architecture is client/server: helm is the command-line client, and Tiller is a server-side daemon. helm sends requests to Tiller, Tiller talks to the apiserver, and the apiserver performs the actual resource creation.
A chart is a Helm package. To deploy nginx, for example, you need a Deployment manifest and a Service manifest; bundled together, those YAML manifests form a chart. To use a chart you download it locally, and an instance deployed from it on the cluster is called a release. Each chart contains a values.yaml file that supplies values to the templates, so installation is customizable: chart developers write and maintain the templates, while chart users normally only need to edit values.yaml.
Helm packages Kubernetes resources into a chart, manages dependencies between charts, and distributes charts through chart repositories. Because configuration lives in values.yaml, releases are configurable at install time. When a chart version is updated, Helm supports rolling updates automatically, as well as one-command rollback. It is not well suited to production use unless you are able to author and maintain your own charts.
Helm is a Kubernetes project. Download it from:
https://github.com/helm/helm/releases
Get the Linux amd64 build (the one listed with its checksum), then unpack and install it as shown below.
Helm official site:
https://helm.sh/
Official Helm chart hub:
https://hub.kubeapps.com/
repository: a chart repository, serving the YAML manifests needed to deploy applications on Kubernetes
release: an instance of a particular chart deployed to the target cluster
Helm itself is written in Go.
Core terms:
Chart: a Helm package;
Repository: a charts repository, served over HTTP/HTTPS;
Release: an instance of a particular chart deployed to the target cluster; Chart -> Config -> Release
Architecture:
helm: the client. Manages the local chart repository and charts, talks to the Tiller server, sends charts, and installs, queries, and uninstalls release instances.
Tiller: the server. Receives the charts and config sent by helm and merges them to generate a release.
chart -> values supplied through values.yaml -> release instance
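The Chart -> Config -> Release pipeline is, at its core, text templating followed by a deploy step. As a toy illustration only — plain shell substitution, not Helm's real Go template syntax — rendering a "template" with "values" looks like this:

```shell
# Stand-in for Helm's rendering step: a template with placeholders,
# some values, and a substitution pass producing the final manifest line.
# (Real charts use Go templates such as {{ .Values.image.repository }}.)
template='image: {{REPO}}:{{TAG}}'
repo=nginx
tag=stable
rendered=$(printf '%s' "$template" | sed "s/{{REPO}}/$repo/; s/{{TAG}}/$tag/")
echo "$rendered"    # image: nginx:stable
```

Tiller then takes the rendered manifests and submits them to the apiserver, and the resulting set of live objects is the release.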
Install the helm client
[root@master ~]# tar xf helm-v2.13.1-linux-amd64.tar.gz
[root@master ~]# cd linux-amd64/
[root@master linux-amd64]# cp helm /usr/local/bin/
[root@master linux-amd64]# helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Error: could not find tiller
# Tiller is not installed yet
Install the helm server, Tiller
Reference for rbac.yaml:
https://github.com/helm/helm/blob/master/docs/rbac.md
[root@master ~]# cat rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
[root@master ~]# kubectl apply -f rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
[root@master ~]# kubectl get sa -n kube-system | grep tiller
tiller 1 7m18s
[root@master ~]# docker load -i tiller_2_13_1.tar.gz
[root@master ~]# vim tiller.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: helm
      name: tiller
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      serviceAccount: tiller
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: gcr.io/kubernetes-helm/tiller:v2.13.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /liveness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
        - containerPort: 44135
          name: http
        readinessProbe:
          httpGet:
            path: /readiness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        resources: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  ports:
  - name: tiller
    port: 44134
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  type: ClusterIP
status:
  loadBalancer: {}
[root@master ~]# kubectl apply -f tiller.yaml
deployment.apps/tiller-deploy created
service/tiller-deploy created
[root@master ~]# kubectl get pod -n kube-system | grep tiller
tiller-deploy-7bd89687c8-tjnkp 1/1 Running 0 28s
[root@master ~]# helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Create a chart
[root@master ~]# helm create test
Creating test
[root@master ~]# cd test/
[root@master test]# ls
charts Chart.yaml templates values.yaml
[root@master test]# tree
.
├── charts
├── Chart.yaml
├── templates
│ ├── deployment.yaml
│ ├── _helpers.tpl
│ ├── ingress.yaml
│ ├── NOTES.txt
│ ├── service.yaml
│ └── tests
│ └── test-connection.yaml
└── values.yaml

3 directories, 8 files
Chart.yaml describes the chart's attributes: it stores the package's metadata, such as the package name and version. It has nothing to do with deploying the application itself; it only records information about the chart.
[root@master test]# cat Chart.yaml
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: test
version: 0.1.0
templates holds the template files that define the Kubernetes YAML manifests, making heavy use of Go template syntax. This is similar to Ansible, whose playbooks can also use templates.
[root@master templates]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "test.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "test.name" . }}
    helm.sh/chart: {{ include "test.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "test.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "test.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
values.yaml supplies a value for every attribute used in the templates. For example, the default image: repository: nginx below is consumed by templates/deployment.yaml as image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}".
[root@master test]# cat values.yaml
# Default values for test.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: nginx
  tag: stable
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}
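Rather than editing the chart's values.yaml directly, a common pattern is to keep a small override file holding only the keys you change. A minimal sketch — the file name custom-values.yaml is arbitrary, and the keys shown are assumed to match the defaults above:

```yaml
# custom-values.yaml -- only overridden keys need to appear;
# everything else falls back to the chart's values.yaml defaults.
replicaCount: 2

image:
  repository: nginx
  tag: "1.16"

service:
  type: NodePort
```

Install with `helm install -f custom-values.yaml .`; a single key can also be overridden inline, e.g. `helm install --set image.tag=1.16 .`.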
helm install . deploys a Kubernetes application from the chart
Run the command to create the application described by the files in the test directory:
[root@master test]# helm install .
NAME: halting-lizard   # the pod names will match this release name
LAST DEPLOYED: Thu Jan 6 09:00:32 2022
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
halting-lizard-test 0/1 1 0 1s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
halting-lizard-test-9dcd9757b-j2dvt 0/1 ContainerCreating 0 1s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
halting-lizard-test ClusterIP 10.101.4.80 <none> 80/TCP 1s

NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=test,app.kubernetes.io/instance=halting-lizard" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80

[root@master test]# kubectl get pod
NAME READY STATUS RESTARTS AGE
halting-lizard-test-9dcd9757b-j2dvt 1/1 Running 0 71s
# Running the commands from the NOTES above maps a host port to the pod's port 80
[root@master test]# export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=test,app.kubernetes.io/instance=halting-lizard" -o jsonpath="{.items[0].metadata.name}")
[root@master test]# kubectl port-forward $POD_NAME 8088:80
Forwarding from 127.0.0.1:8088 -> 80
Forwarding from [::1]:8088 -> 80
Handling connection for 8088
Access it from a new terminal:
[root@master test]# curl 127.0.0.1:8088
Welcome to nginx!
helm list shows the existing releases
[root@master test]# helm list
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
halting-lizard 1 Thu Jan 6 09:00:32 2022 DEPLOYED test-0.1.0 1.0 default
helm delete removes the named release (as shown by helm list) and also deletes the services it deployed on Kubernetes
[root@master ~]# helm delete halting-lizard
release "halting-lizard" deleted
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
halting-lizard-test-9dcd9757b-j2dvt 0/1 Terminating 0 20m
helm package packages a chart
### The generated tgz can be copied to any server; the chart can then be retrieved there with helm fetch
[root@master ~]# helm package test
Successfully packaged chart and saved it to: /root/test-0.1.0.tgz
Error: stat /root/.helm/repository/local: no such file or directory
[root@master ~]# ls
test test-0.1.0.tgz
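A chart archive produced by helm package is an ordinary gzipped tarball whose top-level directory is the chart name. Building a stand-in archive by hand (with plain tar instead of helm, and dummy file contents) shows the layout:

```shell
# A chart .tgz is a normal tarball: chart-name/Chart.yaml, values.yaml, templates/
mkdir -p test/templates
printf 'name: test\nversion: 0.1.0\n' > test/Chart.yaml
printf 'replicaCount: 1\n' > test/values.yaml
tar czf test-0.1.0.tgz test
tar tzf test-0.1.0.tgz    # lists test/, test/Chart.yaml, test/values.yaml, test/templates/
```

Because the format is plain tar+gzip, a fetched chart can always be unpacked and inspected with `tar xf`, exactly as done with memcached below.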
helm repo list shows the configured chart repositories
[root@master ~]# mkdir -p /root/.helm/repository/
[root@master ~]# mv repositories.yaml /root/.helm/repository/
[root@master ~]# cat /root/.helm/repository/repositories.yaml
apiVersion: v1
generated: 2019-04-03T21:52:41.714422328-04:00
repositories:
- caFile: ""
  cache: /root/.helm/repository/cache/stable-index.yaml
  certFile: ""
  keyFile: ""
  name: stable
  password: ""
  url: https://cnych.github.io/kube-charts-mirror
  username: ""
[root@master ~]# helm repo list
NAME URL
stable https://cnych.github.io/kube-charts-mirror
helm repo add adds a repo; after it completes, run helm repo update to refresh the index
[root@master ~]# mkdir -p /root/.helm/repository/cache/
[root@master ~]# mv bitnami-index.yaml /root/.helm/repository/cache/
[root@master ~]# helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories
[root@master ~]# helm repo list
NAME URL
stable https://cnych.github.io/kube-charts-mirror
bitnami https://charts.bitnami.com/bitnami
[root@master ~]# helm repo update
Searching for charts
helm search lists all available charts
helm search mysql searches for mysql charts (the same works for redis, memcached, jenkins, and so on)
helm inspect bitnami/mysql shows the details of the given chart
[root@master ~]# helm search |wc -l
347
[root@master ~]# helm search mysql
NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/mysql 8.8.20 8.0.27 Chart to create.....
[root@master ~]# helm inspect bitnami/mysql |head -5
annotations:
  category: Database
apiVersion: v2
appVersion: 8.0.27
description: Chart to create a Highly available MySQL cluster
....
Deploy memcached
[root@master ~]# helm search memcached
[root@master ~]# helm fetch stable/memcached
[root@master ~]# ls
memcached-2.3.1.tgz
[root@master ~]# tar xf memcached-2.3.1.tgz
[root@master ~]# cd memcached/
[root@master memcached]# ls
Chart.yaml README.md templates values.yaml
[root@master memcached]# helm install .
Error: validation failed: unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta1"
# the apiVersion was changed here
[root@master memcached]# head -1 templates/statefulset.yaml
apiVersion: apps/v1
[root@master memcached]# helm install .
Error: release worn-squirrel failed: StatefulSet.apps "worn-squirrel-memcached" is invalid: [spec.selector: Required value, spec.template.metadata.labels: Invalid value: map[string]string{"app":"worn-squirrel-memcached", "chart":"memcached-2.3.1", "heritage":"Tiller", "release":"worn-squirrel"}: `selector` does not match template `labels`]
### The selector is missing and must be added (lines 10-14); it has to match the labels on lines 20 and 22:
10  spec:
11    selector:
12      matchLabels:
13        app: {{ template "memcached.fullname" . }}
14        release: "{{ .Release.Name }}"
15    serviceName: {{ template "memcached.fullname" . }}
16    replicas: {{ .Values.replicaCount }}
17    template:
18      metadata:
19        labels:
20          app: {{ template "memcached.fullname" . }}
21          chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
22          release: "{{ .Release.Name }}"
### Also set the replica count to 1 first, since this is a single-node cluster
[root@master memcached]# grep replicaCount: values.yaml
replicaCount: 1
[root@master memcached]# helm install .
Deploy rabbitmq-ha
[root@master ~]# helm search rabbitmq-ha
[root@master ~]# helm fetch stable/rabbitmq-ha
[root@master ~]# tar xf rabbitmq-ha-1.14.0.tgz
[root@master ~]# cd rabbitmq-ha/
[root@master rabbitmq-ha]# vim templates/statefulset.yaml
[root@master rabbitmq-ha]# head -1 templates/statefulset.yaml
apiVersion: apps/v1
[root@master rabbitmq-ha]# cat -n templates/statefulset.yaml
## Lines 14-17 were added so the selector matches the labels on lines 26-27:
13  spec:
14    selector:
15      matchLabels:
16        app: {{ template "rabbitmq-ha.name" . }}
17        release: {{ .Release.Name }}
18    podManagementPolicy: {{ .Values.podManagementPolicy }}
19    serviceName: {{ template "rabbitmq-ha.fullname" . }}-discovery
20    replicas: {{ .Values.replicaCount }}
21    updateStrategy:
22      type: {{ .Values.updateStrategy }}
23    template:
24      metadata:
25        labels:
26          app: {{ template "rabbitmq-ha.name" . }}
27          release: {{ .Release.Name }}
### Set the replica count to 1 first, since this is a single-node cluster
[root@master rabbitmq-ha]# grep replicaCount: values.yaml
replicaCount: 1
[root@master rabbitmq-ha]# helm install .
NOTES:
** Please be patient while the chart is being deployed **

Credentials:
  Username      : guest
  Password      : $(kubectl get secret --namespace default wizened-tarsier-rabbitmq-ha -o jsonpath="{.data.rabbitmq-password}" | base64 --decode)
  ErLang Cookie : $(kubectl get secret --namespace default wizened-tarsier-rabbitmq-ha -o jsonpath="{.data.rabbitmq-erlang-cookie}" | base64 --decode)
Then create a service.yaml to expose the management UI:
cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-management
  labels:
    app: rabbitmq-ha
spec:
  ports:
    - port: 15672
      name: http
  selector:
    app: rabbitmq-ha
  type: NodePort
kubectl apply -f service.yaml
kubectl get svc
Open the NodePort address in a browser to log in to the RabbitMQ management console.
Decode the password:
kubectl get secret --namespace default hipster-tarsier-rabbitmq-ha -o jsonpath="{.data.rabbitmq-password}" | base64 --decode
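The value stored in the Secret is base64-encoded, which is why the command above pipes through base64 --decode. The round trip can be reproduced locally (guest here is just a stand-in value, not the actual generated password):

```shell
# Kubernetes Secrets hold values base64-encoded; decoding restores the text.
encoded=$(printf '%s' 'guest' | base64)
echo "$encoded"     # Z3Vlc3Q=
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"     # guest
```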
Common helm commands
(1) Release-related:
helm upgrade [RELEASE] [CHART] [flags]       upgrade a release to a new revision
helm delete RELEASE                          delete a release
helm rollback [flags] [RELEASE] [REVISION]   roll back to a previous revision
helm install .                               create a release instance
helm history RELEASE                         show a release's revision history
(2) Chart-related:
helm search     search for charts
helm inspect    show a chart's details
helm fetch      download a chart
helm package    package a chart
Helm template syntax
The rendered YAML files can be obtained with:
cd rabbitmq-ha
helm install --debug --dry-run ./