Tekton in Practice -- S2I
Environment overview
Sample project:
Code repository: https://gitee.com/mageedu/spring-boot-helloWorld.git
Build tool: Maven
Pipeline Tasks:
git-clone: clone the project's source code
build-to-package: test, build, and package the code
generate-build-id: generate a build ID
image-build-and-push: build the container image and push it to the registry
deploy-to-cluster: deploy the new image version to the Kubernetes cluster
Workspace
- PVC-backed, shares data across Tasks
2.2.5.2 Building, Pushing, and Deploying an Image with a Pipeline
01-git-clone的Task
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: git-clone
spec:
  description: Clone code to the workspace
  params:
  - name: url
    type: string
    description: git url to clone
    default: ""
  - name: branch
    type: string
    description: git branch to checkout
    default: "main"
  workspaces:
  - name: source
    description: The code repo will be cloned into the workspace
  steps:
  - name: git-clone
    image: alpine/git:v2.36.1
    script: git clone -b $(params.branch) -v $(params.url) $(workspaces.source.path)/source
02-build-to-package.yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-to-package
spec:
  workspaces:
  - name: source
    description: The code repo in the workspace
  steps:
  - name: build
    image: maven:3.8-openjdk-11-slim
    workingDir: $(workspaces.source.path)/source
    volumeMounts:
    - name: m2
      mountPath: /root/.m2
    script: mvn clean install
  # Volume providing the Maven cache; the maven-cache PVC must be created beforehand
  volumes:
  - name: m2
    persistentVolumeClaim:
      claimName: maven-cache
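The build step above mounts a maven-cache PVC, which must exist before the TaskRun starts. A minimal claim could look like the following sketch; the storage size is an assumption, and the nfs-csi storage class simply matches the one used by the PipelineRun later in this article:

```yaml
# Hypothetical PVC backing the Maven local repository cache (/root/.m2).
# Size and storageClassName are assumptions; adjust for your cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: maven-cache
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: nfs-csi
```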
03-generate-build-id.yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: generate-build-id
spec:
  params:
  - name: version
    description: The version of the application
    type: string
  results:
  - name: datetime
    description: The current date and time
  - name: buildId
    description: The build ID
  steps:
  - name: generate-datetime
    image: ikubernetes/admin-box:v1.2
    script: |
      #!/usr/bin/env bash
      datetime=`date +%Y%m%d-%H%M%S`
      echo -n ${datetime} | tee $(results.datetime.path)
  - name: generate-buildid
    image: ikubernetes/admin-box:v1.2
    script: |
      #!/usr/bin/env bash
      buildDatetime=`cat $(results.datetime.path)`
      buildId=$(params.version)-${buildDatetime}
      echo -n ${buildId} | tee $(results.buildId.path)
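To see what the two steps produce, here is a local dry run outside Tekton; plain shell variables stand in for $(params.version) and the result paths, which Tekton would normally inject:

```shell
#!/usr/bin/env bash
# Local simulation of the generate-build-id Task's two steps.
version="v0.1.2"                  # stand-in for $(params.version)
datetime=$(date +%Y%m%d-%H%M%S)   # step generate-datetime
buildId="${version}-${datetime}"  # step generate-buildid
echo "${buildId}"                 # e.g. v0.1.2-20240131-153045
```

The resulting buildId is the string the Pipeline later feeds to the image-build-and-push and deploy tasks as the image tag.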
04-build-image-push.yaml
To push images to a registry, a Secret object must be created and mounted at /kaniko/.docker inside the Kaniko container. There are two ways to create this Secret:
1. Log in to the registry from any machine (Docker Hub in this example); the credentials are saved to
~/.docker/config.json
:
Create a generic Secret from that config.json:
kubectl create secret generic docker-config --from-file=/root/.docker/config.json
2. Alternatively, base64-encode the user/password pair first:
echo -n USER:PASSWORD | base64
Create a config.json and substitute the base64 output for the xxxxxxxxxxxxxxx placeholder below:
{"auths": {"https://index.docker.io/v1/": {"auth": "xxxxxxxxxxxxxxx"}}}
Finally, create the Secret:
kubectl create secret generic docker-config --from-file=<path to .docker/config.json>
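The manifest for the image-build-and-push Task itself is not shown in this article. Judging from the parameters (image-url, image-tag) and workspaces (source, dockerconfig) that the Pipeline below passes to it, a Kaniko-based Task would plausibly look like the following sketch; the executor image tag and argument names follow common Kaniko usage and are not taken from the author's original file:

```yaml
# Hypothetical reconstruction of image-build-and-push; not the author's original.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: image-build-and-push
spec:
  params:
  - name: image-url
  - name: image-tag
  workspaces:
  - name: source
  - name: dockerconfig
    description: Holds the config.json from the docker-config Secret
  steps:
  - name: build-and-push
    image: gcr.io/kaniko-project/executor:v1.9.0
    env:
    # Point Kaniko at the mounted registry credentials
    # (an alternative to mounting them at /kaniko/.docker)
    - name: DOCKER_CONFIG
      value: $(workspaces.dockerconfig.path)
    args:
    - --context=$(workspaces.source.path)/source
    - --dockerfile=$(workspaces.source.path)/source/Dockerfile
    - --destination=$(params.image-url):$(params.image-tag)
```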
05-deploy-task.yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: deploy-using-kubectl
spec:
  workspaces:
  - name: source
    description: The git repo
  params:
  - name: deploy-config-file
    description: The path to the yaml file to deploy within the git source
  - name: image-url
    description: Image name including repository
  - name: image-tag
    description: Image tag
  steps:
  - name: update-yaml
    image: alpine:3.16
    command: ["sed"]
    args:
    - "-i"
    - "-e"
    - "s@__IMAGE__@$(params.image-url):$(params.image-tag)@g"
    - "$(workspaces.source.path)/source/deploy/$(params.deploy-config-file)"
  - name: run-kubectl
    image: lachlanevenson/k8s-kubectl
    command: ["kubectl"]
    args:
    - "apply"
    - "-f"
    - "$(workspaces.source.path)/source/deploy/$(params.deploy-config-file)"
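The update-yaml step assumes the deploy manifest in the repository carries an __IMAGE__ placeholder where the image reference belongs. The substitution can be reproduced locally; the manifest snippet and the concrete image reference below are illustrative stand-ins, not taken from the actual repository:

```shell
#!/usr/bin/env bash
# Reproduce the update-yaml step on a stand-in manifest.
cat > /tmp/all-in-one.yaml <<'EOF'
spec:
  containers:
  - name: helloworld
    image: __IMAGE__
EOF
# Same sed expression the Task runs, with a sample image-url:image-tag
sed -i -e 's@__IMAGE__@icloud2native/spring-boot-helloworld:v0.1.2-20240131@g' /tmp/all-in-one.yaml
grep 'image:' /tmp/all-in-one.yaml
```

Using @ as the sed delimiter avoids having to escape the slashes in the image repository path.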
06-pipelinerun-s2i.yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: source-to-image
spec:
  params:
  - name: git-url
  - name: pathToContext
    description: The path to the build context, used by Kaniko - within the workspace
    default: .
  - name: image-url
    description: Url of image repository
  - name: deploy-config-file
    description: The path to the yaml file to deploy within the git source
    default: all-in-one.yaml
  - name: version
    description: The version of the application
    type: string
    default: "v0.10"
  workspaces:
  - name: codebase
  - name: docker-config
  tasks:
  - name: git-clone
    taskRef:
      name: git-clone
    params:
    - name: url
      value: "$(params.git-url)"
    workspaces:
    - name: source
      workspace: codebase
  - name: build-to-package
    taskRef:
      name: build-to-package
    workspaces:
    - name: source
      workspace: codebase
    runAfter:
    - git-clone
  - name: generate-build-id
    taskRef:
      name: generate-build-id
    params:
    - name: version
      value: "$(params.version)"
    runAfter:
    - git-clone
  - name: image-build-and-push
    taskRef:
      name: image-build-and-push
    params:
    - name: image-url
      value: "$(params.image-url)"
    - name: image-tag
      value: "$(tasks.generate-build-id.results.buildId)"
    workspaces:
    - name: source
      workspace: codebase
    - name: dockerconfig
      workspace: docker-config
    runAfter:
    - generate-build-id
    - build-to-package
  - name: deploy-to-cluster
    taskRef:
      name: deploy-using-kubectl
    workspaces:
    - name: source
      workspace: codebase
    params:
    - name: deploy-config-file
      value: $(params.deploy-config-file)
    - name: image-url
      value: $(params.image-url)
    - name: image-tag
      value: "$(tasks.generate-build-id.results.buildId)"
    runAfter:
    - image-build-and-push
07-rbac.yaml
Because the deploy-to-cluster task's container runs kubectl, its pod must be assigned a ServiceAccount with permission to operate on cluster resources.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: helloworld-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helloworld-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: helloworld-admin
  namespace: default
08-pipelinerun-s2i.yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: s2i-buildid-run-00002
spec:
  serviceAccountName: default
  taskRunSpecs:
  - pipelineTaskName: deploy-to-cluster
    taskServiceAccountName: helloworld-admin
  pipelineRef:
    name: source-to-image
  params:
  - name: git-url
    value: https://gitee.com/mageedu/spring-boot-helloWorld.git
  - name: image-url
    value: icloud2native/spring-boot-helloworld
  - name: version
    value: v0.1.2
  workspaces:
  - name: codebase
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: nfs-csi
  - name: docker-config
    secret:
      secretName: docker-config
Run:
kubectl apply -f .
Results:
- The entire pipeline executes successfully
- The image is pushed to Docker Hub
- The deployment can be verified in the cluster
More articles on Tekton will follow.