Preface:

Kubernetes clusters are usually classified by how they were deployed: clusters built with kubeadm, and clusters built from the raw binaries. For a binary-deployed cluster, upgrades and certificate renewal are entirely manual work, whereas a kubeadm cluster can be upgraded (with only a little manual intervention) and have its certificates renewed largely automatically.

Purpose and significance of upgrading a kubernetes cluster:

1.

Some versions contain security vulnerabilities and the cluster must be upgraded.

For example, kubernetes CVE-2022-3172, CVE-2019-11247, CVE-2018-1002105, CVE-2020-8559, and so on.

These advisories can be found on Alibaba Cloud, at the following page:

[CVE Security] Vulnerability fix announcements - Container Service for Kubernetes, Alibaba Cloud Help Center

In general, whether it is a kubernetes cluster or any other kind of cluster, or software such as Tomcat, Elasticsearch or SSH, and of course operating systems such as CentOS 7, Ubuntu or Debian, security vulnerabilities are almost always fixed by upgrading to a newer version.

As an aside, from the software author's point of view, the motivation for releasing new versions is partly the pressure of security vulnerabilities and partly new features and a nicer interface. From the user's point of view, the motivation for upgrading is first of all security, then new features, and only last a prettier interface.

2.

More powerful features

For example, kubernetes has grown from the early 1.1 release to 1.26 today, and each release obviously adds functionality; some releases are also easier to use and genuinely improve productivity. Many of those new features can only be obtained by upgrading the cluster.

3.

A better-looking interface

For example, the dashboard component of kubernetes looks different in every release; the overall style is similar, but a particular web UI always wins some people over, and that too requires upgrading (in this case the component's version).

4.

For a kubernetes cluster specifically, upgrading also renews the expiry dates of the certificates inside the cluster, and at times this rather particular mechanism is itself a small extra motivation to upgrade.




OK, that was a brief summary of why a kubernetes cluster gets upgraded and what the upgrade is good for. Below, a kubernetes 1.22.0 cluster deployed with kubeadm on Ubuntu 18.04 is upgraded step by step - the node binaries first to 1.22.2 and then the cluster itself to 1.22.10 - to show how to upgrade a kubeadm cluster and renew all of its certificates.

I.

Environment

Operating system: Ubuntu 18.04

Cluster layout: three nodes, one master and two workers; initial kubernetes version 1.22.0, deployed with kubeadm

IP addresses: 192.168.123.150, 192.168.123.151, 192.168.123.152

Cluster state: all three nodes are down because the certificates have expired

II.

The relationship between a kubernetes cluster and its certificates

Because of security considerations, kubernetes enables RBAC (Role-Based Access Control) by default in kubeadm-deployed clusters, and the individual components authenticate to the api-server with x509 certificates. Take kubelet's configuration file as an example (an arbitrary kubelet.conf):

 cat /etc/kubernetes/kubelet.conf
apiVersion: v1
clusters:
- cluster:certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1USXdPREEyTXpJek1Wb1hEVE14TVRJd05qQTJNekl6TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT0N4Ck14ekVGcTZEeVZBTjNkelU0MUFUVU94VEgvd1dlUmZkS2F2L0lDSmFjR3VQQXZQOG9LUTc3L1BucDlnMHdsa2YKZFVTWW1WclpEVFR0dHN6YWRPdDlkWWFRTHI0MkM1MlNWWEU4eEl2MSt3MXo3QURYek04N2FISXlCZXVqbm1INwptS3lYdFlyR3I0UmxIM1d4TGU1YmRCYk03QkMrSTRndUZmNThHVFJ3N1QrclpJYXpqcDRPd1pVeFZGRm0rd0Y4ClZTZ2s1VVZXMGxtZ05mamt4WjZPbk1EcDBBREdDZ2JUZkVkazdmdlpGTVFkUkFMU2dmVmNGdGtWcG5xWjBjYVMKRDZaVHBwTWNiNkVqV2JNd1dnQ2F1eFRmNTF5dkhFdTFRTXdXa2Y1V1NZWEsyMmRoN3VjbHlMOGRra1NYaERWKwpEWjR2cmlhR3JZR21UMFQzbTRVQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZMOGFOS1lFM2M1LytqOTlqci9XeXVwNTF2cllNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBTGZCakdEUEo1Q1BVUUlRTkZmOApxMzgwU3hWWjBVMERZWXNGdFg0SW5SbmxCWGFtNkF5VUNMUGJYQkxnNDdCMThhZWN4aTZzVmpFeVZDdFpWUU9ZClZZd3Ftd1RkZ3VlK01sL1hKTGcyZDFVYjZBSVhSVnF2VExVTGt2ck56NVh6RHRjZElCdUlwUXRRUFBVS013NXoKYmZQanhxNndNeks1Z1htWUVONnI1ZzJTR1lNdEc5UGlhMFppcHJFY2lLaUNybW5TN3plaHBVOS9taUFEbWZzaQpld2E2RyszbHlBc3JISFRraTZWMUtNVVJUN3BWWnpFWFJUNElmK25kRDd5N2FwcS9lN3FLMkwwVDd1NnA0WVVuCmowQkdLcWNscGkwRzZzVU54NGxKdkN3aDZJbkh6UEN3OC9BRk9yTzdXRTRWVmM1N0JJVjZrQlFzVmJmaEVnN2kKM0JzPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==server: https://192.168.123.150:6443name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:k8s-master
  name: system:node:k8s-master@kubernetes
current-context: system:node:k8s-master@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:k8s-masteruser:client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURKVENDQWcyZ0F3SUJBZ0lJS0h2czRzK3FaR0l3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRFeU1EZ3dOak15TXpGYUZ3MHlNekV5TVRJeE1EQXdNamRhTURneApGVEFUQmdOVkJBb1RESE41YzNSbGJUcHViMlJsY3pFZk1CMEdBMVVFQXhNV2MzbHpkR1Z0T201dlpHVTZhemh6CkxXMWhjM1JsY2pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTHRQVkZIdWJ1KzgKNkJCZ0I4UGJadzV0OGF2S3VxeklQMHRPN2tRbVN0ZkpTbjNxdjFHS0xwL2U4UllLbkpuUFRzaytYKzRPcitxSwoyeDhXQWxKUGtjQUFXNEJxUlZOT3R5WDhaZHRYc3J2d3VVaGRoZEZkU3dFV3ZzaWRVOEJJamRwQktKVkN0dHR0CkJPQ0hobXBSY0VyQ1JqYkt2Zy90WE9YcDE1cmx0aS92ZHJHaXpKRUt2cUJQZElpVU05UUh3WEZqdmFqMXFnT2YKaUpraTRlZDBnZ3AwSmtxM0grSXp6MFZNUG9YdTJXdnBTTU81dmtGbi9Lay92bUxhaHRwaTlJeCtPN1VMYUxTNQpsdEVGaVJXQnEyT1R2N3YzMndEVUJUcFdwN25wWlU0WE9ra3ltaW5VT3E3ZGhVZWxrQTMwWVhucjB0MklwOWI4ClgxRVNuSEtKMHMwQ0F3RUFBYU5XTUZRd0RnWURWUjBQQVFIL0JBUURBZ1dnTUJNR0ExVWRKUVFNTUFvR0NDc0cKQVFVRkJ3TUNNQXdHQTFVZEV3RUIvd1FDTUFBd0h3WURWUjBqQkJnd0ZvQVV2eG8wcGdUZHpuLzZQMzJPdjliSwo2bm5XK3Rnd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFORk5MVXA5SzRMM0s0NTRXbmp2OHprNVVhREU0ZGovCnpmRXBRdHRWSHRLV3VreHZTRWRRTG15RGZURDFoNm8veGFqNkc3Wk1YbmFWUTQ1WmkzZ3F5ck1ibTdETHhxWDYKV0pLdkZkNUJNY2F4YW16dWhoN0I4R2xrMkNsNUZsK3Z0QnUxREtya293blpydFBTeGFaVjhsUmo2bmFHU1k4RQo4RVVqUWN5VXF3Z2duZWwwanNoaFVKOGdKMHV0MXN5UVAxWEJJcEpsTEZ5b0dDQmNuWkFvdE9oWnFWTWwxcTdOClQ5aVZEVy9IZ2xPbll0WktTbXREN2JvMk4rSDZxNmhaUVFJWmVzWVJxNUoxV0IwclR5SkkzbHJEV3J2QWRrcDEKTUZrekJJK3d2WHF2MEdtTFNYNzRudU4wZnY2K0VvQkdUakFVbkNIdUxvQ0RQNmQxTVBYL3ZZcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBdTA5VVVlNXU3N3pvRUdBSHc5dG5EbTN4cThxNnJNZy9TMDd1UkNaSzE4bEtmZXEvClVZb3VuOTd4RmdxY21jOU95VDVmN2c2djZvcmJIeFlDVWsrUndBQmJnR3BGVTA2M0pmeGwyMWV5dS9DNVNGMkYKMFYxTEFSYSt5SjFUd0VpTjJrRW9sVUsyMjIwRTRJZUdhbEZ3U3NKR05zcStEKzFjNWVuWG11VzJMKzkyc2FMTQprUXErb0U5MGlKUXoxQWZCY1dPOXFQV3FBNStJbVNMaDUzU0NDblFtU3JjZjRqUFBSVXcraGU3WmErbEl3N20rClFXZjhxVCsrWXRxRzJtTDBqSDQ3dFF0b3RMbVcwUVdKRllHclk1Ty91L2ZiQU5RRk9sYW51ZWxsVGhjNlNUS2EKS2RRNnJ0MkZSNldRRGZSaGVldlMzWWluMXZ4ZlVSS2Njb25TelFJREFRQUJBb0lCQVFDcHNWUFZxaW9zM1RwcwpZMk9GaDhhVXB2d3p3OFZjOVVtS1EyYk9yTlpQS2hobmZQMTR0TFJLdCtJNE1zTHZBWVlDQVpWTkNWZE1LQ0lkCnhvV3g1azVINE1zRXlzSWxtQUdLMDErLzJIS2ZtNVZ3UHZJVjIrd3dmMWUyVGZuckVKQWFzNzg5Z2lSQkpFSXYKMi9mbGFBUlFaakxRUHRyemVQb1pmTUdNbmlGd3lIVVJVQWZkL0U0QlFGNS8zUWVFeEpEVkNtaEZEOG5YRXlqRwpLSG5CSlI1TGFCeFpPcmh0bDVqckRZRGxtaWcyZGNPY21TOU5xN2xCUVExb3NmR1FhbjlTODQ1VnlFdjQvcWZrCjAwZkpJN0JpSStDcWNtM0lHRmhNMG5lNXlvVE8zSEo4Sy94bTdpRmxIcVg1cXZaYjJnc2hTRzZIR1E4YjZPV2UKT09id0xxZnRBb0dCQU1VNisvaEdsaVJrNGd4S3NmUE12dERsV3hGTFVWaTdlK1BGQlJ6Zm5FRlh3U055dlI4ZgpYTmlWZStUOFQvcTgwUnZrTmQ1QVNnd1NBTkwraUZnN0R4SXIzUC9GSkc5SjYwcURYMDZzZi9tKzM5c2VzZFp0Ck9Fanl6UXBmNnVtRGVKMHhvNDlrVWtiTkNvQUNPamE4QndDeFdRcjJHb0ZyL0ptZGEra0JteWRUQW9HQkFQTWYKbDkzbXo1QUpiVkE3cVRWckRiVDNDWEsxRjQyV0RuQ0FDRU5kcE8vZlhOZWtuWEhWcjZiRUl6UTluelQ4R2hmOApUUlBtb1VJTVNVVGpHZmdUMTlkbTliODdZb2NkMENCd0pZejlwcmtRaUpMdm13QS82cytQalkvVUwvSHdrT05QCllvcm9YclB5WVpIeHNjMW54di9lRWJic0UyWjdVSXpXbGpxK0lIbGZBb0dBYlN1OUZTeGRKMEFBTDdXWTBzNWUKUU5yemtac1RKLzUvRVJDWlIrWXVZNnpqWjIrM1oyYkF5ZEhVaG1kekRlTStEQ1pCK3dleTlRTnlHVmh5dUFQWQp6OElmemlPZGkweHJSUTk2emQyRjZRUFNmVU44Uktpb0l4amlqZitSMURmRnA1MDJYOFMwRmlTZ3owSnNYcWV0CmFLRENIT01rd01hNVIzNXZvTVlXejZrQ2dZRUE1TWdiR2ZhRDVjL3BMUEluaFp3SzF2c015Z045ZVgvMmNJa2EKdllIV250ODZ0N1l4YnBpZDVUbDJ3MGNsbFMrU3duVnFkc3ExZnJpZkRoTURNZjVDUTNHZzJXWmhqakpRMHVXVgpnSHFFdEd2SmlUT3VVV3JVWktONm5Ca1pVUHVHN0ZDY3M0aDg3YXF0aEMvRG1ENEs5bVlibDEzSjE4cz
gvbnRECi9WMUNvOU1DZ1lFQWlCb2tPTEFBNlFDbHNYOXcwNlMrZ1pXT0FpbDlRRXhHalhVR2o0ZEtielpibmgrWnkwbUoKNW9LRHpydTFKeGdoa1JtQ0ZiZmx6aHpMMktWU2xIbXNPYWZDSU1JSHJSZTdic0gvdjg2SVpkYXlnTWxLckJ2LwpXUVMxdmJoQjVvNGdwRkE5aG0rMFcwTW9ZOUVsaERPUG52ajFxT2lTRVArN0dXdDAxOW5HdWdzPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

As you can see, this configuration file embeds the cluster CA certificate together with a client certificate and key that kubelet uses to authenticate to the api-server. In other words, if the file is missing these credentials or the certificate has expired, the kubelet service will not start successfully.

In the same way, the controller-manager carries a certificate to talk to the api-server; if the certificate in its configuration file is wrong or expired, the controller-manager will not start either, even though it runs as a static pod.

 cat /etc/kubernetes/controller-manager.conf
apiVersion: v1
clusters:
- cluster:certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1USXdPREEyTXpJek1Wb1hEVE14TVRJd05qQTJNekl6TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT0N4Ck14ekVGcTZEeVZBTjNkelU0MUFUVU94VEgvd1dlUmZkS2F2L0lDSmFjR3VQQXZQOG9LUTc3L1BucDlnMHdsa2YKZFVTWW1WclpEVFR0dHN6YWRPdDlkWWFRTHI0MkM1MlNWWEU4eEl2MSt3MXo3QURYek04N2FISXlCZXVqbm1INwptS3lYdFlyR3I0UmxIM1d4TGU1YmRCYk03QkMrSTRndUZmNThHVFJ3N1QrclpJYXpqcDRPd1pVeFZGRm0rd0Y4ClZTZ2s1VVZXMGxtZ05mamt4WjZPbk1EcDBBREdDZ2JUZkVkazdmdlpGTVFkUkFMU2dmVmNGdGtWcG5xWjBjYVMKRDZaVHBwTWNiNkVqV2JNd1dnQ2F1eFRmNTF5dkhFdTFRTXdXa2Y1V1NZWEsyMmRoN3VjbHlMOGRra1NYaERWKwpEWjR2cmlhR3JZR21UMFQzbTRVQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZMOGFOS1lFM2M1LytqOTlqci9XeXVwNTF2cllNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBTGZCakdEUEo1Q1BVUUlRTkZmOApxMzgwU3hWWjBVMERZWXNGdFg0SW5SbmxCWGFtNkF5VUNMUGJYQkxnNDdCMThhZWN4aTZzVmpFeVZDdFpWUU9ZClZZd3Ftd1RkZ3VlK01sL1hKTGcyZDFVYjZBSVhSVnF2VExVTGt2ck56NVh6RHRjZElCdUlwUXRRUFBVS013NXoKYmZQanhxNndNeks1Z1htWUVONnI1ZzJTR1lNdEc5UGlhMFppcHJFY2lLaUNybW5TN3plaHBVOS9taUFEbWZzaQpld2E2RyszbHlBc3JISFRraTZWMUtNVVJUN3BWWnpFWFJUNElmK25kRDd5N2FwcS9lN3FLMkwwVDd1NnA0WVVuCmowQkdLcWNscGkwRzZzVU54NGxKdkN3aDZJbkh6UEN3OC9BRk9yTzdXRTRWVmM1N0JJVjZrQlFzVmJmaEVnN2kKM0JzPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==server: https://192.168.123.150:6443name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager@kubernetes
current-context: system:kube-controller-manager@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-controller-manageruser:client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGakNDQWY2Z0F3SUJBZ0lJVXJjNDVZUVF1NE13RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRFeU1EZ3dOak15TXpGYUZ3MHlNekV5TVRJeE1EQXdNamRhTUNreApKekFsQmdOVkJBTVRIbk41YzNSbGJUcHJkV0psTFdOdmJuUnliMnhzWlhJdGJXRnVZV2RsY2pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUtkU2d1d1NDMVhVSmlvRjIvbFhaNHQwWjBQRVEvc0QKWk9ZTzdPdlUwbHVSYngzcW1VaEN6NjlPbFBtY0h0OUllRHl2a09qYzFBZ2FlZWwzR3VnSjlKZ2xFM0RmZm5hegpZVVBrOXVzTVBIQjZtMUhNR2ZjUVloZkhLUG9TNmk5bEVWZUhKOXEvOTVaSUdGKzVMR0F2eXVBUWg1NmZYT1hrCjRZRTRsSmYzRGhzdGRNeEtDNXVZTXpxazR3RmlNVkNxYkRhdlVqc3VmZzhhYlFQQWFVQ3NieWFFMm04RVllOGgKb2VkTkdVdml5WkxrQUM2ckM4bGVIeGNBbjB2ZlVvYXU3UzJFdWpNK01jV3RLZ1o4NmVVdjJaU2dXemFCa1VWZAo4QlRxK0VDOUtlb1pWRHJnUlBpejVub3FsVDBYTTJacUVJK01Ud1lTRWt3aTJ5WlpqNS9yWFQ4Q0F3RUFBYU5XCk1GUXdEZ1lEVlIwUEFRSC9CQVFEQWdXZ01CTUdBMVVkSlFRTU1Bb0dDQ3NHQVFVRkJ3TUNNQXdHQTFVZEV3RUIKL3dRQ01BQXdId1lEVlIwakJCZ3dGb0FVdnhvMHBnVGR6bi82UDMyT3Y5Yks2bm5XK3Rnd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBR0d3NGxaaTZDSzB5YWpsOGNlUEg3a20vY1VLcGFFTld1aWJyVTQyNEVDM1A0UkFZZlJXCmRranZjNm9JbVhzR2V2T2Y0ZWlJU0dxaWNOb254N2RkWUxLY2tDaWxLQmcwS2hyZGFRT3o3N3ZCQitvamczbmgKMHByb05oYW12dkVpc0lUY212cmdzNTZqMk1Id2lUK3ZHeXFHbWxPOG9TRHZmWVFnMUVqTkRxWlVEd0g3OFlHYwowT0h5cXU3SW1hYngvKzdWOGcvMmlBS3NEVVVja3I3UHVMWWI3RlA0ZlZvVjlDWkIzVHI3bXFRQ2FrUmJmMnF1CjUvd3pEMG9lYjFBeHV0aUFSVjBlM2JBZUxXV0tqckEyNW9ISVBCRW1zTEFQSmtlMDVlRk9LK05ZUHBMdjBNU04KVnlzaXEzcVl0RkxZSzRaN0kyaGgrKzc5MXM4Y2g1TDNFeGM9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0Kclient-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBcDFLQzdCSUxWZFFtS2dYYitWZG5pM1JuUThSRCt3Tms1ZzdzNjlUU1c1RnZIZXFaClNFTFByMDZVK1p3ZTMwaDRQSytRNk56VUNCcDU2WGNhNkFuMG1DVVRjTjkrZHJOaFErVDI2d3c4Y0hxYlVjd1oKOXhCaUY4Y28raExxTDJVUlY0Y24yci8zbGtnWVg3a3NZQy9LNEJDSG5wOWM1ZVRoZ1RpVWwvY09HeTEwekVvTAptNWd6T3FUakFXSXhVS3BzTnE5U095NStEeHB0QThCcFFLeHZKb1RhYndSaDd5R2g1MDBaUytMSmt1UUFMcXNMCnlWNGZGd0NmUzk5U2hxN3RMWVM2TXo0eHhhMHFCbnpwNVMvWmxLQmJOb0dSUlYzd0ZPcjRRTDBwNmhsVU91QkUKK0xQbWVpcVZQUmN6Wm1vUWo0eFBCaElTVENMYkpsbVBuK3RkUHdJREFRQUJBb0lCQUZ0Vm9QMjRBOVFBRUMwVQpNYlZ6enFQREVMTmZLVFNWNzdmZElkckJ1Mm9jZ3lremJDU1R3OGFRQUtZWVlJbkZoMHlwRVZMcmFCcGNTWHYxCmRneC9rcktTV29CY255MndVVUc4ZEVSdDAzZ2FsVG9iVFhrZHlrM3NleU8ydTNyUGtwM1N1eUNmZFVqbFpkaXEKdmR4cmVqVEJFU2EzR3dDcTVhV2grd3JRNHpSVnc5eERTVGF0MXA1cS82UFRPcHhkWElJRWhHQVd4SEE2RnVnSgpuR2RrMWFxT1ZweFFSZVZ1QVluNVlYemZ0MXByU0wwdWQwOFRNd3FaRUowa0dabUdIODJDV0tTRnQ4cEs1MnJ2CnpURDFnWE5hZzhrNGF3czhrQXpnT1lWek8wMHR2SjdVQjNFcVBSSThKQVJucWpyTGpNaTRqeDVMZGFQZkc2VUQKMERwNFdsRUNnWUVBelRYT2FJdUQ2QUhkaTNPaFlpQ2puOUwwM3ZOUGY1c1JtRmJzclc2UFFxdEVTTit2aVFxMApFMVREU2REM3BpSndqSC9zQTNhRW5ISStRZVYxT1ZFbVR3YkxpaFhOVWtYTWo2OWRmVHdxYTd1MnJEWm0zQnF6CkVrYUI3N0FHb3BvR2hRajBHcGMzTnY5MDZwN3orekVEKzFSajFaeGtsaXVXQ2NDMzViVlZqejBDZ1lFQTBMd2QKa1VYdGxhK0hWb2dicFJsRlE2dTVtbVFzRXpMYWdrQkFoRDFqYkpORWtVL1o0RjJGNkwwVTN0SWtWNWRyRG9XRQowbUo5TmdELzhPWVdXbTZSdVo1bVhPZlRSOVdkc2k5MGpUNU9NRkJHbHVMdlUyNllhTEhUdmMwbHdiQlVPelJECjJvb2ZEZURjU3Y2d1oxSVU3QlQ2YTA4dWtON0FpNVR6ZWhrQ1ppc0NnWUVBdWVNeHRKWWN5TDlYMW9qSitiK2oKT0pXNTUzUHo0WjJ3bEpTNUZHbUFNRjVBSHRzeGdTeEc3dlByYXlSMkVQSkZqYUFiUlEvSkZJYVFTdFQyR1JPZgpaaHE3cWJ3U0g2TEdxS21zUUZPT0FjVXF0bGtaVit4L3BlQmt0NkIyZ2ppUUMxYU8rTDlkN3QzOUpNTVVNOGkwCjJLZ2JQMWJKN3haUWRVa3p6RXMwMCtrQ2dZQlJraUlQNG43dEx4STVpNmthQk4wZmk5MVZhMjRaOXBhVHJoNUkKVDJFcVRnYk9ycURiWUZEeldlanRCcnd6Q3JaSWozOFBaSFBBQmZYL0l6dDdEWmlmTERxZWRlNElOWCtSNFordgpqcmlwZ3NXRE01NEpRY0FIc2U2b1RxSkJwZkhVelNEekoyVHBYSVZhUFZ1Y2xPUWVPamgrZFF3aWl4bzlzZkRRCk56UEx6d0tCZ1
FDaVU2QUZ4NjdPdW1jRGtwK21tRXpYNWpsQmtlZk9XakJlNlZ4ZGwzNjA1TjJ6YS9CbDlINzEKZ3pjdlZhQTY5RC9uVG50bE1xaWhnUm1NTGdEMGovOWkrV0VkMzR2Y0JTOHI2M1VZZVRvUWRoakJCZC8yeUM5NAppQ0ZiNlZ0dGRrSmg5SEhkV1pTRkxwQ0FmTnVlMVRxclA5L1RobTNRaTFjM2lZZVJUblh0YkE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
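Incidentally, the expiry date of the client certificate embedded in any of these kubeconfig files can be decoded without the cluster running at all. A minimal sketch, assuming the certificate is embedded inline as client-certificate-data (as it is here) rather than referenced by a file path:

# decode the embedded client certificate and print its validity period
grep 'client-certificate-data' /etc/kubernetes/kubelet.conf \
  | awk '{print $2}' | base64 -d \
  | openssl x509 -noout -dates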

III.

Error messages

The cluster is clearly not running:

root@k8s-master:~# kubectl get no
The connection to the server 192.168.123.150:6443 was refused - did you specify the right host or port?
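Since the api-server refuses connections, two quick checks narrow things down before reading the logs: whether anything is listening on port 6443 at all, and what kubelet itself is reporting (the unit name assumes the standard kubeadm systemd setup):

# is anything listening on the api-server port?
ss -lntp | grep 6443
# follow the kubelet log in real time
journalctl -u kubelet -f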

The system log shows the following errors.

The certificates had already expired on December 8:

Dec 12 23:11:18 k8s-master kubelet[2750]: I1212 23:11:18.900395    2750 server.go:868] "Client rotation is on, will bootstrap in background"
Dec 12 23:11:18 k8s-master kubelet[2750]: E1212 23:11:18.903330    2750 bootstrap.go:265] part of the existing bootstrap client certificate in /etc/kubernetes/kubelet.conf is expired: 2022-12-08 06:32:35 +0000 UTC
Dec 12 23:11:18 k8s-master kubelet[2750]: E1212 23:11:18.905482    2750 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or

Check the certificate expiry dates to confirm:

root@k8s-master:~# kubeadm  certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 08, 2022 06:32 UTC   <invalid>                               no
apiserver                  Dec 08, 2022 06:32 UTC   <invalid>       ca                      no
apiserver-etcd-client      Dec 08, 2022 06:32 UTC   <invalid>       etcd-ca                 no
apiserver-kubelet-client   Dec 08, 2022 06:32 UTC   <invalid>       ca                      no
controller-manager.conf    Dec 08, 2022 06:32 UTC   <invalid>                               no
etcd-healthcheck-client    Dec 08, 2022 06:32 UTC   <invalid>       etcd-ca                 no
etcd-peer                  Dec 08, 2022 06:32 UTC   <invalid>       etcd-ca                 no
etcd-server                Dec 08, 2022 06:32 UTC   <invalid>       etcd-ca                 no
front-proxy-client         Dec 08, 2022 06:32 UTC   <invalid>       front-proxy-ca          no
scheduler.conf             Dec 08, 2022 06:32 UTC   <invalid>                               no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Dec 06, 2031 06:32 UTC   8y              no
etcd-ca                 Dec 06, 2031 06:32 UTC   8y              no
front-proxy-ca          Dec 06, 2031 06:32 UTC   8y              no 
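Note that kubeadm certs check-expiration falls back to a default configuration because the api-server is down. The same information can also be read straight from the certificate files on disk with openssl, which works regardless of the cluster state; a small sketch using the default kubeadm PKI paths:

# print the expiry date of every certificate under the kubeadm PKI directory
for crt in /etc/kubernetes/pki/*.crt /etc/kubernetes/pki/etcd/*.crt; do
    echo "== $crt"
    openssl x509 -in "$crt" -noout -enddate
done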

IV.

Certificate renewal and cluster recovery (preparation for the upgrade)

1.

Add the Aliyun apt sources:

cat >/etc/apt/sources.list.d/kubernetes.list <<EOF
# Aliyun mirrors
deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
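If this host has never trusted the Kubernetes apt repository, apt-get update will also complain about a missing GPG key. A hedged sketch for adding it; the key URL below is the one commonly documented for the Aliyun mirror, so verify it against your mirror before relying on it:

# import the signing key for the Kubernetes apt repository (URL assumed, verify for your mirror)
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -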

Update the apt package index:

sudo apt-get update

The output is as follows:

There are warnings. They can be ignored or cleaned up: they simply mean that the same repositories are configured in two files and conflict with each other, so deleting the /etc/apt/sources.list file (which duplicates the entries above) is enough to silence them.

root@k8s-master:~# sudo apt-get update
Hit:1 http://mirrors.aliyun.com/ubuntu bionic InRelease
Hit:2 http://mirrors.aliyun.com/ubuntu bionic-updates InRelease
Hit:3 https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic InRelease
Hit:4 http://mirrors.aliyun.com/ubuntu bionic-backports InRelease
Hit:5 http://mirrors.aliyun.com/ubuntu bionic-security InRelease
Hit:6 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease
Get:7 http://mirrors.aliyun.com/ubuntu bionic-proposed InRelease [242 kB]
Get:8 http://mirrors.aliyun.com/ubuntu bionic/universe Sources [9,051 kB]
Get:9 http://mirrors.aliyun.com/ubuntu bionic/restricted Sources [5,324 B]
Get:10 http://mirrors.aliyun.com/ubuntu bionic/main Sources [829 kB]
Get:11 http://mirrors.aliyun.com/ubuntu bionic/multiverse Sources [181 kB]
Get:12 http://mirrors.aliyun.com/ubuntu bionic-updates/restricted Sources [33.1 kB]
Get:13 http://mirrors.aliyun.com/ubuntu bionic-updates/universe Sources [486 kB]
Get:14 http://mirrors.aliyun.com/ubuntu bionic-updates/multiverse Sources [17.2 kB]
Get:15 http://mirrors.aliyun.com/ubuntu bionic-updates/main Sources [537 kB]
Get:16 http://mirrors.aliyun.com/ubuntu bionic-backports/universe Sources [6,600 B]
Get:17 http://mirrors.aliyun.com/ubuntu bionic-backports/main Sources [10.5 kB]
Get:18 http://mirrors.aliyun.com/ubuntu bionic-security/restricted Sources [30.2 kB]
Get:19 http://mirrors.aliyun.com/ubuntu bionic-security/multiverse Sources [10.6 kB]
Get:20 http://mirrors.aliyun.com/ubuntu bionic-security/main Sources [288 kB]
Get:21 http://mirrors.aliyun.com/ubuntu bionic-security/universe Sources [309 kB]
Get:22 http://mirrors.aliyun.com/ubuntu bionic-proposed/restricted Sources [8,164 B]
Get:23 http://mirrors.aliyun.com/ubuntu bionic-proposed/universe Sources [9,428 B]
Get:24 http://mirrors.aliyun.com/ubuntu bionic-proposed/main Sources [75.6 kB]
Get:25 http://mirrors.aliyun.com/ubuntu bionic-proposed/main amd64 Packages [145 kB]
Get:26 http://mirrors.aliyun.com/ubuntu bionic-proposed/main Translation-en [32.2 kB]
Get:27 http://mirrors.aliyun.com/ubuntu bionic-proposed/restricted amd64 Packages [132 kB]
Get:28 http://mirrors.aliyun.com/ubuntu bionic-proposed/restricted Translation-en [18.5 kB]
Get:29 http://mirrors.aliyun.com/ubuntu bionic-proposed/universe amd64 Packages [11.0 kB]
Get:30 http://mirrors.aliyun.com/ubuntu bionic-proposed/universe Translation-en [6,676 B]
Fetched 12.5 MB in 14s (878 kB/s)
Reading package lists... Done
W: Target Packages (main/binary-amd64/Packages) is configured multiple times in /etc/apt/sources.list:3 and /etc/apt/sources.list.d/kubernetes.list:2
W: Target Packages (main/binary-all/Packages) is configured multiple times in /etc/apt/sources.list:3 and /etc/apt/sources.list.d/kubernetes.list:2

2.

Reinstall kubelet, kubeadm and kubectl at version 1.22.2:

sudo apt-get install kubelet=1.22.2-00  kubeadm=1.22.2-00 kubectl=1.22.2-00  -y
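Optionally, pin these packages so that a routine apt-get upgrade does not move them unexpectedly; un-hold them the same way before the next planned upgrade:

# keep apt from upgrading the kubernetes packages behind our back
sudo apt-mark hold kubelet kubeadm kubectl
# later, before the next deliberate upgrade:
# sudo apt-mark unhold kubelet kubeadm kubectl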

3.

Start the upgrade

First, renew the certificates so that kubelet can start.
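Before renewing anything, it is prudent to back up the current certificates, kubeconfig files and etcd data so the operation can be rolled back; a minimal sketch using the default kubeadm paths:

# back up certificates, kubeconfig files and static pod manifests
cp -a /etc/kubernetes /etc/kubernetes.bak-$(date +%F)
# back up the etcd data directory as well (etcd is not running anyway, so this is safe)
cp -a /var/lib/etcd /var/lib/etcd.bak-$(date +%F)

With a backup in place, run the renewal: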

kubeadm  certs renew all

The output is as follows:

[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[renew] Error reading configuration from the Cluster. Falling back to default configuration

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.

The end of the output reminds us to restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd. Before doing that, check whether the certificates really have been renewed:

They have been renewed, but the kubelet configuration and the other kubeconfig files still embed the old certificates, so kubelet and the other services still cannot start:

root@k8s-master:~# kubeadm  certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 12, 2023 15:29 UTC   364d                                    no
apiserver                  Dec 12, 2023 15:29 UTC   364d            ca                      no
apiserver-etcd-client      Dec 12, 2023 15:29 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Dec 12, 2023 15:29 UTC   364d            ca                      no
controller-manager.conf    Dec 12, 2023 15:29 UTC   364d                                    no
etcd-healthcheck-client    Dec 12, 2023 15:29 UTC   364d            etcd-ca                 no
etcd-peer                  Dec 12, 2023 15:29 UTC   364d            etcd-ca                 no
etcd-server                Dec 12, 2023 15:29 UTC   364d            etcd-ca                 no
front-proxy-client         Dec 12, 2023 15:29 UTC   364d            front-proxy-ca          no
scheduler.conf             Dec 12, 2023 15:29 UTC   364d                                    no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Dec 06, 2031 06:32 UTC   8y              no
etcd-ca                 Dec 06, 2031 06:32 UTC   8y              no
front-proxy-ca          Dec 06, 2031 06:32 UTC   8y              no

So the next step is to delete these kubeconfig files and have kubeadm regenerate them:

root@k8s-master:~# rm -rf /etc/kubernetes/*.conf
root@k8s-master:~# kubeadm  init phase kubeconfig all
I1212 23:35:49.775848   19629 version.go:255] remote version is much newer: v1.26.0; falling back to: stable-1.22
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file

As shown above, the kubeconfig files have been regenerated and now embed the new certificates, so kubelet can be restarted:

root@k8s-master:~# systemctl restart kubelet
root@k8s-master:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2022-12-12 23:36:57 CST; 2s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 21307 (kubelet)
    Tasks: 20 (limit: 2210)
   CGroup: /system.slice/kubelet.service
           ├─21307 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=registry.
           └─21589 /opt/cni/bin/calico
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316164   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5ef5e743-ee71-4c80-a5
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316204   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c4992d6a9ff2341f1
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316279   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzwxm\" (UniqueName: \"kubernetes.io/projected/8ad3a63e-e
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316323   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ef5e743-ee71-4c80-
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316364   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx6q6\" (UniqueName: \"kubernetes.io/projected/5ef5e743-e
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316407   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4992d6a9ff2341f1f1b0
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316447   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5ef5e743-ee71-4c8
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316497   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-local-net-dir\" (UniqueName: \"kubernetes.io/host-path/5ef5e743-ee71
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316540   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5ef5e743-ee71-4c80-a
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316562   21307 reconciler.go:157] "Reconciler: start to sync state"

Checking the node status at this point still fails, because kubectl is reading the config file under the .kube directory, which still contains the old certificate. Delete it, re-export the KUBECONFIG variable, and overwrite the old config (as shown right after the error below):

root@k8s-master:~# kubectl get no
error: You must be logged in to the server (Unauthorized)
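The fix is exactly what the error suggests: point kubectl at the freshly generated admin.conf and overwrite the stale copy under ~/.kube (the same two commands appear again in the session transcript further down):

export KUBECONFIG=/etc/kubernetes/admin.conf
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config   # answer 'y' when asked to overwrite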

OK, the certificates on the master node are now fully renewed. What about the certificates on the worker nodes?

The solution: since the whole cluster was built with kubeadm and etcd runs as a static pod on the master node, once the master has recovered and etcd is confirmed healthy, the worker nodes can simply rejoin the cluster:

Delete the worker nodes:

root@k8s-master:~# kubectl delete nodes k8s-node1
node "k8s-node1" deleted
root@k8s-master:~# kubectl delete nodes k8s-node2
node "k8s-node2" deleted

Generate the join command:

root@k8s-master:~# kubeadm token create --print-join-command
kubeadm join 192.168.123.150:6443 --token 692e4m.o8njp7guix9w5jne --discovery-token-ca-cert-hash sha256:fb346dffae444c802ffeaee5269375b3727c05d92a4365231772de414cbd6923

On the worker nodes, reset and then rejoin the cluster (run on the 151 and 152 nodes):

root@k8s-node1:~# kubeadm reset -f
[preflight] Running pre-flight checks
W1212 23:52:30.989714   84567 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
root@k8s-node1:~# kubeadm join 192.168.123.150:6443 --token 692e4m.o8njp7guix9w5jne --discovery-token-ca-cert-hash sha256:fb346dffae444c802ffeaee5269375b3727c05d92a4365231772de414cbd6923
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
root@k8s-master:~# export KUBECONFIG=/etc/kubernetes/admin.conf
root@k8s-master:~# kubectl get no
NAME         STATUS     ROLES                  AGE    VERSION
k8s-master   Ready      control-plane,master   369d   v1.22.2
k8s-node1    NotReady   <none>                 369d   v1.22.0
k8s-node2    NotReady   <none>                 369d   v1.22.0
root@k8s-master:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite '/root/.kube/config'? y

Back on the master node, checking the node status shows that everything has recovered:

root@k8s-master:~# kubectl get no
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   369d    v1.22.2
k8s-node1    Ready    <none>                 2m18s   v1.22.0
k8s-node2    Ready    <none>                 12s     v1.22.0

The pods are all in a normal state as well:

root@k8s-master:~# kubectl get po -A
NAMESPACE       NAME                                       READY   STATUS    RESTARTS        AGE
default         front-end-6f94965fd9-dq7t8                 1/1     Running   0               27m
default         guestbook-86bb8f5bc9-mcdvg                 1/1     Running   0               27m
default         guestbook-86bb8f5bc9-zh7zq                 1/1     Running   0               27m
default         nfs-client-provisioner-56dd5765dc-gp6mz    1/1     Running   0               27m

V.

Cluster upgrade (to 1.22.10)

First, upgrade kubeadm to 1.22.10:

root@k8s-master:~# apt-get install kubeadm=1.22.10-00
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  cri-tools
The following packages will be upgraded:
  cri-tools kubeadm
2 upgraded, 0 newly installed, 0 to remove and 193 not upgraded.
Need to get 26.7 MB of archives.
After this operation, 19.9 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 cri-tools amd64 1.25.0-00 [17.9 MB]
Get:2 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubeadm amd64 1.22.10-00 [8,728 kB]
Fetched 26.7 MB in 51s (522 kB/s)
(Reading database ... 67719 files and directories currently installed.)
Preparing to unpack .../cri-tools_1.25.0-00_amd64.deb ...
Unpacking cri-tools (1.25.0-00) over (1.19.0-00) ...
Preparing to unpack .../kubeadm_1.22.10-00_amd64.deb ...
Unpacking kubeadm (1.22.10-00) over (1.22.2-00) ...
Setting up cri-tools (1.25.0-00) ...
Setting up kubeadm (1.22.10-00) ...
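With the new kubeadm in place, it can first print what an upgrade would do; this confirms that v1.22.10 is actually reachable from the current cluster version before anything is touched:

kubeadm upgrade plan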

Then upgrade the kubernetes cluster:

kubeadm upgrade apply v1.22.10

The output is as follows:

root@k8s-master:~# kubeadm upgrade apply v1.22.10
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.22.10"
[upgrade/versions] Cluster version: v1.22.0
[upgrade/versions] kubeadm version: v1.22.10
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.22.10"...
Static pod: kube-apiserver-k8s-master hash: 2761ce4ea551870599dbc22df4805251
Static pod: kube-controller-manager-k8s-master hash: c4992d6a9ff2341f1f1b0d3058a62049
Static pod: kube-scheduler-k8s-master hash: 938652c36b8ab3b7a6345373ea6e1ded
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-k8s-master hash: 942d6def80e349c32e518bf1fb533795
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-12-13-00-10-26/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-k8s-master hash: 942d6def80e349c32e518bf1fb533795
Static pod: etcd-k8s-master hash: 942d6def80e349c32e518bf1fb533795
Static pod: etcd-k8s-master hash: ee7d79d2b2967f03af72732ecda2b44f
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests149634972"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-12-13-00-10-26/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master hash: 2761ce4ea551870599dbc22df4805251
Static pod: kube-apiserver-k8s-master hash: 2761ce4ea551870599dbc22df4805251
Static pod: kube-apiserver-k8s-master hash: 2761ce4ea551870599dbc22df4805251
Static pod: kube-apiserver-k8s-master hash: d2601c13ace3af023db083125c56d47b
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-12-13-00-10-26/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master hash: c4992d6a9ff2341f1f1b0d3058a62049
Static pod: kube-controller-manager-k8s-master hash: 648269e02b16780e315b096eec7eaa5d
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-12-13-00-10-26/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master hash: 938652c36b8ab3b7a6345373ea6e1ded
Static pod: kube-scheduler-k8s-master hash: ec4c9f7722e075d30583bde88d591749
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.22.10". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

The output points out that kubelet should be upgraded as well:

root@k8s-master:~# apt-get install kubelet=1.22.10-00
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
  kubelet
1 upgraded, 0 newly installed, 0 to remove and 193 not upgraded.
Need to get 19.2 MB of archives.
After this operation, 32.1 MB disk space will be freed.
Get:1 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubelet amd64 1.22.10-00 [19.2 MB]
Fetched 19.2 MB in 42s (453 kB/s)
(Reading database ... 67719 files and directories currently installed.)
Preparing to unpack .../kubelet_1.22.10-00_amd64.deb ...
Unpacking kubelet (1.22.10-00) over (1.22.2-00) ...
Setting up kubelet (1.22.10-00) ...
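apt only replaces the binary; the running kubelet still has to be restarted to pick up the new version (the same kubelet package upgrade plus restart applies on the worker nodes when they are brought up to 1.22.10 later):

sudo systemctl daemon-reload
sudo systemctl restart kubelet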

The certificate dates have been refreshed once more by the upgrade:

(Why not upgrade directly at the very start? Because an upgrade requires the cluster to be up and healthy, but at that point the certificates had already expired and the cluster was down, so the upgrade could only be done after the certificates were renewed.)

root@k8s-master:~# kubeadm  certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 12, 2023 16:11 UTC   364d            ca                      no
apiserver                  Dec 12, 2023 16:11 UTC   364d            ca                      no
apiserver-etcd-client      Dec 12, 2023 16:11 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Dec 12, 2023 16:11 UTC   364d            ca                      no
controller-manager.conf    Dec 12, 2023 16:11 UTC   364d            ca                      no
etcd-healthcheck-client    Dec 12, 2023 16:10 UTC   364d            etcd-ca                 no
etcd-peer                  Dec 12, 2023 16:10 UTC   364d            etcd-ca                 no
etcd-server                Dec 12, 2023 16:10 UTC   364d            etcd-ca                 no
front-proxy-client         Dec 12, 2023 16:11 UTC   364d            front-proxy-ca          no
scheduler.conf             Dec 12, 2023 16:11 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Dec 06, 2031 06:32 UTC   8y              no
etcd-ca                 Dec 06, 2031 06:32 UTC   8y              no
front-proxy-ca          Dec 06, 2031 06:32 UTC   8y              no
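As a final check, confirm the control-plane version and the node status; the worker kubelets keep reporting their old version until they are upgraded the same way:

kubectl get nodes
kubectl version --short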
