I rebooted the server today, only to find that the k8s environment deployed yesterday no longer works. It seems a single-node setup really is only suited to development. After the reboot there was no route left to the k8s services, and the iptables rules had been flushed.

[root@k8s-test ~]# kubectl get pvc --all-namespaces
Unable to connect to the server: dial tcp: lookup lb.kubesphere.local on 202.96.128.86:53: no such host
[root@k8s-test ~]# getenforce
Disabled
[root@k8s-test ~]# systemctl status firewalld.target
Unit firewalld.target could not be found.
[root@k8s-test ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.5.1     0.0.0.0         UG    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.5.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
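The `no such host` error above is the real blocker: `lb.kubesphere.local` is not a public name. KubeKey normally maps it in `/etc/hosts`, and here the lookup is falling through to the upstream DNS server 202.96.128.86. Before tearing the cluster down, one option is to restore that mapping by hand. A minimal sketch, with the `HOSTS` override and the demo file name being mine so the snippet is safe to run anywhere; point `HOSTS` at `/etc/hosts` on the real node:

```shell
# Restore the lb.kubesphere.local -> node IP mapping that KubeKey writes to /etc/hosts.
# Assumption: single-node cluster, so the load-balancer name points at the node itself.
HOSTS="${HOSTS:-./hosts.demo}"   # set HOSTS=/etc/hosts on the actual server
LB_IP="192.168.5.233"
touch "$HOSTS"
if ! grep -q 'lb\.kubesphere\.local' "$HOSTS"; then
  echo "$LB_IP  lb.kubesphere.local" >> "$HOSTS"
fi
grep 'lb\.kubesphere\.local' "$HOSTS"
```

If the entry was the only thing missing, `kubectl` starts working again immediately; no rebuild needed.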

Delete the entire cluster

[root@k8s-test ~]# ./kk delete cluster
Are you sure to delete this cluster? [yes/no]: yes
INFO[16:41:03 CST] Resetting kubernetes cluster ...
[k8s-test 192.168.5.233] MSG:
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0909 16:41:04.606002   11473 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp: lookup lb.kubesphere.local on 202.96.128.86:53: no such host
[preflight] Running pre-flight checks
W0909 16:41:04.606213   11473 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[k8s-test 192.168.5.233] MSG:
sudo -E /bin/sh -c "iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && ip link del kube-ipvs0 && ip link del nodelocaldns"
INFO[16:41:12 CST] Successful.                                  
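`kk delete cluster` already flushes iptables and removes the `kube-ipvs0`/`nodelocaldns` links (the `sudo -E` line above), but `kubeadm reset` also asks for some manual cleanup: CNI config, IPVS tables, and the old kubeconfig. A guarded sketch of those steps; every command is skipped when the tool or path is absent, so treat it as a checklist rather than a turnkey script:

```shell
# Manual cleanup that `kubeadm reset` leaves to the operator.
# Each step is guarded so the script is a no-op where it does not apply.
if [ "$(id -u)" -eq 0 ]; then
  command -v iptables >/dev/null 2>&1 && iptables -F && iptables -t nat -F
  command -v ipvsadm  >/dev/null 2>&1 && ipvsadm --clear
  [ -d /etc/cni/net.d ] && rm -rf /etc/cni/net.d        # stale CNI config
  [ -f "$HOME/.kube/config" ] && rm -f "$HOME/.kube/config"  # old kubeconfig
else
  echo "run as root on the cluster node"
fi
CLEANUP_DONE=1
echo "cleanup finished (or skipped)"
```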

Rebuild the cluster
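Here `kk` is run with the all-in-one defaults. When anything needs customizing (node IPs, CIDRs, the `lb.kubesphere.local` endpoint), KubeKey can work from a config file instead, generated with `./kk create config`. A sketch of what such a file might look like for this single node; the field values are inferred from the logs, the password is a placeholder, and the exact schema depends on the KubeKey version:

```yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: k8s-test
spec:
  hosts:
  - {name: k8s-test, address: 192.168.5.233, internalAddress: 192.168.5.233, user: root, password: "<password>"}
  roleGroups:
    etcd: [k8s-test]
    master: [k8s-test]
    worker: [k8s-test]
  controlPlaneEndpoint:
    domain: lb.kubesphere.local   # the name that failed to resolve after the reboot
    address: ""
    port: 6443
  kubernetes:
    version: v1.20.4
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
```

`./kk create cluster -f config-sample.yaml --with-kubesphere v3.1.1` would then build from this file.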

[root@k8s-test ~]# !33
./kk create cluster --with-kubernetes v1.20.4 --with-kubesphere v3.1.1
+----------+------+------+---------+----------+-------+-------+-----------+---------+------------+-------------+------------------+--------------+
| name     | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker  | nfs client | ceph client | glusterfs client | time         |
+----------+------+------+---------+----------+-------+-------+-----------+---------+------------+-------------+------------------+--------------+
| k8s-test | y    | y    | y       | y        | y     | y     | y         | 20.10.8 | y          |             |                  | CST 17:11:56 |
+----------+------+------+---------+----------+-------+-------+-----------+---------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[17:12:00 CST] Downloading Installation Files
INFO[17:12:00 CST] Downloading kubeadm ...
INFO[17:12:00 CST] Downloading kubelet ...
INFO[17:12:00 CST] Downloading kubectl ...
INFO[17:12:01 CST] Downloading helm ...
INFO[17:12:01 CST] Downloading kubecni ...
INFO[17:12:01 CST] Configuring operating system ...
[k8s-test 192.168.5.233] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
INFO[17:12:02 CST] Installing docker ...
INFO[17:12:02 CST] Start to download images on all nodes
[k8s-test] Downloading image: kubesphere/etcd:v3.4.13
[k8s-test] Downloading image: kubesphere/pause:3.2
[k8s-test] Downloading image: kubesphere/kube-apiserver:v1.20.4
[k8s-test] Downloading image: kubesphere/kube-controller-manager:v1.20.4
[k8s-test] Downloading image: kubesphere/kube-scheduler:v1.20.4
[k8s-test] Downloading image: kubesphere/kube-proxy:v1.20.4
[k8s-test] Downloading image: coredns/coredns:1.6.9
[k8s-test] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[k8s-test] Downloading image: calico/kube-controllers:v3.16.3
[k8s-test] Downloading image: calico/cni:v3.16.3
[k8s-test] Downloading image: calico/node:v3.16.3
[k8s-test] Downloading image: calico/pod2daemon-flexvol:v3.16.3
INFO[17:12:47 CST] Generating etcd certs
INFO[17:12:48 CST] Synchronizing etcd certs
INFO[17:12:48 CST] Creating etcd service
[k8s-test 192.168.5.233] MSG:
etcd will be installed
INFO[17:12:49 CST] Starting etcd cluster
[k8s-test 192.168.5.233] MSG:
Configuration file will be created
INFO[17:12:49 CST] Refreshing etcd configuration
Waiting for etcd to start
INFO[17:12:54 CST] Backup etcd data regularly
INFO[17:13:01 CST] Get cluster status
[k8s-test 192.168.5.233] MSG:
Cluster will be created.
INFO[17:13:01 CST] Installing kube binaries
Push /root/kubekey/v1.20.4/amd64/kubeadm to 192.168.5.233:/tmp/kubekey/kubeadm   Done
Push /root/kubekey/v1.20.4/amd64/kubelet to 192.168.5.233:/tmp/kubekey/kubelet   Done
Push /root/kubekey/v1.20.4/amd64/kubectl to 192.168.5.233:/tmp/kubekey/kubectl   Done
Push /root/kubekey/v1.20.4/amd64/helm to 192.168.5.233:/tmp/kubekey/helm   Done
Push /root/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.5.233:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
INFO[17:13:05 CST] Initializing kubernetes cluster
[k8s-test 192.168.5.233] MSG:
W0909 17:13:06.146026   15748 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.20.4
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.8. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-test k8s-test.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost] and IPs [10.233.0.1 192.168.5.233 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 63.502323 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-test as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-test as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: aa5vhc.pj7dlo5pe3afnvuj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join lb.kubesphere.local:6443 --token aa5vhc.pj7dlo5pe3afnvuj \
    --discovery-token-ca-cert-hash sha256:bb20f946111b0360e65a36fd64b1217654ef36b141f3e8f0a69cc90ca66c80e1 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join lb.kubesphere.local:6443 --token aa5vhc.pj7dlo5pe3afnvuj \
    --discovery-token-ca-cert-hash sha256:bb20f946111b0360e65a36fd64b1217654ef36b141f3e8f0a69cc90ca66c80e1
[k8s-test 192.168.5.233] MSG:
node/k8s-test untainted
[k8s-test 192.168.5.233] MSG:
node/k8s-test labeled
[k8s-test 192.168.5.233] MSG:
service "kube-dns" deleted
[k8s-test 192.168.5.233] MSG:
service/coredns created
[k8s-test 192.168.5.233] MSG:
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
[k8s-test 192.168.5.233] MSG:
configmap/nodelocaldns created
[k8s-test 192.168.5.233] MSG:
I0909 17:14:37.494453   17794 version.go:254] remote version is much newer: v1.22.1; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
6b4801060155fa8ef052f7a53eac7607918e5a69c55514cb34b1444ba4e13673
[k8s-test 192.168.5.233] MSG:
secret/kubeadm-certs patched
[k8s-test 192.168.5.233] MSG:
secret/kubeadm-certs patched
[k8s-test 192.168.5.233] MSG:
secret/kubeadm-certs patched
[k8s-test 192.168.5.233] MSG:
kubeadm join lb.kubesphere.local:6443 --token gyfe6y.hdnow2d40cdvocl1     --discovery-token-ca-cert-hash sha256:bb20f946111b0360e65a36fd64b1217654ef36b141f3e8f0a69cc90ca66c80e1
[k8s-test 192.168.5.233] MSG:
k8s-test   v1.20.4   [map[address:192.168.5.233 type:InternalIP] map[address:k8s-test type:Hostname]]
INFO[17:14:39 CST] Joining nodes to cluster
INFO[17:14:39 CST] Deploying network plugin ...
[k8s-test 192.168.5.233] MSG:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
[k8s-test 192.168.5.233] MSG:
storageclass.storage.k8s.io/local created
serviceaccount/openebs-maya-operator created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/openebs-maya-operator created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/openebs-maya-operator created
deployment.apps/openebs-localpv-provisioner created
INFO[17:14:40 CST] Deploying KubeSphere ...
v3.1.1
[k8s-test 192.168.5.233] MSG:
namespace/kubesphere-system created
namespace/kubesphere-monitoring-system created
[k8s-test 192.168.5.233] MSG:
secret/kube-etcd-client-certs created
[k8s-test 192.168.5.233] MSG:
namespace/kubesphere-system unchanged
serviceaccount/ks-installer unchanged
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
deployment.apps/ks-installer unchanged
clusterconfiguration.installer.kubesphere.io/ks-installer created
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.5.233:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2021-09-09 17:18:34
#####################################################
INFO[17:18:42 CST] Installation is complete.

Please check the result using the command:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
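Once the installer reports success, it is worth confirming the post-reboot symptoms are actually gone before trusting the cluster again. A small read-only sketch; it skips itself where `kubectl` is not on the PATH:

```shell
# Post-rebuild sanity check: name resolution, node status, and non-running pods.
if command -v kubectl >/dev/null 2>&1; then
  getent hosts lb.kubesphere.local || echo "warning: lb.kubesphere.local still unresolved"
  kubectl get nodes -o wide
  kubectl get pods -A --field-selector=status.phase!=Running
else
  echo "kubectl not found; run this on the cluster node"
fi
SANITY_DONE=1
```

If the name resolves and every pod is Running, a reboot test (`reboot`, then the same checks) is the real proof that the original problem is fixed.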
