Preface

KubeSphere® is one of the CNCF-certified mainstream open-source Kubernetes distributions. On top of Kubernetes it provides a range of container-centric functional modules, such as multi-tenant management, cluster operations and maintenance, application management, DevOps, and microservice governance.

Recently I needed to deploy microservices to Kubernetes and chose KubeSphere, an application-centric container management platform, so I set about working out the deployment. The first deployment succeeded but seemed unstable, so I restored the images of all four servers and deployed again from scratch. Along the way I hit quite a few problems and setbacks, which I record here for reference.

Preparing the servers

  1. master: 172.16.7.12, CentOS 7.5, 8 CPU, 16 GB RAM, 20 GB (/), 200 GB (data)

  2. node5: 172.16.7.15, CentOS 7.5, 8 CPU, 16 GB RAM, 20 GB (/), 1 TB (data)

  3. node3: 172.16.7.16, CentOS 7.5, 8 CPU, 16 GB RAM, 20 GB (/), 1 TB (data)

  4. node4: 172.16.7.17, CentOS 7.5, 8 CPU, 16 GB RAM, 20 GB (/), 1 TB (data)

System optimization and software prerequisites

Automated system optimization

Following the earlier article "Deploying a CDH cluster with ansible and manually, plus CM configuration and user permission configuration", first optimize the systems:

sh deploy_robot.sh init_ssh
sh deploy_robot.sh init_sys

The deploy_robot.sh script is available at https://github.com/fleapx/cdh-deploy-robot.git.
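
For completeness, a minimal sketch of fetching and running the script on the control machine (my own addition; it assumes git is installed and that deploy_robot.sh sits at the repository root):

# fetch the deployment helper onto the control machine
git clone https://github.com/fleapx/cdh-deploy-robot.git
cd cdh-deploy-robot

# distribute SSH keys, then apply the system optimizations on all nodes
sh deploy_robot.sh init_ssh
sh deploy_robot.sh init_sys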

Installing the ansible tooling

# install ansible on the control machine
yum install -y ansible

Edit /etc/ansible/hosts to configure the hosts to be managed:

[all]
172.16.7.12   # master
172.16.7.15   # node5
172.16.7.16   # node3
172.16.7.17   # node4

One default setting needs to be changed; edit /etc/ansible/ansible.cfg:

# uncomment this to disable SSH key host checking
host_key_checking = False
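
Before going further it is worth checking that ansible can actually reach every node (a verification step of my own; it assumes SSH trust is already in place, e.g. from init_ssh above):

# every host should answer with "pong"
ansible all -m ping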

Installing other system packages

ansible all -m shell -a "wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo"
ansible all -m shell -a "sed -i 's/^.*aliyuncs*/#&/g' /etc/yum.repos.d/CentOS-Base.repo"
ansible all -m shell -a "wget -O /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo"
ansible all -m shell -a "yum -y install ebtables socat ipset conntrack nfs-utils rpcbind"
ansible all -m shell -a "yum install -y vim wget yum-utils device-mapper-persistent-data lvm2"

Synchronizing cluster time

ansible all -m shell -a "yum install chrony -y"
ansible all -m shell -a "systemctl start chronyd"
ansible all -m shell -a "sed -i -e '/^server/s/^/#/' -e '1a server ntp.aliyun.com iburst' /etc/chrony.conf"
ansible all -m shell -a "systemctl restart chronyd"
ansible all -m shell -a "timedatectl set-timezone Asia/Shanghai"
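
To confirm that every node is actually syncing against ntp.aliyun.com and sits in the Asia/Shanghai timezone, the following checks can be run (my own addition, not part of the original steps):

# list the NTP sources chrony is using and show the current time settings
ansible all -m shell -a "chronyc sources -v"
ansible all -m shell -a "timedatectl"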

Other system tuning

ansible all -m shell -a "echo '* soft nofile 655360' >> /etc/security/limits.conf"
ansible all -m shell -a "echo '* hard nofile 655360' >> /etc/security/limits.conf"
ansible all -m shell -a "echo '* soft nproc 655360' >> /etc/security/limits.conf"
ansible all -m shell -a "echo '* hard nproc 655360' >> /etc/security/limits.conf"
ansible all -m shell -a "echo '* soft memlock unlimited' >> /etc/security/limits.conf"
ansible all -m shell -a "echo '* hard memlock unlimited' >> /etc/security/limits.conf"
ansible all -m shell -a "echo 'DefaultLimitNOFILE=1024000' >> /etc/systemd/system.conf"
ansible all -m shell -a "echo 'DefaultLimitNPROC=1024000' >> /etc/systemd/system.conf"
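
Note that the limits.conf entries only apply to new login sessions, and the systemd defaults need a reboot to take effect. A quick sanity check that the lines were actually appended on every node (my own verification step):

# show the appended limits and systemd defaults on every node
ansible all -m shell -a "tail -n 6 /etc/security/limits.conf"
ansible all -m shell -a "tail -n 2 /etc/systemd/system.conf"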

Opening all ports

ansible all -m shell -a "iptables -P INPUT ACCEPT"
ansible all -m shell -a "iptables -P FORWARD ACCEPT"
ansible all -m shell -a "iptables -P OUTPUT ACCEPT"
ansible all -m shell -a "iptables -F"
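
On CentOS 7.5 a running firewalld will keep re-applying its own rules, so many guides also stop and disable it at this point. This is my own optional addition; skip it if you need the host firewall:

# optional: stop and disable firewalld so the permissive iptables policy persists
ansible all -m shell -a "systemctl stop firewalld && systemctl disable firewalld"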

Installing Docker

For installing Docker on CentOS 7, configuring the Aliyun image accelerator, and the rest of the Docker setup, see the article referenced earlier.

ansible all -m shell -a "yum remove docker docker-common docker-selinux docker-engine"
ansible all -m shell -a "yum -y install docker-ce-19.03.8-3.el7"
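
A minimal sketch of the Aliyun registry-mirror configuration mentioned above; the daemon.json layout and the accelerator URL are my own placeholders (replace <your-accelerator-id> with your own Aliyun accelerator address), not something taken from the referenced article:

# write /etc/docker/daemon.json on every node with your registry mirror, then enable and restart docker
ansible all -m shell -a 'mkdir -p /etc/docker && echo "{\"registry-mirrors\": [\"https://<your-accelerator-id>.mirror.aliyuncs.com\"]}" > /etc/docker/daemon.json'
ansible all -m shell -a "systemctl enable docker && systemctl restart docker"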

Adding the Kubernetes yum repository

tee /etc/yum.repos.d/kubernetes.repo <<-'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
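
The tee command above only writes the repo file on the machine where it is run; if the other nodes should have it as well, it can be pushed out with ansible (my own addition, not in the original steps):

# distribute the repo definition to every node and rebuild the yum cache
ansible all -m copy -a "src=/etc/yum.repos.d/kubernetes.repo dest=/etc/yum.repos.d/kubernetes.repo"
ansible all -m shell -a "yum clean all && yum makecache fast"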

Installing Kubernetes v1.18.6 and KubeSphere v3.0

Official KubeSphere site: https://kubesphere.com.cn/. Installation guide: https://kubesphere.com.cn/docs/quick-start/all-in-one-on-linux/.

Downloading the kk installer

# inside mainland China, set this environment variable first
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh -

Then make kk globally executable:

mv ./kk /usr/local/bin 
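
A quick check (my own step) that the binary is executable and on the PATH; kk should print its version information:

chmod +x /usr/local/bin/kk   # usually already executable after the download script
kk version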

Generating the multi-node cluster configuration

# create a configuration file template
kk create config --with-kubernetes v1.18.6 --with-kubesphere v3.0.0 -f ./config-kubesphere.yaml

Modifying the configuration file

Edit the configuration file config-kubesphere.yaml:

apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 172.16.7.12, internalAddress: 172.16.7.12, user: root, password: azdebug_it}
  - {name: node3, address: 172.16.7.16, internalAddress: 172.16.7.16, user: root, password: azdebug_it}
  - {name: node4, address: 172.16.7.17, internalAddress: 172.16.7.17, user: root, password: azdebug_it}
  - {name: node5, address: 172.16.7.15, internalAddress: 172.16.7.15, user: root, password: azdebug_it}
  roleGroups:
    etcd:
    - node3
    master:
    - master
    worker:
    - node4
    - node3
    - node5
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "172.16.7.12"
    port: "6443"
  kubernetes:
    version: v1.18.6
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
    #privateRegistry: dockerhub.kubekey.local
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.0.0
spec:
  local_registry: ""
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  etcd:
    monitoring: true
    endpointIps: 172.16.7.16
    port: 2379
    tlsEnable: true
  common:
    es:
      elasticsearchDataVolumeSize: 20Gi
      elasticsearchMasterVolumeSize: 4Gi
      elkPrefix: logstash
      logMaxAge: 7
    mysqlVolumeSize: 20Gi
    minioVolumeSize: 20Gi
    etcdVolumeSize: 20Gi
    openldapVolumeSize: 2Gi
    redisVolumSize: 2Gi
  console:
    enableMultiLogin: true  # enable/disable multi login
    port: 30880
  alerting:
    enabled: true
  auditing:
    enabled: true
  devops:
    enabled: true
    jenkinsMemoryLim: 5Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 1024m
    jenkinsJavaOpts_Xmx: 1024m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: true
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: true
    logsidecarReplicas: 2
  metrics_server:
    enabled: true
  monitoring:
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none  # host | member | none
  networkpolicy:
    enabled: true
  notification:
    enabled: true
  openpitrix:
    enabled: true
  servicemesh:
    enabled: true

Running the installation of Kubernetes v1.18.6 and KubeSphere v3.0.0

# after filling in the node information (node names, IPs, etc.) in the configuration file, run:
kk create cluster -f ./config-kubesphere.yaml

Output of a successful run (the first lines show the tail of an earlier failed attempt before ./kk create cluster is run again)

  1. [root@master ~]# sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml"

  2. W0105 22:45:15.009277 22248 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

  3. W0105 22:45:15.009521 22248 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]

  4. [init] Using Kubernetes version: v1.18.6

  5. [preflight] Running pre-flight checks

  6. error execution phase preflight: [preflight] Some fatal errors occurred:

  7. [ERROR ExternalEtcdVersion]: couldn't load external etcd's certificate and key pair /etc/ssl/etcd/ssl/node-node3.pem, /etc/ssl/etcd/ssl/node-node3-key.pem: open /etc/ssl/etcd/ssl/node-node3.pem: no such file or directory

  8. [ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3.pem doesn't exist

  9. [ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3-key.pem doesn't exist

  10. [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

  11. To see the stack trace of this error execute with --v=5 or higher

  12. [root@master ~]# cd /home/k

  13. k8s-script/ kubesphere-all-v3.0.0-offline-linux-amd64.tar.gz

  14. [root@master ~]# cd /home/k

  15. k8s-script/ kubesphere-all-v3.0.0-offline-linux-amd64.tar.gz

  16. [root@master ~]# cd /home/k8s-script/

  17. [root@master k8s-script]# export KKZONE=cn

  18. [root@master k8s-script]# ./kk create cluster -f ./k8s-config.yaml

  19. +--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

  20. | name | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time |

  21. +--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

  22. | node5 | y | y | y | y | y | y | y | y | y | | | CST 22:55:35 |

  23. | node4 | y | y | y | y | y | y | y | y | y | | | CST 22:55:35 |

  24. | node3 | y | y | y | y | y | y | y | y | y | | | CST 22:55:35 |

  25. | master | y | y | y | y | y | y | y | y | y | | | CST 22:55:35 |

  26. +--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

  27. This is a simple check of your environment.

  28. Before installation, you should ensure that your machines meet all requirements specified at

  29. https://github.com/kubesphere/kubekey#requirements-and-recommendations

  30. Continue this installation? [yes/no]: yes

  31. INFO[22:57:10 CST] Downloading Installation Files

  32. INFO[22:57:10 CST] Downloading kubeadm ...

  33. INFO[22:57:10 CST] Downloading kubelet ...

  34. INFO[22:57:10 CST] Downloading kubectl ...

  35. INFO[22:57:11 CST] Downloading helm ...

  36. INFO[22:57:11 CST] Downloading kubecni ...

  37. INFO[22:57:11 CST] Configurating operating system ...

  38. [node5 172.16.7.15] MSG:

  39. vm.swappiness = 1

  40. net.ipv4.tcp_tw_reuse = 1

  41. net.ipv4.tcp_tw_recycle = 1

  42. net.ipv4.tcp_keepalive_time = 1200

  43. net.ipv4.ip_local_port_range = 10000 65000

  44. net.ipv4.tcp_max_syn_backlog = 8192

  45. net.ipv4.tcp_max_tw_buckets = 5000

  46. fs.file-max = 655350

  47. net.ipv4.route.gc_timeout = 100

  48. net.ipv4.tcp_syn_retries = 1

  49. net.ipv4.tcp_synack_retries = 1

  50. net.core.netdev_max_backlog = 16384

  51. net.ipv4.tcp_max_orphans = 16384

  52. net.ipv4.tcp_fin_timeout = 2

  53. net.core.somaxconn = 32768

  54. kernel.threads-max = 655360

  55. kernel.pid_max = 655360

  56. vm.max_map_count = 393210

  57. net.ipv4.ip_forward = 1

  58. net.bridge.bridge-nf-call-arptables = 1

  59. net.bridge.bridge-nf-call-ip6tables = 1

  60. net.bridge.bridge-nf-call-iptables = 1

  61. net.ipv4.ip_local_reserved_ports = 30000-32767

  62. [node4 172.16.7.17] MSG:

  63. vm.swappiness = 1

  64. net.ipv4.tcp_tw_reuse = 1

  65. net.ipv4.tcp_tw_recycle = 1

  66. net.ipv4.tcp_keepalive_time = 1200

  67. net.ipv4.ip_local_port_range = 10000 65000

  68. net.ipv4.tcp_max_syn_backlog = 8192

  69. net.ipv4.tcp_max_tw_buckets = 5000

  70. fs.file-max = 655350

  71. net.ipv4.route.gc_timeout = 100

  72. net.ipv4.tcp_syn_retries = 1

  73. net.ipv4.tcp_synack_retries = 1

  74. net.core.netdev_max_backlog = 16384

  75. net.ipv4.tcp_max_orphans = 16384

  76. net.ipv4.tcp_fin_timeout = 2

  77. net.core.somaxconn = 32768

  78. kernel.threads-max = 655360

  79. kernel.pid_max = 655360

  80. vm.max_map_count = 393210

  81. net.ipv4.ip_forward = 1

  82. net.bridge.bridge-nf-call-arptables = 1

  83. net.bridge.bridge-nf-call-ip6tables = 1

  84. net.bridge.bridge-nf-call-iptables = 1

  85. net.ipv4.ip_local_reserved_ports = 30000-32767

  86. [master 172.16.7.12] MSG:

  87. vm.swappiness = 1

  88. net.ipv4.tcp_tw_reuse = 1

  89. net.ipv4.tcp_tw_recycle = 1

  90. net.ipv4.tcp_keepalive_time = 1200

  91. net.ipv4.ip_local_port_range = 10000 65000

  92. net.ipv4.tcp_max_syn_backlog = 8192

  93. net.ipv4.tcp_max_tw_buckets = 5000

  94. fs.file-max = 655350

  95. net.ipv4.route.gc_timeout = 100

  96. net.ipv4.tcp_syn_retries = 1

  97. net.ipv4.tcp_synack_retries = 1

  98. net.core.netdev_max_backlog = 16384

  99. net.ipv4.tcp_max_orphans = 16384

  100. net.ipv4.tcp_fin_timeout = 2

  101. net.core.somaxconn = 32768

  102. kernel.threads-max = 655360

  103. kernel.pid_max = 655360

  104. vm.max_map_count = 393210

  105. net.ipv4.ip_forward = 1

  106. net.bridge.bridge-nf-call-arptables = 1

  107. net.bridge.bridge-nf-call-ip6tables = 1

  108. net.bridge.bridge-nf-call-iptables = 1

  109. net.ipv4.ip_local_reserved_ports = 30000-32767

  110. [node3 172.16.7.16] MSG:

  111. vm.swappiness = 1

  112. net.ipv4.tcp_tw_reuse = 1

  113. net.ipv4.tcp_tw_recycle = 1

  114. net.ipv4.tcp_keepalive_time = 1200

  115. net.ipv4.ip_local_port_range = 10000 65000

  116. net.ipv4.tcp_max_syn_backlog = 8192

  117. net.ipv4.tcp_max_tw_buckets = 5000

  118. fs.file-max = 655350

  119. net.ipv4.route.gc_timeout = 100

  120. net.ipv4.tcp_syn_retries = 1

  121. net.ipv4.tcp_synack_retries = 1

  122. net.core.netdev_max_backlog = 16384

  123. net.ipv4.tcp_max_orphans = 16384

  124. net.ipv4.tcp_fin_timeout = 2

  125. net.core.somaxconn = 32768

  126. kernel.threads-max = 655360

  127. kernel.pid_max = 655360

  128. vm.max_map_count = 393210

  129. net.ipv4.ip_forward = 1

  130. net.bridge.bridge-nf-call-arptables = 1

  131. net.bridge.bridge-nf-call-ip6tables = 1

  132. net.bridge.bridge-nf-call-iptables = 1

  133. net.ipv4.ip_local_reserved_ports = 30000-32767

  134. INFO[22:57:25 CST] Installing docker ...

  135. INFO[22:57:35 CST] Start to download images on all nodes

  136. [node5] Downloading image: kubesphere/pause:3.2

  137. [master] Downloading image: kubesphere/pause:3.2

  138. [node3] Downloading image: kubesphere/etcd:v3.3.12

  139. [node4] Downloading image: kubesphere/pause:3.2

  140. [node5] Downloading image: kubesphere/kube-proxy:v1.18.6

  141. [node4] Downloading image: kubesphere/kube-proxy:v1.18.6

  142. [node3] Downloading image: kubesphere/pause:3.2

  143. [master] Downloading image: kubesphere/kube-apiserver:v1.18.6

  144. [node5] Downloading image: coredns/coredns:1.6.9

  145. [node4] Downloading image: coredns/coredns:1.6.9

  146. [master] Downloading image: kubesphere/kube-controller-manager:v1.18.6

  147. [node5] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12

  148. [node3] Downloading image: kubesphere/kube-proxy:v1.18.6

  149. [master] Downloading image: kubesphere/kube-scheduler:v1.18.6

  150. [node5] Downloading image: calico/kube-controllers:v3.15.1

  151. [node3] Downloading image: coredns/coredns:1.6.9

  152. [node4] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12

  153. [node3] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12

  154. [master] Downloading image: kubesphere/kube-proxy:v1.18.6

  155. [node5] Downloading image: calico/cni:v3.15.1

  156. [node4] Downloading image: calico/kube-controllers:v3.15.1

  157. [master] Downloading image: coredns/coredns:1.6.9

  158. [node5] Downloading image: calico/node:v3.15.1

  159. [node4] Downloading image: calico/cni:v3.15.1

  160. [node3] Downloading image: calico/kube-controllers:v3.15.1

  161. [node3] Downloading image: calico/cni:v3.15.1

  162. [master] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12

  163. [node5] Downloading image: calico/pod2daemon-flexvol:v3.15.1

  164. [node4] Downloading image: calico/node:v3.15.1

  165. [master] Downloading image: calico/kube-controllers:v3.15.1

  166. [node3] Downloading image: calico/node:v3.15.1

  167. [node4] Downloading image: calico/pod2daemon-flexvol:v3.15.1

  168. [node3] Downloading image: calico/pod2daemon-flexvol:v3.15.1

  169. [master] Downloading image: calico/cni:v3.15.1

  170. [master] Downloading image: calico/node:v3.15.1

  171. [master] Downloading image: calico/pod2daemon-flexvol:v3.15.1

  172. INFO[23:01:26 CST] Generating etcd certs

  173. INFO[23:01:32 CST] Synchronizing etcd certs

  174. INFO[23:01:36 CST] Creating etcd service

  175. INFO[23:01:49 CST] Starting etcd cluster

  176. [node3 172.16.7.16] MSG:

  177. Configuration file already exists

  178. Waiting for etcd to start

  179. INFO[23:01:58 CST] Refreshing etcd configuration

  180. INFO[23:01:59 CST] Backup etcd data regularly

  181. INFO[23:02:00 CST] Get cluster status

  182. [master 172.16.7.12] MSG:

  183. Cluster will be created.

  184. INFO[23:02:01 CST] Installing kube binaries

  185. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubeadm to 172.16.7.16:/tmp/kubekey/kubeadm Done

  186. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubeadm to 172.16.7.15:/tmp/kubekey/kubeadm Done

  187. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubeadm to 172.16.7.17:/tmp/kubekey/kubeadm Done

  188. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubeadm to 172.16.7.12:/tmp/kubekey/kubeadm Done

  189. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubelet to 172.16.7.16:/tmp/kubekey/kubelet Done

  190. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubelet to 172.16.7.12:/tmp/kubekey/kubelet Done

  191. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubelet to 172.16.7.15:/tmp/kubekey/kubelet Done

  192. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubelet to 172.16.7.17:/tmp/kubekey/kubelet Done

  193. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubectl to 172.16.7.16:/tmp/kubekey/kubectl Done

  194. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubectl to 172.16.7.17:/tmp/kubekey/kubectl Done

  195. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubectl to 172.16.7.15:/tmp/kubekey/kubectl Done

  196. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubectl to 172.16.7.12:/tmp/kubekey/kubectl Done

  197. Push /home/k8s-script/kubekey/v1.18.6/amd64/helm to 172.16.7.12:/tmp/kubekey/helm Done

  198. Push /home/k8s-script/kubekey/v1.18.6/amd64/helm to 172.16.7.16:/tmp/kubekey/helm Done

  199. Push /home/k8s-script/kubekey/v1.18.6/amd64/helm to 172.16.7.15:/tmp/kubekey/helm Done

  200. Push /home/k8s-script/kubekey/v1.18.6/amd64/helm to 172.16.7.17:/tmp/kubekey/helm Done

  201. Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.7.15:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done

  202. Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.7.16:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done

  203. Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.7.17:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done

  204. Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.7.12:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done

  205. INFO[23:02:50 CST] Initializing kubernetes cluster

  206. [master 172.16.7.12] MSG:

  207. W0105 23:02:51.587457 23847 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

  208. W0105 23:02:51.587685 23847 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]

  209. [init] Using Kubernetes version: v1.18.6

  210. [preflight] Running pre-flight checks

  211. [preflight] Pulling images required for setting up a Kubernetes cluster

  212. [preflight] This might take a minute or two, depending on the speed of your internet connection

  213. [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

  214. [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

  215. [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

  216. [kubelet-start] Starting the kubelet

  217. [certs] Using certificateDir folder "/etc/kubernetes/pki"

  218. [certs] Generating "ca" certificate and key

  219. [certs] Generating "apiserver" certificate and key

  220. [certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost lb.kubesphere.local master master.cluster.local node3 node3.cluster.local node4 node4.cluster.local node5 node5.cluster.local] and IPs [10.233.0.1 172.16.7.12 127.0.0.1 172.16.7.12 172.16.7.16 172.16.7.17 172.16.7.15 10.233.0.1]

  221. [certs] Generating "apiserver-kubelet-client" certificate and key

  222. [certs] Generating "front-proxy-ca" certificate and key

  223. [certs] Generating "front-proxy-client" certificate and key

  224. [certs] External etcd mode: Skipping etcd/ca certificate authority generation

  225. [certs] External etcd mode: Skipping etcd/server certificate generation

  226. [certs] External etcd mode: Skipping etcd/peer certificate generation

  227. [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation

  228. [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation

  229. [certs] Generating "sa" key and public key

  230. [kubeconfig] Using kubeconfig folder "/etc/kubernetes"

  231. [kubeconfig] Writing "admin.conf" kubeconfig file

  232. [kubeconfig] Writing "kubelet.conf" kubeconfig file

  233. [kubeconfig] Writing "controller-manager.conf" kubeconfig file

  234. [kubeconfig] Writing "scheduler.conf" kubeconfig file

  235. [control-plane] Using manifest folder "/etc/kubernetes/manifests"

  236. [control-plane] Creating static Pod manifest for "kube-apiserver"

  237. W0105 23:03:00.466175 23847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"

  238. [control-plane] Creating static Pod manifest for "kube-controller-manager"

  239. W0105 23:03:00.474746 23847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"

  240. [control-plane] Creating static Pod manifest for "kube-scheduler"

  241. W0105 23:03:00.476002 23847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"

  242. [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

  243. [apiclient] All control plane components are healthy after 32.002873 seconds

  244. [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

  245. [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster

  246. [upload-certs] Skipping phase. Please see --upload-certs

  247. [mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"

  248. [mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

  249. [bootstrap-token] Using token: 6zsarg.gxg5eijglkupq85j

  250. [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles

  251. [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes

  252. [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

  253. [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

  254. [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster

  255. [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace

  256. [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key

  257. [addons] Applied essential addon: CoreDNS

  258. [addons] Applied essential addon: kube-proxy

  259. Your Kubernetes control-plane has initialized successfully!

  260. To start using your cluster, you need to run the following as a regular user:

  261. mkdir -p $HOME/.kube

  262. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  263. sudo chown $(id -u):$(id -g) $HOME/.kube/config

  264. You should now deploy a pod network to the cluster.

  265. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  266. https://kubernetes.io/docs/concepts/cluster-administration/addons/

  267. You can now join any number of control-plane nodes by copying certificate authorities

  268. and service account keys on each node and then running the following as root:

  269. kubeadm join lb.kubesphere.local:6443 --token 6zsarg.gxg5eijglkupq85j \

  270. --discovery-token-ca-cert-hash sha256:8e1405a3da9e80413ab9aec1952a8259490cb174dcc74ecb96c0c5eafa429fd9 \

  271. --control-plane

  272. Then you can join any number of worker nodes by running the following on each as root:

  273. kubeadm join lb.kubesphere.local:6443 --token 6zsarg.gxg5eijglkupq85j \

  274. --discovery-token-ca-cert-hash sha256:8e1405a3da9e80413ab9aec1952a8259490cb174dcc74ecb96c0c5eafa429fd9

  275. [master 172.16.7.12] MSG:

  276. service "kube-dns" deleted

  277. [master 172.16.7.12] MSG:

  278. service/coredns created

  279. [master 172.16.7.12] MSG:

  280. serviceaccount/nodelocaldns created

  281. daemonset.apps/nodelocaldns created

  282. [master 172.16.7.12] MSG:

  283. configmap/nodelocaldns created

  284. [master 172.16.7.12] MSG:

  285. I0105 23:04:05.247536 26174 version.go:252] remote version is much newer: v1.20.1; falling back to: stable-1.18

  286. W0105 23:04:06.468801 26174 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]

  287. [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace

  288. [upload-certs] Using certificate key:

  289. 13a993ef56fb292d7ecb9947a3095a0eca6c419dfa148569699c474a8d6c28df

  290. [master 172.16.7.12] MSG:

  291. secret/kubeadm-certs patched

  292. [master 172.16.7.12] MSG:

  293. secret/kubeadm-certs patched

  294. [master 172.16.7.12] MSG:

  295. secret/kubeadm-certs patched

  296. [master 172.16.7.12] MSG:

  297. W0105 23:04:08.563212 26292 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]

  298. kubeadm join lb.kubesphere.local:6443 --token cfoibt.jdyzk3oc1aze53ri --discovery-token-ca-cert-hash sha256:8e1405a3da9e80413ab9aec1952a8259490cb174dcc74ecb96c0c5eafa429fd9

  299. [master 172.16.7.12] MSG:

  300. NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME

  301. master NotReady master 39s v1.18.6 172.16.7.12 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 docker://19.3.9

  302. INFO[23:04:09 CST] Deploying network plugin ...

  303. [master 172.16.7.12] MSG:

  304. configmap/calico-config created

  305. customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created

  306. customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created

  307. customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created

  308. customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created

  309. customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created

  310. customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created

  311. customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created

  312. customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created

  313. customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created

  314. customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created

  315. customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created

  316. customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created

  317. customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created

  318. customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created

  319. customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created

  320. clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created

  321. clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created

  322. clusterrole.rbac.authorization.k8s.io/calico-node created

  323. clusterrolebinding.rbac.authorization.k8s.io/calico-node created

  324. daemonset.apps/calico-node created

  325. serviceaccount/calico-node created

  326. deployment.apps/calico-kube-controllers created

  327. serviceaccount/calico-kube-controllers created

  328. INFO[23:04:13 CST] Joining nodes to cluster

  329. [node5 172.16.7.15] MSG:

  330. W0105 23:04:14.035833 52825 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.

  331. [preflight] Running pre-flight checks

  332. [preflight] Reading configuration from the cluster...

  333. [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

  334. W0105 23:04:20.030576 52825 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

  335. [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace

  336. [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

  337. [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

  338. [kubelet-start] Starting the kubelet

  339. [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

  340. This node has joined the cluster:

  341. * Certificate signing request was sent to apiserver and a response was received.

  342. * The Kubelet was informed of the new secure connection details.

  343. Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

  344. [node4 172.16.7.17] MSG:

  345. W0105 23:04:14.432448 53936 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.

  346. [preflight] Running pre-flight checks

  347. [preflight] Reading configuration from the cluster...

  348. [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

  349. W0105 23:04:19.870838 53936 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

  350. [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace

  351. [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

  352. [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

  353. [kubelet-start] Starting the kubelet

  354. [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

  355. This node has joined the cluster:

  356. * Certificate signing request was sent to apiserver and a response was received.

  357. * The Kubelet was informed of the new secure connection details.

  358. Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

  359. [node3 172.16.7.16] MSG:

  360. W0105 23:04:14.376894 57091 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.

  361. [preflight] Running pre-flight checks

  362. [preflight] Reading configuration from the cluster...

  363. [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

  364. W0105 23:04:20.949568 57091 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

  365. [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace

  366. [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

  367. [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

  368. [kubelet-start] Starting the kubelet

  369. [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

  370. This node has joined the cluster:

  371. * Certificate signing request was sent to apiserver and a response was received.

  372. * The Kubelet was informed of the new secure connection details.

  373. Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

  374. [node3 172.16.7.16] MSG:

  375. node/node3 labeled

  376. [node5 172.16.7.15] MSG:

  377. node/node5 labeled

  378. [node4 172.16.7.17] MSG:

  379. node/node4 labeled

  380. [master 172.16.7.12] MSG:

  381. storageclass.storage.k8s.io/local created

  382. serviceaccount/openebs-maya-operator created

  383. clusterrole.rbac.authorization.k8s.io/openebs-maya-operator created

  384. clusterrolebinding.rbac.authorization.k8s.io/openebs-maya-operator created

  385. configmap/openebs-ndm-config created

  386. daemonset.apps/openebs-ndm created

  387. deployment.apps/openebs-ndm-operator created

  388. deployment.apps/openebs-localpv-provisioner created

  389. INFO[23:04:51 CST] Deploying KubeSphere ...

  390. v3.0.0

  391. [master 172.16.7.12] MSG:

  392. namespace/kubesphere-system created

  393. namespace/kubesphere-monitoring-system created

  394. [master 172.16.7.12] MSG:

  395. secret/kube-etcd-client-certs created

  396. [master 172.16.7.12] MSG:

  397. namespace/kubesphere-system unchanged

  398. serviceaccount/ks-installer unchanged

  399. customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged

  400. clusterrole.rbac.authorization.k8s.io/ks-installer unchanged

  401. clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged

  402. deployment.apps/ks-installer unchanged

  403. clusterconfiguration.installer.kubesphere.io/ks-installer created

  404. INFO[23:10:23 CST] Installation is complete.

  405. Please check the result using the command:

  406. kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

Problems encountered and how they were solved

  1. Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.8.35:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done

  2. Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.8.36:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done

  3. INFO[16:20:31 CST] Initializing kubernetes cluster

  4. [master 172.16.8.36] MSG:

  5. [preflight] Running pre-flight checks

  6. W0105 16:20:38.445541 19396 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory

  7. [reset] No etcd config found. Assuming external etcd

  8. [reset] Please, manually reset etcd to prevent further issues

  9. [reset] Stopping the kubelet service

  10. [reset] Unmounting mounted directories in "/var/lib/kubelet"

  11. W0105 16:20:38.453323 19396 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory

  12. [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]

  13. [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

  14. [reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]

  15. The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

  16. The reset process does not reset or clean up iptables rules or IPVS tables.

  17. If you wish to reset iptables, you must do so manually by using the "iptables" command.

  18. If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)

  19. to reset your system's IPVS tables.

  20. The reset process does not clean your kubeconfig files and you must remove them manually.

  21. Please, check the contents of the $HOME/.kube/config file.

  22. [master 172.16.8.36] MSG:

  23. [preflight] Running pre-flight checks

  24. W0105 16:20:40.305612 19617 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory

  25. [reset] No etcd config found. Assuming external etcd

  26. [reset] Please, manually reset etcd to prevent further issues

  27. [reset] Stopping the kubelet service

  28. [reset] Unmounting mounted directories in "/var/lib/kubelet"

  29. W0105 16:20:40.310273 19617 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory

  30. [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]

  31. [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

  32. [reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]

  33. The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

  34. The reset process does not reset or clean up iptables rules or IPVS tables.

  35. If you wish to reset iptables, you must do so manually by using the "iptables" command.

  36. If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)

  37. to reset your system's IPVS tables.

  38. The reset process does not clean your kubeconfig files and you must remove them manually.

  39. Please, check the contents of the $HOME/.kube/config file.

  40. ERRO[16:20:41 CST] Failed to init kubernetes cluster: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml"

  41. W0105 16:20:40.826437 19657 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

  42. W0105 16:20:40.826682 19657 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]

  43. [init] Using Kubernetes version: v1.18.6

  44. [preflight] Running pre-flight checks

  45. error execution phase preflight: [preflight] Some fatal errors occurred:

  46. [ERROR ExternalEtcdVersion]: couldn't load external etcd's certificate and key pair /etc/ssl/etcd/ssl/node-node3.pem, /etc/ssl/etcd/ssl/node-node3-key.pem: open /etc/ssl/etcd/ssl/node-node3.pem: no such file or directory

  47. [ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3.pem doesn't exist

  48. [ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3-key.pem doesn't exist

  49. [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

  50. To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 node=172.16.8.36

  51. WARN[16:20:41 CST] Task failed ...

  52. WARN[16:20:41 CST] error: interrupted by error

  53. Error: Failed to init kubernetes cluster: interrupted by error

  54. Usage:

  55. kk create cluster [flags]

  56. Flags:

  57. -f, --filename string Path to a configuration file

  58. -h, --help help for cluster

  59. --skip-pull-images Skip pre pull images

  60. --with-kubernetes string Specify a supported version of kubernetes

  61. --with-kubesphere Deploy a specific version of kubesphere (default v3.0.0)

  62. -y, --yes Skip pre-check of the installation

  63. Global Flags:

  64. --debug Print detailed information (default true)

  65. Failed to init kubernetes cluster: interrupted by error

Run the failing step again on the master node:

 sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml"

The key errors extracted from the output:

[ERROR ExternalEtcdVersion]: couldn't load external etcd's certificate and key pair /etc/ssl/etcd/ssl/node-node3.pem, /etc/ssl/etcd/ssl/node-node3-key.pem: open /etc/ssl/etcd/ssl/node-node3.pem: no such file or directory
[ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3.pem doesn't exist
[ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3-key.pem doesn't exist

Check again on the master node:

[root@master ~]# ls -lh /etc/ssl/etcd/ssl/
total 32K
-rw-r--r--. 1 root root 1.7K Jan  5 19:59 admin-node3-key.pem
-rw-r--r--. 1 root root 1.4K Jan  5 19:59 admin-node3.pem
-rw-r--r--. 1 root root 1.7K Jan  5 19:59 ca-key.pem
-rw-r--r--. 1 root root 1.1K Jan  5 19:59 ca.pem
-rw-r--r--. 1 root root 1.7K Jan  5 19:59 member-node3-key.pem
-rw-r--r--. 1 root root 1.4K Jan  5 19:59 member-node3.pem
-rw-r--r--. 1 root root 1.7K Jan  5 19:59 node-master-key.pem
-rw-r--r--. 1 root root 1.4K Jan  5 19:59 node-master.pem

So node-node3.pem and node-node3-key.pem really do not exist in /etc/ssl/etcd/ssl/. What now? It turned out I had chosen member mode, so only the member-node3.* certificates were generated.

Solution

[root@master ~]# cp /etc/ssl/etcd/ssl/member-node3-key.pem /etc/ssl/etcd/ssl/node-node3-key.pem
[root@master ~]# cp /etc/ssl/etcd/ssl/member-node3.pem /etc/ssl/etcd/ssl/node-node3.pem
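
Before re-running kk it is worth confirming (my own check) that the file names kubeadm complained about are now present:

ls -lh /etc/ssl/etcd/ssl/ | grep node-node3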

Then run kk again:

export KKZONE=cn
./kk create cluster -f ./k8s-config.yaml

Watch the KubeSphere installation log:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

KubeSphere 3.0 is now installed successfully.

Logging in to KubeSphere

# URL: http://<node-ip>:30880
# username: admin
# default password: P@88w0rd
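
To double-check which NodePort the console is actually exposed on (30880 is the value set in the ClusterConfiguration above), the service can be inspected:

kubectl get svc -n kubesphere-system ks-console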

Further reading

  • https://www.cnblogs.com/elfcafe/p/13779619.html
  • https://www.cnblogs.com/carriezhangyan/p/11551192.html
  • https://www.cnblogs.com/xiao987334176/p/13267339.html
  • https://www.cnblogs.com/wenyang321/p/14086162.html
  • https://www.cnblogs.com/technology178/p/13547342.html
  • https://www.cnblogs.com/wuchangblog/p/14091717.html
  • https://www.cnblogs.com/it-peng/p/11393812.html

I also recommend this post: https://blog.csdn.net/weixin_43141746/article/details/110261158

  10. 工业镜头视场、倍率、焦距之间的关系