Preface

The previous post covered the single-node installation of KubeSphere. This post covers how to configure and manage a cluster based on the images exported from that single-node setup.

Part 1: Exporting the images

The following steps must be run on the single-node machine from the previous post; they will not work anywhere else.

# Create the manifest file
./kk create manifest

Review the generated file.
Make sure the comment in front of the harbor section is removed.
Try to make the configuration as complete as possible, so that every package you will need later is bundled in.
Keep in mind that a large manifest.yaml produces a correspondingly large artifact, so two demo files are provided here: manifest-mini.yaml and manifest-full.yaml.

# Copy one of the demo files into place
cp manifest-mini.yaml manifest.yaml
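Since the number of image entries in the manifest largely determines how big the exported artifact will be, it can be worth counting them before exporting. A minimal sketch, using a stand-in file (run the same grep against your real manifest.yaml):

```shell
# Create a tiny stand-in manifest, then count its image entries the same way
# you would on the real manifest.yaml.
cat > manifest-demo.yaml <<'EOF'
images:
- docker.io/calico/cni:v3.20.0
- docker.io/kubesphere/pause:3.4.1
- registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
EOF
# Count lines that reference an image registry.
grep -cE 'docker\.io|aliyuncs\.com' manifest-demo.yaml   # prints 3
```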

Minimal version

vim manifest-mini.yaml
>>>>> file start
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: centos
    version: "7"
    osImage: CentOS Linux 7 (Core)
    repository:
      iso:
        localPath:
        url: "https://github.com/kubesphere/kubekey/releases/download/v2.0.0/centos-7-amd64-rpms.iso"
  kubernetesDistributions:
  - type: kubernetes
    version: v1.21.5
  components:
    helm:
      version: v3.6.3
    cni:
      version: v0.9.1
    etcd:
      version: v3.4.13
    ## For now, if your cluster container runtime is containerd, KubeKey will add a docker 20.10.8 container runtime in the below list.
    ## The reason is KubeKey creates a cluster with containerd by installing a docker first and making kubelet connect the socket file of containerd which docker contained.
    containerRuntimes:
    - type: docker
      version: 20.10.8
    crictl:
      version: v1.22.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.4.1
    docker-compose:
      version: v2.2.2
  images:
  - docker.io/calico/cni:v3.20.0
  - docker.io/calico/kube-controllers:v3.20.0
  - docker.io/calico/node:v3.20.0
  - docker.io/calico/pod2daemon-flexvol:v3.20.0
  - docker.io/coredns/coredns:1.8.0
  - docker.io/csiplugin/snapshot-controller:v4.0.0
  - docker.io/kubesphere/k8s-dns-node-cache:1.15.12
  - docker.io/kubesphere/ks-apiserver:v3.2.1
  - docker.io/kubesphere/ks-console:v3.2.1
  - docker.io/kubesphere/ks-controller-manager:v3.2.1
  - docker.io/kubesphere/ks-installer:v3.2.1
  - docker.io/kubesphere/kube-apiserver:v1.21.5
  - docker.io/kubesphere/kube-controller-manager:v1.21.5
  - docker.io/kubesphere/kube-proxy:v1.21.5
  - docker.io/kubesphere/kube-rbac-proxy:v0.8.0
  - docker.io/kubesphere/kube-scheduler:v1.21.5
  - docker.io/kubesphere/kube-state-metrics:v1.9.7
  - docker.io/kubesphere/kubectl:v1.21.0
  - docker.io/kubesphere/notification-manager-operator:v1.4.0
  - docker.io/kubesphere/notification-manager:v1.4.0
  - docker.io/kubesphere/notification-tenant-sidecar:v3.2.0
  - docker.io/kubesphere/pause:3.4.1
  - docker.io/kubesphere/prometheus-config-reloader:v0.43.2
  - docker.io/kubesphere/prometheus-operator:v0.43.2
  - docker.io/mirrorgooglecontainers/defaultbackend-amd64:1.4
  - docker.io/openebs/provisioner-localpv:2.10.1
  - docker.io/prom/alertmanager:v0.21.0
  - docker.io/prom/node-exporter:v0.18.1
  - docker.io/prom/prometheus:v2.26.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.21.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.21.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.21.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.21.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.4.1
  registry:
    auths: {}
<<<<< file end

Full version

vim manifest-full.yaml
>>>>> file start
---
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: centos
    version: "7"
    repository:
      iso:
        localPath: ""
        url: "https://github.com/kubesphere/kubekey/releases/download/v2.0.0/centos-7-amd64-rpms.iso"
  kubernetesDistributions:
  - type: kubernetes
    version: v1.21.5
  components:
    helm:
      version: v3.6.3
    cni:
      version: v0.9.1
    etcd:
      version: v3.4.13
    ## For now, if your cluster container runtime is containerd, KubeKey will add a docker 20.10.8 container runtime in the below list.
    ## The reason is KubeKey creates a cluster with containerd by installing a docker first and making kubelet connect the socket file of containerd which docker contained.
    containerRuntimes:
    - type: docker
      version: 20.10.8
    crictl:
      version: v1.22.0
    ##
    # docker-registry:
    #   version: "2"
    harbor:
      version: v2.4.1
    docker-compose:
      version: v2.2.2
  images:
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.21.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.21.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.21.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.21.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.12.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:2.10.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:2.10.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.21.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v0.48.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.0.25-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/netshoot:v1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cloudcore:v1.7.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/edge-watcher:v0.1.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/edge-watcher-agent:v0.1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/gatekeeper:v3.5.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-apiserver:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-controller:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-tools:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-jenkins:v3.2.0-2.249.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jnlp-slave:3.27-1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2ioperator:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2irun:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2i-binary:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-6-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-4-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-36-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-35-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-34-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-27-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.26.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.43.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.43.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v1.9.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v0.18.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-prometheus-adapter-amd64:v0.6.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.21.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.18.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/grafana:7.4.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-curator:v5.7.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-oss:6.7.0-1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.11.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/filebeat:6.7.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-operator:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-webhook:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pilot:1.11.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/proxyv2:1.11.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-operator:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-agent:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-collector:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-query:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-es-index-cleaner:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali-operator:v1.38.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali:v1.38
  - registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx:1.14-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/wget:1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hello:plain-text
  - registry.cn-beijing.aliyuncs.com/kubesphereio/wordpress:4.8-apache
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hpa-example:latest
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java:openjdk-8-jre-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentd:v1.4.2-2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/perl:latest
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-productpage-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v2:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-details-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-ratings-v1:1.16.3
  registry:
    auths: {}
<<<<< file end
# Export the images. (For some reason I ended up running this twice.)
# Export time and artifact size depend on how much the manifest includes.
./kk artifact export -m manifest.yaml -o kubesphere.tar.gz
# A successful run ends like this:
13:48:20 CST success: [LocalHost]
13:48:20 CST [ChownOutputModule] Chown output file
13:48:20 CST success: [LocalHost]
13:48:20 CST [ChownWorkerModule] Chown ./kubekey dir
13:48:20 CST success: [LocalHost]
13:48:20 CST Pipeline[ArtifactExportPipeline] execute successful
[root@localhost ~]# ll
# kubesphere.tar.gz is the exported image artifact
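Before shipping the artifact anywhere, it can help to list its contents and check its size without unpacking it. A sketch on a small dummy archive (in practice, run the same commands against kubesphere.tar.gz):

```shell
# Build a small stand-in archive so the commands below can run anywhere.
mkdir -p bundle/images
echo demo > bundle/manifest.yaml
tar -czf dummy.tar.gz bundle

tar -tzf dummy.tar.gz   # list contents without unpacking
du -h dummy.tar.gz      # check size before copying it to the offline hosts
```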

Part 2: Importing the images

Goal: the cluster installation should succeed even though the deployment environment has no Internet access.

Environment

Five test machines (three masters, two workers):

role    ip            hostname
master  192.168.3.65  kube_master01
master  192.168.3.66  kube_master02
master  192.168.3.67  kube_master03
node    192.168.3.68  kube_node01
node    192.168.3.69  kube_node02

The masters run an etcd HA cluster, so any one of them can manage the whole cluster. In practice you can run kubectl on any master: every apiserver writes to the same highly available etcd database, and the scheduler then carries out placement from there.

Copy the files to kube_master01

scp kk root@192.168.3.65:/root
scp kubesphere.tar.gz root@192.168.3.65:/root
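Since the artifact is many gigabytes, it is worth verifying it survived the transfer intact by comparing checksums on the source and destination hosts. A self-contained sketch (the local `cp` stands in for the scp transfer; use the real kubesphere.tar.gz on both ends):

```shell
# Stand-in artifact; in practice this is kubesphere.tar.gz on the source host.
echo 'artifact payload' > kubesphere-demo.tar.gz
src_sum=$(sha256sum kubesphere-demo.tar.gz | awk '{print $1}')

cp kubesphere-demo.tar.gz copied.tar.gz     # stands in for the scp transfer
dst_sum=$(sha256sum copied.tar.gz | awk '{print $1}')

# The two sums must be identical, or the transfer was corrupted.
[ "$src_sum" = "$dst_sum" ] && echo "checksums match"
```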

Run the installation

ssh root@192.168.3.65
## Create the config file
./kk create config --with-kubesphere v3.2.1 --with-kubernetes v1.21.5 -f config-sample.yaml
## Edit the config: pay particular attention to the hosts and registry sections
cp config-sample.yaml  config.yaml
cat config.yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: kube_master01, address: 192.168.3.65, internalAddress: 192.168.3.65, user: root, password: "123456"}
  - {name: kube_master02, address: 192.168.3.66, internalAddress: 192.168.3.66, user: root, password: "123456"}
  - {name: kube_master03, address: 192.168.3.67, internalAddress: 192.168.3.67, user: root, password: "123456"}
  - {name: kube_node01, address: 192.168.3.68, internalAddress: 192.168.3.68, user: root, password: "123456"}
  - {name: kube_node02, address: 192.168.3.69, internalAddress: 192.168.3.69, user: root, password: "123456"}
  roleGroups:
    etcd:
    - kube_master01
    - kube_master02
    - kube_master03
    control-plane:
    - kube_master01
    - kube_master02
    - kube_master03
    worker:
    - kube_node01
    - kube_node02
    registry:
    - kube_master02
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.21.5
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    type: harbor
    auths:
      "dockerhub.kubekey.local":
        username: admin
        password: Harbor12345
        plainHTTP: false
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.2.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #   resources: {}
    # controllerManager:
    #   resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    containerruntime: docker
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    #   adapter:
    #     resources: {}
    # node_exporter:
    #   resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:
        - ""
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []
<<< file end
# The registry section must be set correctly here, otherwise updating images later becomes very painful.
# Install the registry
./kk init registry -f config.yaml -a kubesphere.tar.gz
.....
22:26:09 CST skipped: [kube_master02]
22:26:09 CST [InstallRegistryModule] Enable docker
22:26:11 CST skipped: [kube_master02]
22:26:11 CST [InstallRegistryModule] Install docker compose
22:26:15 CST success: [kube_master02]
22:26:15 CST [InstallRegistryModule] Sync harbor package
22:27:44 CST success: [kube_master02]
22:27:44 CST [InstallRegistryModule] Generate harbor config
22:27:47 CST success: [kube_master02]
22:27:47 CST [InstallRegistryModule] Start harbor
Local image registry created successfully. Address: dockerhub.kubekey.local
22:28:24 CST success: [kube_master02]
22:28:24 CST Pipeline[InitRegistryPipeline] execute successful

It looks like the registry installed successfully...

ssh root@192.168.3.66
[root@kube_master02 ~]# docker ps
CONTAINER ID   IMAGE                                  COMMAND                  CREATED         STATUS                   PORTS                                                                                                                       NAMES
0b6262ef1994   goharbor/nginx-photon:v2.4.1           "nginx -g 'daemon of…"   3 minutes ago   Up 3 minutes (healthy)   0.0.0.0:4443->4443/tcp, :::4443->4443/tcp, 0.0.0.0:80->8080/tcp, :::80->8080/tcp, 0.0.0.0:443->8443/tcp, :::443->8443/tcp   nginx
7577ff03905d   goharbor/harbor-jobservice:v2.4.1      "/harbor/entrypoint.…"   3 minutes ago   Up 3 minutes (healthy)                                                                                                                               harbor-jobservice
3ba916375569   goharbor/notary-server-photon:v2.4.1   "/bin/sh -c 'migrate…"   3 minutes ago   Up 3 minutes                                                                                                                                         notary-server
6c6ab9420de0   goharbor/harbor-core:v2.4.1            "/harbor/entrypoint.…"   3 minutes ago   Up 3 minutes (healthy)                                                                                                                               harbor-core
4e66ca5bff32   goharbor/trivy-adapter-photon:v2.4.1   "/home/scanner/entry…"   3 minutes ago   Up 3 minutes (healthy)                                                                                                                               trivy-adapter
554c004cf1b2   goharbor/notary-signer-photon:v2.4.1   "/bin/sh -c 'migrate…"   3 minutes ago   Up 3 minutes                                                                                                                                         notary-signer
1368d5c294c3   goharbor/redis-photon:v2.4.1           "redis-server /etc/r…"   3 minutes ago   Up 3 minutes (healthy)                                                                                                                               redis
955b1e91535f   goharbor/harbor-registryctl:v2.4.1     "/home/harbor/start.…"   3 minutes ago   Up 3 minutes (healthy)                                                                                                                               registryctl
ae71922e7b43   goharbor/chartmuseum-photon:v2.4.1     "./docker-entrypoint…"   3 minutes ago   Up 3 minutes (healthy)                                                                                                                               chartmuseum
636a15e66450   goharbor/harbor-db:v2.4.1              "/docker-entrypoint.…"   3 minutes ago   Up 3 minutes (healthy)                                                                                                                               harbor-db
f041fefe6684   goharbor/harbor-portal:v2.4.1          "nginx -g 'daemon of…"   3 minutes ago   Up 3 minutes (healthy)                                                                                                                               harbor-portal
f030bb92b6ba   goharbor/registry-photon:v2.4.1        "/home/harbor/entryp…"   3 minutes ago   Up 3 minutes (healthy)                                                                                                                               registry
fc4fb2a3474f   goharbor/harbor-log:v2.4.1             "/bin/sh -c /usr/loc…"   3 minutes ago   Up 3 minutes (healthy)   127.0.0.1:1514->10514/tcp                                                                                                   harbor-log

Create the Harbor projects

vim create_project_harbor.sh
>>>>> file start
#!/usr/bin/env bash

# Copyright 2018 The KubeSphere Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

url="https://dockerhub.kubekey.local"  # change url to https://dockerhub.kubekey.local
user="admin"
passwd="Harbor12345"

harbor_projects=(library
    kubesphereio
    kubesphere
    calico
    coredns
    openebs
    csiplugin
    minio
    mirrorgooglecontainers
    osixia
    prom
    thanosio
    jimmidyson
    grafana
    elastic
    istio
    jaegertracing
    jenkins
    weaveworks
    openpitrix
    joosthofman
    nginxdemos
    fluent
    kubeedge
)

for project in "${harbor_projects[@]}"; do
    echo "creating $project"
    curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}" -k  # append -k to the curl command
done
<<<<< file end
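The loop above POSTs one JSON body per project to Harbor's v2.0 projects API. A dry-run sketch of that same loop, printing the payload instead of calling Harbor, is handy for checking the quoting before touching a real registry (the short project list here is just for illustration):

```shell
# Dry-run: print the JSON body that would be POSTed for each project,
# without contacting Harbor.
projects=(library kubesphereio calico)
for p in "${projects[@]}"; do
  printf '{ "project_name": "%s", "public": true }\n' "$p"
done
```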
# Then edit the cluster config file again and confirm the registry section:
vim config.yaml
...
  registry:
    type: harbor
    auths:
      "dockerhub.kubekey.local":
        username: admin
        password: Harbor12345
        plainHTTP: false
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []

Install the KubeSphere cluster

./kk create cluster -f config.yaml -a kubesphere.tar.gz --with-packages
22:32:58 CST [ConfirmModule] Display confirmation form
+---------------+------+------+---------+----------+-------+-------+-----------+--------+---------+------------+-------------+------------------+--------------+
| name          | sudo | curl | openssl | ebtables | socat | ipset | conntrack | chrony | docker  | nfs client | ceph client | glusterfs client | time         |
+---------------+------+------+---------+----------+-------+-------+-----------+--------+---------+------------+-------------+------------------+--------------+
| kube_master01 | y    | y    | y       | y        |       | y     |           | y      |         | y          |             | y                | PDT 07:32:57 |
| kube_master02 | y    | y    | y       | y        |       | y     |           | y      | 20.10.8 | y          |             | y                | PDT 07:32:57 |
| kube_master03 | y    | y    | y       | y        |       | y     |           | y      |         | y          |             | y                | PDT 07:32:57 |
| kube_node01   | y    | y    | y       | y        |       | y     |           | y      |         | y          |             | y                | PDT 07:32:58 |
| kube_node02   | y    | y    | y       | y        |       | y     |           | y      |         | y          |             | y                | PDT 07:32:57 |
+---------------+------+------+---------+----------+-------+-------+-----------+--------+---------+------------+-------------+------------------+--------------+
.....
# During the run you may well find yourself short on disk space, with errors like:
failure: repodata/repomd.xml from base-local: [Errno 256] No more mirrors to try.
file:///tmp/kubekey/iso/repodata/repomd.xml: [Errno -1] Error importing repomd.xml for base-local: Damaged repomd.xml file: Process exited with status 1
failed: [kube_node1] [AddLocalRepository] exec failed after 1 retires: update local repository failed: Failed to exec command: sudo -E /bin/bash -c "yum clean all && yum makecache"
Loaded plugins: fastestmirror, langpacks
Cleaning repos: base-local

# No matter -- let's try again, ignoring the error:
./kk create cluster -f config.yaml -a kubesphere.tar.gz --with-packages --ignore-err
.....
# Still no luck:
failure: repodata/repomd.xml from base-local: [Errno 256] No more mirrors to try.
file:///tmp/kubekey/iso/repodata/repomd.xml: [Errno -1] Error importing repomd.xml for base-local: Damaged repomd.xml file: Process exited with status 1

# Or this error occurs instead:
error: Pipeline[CreateClusterPipeline] execute failed: Module[CopyImagesToRegistryModule] exec failed:
failed: [LocalHost] [CopyImagesToRegistry] exec failed after 1 retires: read index.json failed: open /root/kubekey/images/index.json: no such file or directory

I tried every way I could think of to push through this, but I simply could not get it to work.
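The CopyImagesToRegistry failure complains about kubekey/images/index.json being missing. One quick diagnostic is to check whether the artifact even contains that file before re-running. A sketch on a stand-in archive (in practice, list kubesphere.tar.gz itself):

```shell
# Build a stand-in artifact that does contain the file, to show the check.
mkdir -p kk-stub/images
echo '{}' > kk-stub/images/index.json
tar -czf artifact-stub.tar.gz kk-stub

# The same grep against the real artifact tells you whether a re-export is needed.
if tar -tzf artifact-stub.tar.gz | grep -q 'images/index.json'; then
  echo "index.json present"
else
  echo "index.json missing - re-export the artifact"
fi
```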

After most of a day spent on offline deployment, this is where it ended up...


KubeSphere offline packaging problems (a summary of installation issues)

1. Creating the manifest depends on an existing cluster

[root@kube_master ~]# ./kk create manifest
/root/manifest-sample.yaml already exists. Are you sure you want to overwrite this file? [yes/no]: yes
error: get kubernetes client failed: open /root/.kube/config: no such file or directory

If the machine has never had Kubernetes or KubeSphere installed, `create manifest` simply cannot run. I don't understand why a download tool should require a running cluster.

2. Downloads frequently hang and get killed; stopping the docker service was the only workaround.

downloading amd64 harbor v2.4.1 ...
Killed

3. The manifest.yaml download configuration is poorly documented, so downloads take a very long time and pull in many files that are never actually needed. I recommend starting from manifest-sample.yaml, which is simpler and easier to work with.
Downloading all of these files consumed close to 50 GB of my disk at peak.

4. The ISO file cannot be downloaded; the local path has to be set manually

The error:
failed: [LocalHost] [DownloadISOFile] exec failed after 1 retires: Failed to download centos-7-amd64.iso iso file: curl -L -o /root/kubekey/centos-7-amd64.iso https://github.com/kubesphere/kubekey/releases/download/v2.0.0/centos-7-amd64-rpms.iso error: exit status 35

Edit manifest.yaml as follows.
The ISO must be downloaded manually from GitHub ahead of time.

  operatingSystems:
  - arch: amd64
    type: linux
    id: centos
    version: "7"
    osImage: CentOS Linux 7 (Core)
    repository:
      iso:
        localPath: /root/centos-7-amd64-rpms.iso
        url: ""

5. The exported artifact bears little relation to the manifest: even when packing with manifest-mini.yaml, unrelated packages were still bundled in.

As a result, my artifact came to roughly 15 GB.

Summary

Single-node KubeSphere is genuinely pleasant to use, and installation with Internet access is well supported. But for offline installation in to-B scenarios, I have basically given up.

Tencent Cloud previously published an offline installation guide:
https://cloud.tencent.com/developer/article/1802614
Its offline bundle is around 10 GB with pinned versions, which is not necessarily a good fit for a production environment. I think KubeSphere needs to rethink how it supports offline deployment.
