Also works on CentOS.

Contents

I. Environment preparation

1. Host preparation

2. Disable SELinux, the firewall, etc.

3. Disable the swap partition

II. Installing Docker

1. Configure the yum repositories

1.1 Configure the Docker repository

2. Install dependencies

3. Install Docker

III. Installing Kubernetes

1. Configure a domestic mirror repository

2. Kernel configuration

2.1 Configure IPVS (optional)

3. Install the chosen version

4. Initialize the master

4.1 Deploy the Calico network on the master

5. Resetting the master*

6. Join the worker nodes

IV. Installing KubeSphere

1. Prerequisites

Set up NFS as the default storage backend (sc)

Create the StorageClass and provisioner

Install metrics-server

2. Deploy KubeSphere

Fetch the files

Install

Check installation progress

Log in


I. Environment preparation

1. Host preparation

Prepare three virtual machines and configure their hostnames and host mappings (this walkthrough uses openEuler 22.03 SP1 VMs).
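A minimal sketch of that setup; the hostnames (master, node1, node2) and the two worker IPs are assumptions (only 192.168.11.11 as the master IP appears later in this guide), so substitute your own:

# Run on each host with its own name
hostnamectl set-hostname master  # node1 / node2 on the workers
# Append the mappings on all three hosts
cat >> /etc/hosts <<'EOF'
192.168.11.11 master
192.168.11.12 node1
192.168.11.13 node2
EOF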

2. Disable SELinux, the firewall, etc.

# Run on all three hosts
# Disable temporarily; check with getenforce
setenforce 0
# Disable via the config file
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Disable the firewall
sudo systemctl stop firewalld
sudo systemctl disable firewalld
# As root, also flush the iptables rules:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

3. Disable the swap partition

# Run on all three hosts
swapoff -a
sed -ri 's/(.*swap.*)/#\1/' /etc/fstab
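A quick check that swap is really off; both commands should report no active swap:

free -h        # the Swap line should read 0B
swapon --show  # should print nothing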

II. Installing Docker

--Install on every host--

1. Configure the yum repositories

# Edit the yum repo file. The example below uses the China Mobile Cloud mirror; every major vendor offers one (e.g. Huawei)
vi /etc/yum.repos.d/openEuler.repo
[OS]
name=OS
baseurl=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/OS/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler

[everything]
name=everything
baseurl=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/everything/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/everything/$basearch/RPM-GPG-KEY-openEuler

[EPOL]
name=EPOL
baseurl=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler

[debuginfo]
name=debuginfo
baseurl=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/debuginfo/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/debuginfo/$basearch/RPM-GPG-KEY-openEuler

[source]
name=source
baseurl=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/source/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/source/RPM-GPG-KEY-openEuler

[update]
name=update
baseurl=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/update/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler

[update-source]
name=update-source
baseurl=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/update/source/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/source/RPM-GPG-KEY-openEuler

1.1 Configure the Docker repository

sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Then change $releasever to 7 in the [docker-ce-stable] section of docker-ce.repo,
so that cat /etc/yum.repos.d/docker-ce.repo shows:
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg.......
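If you prefer not to edit the file by hand, a one-line substitution gives the same result. This is a sketch that assumes the stock repo file from the Aliyun mirror, where only the [docker-ce-stable] section is enabled:

# Pin every $releasever to 7, since openEuler's release string has no directory on the CentOS mirror
sudo sed -i 's/\$releasever/7/g' /etc/yum.repos.d/docker-ce.repo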

2. Install dependencies

# Prefix the commands with sudo if you are not root
yum clean all && yum makecache && yum -y update
yum install -y  device-mapper-persistent-data lvm2

3. Install Docker

# 1. List the installable Docker versions
yum list docker-ce.x86_64 --showduplicates | sort -r

# 2. Install the version you need
sudo yum install -y docker-ce-24.0.2 docker-ce-cli-24.0.2

# 3. Add your user to the docker group
sudo gpasswd -a $USER docker
newgrp docker  ## or reconnect via ssh

# 4. Start Docker and enable it at boot
sudo systemctl enable docker --now
docker info  # shows detailed information

# 5. Configure registry mirrors and switch the cgroup driver
mkdir -p /etc/docker
sudo vim /etc/docker/daemon.json
Write:
{
  "registry-mirrors": ["https://*****.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"},
  "storage-driver": "overlay2"
}
# The registry-mirrors address can be obtained from your personal Aliyun account
There are also public mirrors such as:
http://hub-mirror.c.163.com/
https://docker.mirrors.ustc.edu.cn/
which can be added to registry-mirrors, comma-separated. Then run:
sudo systemctl daemon-reload
sudo systemctl restart docker
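A quick sanity check that the new settings took effect; the cgroup driver matters because kubelet defaults to systemd and the two must agree:

docker info | grep -i 'cgroup driver'  # expect: Cgroup Driver: systemd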

III. Installing Kubernetes

--Install on all three hosts--

1. Configure a domestic mirror repository

sudo vim /etc/yum.repos.d/kubernetes.repo
Write:
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

sudo yum clean all && sudo yum makecache

2. Kernel configuration

1. Let bridged traffic pass through the kernel filter
sudo vim /etc/modules-load.d/k8s.conf
Write:
br_netfilter
Or load the module manually:
modprobe br_netfilter
lsmod | grep br_netfilter  # check that the bridge netfilter module loaded

sudo vim /etc/sysctl.d/k8s.conf
Write:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

sudo sed -ri 's/net.ipv4.ip_forward=0/net.ipv4.ip_forward=1/' /etc/sysctl.conf
which changes it to:
net.ipv4.ip_forward = 1
# 0 blocks forwarding, 1 allows it; with forwarding blocked, master initialization will fail

sudo sysctl --system

2.1 Configure IPVS (optional)

---IPVS is a load-balancing component with higher forwarding performance than iptables---
yum install ipset ipvsadm ebtables socat conntrack

vim /etc/sysconfig/modules/ipvs.modules
# Write:
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

# Apply the configuration
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
# Check
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
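Note that loading the modules does not by itself switch kube-proxy to IPVS. If you want IPVS once the cluster is up, the usual approach is to change the proxy mode and recreate the kube-proxy pods; a sketch, assuming the default kubeadm-generated ConfigMap:

# Set mode: "ipvs" in the config.conf section
kubectl -n kube-system edit configmap kube-proxy
# Recreate the kube-proxy pods so they pick up the change
kubectl -n kube-system delete pods -l k8s-app=kube-proxy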

3. Install the chosen version

sudo yum install -y kubelet-1.23.9 kubeadm-1.23.9 kubectl-1.23.9
#Versions 1.24 and later require cri-dockerd
sudo systemctl enable kubelet --now
#Don't check the kubelet service status yet; it will keep erroring until the master is initialized
kubelet --version  ## check the version; it was pinned at install time, but looking doesn't hurt

4. Initialize the master

--Run on the master node--

sudo kubeadm init --kubernetes-version=1.23.9 --apiserver-advertise-address=192.168.11.11 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16

--kubernetes-version=1.23.9: the version to install; check with kubelet --version
--apiserver-advertise-address: the master node's IP
--image-repository: the image registry; registry.cn-hangzhou.aliyuncs.com and registry.aliyuncs.com are equivalent
--pod-network-cidr: the pod IP range; flannel appears to default to 10.244.0.0/16, while Calico's can be set in its YAML and defaults to 192.168.0.0/16
--service-cidr: can be left as-is for now

When the output shows:

Your Kubernetes control-plane has initialized successfully!

the master has initialized successfully.

Next, run on the master:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
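At this point kubectl works, but the master typically reports NotReady until a CNI plugin (the next step) is deployed; the output will look roughly like:

kubectl get nodes
NAME     STATUS     ROLES                  AGE   VERSION
master   NotReady   control-plane,master   60s   v1.23.9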

4.1 Deploy the Calico network on the master

Download the Calico YAML file:
curl -O https://docs.projectcalico.org/v3.20/manifests/calico.yaml
## Note: check https://docs.tigera.io/archive/v3.20/getting-started/kubernetes/requirements for the per-version requirements, and make sure the Calico version matches your OS and Kubernetes versions to avoid errors (swap v3.20 for another version to see its page)

sed -i "s#docker.io/##g" calico.yaml  # strip the docker.io prefix from the image references so they resolve through a domestic mirror

Modify pod-network-cidr

In calico.yaml, uncomment the CALICO_IPV4POOL_CIDR setting and set it to the pod-network-cidr used when initializing the master, as sketched below.
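A sketch of the relevant snippet after editing; in the stock v3.20 manifest these lines sit, commented out, in the calico-node DaemonSet's env list:

- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"   # must match kubeadm init's --pod-network-cidr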

Then run:

kubectl apply -f calico.yaml
If no error appears, wait a while and then check that all pods are running:
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-77d676778d-lkqrv   1/1     Running   0          8m19s
kube-system   calico-node-j2bc9                          1/1     Running   0          8m19s
kube-system   coredns-7f89b7bc75-4qzcm                   1/1     Running   0          10m
kube-system   coredns-7f89b7bc75-dbfbt                   1/1     Running   0          10m
kube-system   etcd-master                                1/1     Running   0          10m
kube-system   kube-apiserver-master                      1/1     Running   0          10m
kube-system   kube-controller-manager-master             1/1     Running   0          10m
kube-system   kube-proxy-lbmwf                           1/1     Running   0          10m
kube-system   kube-scheduler-master                      1/1     Running   0          10m
All Running means success.

5. Resetting the master*

If master initialization or the network configuration hits an unexplained error, or you slip up, you can reset:

sudo kubeadm reset && sudo systemctl daemon-reload && sudo systemctl restart kubelet
After resetting, you must delete the $HOME/.kube directory, or status queries will error out after the next initialization:
rm -rf $HOME/.kube
After re-initializing, run again:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

6. Join the worker nodes

Run on the master:
sudo kubeadm token create --print-join-command
Run the generated command on each worker node; an example of its shape follows.
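The printed command looks roughly like the following; the token and hash here are placeholders, so use the real values from your own output:

kubeadm join 192.168.11.11:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>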
After a while, check on the master:
kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   2m31s   v1.23.9
node1    Ready    <none>                 29s     v1.23.9
node2    Ready    <none>                 14s     v1.23.9

You can label the worker nodes yourself, e.g.:
kubectl label node node1 node-role.kubernetes.io/worker1=true
kubectl label node node2 node-role.kubernetes.io/worker2=true
kubectl get nodes
NAME     STATUS   ROLES                  AGE    VERSION
master   Ready    control-plane,master   4m6s   v1.23.9
node1    Ready    worker1                2m4s   v1.23.9
node2    Ready    worker2                109s   v1.23.9

IV. Installing KubeSphere

1. Prerequisites

Set up NFS as the default storage backend (sc)

Run on the master node

sudo yum install -y nfs-utils
sudo vim /etc/exports
Write:
/data/nfs/ *(insecure,rw,sync,no_root_squash,no_subtree_check)
# Pick a directory to suit your server; it must match the exported path
mkdir -p /data/nfs
sudo systemctl enable rpcbind --now && sudo systemctl enable nfs-server --now
sudo exportfs -r
sudo exportfs

Run on the worker nodes

# Check the master's exports
showmount -e 192.168.11.11  # your master's IP
mkdir -p /data/nfsmount
sudo mount -t nfs 192.168.11.11:/data/nfs /data/nfsmount/
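A quick end-to-end check that the share is writable (the file name is arbitrary):

touch /data/nfsmount/nfs-test   # on a worker, write through the mount
ls /data/nfs/                   # on the master, nfs-test should appear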

Create the StorageClass and provisioner

Edit a YAML file (vim sc-pro.yml) and write:

## Create the StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.11.11   ## your NFS server address
            - name: NFS_PATH
              value: /data/nfs       ## the NFS server's shared directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.11.11
            path: /data/nfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Apply the file

# kubectl apply -f sc-pro.yml
# kubectl get pod -n default   # prints:
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7d8d494cc4-m94ln   1/1     Running   0          75s

To verify, create a file pvc.yml with contents like:
kind: PersistentVolumeClaim          # create a PVC resource
apiVersion: v1
metadata:
  name: nginx-pvc                    # the PVC's name
spec:
  accessModes:                       # access modes; ReadWriteMany lets the PV be mounted read-write by many claims
    - ReadWriteMany
  resources:                         # PVC resource parameters
    requests:                        # resource requests
      storage: 200Mi                 # request 200Mi of space
  storageClassName: nfs-storage

# kubectl apply -f pvc.yml
# kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc   Bound    pvc-410987bb-9e80-4511-b832-04eaff610e11   200Mi      RWX            nfs-storage    34s
A Bound status means success.
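To go one step further, a minimal pod that mounts the claim confirms the volume is actually usable; the pod and volume names below are arbitrary:

apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nginx-pvc   # the PVC created above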

Install metrics-server

The metrics-server bundled with KubeSphere has installation problems, so we install it by hand.

Create an ms.yml file and apply it.

The file contents:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --kubelet-insecure-tls
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

Enable Aggregator Routing

sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml

  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.11.11
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --enable-aggregator-routing=true  ## add this line to enable aggregator routing
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
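Editing a static-pod manifest makes kubelet restart the apiserver on its own; a quick check that it came back up:

kubectl -n kube-system get pods -l component=kube-apiserver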

Verify the deployment

# kubectl apply -f ms.yml
# Then check whether it took effect:
# kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master   218m         2%     3014Mi          19%
node1    121m         1%     1421Mi          9%
node2    217m         2%     1365Mi          8%

2. Deploy KubeSphere

Fetch the files

# 3.3.2 was already out at the time of writing, but its image pulls failed for me, so I fell back to 3.2.1
Download the installer:
curl -O https://ghproxy.com/https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
## https://ghproxy.com/ is a proxy inside China; drop the prefix if your network can reach GitHub directly
Download the installer configuration file:
curl -O https://ghproxy.com/https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml

Note:
1. For KubeSphere version selection and installation details, see the official docs:
https://kubesphere.io/zh/docs/v3.3/installing-on-kubernetes/introduction/overview/

Modify cluster-configuration.yaml

Change endpointIps under etcd to the master node's IP address. The pluggable components described in the official docs are then modified as follows (enabling a subset of the application services):

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.2.1
spec:
  persistence:
    storageClass: ""        # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    jwtSecret: ""           # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.
  local_registry: ""        # Add your private registry address if it is needed.
  # dev_tag: ""             # Add your kubesphere image tag you want to install, by default it's same as ks-install release version.
  etcd:
    monitoring: true        # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
    endpointIps: 192.168.11.11  # etcd cluster EndpointIps. It can be a bunch of IPs here.
    port: 2379              # etcd port.
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true  # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
        port: 30880
        type: NodePort
    # apiserver:            # Enlarge the apiserver and controller manager's resource requests and limits for the large cluster
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: true
      volumeSize: 2Gi   # Redis PVC size.
    openldap:
      enabled: true
      volumeSize: 2Gi   # openldap PVC size.
    minio:
      volumeSize: 20Gi  # Minio PVC size.
    monitoring:
      # type: external  # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.
      GPUMonitoring:    # Enable or disable the GPU-related metrics. If you enable this switch but have no GPU resources, Kubesphere will set it to zero.
        enabled: false
    gpu:                # Install GPUKinds. The default GPU kind is nvidia.com/gpu. Other GPU kinds can be added here according to your needs.
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:   # Storage backend for logging, events and auditing.
      # master:
      #   volumeSize: 4Gi   # The volume size of Elasticsearch master nodes.
      #   replicas: 1       # The total number of master nodes. Even numbers are not allowed.
      #   resources: {}
      # data:
      #   volumeSize: 20Gi  # The volume size of Elasticsearch data nodes.
      #   replicas: 1       # The total number of data nodes.
      #   resources: {}
      logMaxAge: 7          # Log retention time in built-in Elasticsearch. It is 7 days by default.
      elkPrefix: logstash   # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  alerting:                # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: true          # Enable or disable the KubeSphere Alerting System.
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:                # Provide a security-relevant chronological set of records, recording the sequence of activities happening on the platform, initiated by different tenants.
    enabled: true          # Enable or disable the KubeSphere Auditing Log System.
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:                  # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: true          # Enable or disable the KubeSphere DevOps System.
    # resources: {}
    jenkinsMemoryLim: 2Gi      # Jenkins memory limit.
    jenkinsMemoryReq: 1500Mi   # Jenkins memory request.
    jenkinsVolumeSize: 8Gi     # Jenkins volume size.
    jenkinsJavaOpts_Xms: 512m  # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:                  # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: true          # Enable or disable the KubeSphere Events System.
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:                 # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: true          # Enable or disable the KubeSphere Logging System.
    containerruntime: docker
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:          # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
    enabled: false         # Enable or disable metrics-server.
  monitoring:
    storageClass: ""       # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default.
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1        # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability.
    #   volumeSize: 20Gi   # Prometheus PVC size.
    #   resources: {}
    #   operator:
    #     resources: {}
    #   adapter:
    #     resources: {}
    # node_exporter:
    #   resources: {}
    # alertmanager:
    #   replicas: 1        # AlertManager Replicas.
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:                           # GPU monitoring-related plug-in installation.
      nvidia_dcgm_exporter:        # Ensure that gpu resources on your hosts can be used normally, otherwise this plug-in will not work properly.
        enabled: false             # Check whether the labels on the GPU hosts contain "nvidia.com/gpu.present=true" to ensure that the DCGM pod is scheduled to these nodes.
        # resources: {}
  multicluster:
    clusterRole: none  # host | member | none  # You can install a solo cluster, or specify it as the Host or Member Cluster.
  network:
    networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
      # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
      enabled: true # Enable or disable network policies.
    ippool: # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
      type: none # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled.
    topology: # Use Service Topology to view Service-to-Service communication based on Weave Scope.
      type: none # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
  openpitrix: # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
    store:
      enabled: true # Enable or disable the KubeSphere App Store.
  servicemesh:         # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
    enabled: true      # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based).
  kubeedge:            # Add edge nodes to your cluster and deploy workloads on edge nodes.
    enabled: false     # Enable or disable KubeEdge.
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
        - ""              # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []

Install

kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml

Check installation progress

# Watch the installation progress
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
# Output like the following means the installation succeeded
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.11.11:30880
Account: admin
Password: P@88w0rd

NOTES:
1. After you log into the console, please check the
   monitoring status of service components in
   "Cluster Management". If any service is not
   ready, please wait patiently until all components
   are up and running.
2. Please change the default password after login.

Prometheus fix

After KubeSphere is installed, checking the pods shows two Prometheus pods stuck in ContainerCreating; describe one to investigate:

kubesphere-monitoring-system   prometheus-k8s-0                                   0/2     ContainerCreating   0             61m
kubesphere-monitoring-system   prometheus-k8s-1                                   0/2     ContainerCreating   0

kubectl describe pods -n kubesphere-monitoring-system prometheus-k8s-0
....
Events:
  Type     Reason       Age                  From     Message
  ----     ------       ----                 ----     -------
  Warning  FailedMount  56m (x3 over 65m)    kubelet  Unable to attach or mount volumes: unmounted volumes=[secret-kube-etcd-client-certs], unattached volumes=[config config-out tls-assets prometheus-k8s-db prometheus-k8s-rulefiles-0 secret-kube-etcd-client-certs kube-api-access-l25wf]: timed out waiting for the condition
  Warning  FailedMount  46m (x4 over 62m)    kubelet  Unable to attach or mount volumes: unmounted volumes=[secret-kube-etcd-client-certs], unattached volumes=[prometheus-k8s-rulefiles-0 secret-kube-etcd-client-certs kube-api-access-l25wf config config-out tls-assets prometheus-k8s-db]: timed out waiting for the condition
  Warning  FailedMount  31m (x2 over 44m)    kubelet  Unable to attach or mount volumes: unmounted volumes=[secret-kube-etcd-client-certs], unattached volumes=[kube-api-access-l25wf config config-out tls-assets prometheus-k8s-db prometheus-k8s-rulefiles-0 secret-kube-etcd-client-certs]: timed out waiting for the condition
  Warning  FailedMount  26m (x2 over 33m)    kubelet  Unable to attach or mount volumes: unmounted volumes=[secret-kube-etcd-client-certs], unattached volumes=[config-out tls-assets prometheus-k8s-db prometheus-k8s-rulefiles-0 secret-kube-etcd-client-certs kube-api-access-l25wf config]: timed out waiting for the condition
  Warning  FailedMount  6m6s (x5 over 51m)   kubelet  Unable to attach or mount volumes: unmounted volumes=[secret-kube-etcd-client-certs], unattached volumes=[prometheus-k8s-db prometheus-k8s-rulefiles-0 secret-kube-etcd-client-certs kube-api-access-l25wf config config-out tls-assets]: timed out waiting for the condition
  Warning  FailedMount  112s (x40 over 67m)  kubelet  MountVolume.SetUp failed for volume "secret-kube-etcd-client-certs" : secret "kube-etcd-client-certs" not found
We enabled etcd monitoring in cluster-configuration.yaml, but Prometheus cannot find the etcd certificates.
So we create the secret for it:
kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key
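The Prometheus pods should leave ContainerCreating shortly after the secret exists; you can watch the transition:

kubectl get pods -n kubesphere-monitoring-system -w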

Once all pods are Running, you can log in; the account and password are the ones printed in the log.

Log in

done! 
