Server Information

Server IP        Role
192.168.233.201  master
192.168.233.202  worker
192.168.233.203  worker

Pre-installation Preparation

  1. Set hostnames, passwordless SSH login, and hosts resolution
# Set hostnames
# On 192.168.233.201 run:
hostnamectl set-hostname k8s-master
# On 192.168.233.202 run:
hostnamectl set-hostname k8s-node1
# On 192.168.233.203 run:
hostnamectl set-hostname k8s-node2
# Configure hosts resolution -- run on all three machines
vim /etc/hosts
192.168.233.201 k8s-master
192.168.233.202 k8s-node1
192.168.233.203 k8s-node2
# Configure passwordless login -- run on all three machines
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-master
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-node1
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-node2
  2. Disable the firewall, SELinux, and swap; allow iptables to inspect bridged traffic; sync the system time
# Run on all three machines
# Stop the firewall
systemctl stop firewalld.service
# Disable the firewall on boot
systemctl disable firewalld.service
# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Allow iptables to inspect bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply the settings
sudo sysctl --system
# Sync the system time: install ntpdate
yum -y install ntpdate
# Sync the time
ntpdate -u pool.ntp.org
# After syncing, check the time with the date command
date
  3. Reboot the servers

Install Docker

yum install -y docker-ce-20.10.8-3.el7
systemctl start docker
systemctl enable docker
# Configure an Alibaba Cloud registry mirror for Docker
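One common way to set the registry mirror is via /etc/docker/daemon.json. This is a sketch: the mirror URL below is a placeholder for your own Alibaba Cloud accelerator address (not taken from the original text), so replace it before use.

```json
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
```

After editing the file, restart Docker so the setting takes effect: systemctl daemon-reload && systemctl restart docker.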

Install KubeSphere

# Run on all 3 servers: install required tools
yum install -y conntrack socat
export KKZONE=cn
# Download the installer
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.2 sh -
chmod +x kk
# Generate the configuration file
./kk create config --with-kubernetes v1.22.12 --with-kubesphere v3.3.1
# After adjusting the IPs, timeout, etcd settings, etc. in the config file, run the installation
./kk create cluster -f config-sample.yaml
# Watch the progress
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

Example config-sample.yaml:

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  ## timeout is the installation timeout; increase it if the network is slow
  - {name: master, address: 192.168.233.206, internalAddress: 192.168.233.206, user: root, password: "root", timeout: 1200}
  - {name: node1, address: 192.168.233.207, internalAddress: 192.168.233.207, user: root, password: "root", timeout: 1200}
  - {name: node2, address: 192.168.233.208, internalAddress: 192.168.233.208, user: root, password: "root", timeout: 1200}
  roleGroups:
    etcd:
    - master
    master:
    - master
    control-plane:
    - master
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.10
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  ## etcd type must be changed to kubeadm (newer versions default to kubekey)
  etcd:
    type: kubeadm
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #   resources: {}
    # controllerManager:
    #   resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    # resources: {}
    jenkinsMemoryLim: 8Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 8Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
          - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600

If errors occur during installation, uninstall before reinstalling

# Delete the cluster
./kk delete cluster
# If Docker has already created containers that cannot be removed, kill and remove kubelet
ps -ef | grep kubelet
kill -9 $(pgrep kubelet)
rm -rf /usr/bin/kubelet

After installing KubeSphere, an NFS storage backend still needs to be set up

# Install on all machines
yum install -y nfs-utils
mkdir -p /nfs/data
# Run on the master
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
# Apply the export configuration
exportfs -r
# Check that the configuration took effect
exportfs
# Run on the worker nodes
showmount -e k8s-master
mount -t nfs k8s-master:/nfs/data /nfs/data
# Finally, apply the yaml file on the k8s master node
kubectl apply -f nfs.yaml

Example nfs.yaml:

## Create a storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to archive the PV's contents when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
        # resources:
        #   limits:
        #     cpu: 10m
        #   requests:
        #     cpu: 10m
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER
          value: 172.31.0.4  ## set your own NFS server address
        - name: NFS_PATH
          value: /nfs/data  ## the directory shared by the NFS server
      volumes:
      - name: nfs-client-root
        nfs:
          server: 172.31.0.4
          path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
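To check that dynamic provisioning actually works, a small test claim can be applied against the storage class above. This is a sketch; the claim name test-pvc and the 100Mi size are illustrative, not from the original.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: nfs-storage
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
```

Apply it with kubectl apply -f test-pvc.yaml and run kubectl get pvc; if the provisioner is healthy, the claim should reach Bound status and a matching directory should appear under /nfs/data on the NFS server.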

Common Problems

Logging in reports an api-server error

request to http://ks-apiserver/oauth/token failed, reason: getaddrinfo EAI_AGAIN ks-apiserver

Solution:

  1. Edit the coredns configuration
kubectl -n kube-system edit cm coredns -o yaml
# This opens an editor; comment out the following section:
# forward . /etc/resolv.conf {
#   max_concurrent 1000
# }
  2. Delete the two coredns pods
# Find the pod names
kubectl get pods -n kube-system | grep coredns
# Delete the pods (the names will differ in your cluster)
kubectl delete pod coredns-b5648d655-lm2qf -n kube-system
kubectl delete pod coredns-b5648d655-lxgxm -n kube-system
  3. Wait a moment and log in again
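For reference, after step 1 the relevant part of the Corefile in the coredns ConfigMap would look roughly like this. This is a sketch of a typical default Corefile; the exact plugins in your cluster may differ.

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    # forward . /etc/resolv.conf {
    #     max_concurrent 1000
    # }
    cache 30
    loop
    reload
    loadbalance
}
```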
