Alibaba Cloud: Deploying a Cross-Region k8s Cluster on ECS Instances

  • 1. Background
  • 2. Environment Preparation
    • 2.1 ECS resource list
    • 2.2 K8s software list
  • 3. Alibaba Cloud ECS Network Issue
    • 3.1 Problem description
    • 3.2 Solution
  • 4. Node Preparation (master, node1, node2)
    • 4.1 Disable the firewalld firewall, install iptables, and flush its rules
    • 4.2 Tune kernel parameters
    • 4.3 Disable swap
    • 4.4 Disable SELinux
    • 4.5 Set the hostname
    • 4.6 Set the server time zone
    • 4.7 Disable the mail service
    • 4.8 Configure rsyslogd and systemd journald
    • 4.9 IPVS prerequisites
    • 4.10 Install Docker
    • 4.11 Install kubeadm, kubelet, and kubectl
    • 4.12 Open ports in the Alibaba Cloud ECS security group
  • 5. Installing k8s with kubeadm
    • 5.1 Create a virtual network interface (master, node1, node2)
    • 5.2 Modify the kubelet startup parameters (master, node1, node2)
    • 5.3 Pull images with scripts
      • 5.3.1 Master image pull script (pull_k8s_images_master.sh) - (master)
      • 5.3.2 Node image pull script (pull_k8s_images_node.sh) - (node1, node2)
    • 5.4 Initialize the master node with kubeadm (master)
    • 5.5 Configure kube-apiserver parameters (master)
    • 5.6 Join the nodes to the cluster (node1, node2)
    • 5.7 Verify the nodes joined the cluster (master)
    • 5.8 Configure and install flannel (master)
    • 5.9 Create flannel
    • 5.10 Verify network connectivity (master)
    • 5.11 Manually enable IPVS forwarding mode (master)
    • 5.12 Remove the virtual network interface (optional)
  • 6. Installing a Default StorageClass
    • 6.1 Install the NFS service on the master and nodes (master, node1, node2)
    • 6.2 Create shared directories on the master (clients only need the service installed)
    • 6.3 Edit /etc/exports on the master to share the directories (adjust the network range to your situation)
    • 6.4 Download the NFS provisioner manifests on the master (master)
    • 6.5 Modify deployment.yaml
    • 6.6 Apply the manifests
    • 6.7 Verify the installation
    • 6.8 Test it
  • 7. Installing KubeSphere Monitoring
    • 7.1 KubeSphere architecture
    • 7.2 Requirements
    • 7.3 Apply the yaml files (install the first one, then the second)
    • 7.4 Watch the installation logs (this takes a while)
    • 7.5 List all pods
    • 7.6 Access KubeSphere
  • 8. Summary

1. Background

I caught Alibaba Cloud's Double Eleven sale and used three different Alibaba Cloud accounts to buy three ECS instances in three different regions, which felt like a successful bargain hunt. Only after the purchase did I discover that the instances' private networks cannot reach each other. As it happened, I needed to build a self-hosted k8s cluster on exactly these ECS instances, and the networking problem cost me at least three days; I almost gave up. After digging through search results I finally found a way out: create a virtual network interface bound to each node's public IP, and register that IP into the cluster. That was the light at the end of the tunnel. Let's walk through the pitfalls below!

2. Environment Preparation

2.1 ECS resource list

Cloud provider   Hostname                 Public IP / Private IP           Recommended spec
Alibaba Cloud    zanzancloud-k8s-master   123.57.36.xx / 172.20.213.xx     2C2G
Alibaba Cloud    zanzancloud-k8s-node1    60.205.227.xx / 172.16.198.xx    2C4G
Alibaba Cloud    zanzancloud-k8s-node2    123.56.148.xx / 172.22.51.xx     2C2G

2.2 K8s software list

Software     Version
CentOS       7.8
Kubernetes   v1.18.6
Docker       20.10.6
Etcd         3.4.3-0
Flannel      v0.15.0
Pause        3.2
CoreDNS      1.6.7

3. Alibaba Cloud ECS Network Issue

3.1 Problem description

Normally, when deploying a cluster with kubeadm you pass --apiserver-advertise-address=<public_ip>, and other machines can then join this one over its public IP. On Alibaba Cloud ECS, however, the public IP is not configured on the instance's network interface, so etcd fails to start and initialization fails!

3.2 Solution

While deploying the cluster I found that the address bound to the network interface is not the public IP, applications can only bind to addresses that are actually on an interface, and yet the private IPs cannot reach each other. I was devastated! Eventually I learned from the official documentation that you can deploy over the public IPs; for details see: link - installing k8s over the public network.
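
You can see the mismatch directly on any of the instances. The check below is purely illustrative, assuming a standard ECS instance whose public IP is provided via NAT (curl ifconfig.me queries an external echo service):

# The only address actually bound to eth0 is the private IP
ip addr show eth0 | grep 'inet '
# The address your outbound traffic appears from is the public IP
curl -s ifconfig.me
# Because the two differ, components that bind to local addresses cannot use the public IP directly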

4. Node Preparation (master, node1, node2)

4.1 Disable the firewalld firewall, install iptables, and flush its rules

# Disable the firewalld firewall
systemctl stop firewalld && systemctl disable firewalld
# Install iptables, enable it, then flush and save empty rules
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save

4.2 Tune kernel parameters

cat > k8s.conf <<EOF
# Let bridged traffic pass through iptables
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# Enable IP forwarding
net.ipv4.ip_forward = 1
# Disable IPv6
net.ipv6.conf.all.disable_ipv6 = 1
EOF
cp k8s.conf /etc/sysctl.d/k8s.conf
sysctl -p /etc/sysctl.d/k8s.conf

4.3 Disable swap

# Disable swap
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

4.4 Disable SELinux

# Disable SELinux
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

4.5 Set the hostname

# Set the hostname (use zanzancloud-k8s-node1 / zanzancloud-k8s-node2 on the nodes)
hostnamectl set-hostname zanzancloud-k8s-master
hostname
# Configure host mappings
cat >> /etc/hosts << EOF
123.57.36.xx  zanzancloud-k8s-master
60.205.227.xx zanzancloud-k8s-node1
123.56.148.xx zanzancloud-k8s-node2
EOF

4.6 Set the server time zone

# Set the system time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Keep the hardware clock in UTC
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond

4.7 Disable the mail service

# Disable the postfix mail service
systemctl stop postfix && systemctl disable postfix

4.8 Configure rsyslogd and systemd journald

Two logging services run by default; configure journald for persistent storage and stop forwarding logs to rsyslogd.

# Directory for persistent logs
mkdir /var/log/journal
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent
# Compress old logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Use at most 10G of disk space
SystemMaxUse=10G
# Cap individual log files at 200M
SystemMaxFileSize=200M
# Keep logs for 2 weeks
MaxRetentionSec=2week
# Do not forward logs to syslog
ForwardToSyslog=no
EOF
# Restart journald
systemctl restart systemd-journald

4.9 IPVS prerequisites

IPVS forwards more efficiently than iptables, and its output is easier to read too.

modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

4.10 Install Docker

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# yum list docker-ce --showduplicates | sort -r
yum -y install docker-ce-20.10.6-3.el7
systemctl enable docker && systemctl start docker
docker --version
# Switch to the Alibaba Cloud registry mirror
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://wl5zc6br.mirror.aliyuncs.com"]
}
EOF
systemctl restart docker
docker info
# Expected output includes:
# Registry Mirrors:
#  https://wl5zc6br.mirror.aliyuncs.com/

4.11 Install kubeadm, kubelet, and kubectl

# Add the repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Disable SELinux
setenforce 0
# Install kubelet, kubeadm, and kubectl
yum install -y kubelet kubeadm kubectl
# Enable kubelet at boot
systemctl enable kubelet

4.12 Open ports in the Alibaba Cloud ECS security group

  • TCP 10250-10260: used by kubelet, kube-scheduler, kube-controller-manager, kube-proxy, etc.

  • TCP 6443: used by kube-apiserver

  • TCP 2379, 2380, 2381: used by etcd

  • UDP 8472: used by the flannel vxlan backend
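
If you prefer scripting this over clicking through the console, a sketch along the following lines should work, assuming the aliyun CLI is installed and configured; the security group ID and region below are placeholders to replace with your own values:

# Placeholder values: substitute your own security group ID and region
SG_ID=sg-xxxxxxxx
REGION=cn-beijing
# Open the TCP port ranges listed above
for range in 6443/6443 2379/2381 10250/10260; do
  aliyun ecs AuthorizeSecurityGroup --RegionId $REGION --SecurityGroupId $SG_ID \
    --IpProtocol tcp --PortRange $range --SourceCidrIp 0.0.0.0/0
done
# Open the vxlan UDP port
aliyun ecs AuthorizeSecurityGroup --RegionId $REGION --SecurityGroupId $SG_ID \
  --IpProtocol udp --PortRange 8472/8472 --SourceCidrIp 0.0.0.0/0

Repeat for each of the three instances; since they live under different accounts, each has its own security group.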

5. Installing k8s with kubeadm

5.1 Create a virtual network interface (master, node1, node2)

# Write the virtual interface config
cat > /etc/sysconfig/network-scripts/ifcfg-eth0:1 <<EOF
BOOTPROTO=static
DEVICE=eth0:1
IPADDR=<your public IP>
PREFIX=32
TYPE=Ethernet
USERCTL=no
ONBOOT=yes
EOF
# Restart the network
systemctl restart network
# Check the addresses
ip addr

5.2 Modify the kubelet startup parameters (master, node1, node2)

# This file exists once kubeadm is installed
vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Important: without this step the node will still register into the cluster with its private IP
# Append --node-ip=<public IP> to the ExecStart line at the end

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --node-ip=<public IP>

5.3 Pull images with scripts

5.3.1 Master image pull script (pull_k8s_images_master.sh) - (master)

set -o errexit
set -o nounset
set -o pipefail

## Versions: adjust to match the list above
KUBE_VERSION=v1.18.6
KUBE_PAUSE_VERSION=3.2
ETCD_VERSION=3.4.3-0
DNS_VERSION=1.6.7
## Original registry name; images are retagged to this at the end
GCR_URL=k8s.gcr.io
## The mirror registry to pull from
DOCKERHUB_URL=gotok8s
## Image list; on newer versions change coredns to coredns/coredns
images=(
kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${DNS_VERSION}
)
## Pull, retag, and clean up each image
for imageName in ${images[@]} ; do
  docker pull $DOCKERHUB_URL/$imageName
  docker tag $DOCKERHUB_URL/$imageName $GCR_URL/$imageName
  docker rmi $DOCKERHUB_URL/$imageName
done

5.3.2 Node image pull script (pull_k8s_images_node.sh) - (node1, node2)

set -o errexit
set -o nounset
set -o pipefail

## Versions: adjust to match the list above
KUBE_VERSION=v1.18.6
KUBE_PAUSE_VERSION=3.2
ETCD_VERSION=3.4.3-0
DNS_VERSION=1.6.7
## Original registry name; images are retagged to this at the end
GCR_URL=k8s.gcr.io
## The mirror registry to pull from
DOCKERHUB_URL=gotok8s
## Image list; on newer versions change coredns to coredns/coredns
images=(
kube-proxy:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${DNS_VERSION}
)
## Pull, retag, and clean up each image
for imageName in ${images[@]} ; do
  docker pull $DOCKERHUB_URL/$imageName
  docker tag $DOCKERHUB_URL/$imageName $GCR_URL/$imageName
  docker rmi $DOCKERHUB_URL/$imageName
done

5.4 Initialize the master node with kubeadm (master)

# Step 1: write the config file; replace the IPs below
cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
apiServer:
  certSANs:    # list the hostname, IPs, and VIP of every kube-apiserver node
  - zanzancloud-k8s-master    # replace with your hostname
  - 123.57.36.xx   # replace with your public IP
  - 172.20.213.xx  # replace with your private IP
  - 10.96.0.1   # do not replace; this is the in-cluster API address used by some services
controlPlaneEndpoint: 47.74.22.13:6443 # replace with your public IP
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
---
# Switch the default proxy mode to ipvs
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
EOF
# On 1-core or 1G-memory machines append --ignore-preflight-errors=all, or initialization will fail!
# Note: on success this prints two important pieces of information (see below)
kubeadm init --config=kubeadm-config.yaml

Note:

Info 1: after a successful init, a kubeconfig file is generated for talking to the API server; run the following:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Info 2: this command is used later to join the worker nodes to the master:

kubeadm join 123.57.36.xx:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:28265e79f8695cf2ad2e23773e856c8b04c809b0e3b3d6f5e01fb1b543341065
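
If you lose this join command or the token expires (tokens are valid for 24 hours by default), you can regenerate it on the master at any time; this is a standard kubeadm subcommand:

kubeadm token create --print-join-command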

5.5 Configure kube-apiserver parameters (master)

# Two changes: add --bind-address and change --advertise-address
vim /etc/kubernetes/manifests/kube-apiserver.yaml

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 123.57.36.xx:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=123.57.36.xx  # change to the public IP
    - --bind-address=0.0.0.0 # add this parameter
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver:v1.18.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 175.24.19.12
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
status: {}

5.6 Join the nodes to the cluster (node1, node2)

# Run this only after the virtual interface and the kubelet parameter change are in place
kubeadm join 123.57.36.xx:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:28265e79f8695cf2ad2e23773e856c8b04c809b0e3b3d6f5e01fb1b543341065

5.7 Verify the nodes joined the cluster (master)

# On success, INTERNAL-IP shows the public IPs
kubectl get nodes -o wide

5.8 Configure and install flannel (master)

# Create a directory
mkdir -pv /var/lib/k8s/flannel
# Download the flannel manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Two changes in the kube-flannel container. First, under args, add:
#   - --public-ip=$(PUBLIC_IP) # add this parameter to declare the public IP
#   - --iface=eth0             # add this parameter to bind the interface
# Then, under env, add:
#   - name: PUBLIC_IP          # add this environment variable
#     valueFrom:
#       fieldRef:
#         fieldPath: status.podIP

The complete kube-flannel.yml:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: quay.io/coreos/flannel:v0.15.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.15.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --public-ip=$(PUBLIC_IP)
        - --iface=eth0
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: PUBLIC_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

5.9 Create flannel

kubectl apply -f kube-flannel.yml

5.10 Verify network connectivity (master)

# Check that all pods are in the Ready state
kubectl get pods -o wide --all-namespaces
...
# Manually create a pod
kubectl create deployment nginx --image=nginx
# Look up the pod's IP
kubectl get pods -o wide
# From the master or another node, ping that IP to check reachability
# If it fails, check whether the ports described in section 4.12 are open
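
Beyond ping, a quick HTTP request across nodes confirms that the overlay actually carries traffic. A minimal sketch, assuming the nginx deployment above is running; POD_IP is the address you looked up with kubectl get pods -o wide:

# Substitute the nginx pod IP from the previous step (the value below is hypothetical)
POD_IP=10.244.1.2
# Run a throwaway busybox pod and fetch the nginx page across the overlay
kubectl run nettest --rm -it --image=busybox --restart=Never -- wget -qO- http://$POD_IP
# Seeing the nginx welcome page means the flannel vxlan path works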

5.11 Manually enable IPVS forwarding mode (master)

# Even when everything above succeeded, IPVS mode is sometimes not enabled by default; change one field by hand
# If the change does not take effect immediately, delete the kube-proxy pods (they are recreated
# automatically), then check with ipvsadm -Ln whether it took effect
# If ipvsadm is missing, install it with: yum install ipvsadm
kubectl edit configmaps -n kube-system kube-proxy

---
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"  # if this is empty, set it to ipvs
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false
      networkName: ""
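
To roll the change out and verify it, the steps mentioned in the comments above look roughly like this; k8s-app=kube-proxy is the label kubeadm puts on the kube-proxy pods:

# Delete the kube-proxy pods; the DaemonSet recreates them with the new config
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
# Install ipvsadm if it is missing, then list the IPVS virtual servers
yum install -y ipvsadm
ipvsadm -Ln
# Entries with rr scheduling for the service IPs mean IPVS mode is active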


5.12 Remove the virtual network interface (optional)

Note: this step can be skipped; in practice the nodes could reach each other even without removing it. If you are on CentOS 8 and the network is broken, remove the virtual interface as follows.

# Remove the virtual interface config
mv /etc/sysconfig/network-scripts/ifcfg-eth0\:1 /root/
# Reboot
reboot

6. Installing a Default StorageClass

6.1 Install the NFS service on the master and nodes (master, node1, node2)

yum -y install nfs-utils rpcbind
# Start nfs and rpcbind and enable them at boot
systemctl start nfs && systemctl enable nfs
systemctl start rpcbind && systemctl enable rpcbind

6.2 Create shared directories on the master (clients do not need to create the directories or edit the exports file; installing the service is enough)

mkdir -pv /data/volumes/{v1,v2,v3}

6.3 Edit /etc/exports on the master to share the directories with * (all networks); adjust the range to your own situation

cat > /etc/exports <<EOF
/data/volumes/v1  *(rw,no_root_squash,no_all_squash)
/data/volumes/v2  *(rw,no_root_squash,no_all_squash)
/data/volumes/v3  *(rw,no_root_squash,no_all_squash)
EOF
# Publish the exports
exportfs -arv
# Verify
showmount -e

Note:

* grants access from any network; a value like 192.168.200.0/24 grants access only from that specific range.
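
For example, to restrict v1 to a single subnet while leaving the others open to all, the exports file could look like this (the 192.168.200.0/24 range is purely illustrative):

/data/volumes/v1  192.168.200.0/24(rw,no_root_squash,no_all_squash)
/data/volumes/v2  *(rw,no_root_squash,no_all_squash)
/data/volumes/v3  *(rw,no_root_squash,no_all_squash)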

6.4 Download the NFS provisioner manifests on the master (master)

# Create a directory
mkdir -pv /var/lib/k8s/storage
#----------------------------
for file in class.yaml deployment.yaml rbac.yaml test-claim.yaml ; do wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/$file ; done

6.5 Modify deployment.yaml

vim deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest   ## defaults to the latest tag
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs       ## must match the provisioner name in class.yaml, or deployment fails
        - name: NFS_SERVER
          value: 123.57.36.19         ## the NFS server's IP or a resolvable hostname
        - name: NFS_PATH
          value: /data/volumes/v1     ## the shared directory on the NFS server (must be the innermost directory, or the app cannot create subdirectories and stays Pending)
      volumes:
      - name: nfs-client-root
        nfs:
          server: 123.57.36.19        ## the NFS server's IP or a resolvable hostname
          path: /data/volumes/v1      ## the shared directory on the NFS server (must be the innermost directory, or the app cannot create subdirectories and stays Pending)

6.6 Apply the manifests

kubectl apply -f .

6.7 Verify the installation

Check the provisioner pod:

kubectl get pods

List the StorageClasses in your cluster:

kubectl get storageclass

Mark a StorageClass as the default:

kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
kubectl get storageclass

6.8 Test it

# Create a test manifest
cat > statefulset-nfs.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nfs-web
spec:
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nfs-web # has to match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: nfs-web
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.beta.kubernetes.io/storage-class: managed-nfs-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
EOF
# Apply the test manifest
kubectl apply -f statefulset-nfs.yaml

Check the pods:

[root@zanzancloud-k8s-master ~]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-6d4469b5b5-m6jgp   1/1     Running   1          47h
nfs-web-0                                 1/1     Running   0          44m
nfs-web-1                                 1/1     Running   0          44m
nfs-web-2                                 1/1     Running   0          43m

Check the PVCs:

[root@zanzancloud-k8s-master ~]# kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim      Bound    pvc-5dc58dfa-bd9d-4ad3-98c4-28649c13113c   1Mi        RWX            managed-nfs-storage   47h
www-nfs-web-0   Bound    pvc-7cdcdc4c-e9d2-4848-b434-9caf7e72db5a   1Gi        RWO            managed-nfs-storage   45m
www-nfs-web-1   Bound    pvc-23e3cdb2-a365-43ff-8936-d0e3df30ffac   1Gi        RWO            managed-nfs-storage   44m
www-nfs-web-2   Bound    pvc-2c34b87d-4f09-4063-aea9-9ae1e7567194   1Gi        RWO            managed-nfs-storage   44m

Check the PVs:

[root@zanzancloud-k8s-master ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                             STORAGECLASS          REASON   AGE
pvc-23e3cdb2-a365-43ff-8936-d0e3df30ffac   1Gi        RWO            Delete           Bound    default/www-nfs-web-1                                             managed-nfs-storage            45m
pvc-2c34b87d-4f09-4063-aea9-9ae1e7567194   1Gi        RWO            Delete           Bound    default/www-nfs-web-2                                             managed-nfs-storage            45m
pvc-5dc58dfa-bd9d-4ad3-98c4-28649c13113c   1Mi        RWX            Delete           Bound    default/test-claim                                                managed-nfs-storage            47h
pvc-7bf72c3c-be16-43ab-b43a-a84659b9c688   20Gi       RWO            Delete           Bound    kubesphere-monitoring-system/prometheus-k8s-db-prometheus-k8s-1   managed-nfs-storage            45h
pvc-7cdcdc4c-e9d2-4848-b434-9caf7e72db5a   1Gi        RWO            Delete           Bound    default/www-nfs-web-0                                             managed-nfs-storage            46m
pvc-dfac2b35-6c22-487e-baee-f381c44a5254   20Gi       RWO            Delete           Bound    kubesphere-monitoring-system/prometheus-k8s-db-prometheus-k8s-0   managed-nfs-storage            45h

Check the contents of the NFS server's shared directory:

[root@zanzancloud-k8s-master ~]# ll /data/volumes/v1
total 0
drwxrwxrwx 2 root root  6 Feb 11 15:58 default-test-claim-pvc-5dc58dfa-bd9d-4ad3-98c4-28649c13113c
drwxrwxrwx 2 root root  6 Feb 11 10:35 default-test-claim-pvc-b68c2fde-14eb-464a-8907-f778a654e8b8
drwxrwxrwx 2 root root  6 Feb 13 14:30 default-www-nfs-web-0-pvc-7cdcdc4c-e9d2-4848-b434-9caf7e72db5a
drwxrwxrwx 2 root root  6 Feb 13 14:31 default-www-nfs-web-1-pvc-23e3cdb2-a365-43ff-8936-d0e3df30ffac
drwxrwxrwx 2 root root  6 Feb 13 14:31 default-www-nfs-web-2-pvc-2c34b87d-4f09-4063-aea9-9ae1e7567194
drwxrwxrwx 3 root root 27 Feb 13 14:44 kubesphere-monitoring-system-prometheus-k8s-db-prometheus-k8s-0-pvc-dfac2b35-6c22-487e-baee-f381c44a5254
drwxrwxrwx 3 root root 27 Feb 13 14:44 kubesphere-monitoring-system-prometheus-k8s-db-prometheus-k8s-1-pvc-7bf72c3c-be16-43ab-b43a-a84659b9c688

7. Installing KubeSphere Monitoring

7.1 KubeSphere architecture

7.2 Requirements

Prerequisites:
1. The k8s cluster version must be 1.18.x (v1.18.6 here)
2. A default StorageClass must exist (installed above)
3. Minimum resources: CPU > 1 core, memory > 2 G
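
These can all be checked quickly on the master before installing; each is a plain kubectl query:

# Confirm the server version is 1.18.x
kubectl version --short
# Confirm one StorageClass is marked (default)
kubectl get storageclass
# Rough view of allocated resources per node
kubectl describe nodes | grep -A 5 'Allocated resources'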

7.3 Apply the yaml files (install the first one, then the second)

kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml

7.4 Watch the installation logs (this takes a while)

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

7.5 List all pods

[root@zanzancloud-k8s-master ~]# kubectl get pod -A
NAMESPACE                      NAME                                               READY   STATUS    RESTARTS   AGE
default                        nfs-client-provisioner-57944b54d5-mv895            1/1     Running   5          2d3h
kube-system                    coredns-66bff467f8-k67hr                           1/1     Running   7          3d23h
kube-system                    coredns-66bff467f8-pvlbn                           1/1     Running   7          3d23h
kube-system                    etcd-zanzancloud-k8s-master                        1/1     Running   8          3d23h
kube-system                    kube-apiserver-zanzancloud-k8s-master              1/1     Running   7          3d21h
kube-system                    kube-controller-manager-zanzancloud-k8s-master     1/1     Running   19         3d23h
kube-system                    kube-flannel-ds-6tjtn                              1/1     Running   3          3d
kube-system                    kube-flannel-ds-b2c6w                              1/1     Running   3          3d
kube-system                    kube-flannel-ds-vdssz                              1/1     Running   4          3d
kube-system                    kube-proxy-dv252                                   1/1     Running   6          3d23h
kube-system                    kube-proxy-gfxcv                                   1/1     Running   8          3d23h
kube-system                    kube-proxy-mmqff                                   1/1     Running   6          3d23h
kube-system                    kube-scheduler-zanzancloud-k8s-master              1/1     Running   18         3d23h
kube-system                    snapshot-controller-0                              1/1     Running   0          2d2h
kubesphere-controls-system     default-http-backend-857d7b6856-vhv55              1/1     Running   0          2d2h
kubesphere-controls-system     kubectl-admin-58f985d8f6-lfhc7                     1/1     Running   0          2d2h
kubesphere-monitoring-system   alertmanager-main-0                                2/2     Running   0          2d2h
kubesphere-monitoring-system   alertmanager-main-1                                2/2     Running   0          2d2h
kubesphere-monitoring-system   alertmanager-main-2                                2/2     Running   0          2d2h
kubesphere-monitoring-system   kube-state-metrics-95c974544-2jxp2                 3/3     Running   0          2d2h
kubesphere-monitoring-system   node-exporter-7crqm                                2/2     Running   0          2d2h
kubesphere-monitoring-system   node-exporter-jf5zs                                2/2     Running   2          2d2h
kubesphere-monitoring-system   node-exporter-kfpg9                                2/2     Running   0          2d2h
kubesphere-monitoring-system   notification-manager-deployment-7c8df68d94-mc2fr   1/1     Running   0          2d2h
kubesphere-monitoring-system   notification-manager-deployment-7c8df68d94-tqlqm   1/1     Running   0          2d2h
kubesphere-monitoring-system   notification-manager-operator-6958786cd6-rbkcn     2/2     Running   2          40h
kubesphere-monitoring-system   prometheus-k8s-0                                   3/3     Running   11         2d2h
kubesphere-monitoring-system   prometheus-k8s-1                                   3/3     Running   11         2d2h
kubesphere-monitoring-system   prometheus-operator-84d58bf775-x5mm2               2/2     Running   3          2d2h
kubesphere-system              ks-apiserver-686f9d89f5-md894                      1/1     Running   2          40h
kubesphere-system              ks-console-b4df86d6f-kqh98                         1/1     Running   0          2d2h
kubesphere-system              ks-controller-manager-5b85577bdb-2kf9k             1/1     Running   5          2d2h
kubesphere-system              ks-installer-7cb866bd-krx6k                        1/1     Running   0          2d2h
kubesphere-system              openldap-0                                         1/1     Running   0          2d2h
kubesphere-system              redis-644bc597b9-t7brt                             1/1     Running   0          2d2h

7.6 Access KubeSphere

URL: <node public IP>:30880
Account: admin   Password: P@88w0rd
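
If the console does not come up, it may help to confirm the NodePort mapping; in a default KubeSphere 3.0 install the console service is named ks-console in the kubesphere-system namespace. Remember that port 30880 also has to be open in the ECS security group:

kubectl -n kubesphere-system get svc ks-console
# Expect a NodePort service exposing the console on 30880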

8. Summary

Finally done! After several days of persistent effort, getting from the networking problems to a working k8s cluster and then to the monitoring stack took enormous effort. The steps above are the distilled result of all that tinkering; I hope they help you!
