Setting up a Kubernetes cluster on Linux (one master, two workers)
Contents
1. Environment preparation
1.1 Disable swap (a Kubernetes requirement)
1.2 Flush iptables (on all three machines)
1.3 Set the hostnames (on all three machines)
1.4 Name resolution and passwordless SSH login
2. Install Kubernetes
2.1 Add the yum repository
2.2 Build the yum cache (keep the rpm packages downloaded later, so the other machines can reuse them)
2.3 Enable iptables bridging (on all three nodes)
2.4 Enable IP forwarding (on all three nodes)
2.5 Back on the master node
2.6 Initialize the cluster (download the images)
2.7 Join the other nodes to the cluster
Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and powerful, and it provides mechanisms for application deployment, scheduling, updating, and maintenance.
Kubernetes runs containerized applications on clusters of physical or virtual machines, providing a "container-centric infrastructure" that satisfies the common requirements of running applications in production.
Kubernetes documentation (Chinese):
https://kubernetes.io/zh/
Kubernetes Chinese community:
https://www.kubernetes.org.cn/
1. Environment preparation
My server environment
Every host must have Docker installed, the firewall stopped (a Kubernetes cluster normally runs on a trusted LAN), SELinux disabled (to avoid file-access denials), and its clock synchronized.
1.1 Disable swap (a Kubernetes requirement)
Note: swap must be disabled on all nodes, otherwise they cannot join the cluster.
# Check whether swap is enabled
[root@m72 ~]# free -h
              total  used  free  shared  buff/cache  available
Mem:            62G  647M   54G    1.0G        7.9G        60G
Swap:           15G    0B   15G
Turn off swap:
# Turn off swap immediately
[root@m72 ~]# swapoff -a
# Edit the fstab so swap stays disabled after a reboot
[root@m72 ~]# vim /etc/fstab
Comment out the swap entry.
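Commenting out the fstab entry can also be scripted instead of done in an editor. The following is my own sketch, not a command from the original post; it runs against a scratch copy (`fstab.demo` and its two lines are made-up demo data), so point `FSTAB` at `/etc/fstab` to apply it for real.

```shell
# Sketch: comment out any active swap entry in an fstab-style file.
FSTAB=./fstab.demo            # use /etc/fstab on a real node
printf '%s\n' \
  'UUID=abcd-1234 / xfs defaults 0 0' \
  '/dev/mapper/cl-swap swap swap defaults 0 0' > "$FSTAB"
# Prefix every un-commented line with a whitespace-delimited "swap" field with '#'
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$FSTAB"
cat "$FSTAB"
```

After applying this to the real /etc/fstab, swap will also stay off across reboots.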
1.2 Flush iptables (on all three machines)
Flush the iptables rules, then restart the Docker service:
[root@m72 ~]# iptables -F
[root@m72 ~]# systemctl daemon-reload
[root@m72 ~]# systemctl restart docker
1.3 Set the hostnames (on all three machines)
Set each machine to its own hostname. Here the master is m72 and the two worker nodes are s73 and s74.
[root@m72 ~]# hostnamectl set-hostname m72
[root@s73 ~]# hostnamectl set-hostname s73
[root@s74 ~]# hostnamectl set-hostname s74
1.4 Name resolution and passwordless SSH login
Edit the hosts file on the master:
[root@m72 ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.220.15.72 m72
10.220.15.73 s73
10.220.15.74 s74
Copy the hosts file to the other two nodes:
[root@m72 ~]# scp /etc/hosts root@10.220.15.73:/etc/hosts
[root@m72 ~]# scp /etc/hosts root@10.220.15.74:/etc/hosts
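Rather than typing the entries by hand, the hosts file additions can be generated from a list. This loop is my own sketch (`hosts.demo` is a scratch file standing in for /etc/hosts):

```shell
# Write the three cluster entries into a hosts file in one pass.
HOSTS=./hosts.demo            # use /etc/hosts on a real node (and append with >>)
: > "$HOSTS"                  # start from an empty scratch file for the demo
for entry in '10.220.15.72 m72' '10.220.15.73 s73' '10.220.15.74 s74'
do
  echo "$entry" >> "$HOSTS"
done
cat "$HOSTS"
```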
Generate an SSH key pair on the master:
[root@m72 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:ZEdN2IJIihMOhSrVavfgY81EDm6mJw1clOIg7hLwzpg root@m72
The key's randomart image is:
+---[RSA 2048]----+
|.ooo.o.. ..=. |
|=ooo=.o ..o o |
|=*+=.+ o .. |
|+.B.* oo . |
|oB O * S |
|E.= * + |
|. + . |
| |
| |
+----[SHA256]-----+
Copy the public key to the other two nodes:
[root@m72 ~]# ssh-copy-id s73
[root@m72 ~]# ssh-copy-id s74
2. Install Kubernetes
We install Kubernetes with kubeadm, the official automated deployment tool, which makes standing up a cluster much faster.
2.1 Add the yum repository
We use the Alibaba Cloud mirror here:
https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
Create a new repo file for Kubernetes:
vim /etc/yum.repos.d/kubernetes.repo
Add the following content and save:
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
2.2 Build the yum cache (keep the rpm packages downloaded later, so the other machines can reuse them)
[root@m72 ~]# yum makecache
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.bfsu.edu.cn
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
base | 3.6 kB 00:00:00
docker-ce-stable | 3.5 kB 00:00:00
extras | 2.9 kB 00:00:00
kubernetes | 1.4 kB 00:00:00
updates | 2.9 kB 00:00:00
......
kubernetes 797/797
kubernetes 797/797
Metadata Cache Created
Copy kubernetes.repo to the other two machines:
[root@m72 /]# scp /etc/yum.repos.d/kubernetes.repo s73:/etc/yum.repos.d/
[root@m72 /]# scp /etc/yum.repos.d/kubernetes.repo s74:/etc/yum.repos.d/
2.3 Enable iptables bridging (on all three nodes)
Create a custom file k8s.conf:
[root@m72 /]# vim /etc/sysctl.d/k8s.conf
# Add the following content
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
# Apply the settings
[root@m72 /]# sysctl -p /etc/sysctl.d/k8s.conf
Repeat the same steps on the other two machines.
2.4 Enable IP forwarding (on all three nodes)
[root@m72 /]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
# Apply the setting
[root@m72 /]# sysctl -p
Repeat the same steps on the other two machines.
2.5 Back on the master node
[root@m72 /]# vim /etc/yum.conf
Set keepcache=1 to enable the package cache.
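Flipping keepcache can also be done non-interactively with sed. A sketch of mine, demonstrated on a scratch copy (`yum.conf.demo` and its contents stand in for the real /etc/yum.conf):

```shell
CONF=./yum.conf.demo          # use /etc/yum.conf on a real node
printf '[main]\ncachedir=/var/cache/yum/$basearch/$releasever\nkeepcache=0\n' > "$CONF"
# Turn the cache on so downloaded rpm packages are kept under /var/cache/yum
sed -i 's/^keepcache=0/keepcache=1/' "$CONF"
grep keepcache "$CONF"
```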
Download the rpm packages:
[root@m72 /]# yum -y install kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1
After the download finishes, check that the rpm packages were cached:
[root@m72 /]# cd /var/cache/yum/x86_64/7/kubernetes/packages/
[root@m72 packages]# ll
total 67204
-rw-r--r--. 1 root root 7401938 Mar 18 06:26 4d300a7655f56307d35f127d99dc192b6aa4997f322234e754f16aaa60fd8906-cri-tools-1.23.0-0.x86_64.rpm
-rw-r--r--. 1 root root 9290806 Jan 4 2021 aa386b8f2cac67415283227ccb01dc043d718aec142e32e1a2ba6dbd5173317b-kubeadm-1.15.1-0.x86_64.rpm
-rw-r--r--. 1 root root 19487362 Jan 4 2021 db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm
-rw-r--r--. 1 root root 9920226 Jan 4 2021 f27b0d7e1770ae83c9fce9ab30a5a7eba4453727cdc53ee96dc4542c8577a464-kubectl-1.15.1-0.x86_64.rpm
-rw-r--r--. 1 root root 22704558 Jan 4 2021 f5edc025972c2d092ac41b05877c89b50cedaa7177978d9e5e49b5a2979dbc85-kubelet-1.15.1-0.x86_64.rpm
Enable kubelet on boot:
[root@m72 packages]# systemctl enable kubelet.service
2.6 Initialize the cluster (download the images)
Because of network restrictions in mainland China we cannot pull the images directly from Google's registry, so instead we upload an image archive to the cluster and import it.
Link: Baidu Netdisk
Extraction code: kuxd
Write a script to import the images automatically:
vim images-import.sh
Add the following content:
#!/bin/bash
# The archive extracts into /root/kubeadm-basic.images by default
tar -zxvf /root/kubeadm-basic.images.tar.gz
ls /root/kubeadm-basic.images > /tmp/image-list.txt
cd /root/kubeadm-basic.images
for i in $( cat /tmp/image-list.txt )
do
  docker load -i $i
done
rm -rf /tmp/image-list.txt
Run the script, then check whether the images show up in the Docker image list:
[root@m72 ~]# sh images-import.sh
# Verify the import succeeded
[root@m72 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-apiserver v1.15.1 68c3eb07bfc3 2 years ago 207MB
k8s.gcr.io/kube-controller-manager v1.15.1 d75082f1d121 2 years ago 159MB
k8s.gcr.io/kube-proxy v1.15.1 89a062da739d 2 years ago 82.4MB
k8s.gcr.io/kube-scheduler v1.15.1 b0b3c4c404da 2 years ago 81.1MB
k8s.gcr.io/coredns 1.3.1 eb516548c180 3 years ago 40.3MB
k8s.gcr.io/etcd 3.3.10 2c4adeb21b4f 3 years ago 258MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 4 years ago 742kB
Once all the Docker images are imported, initialize the cluster:
kubeadm init --kubernetes-version=v1.15.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
# --kubernetes-version: the Kubernetes version to deploy (check the installed version with: kubelet --version)
# --pod-network-cidr: the pod network CIDR; 10.244.0.0/16 is the default expected by flannel
# --service-cidr: the CIDR used for cluster service IPs
# --ignore-preflight-errors=Swap: skip the preflight check that fails when swap is enabled
The output should look like the following. Note the kubeadm join command at the end: the other nodes will need it to join the cluster.
[init] Using Kubernetes version: v1.15.1
......
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.220.15.72:6443 --token cralff.5gzyqjx8jirtta2c \
    --discovery-token-ca-cert-hash sha256:d72a9456b0ccb8e2bbdf58ae735410249ec9d6dfb641aa9e38e60e336de6d5bc
As the output instructs, create the config directory and set its ownership:
[root@m72 ~]# mkdir -p $HOME/.kube
[root@m72 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@m72 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the nodes:
[root@m72 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
m72 NotReady master 9m54s v1.15.1
The node shows NotReady because the flannel network add-on is still missing; without a pod network, pods cannot communicate with each other.
Download the kube-flannel.yml file. If the download is blocked, create the file locally instead.
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The file contents are:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
# The manifest continues with four more DaemonSets -- kube-flannel-ds-arm64,
# kube-flannel-ds-arm, kube-flannel-ds-ppc64le, and kube-flannel-ds-s390x --
# that are identical to kube-flannel-ds-amd64 except for the metadata.name,
# the beta.kubernetes.io/arch nodeAffinity value, and the image tag suffix
# (quay.io/coreos/flannel:v0.11.0-arm64, -arm, -ppc64le, -s390x).
# On an all-x86_64 cluster only the amd64 DaemonSet will schedule.
Apply the manifest, then check the nodes again:
[root@m72 ~]# kubectl apply -f kube-flannel.yml
[root@m72 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
m72 Ready master 38m v1.15.1
2.7 Join the other nodes to the cluster
Run the following on each of the other nodes:
yum -y install kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1
Enable kubelet on boot:
systemctl enable kubelet.service
Create a directory for the images:
mkdir images
From the master node, copy the image files over:
[root@m72 ~]# cd kubeadm-basic.images
[root@m72 kubeadm-basic.images]# scp * s73:/root/images/
[root@m72 kubeadm-basic.images]# scp * s74:/root/images/
Back on the s73 worker node, load each of the copied images into Docker:
docker load -i xxx.tar
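Loading the tarballs one by one can be wrapped in a loop. This is my own sketch: `images.demo` and its empty .tar files are stand-ins for /root/images, and docker load is only echoed here so the loop can be shown without Docker present; drop the echo on a real node.

```shell
IMGDIR=./images.demo          # stand-in for /root/images
mkdir -p "$IMGDIR"
touch "$IMGDIR/kube-apiserver.tar" "$IMGDIR/kube-proxy.tar"   # fake tarballs for the demo
for tarball in "$IMGDIR"/*.tar
do
  echo docker load -i "$tarball"   # remove the echo to actually load the image
done
```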
Finally, run the join command printed earlier on s73. (Bootstrap tokens expire after 24 hours by default; if yours has expired, generate a fresh join command on the master with kubeadm token create --print-join-command.)
kubeadm join 10.220.15.72:6443 --token cralff.5gzyqjx8jirtta2c \
    --discovery-token-ca-cert-hash sha256:d72a9456b0ccb8e2bbdf58ae735410249ec9d6dfb641aa9e38e60e336de6d5bc
Once it reports that the node has joined, check from the master node:
[root@m72 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
m72 Ready master 3h7m v1.15.1
s73 Ready <none> 8m32s v1.15.1
Node s73 has joined successfully. Join s74 the same way, and the cluster deployment is complete.