CentOS version

# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)

Role         IP
k8s-master   172.26.197.206
k8s-node1    172.26.201.116
k8s-node2    172.26.203.93

Of the steps below, run steps 1-8 on all nodes and steps 9-10 on the master only; step 11 runs on the worker nodes and step 12 on the master.

  • 1. Disable the firewall:
$ systemctl stop firewalld
$ systemctl disable firewalld
  • 2. Disable SELinux:
$ sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent, takes effect after reboot
$ setenforce 0 # temporary, takes effect immediately
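The sed one-liner above replaces every occurrence of "enforcing" in the file, which also rewrites the explanatory comments at the top of /etc/selinux/config. A more targeted variant (a sketch with the same effect on the actual setting) only touches the `SELINUX=` line:

```shell
# Rewrite only the SELINUX= setting line, leaving the comments untouched
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
```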
  • 3. Disable swap:
$ swapoff -a # temporary
$ vi /etc/fstab # permanent
# comment out the line: /dev/mapper/centos-swap swap swap defaults 0 0
systemctl reboot # reboot for the change to take effect
free -m # check the Swap row; if all values are 0, swap is fully disabled
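Instead of editing /etc/fstab by hand, the swap entry can be commented out non-interactively. A sketch (it prepends `#` to any not-yet-commented line whose mount type is swap; check the result before rebooting):

```shell
# Comment out every active swap entry in /etc/fstab
sed -ri '/^[^#].*[[:space:]]swap[[:space:]]/s/^/#/' /etc/fstab
```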
  • 4. Set the hostname:
$ hostnamectl set-hostname <hostname>
  • 5. Add hosts entries:
$ cat >> /etc/hosts << EOF
172.26.197.206 k8s-master
172.26.201.116 k8s-node1
172.26.203.93  k8s-node2
EOF
  • 6. Pass bridged IPv4 traffic to the iptables chains:
$ cat > /etc/sysctl.d/99-sysctl.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system # apply the settings
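On some systems the net.bridge.* keys do not exist until the br_netfilter kernel module is loaded, so `sysctl --system` may complain that the keys are unknown. A sketch of the usual fix (loading the module now and on every boot via systemd's modules-load.d mechanism), assuming a stock CentOS 7 kernel:

```shell
# Load the bridge netfilter module now...
modprobe br_netfilter
# ...and have systemd load it again on every boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# Re-apply the settings and verify the key is now visible
sysctl --system
sysctl net.bridge.bridge-nf-call-iptables
```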
  • 7. Synchronize the clock:
$ yum install ntpdate -y
$ ntpdate time.windows.com
  • 8. Install Docker/kubeadm/kubelet on all nodes

Kubernetes' default CRI (container runtime) is Docker, so install Docker first.
(1) Install Docker

$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum -y install docker-ce-18.06.1.ce-3.el7
$ systemctl enable docker && systemctl start docker
$ docker --version

(2) Add the Alibaba Cloud YUM repositories
Configure the Docker registry mirror (and the systemd cgroup driver):

# cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# systemctl stop docker
# systemctl start docker
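A stray comma or quote in daemon.json will keep Docker from starting at all, so it can be worth validating the file before restarting. A sketch using the Python standard library's JSON tool (use `python3 -m json.tool` if only Python 3 is installed):

```shell
# Prints the parsed JSON on success; exits non-zero with a parse error if malformed
python -m json.tool /etc/docker/daemon.json
```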

Add the Kubernetes yum repository:

$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

(3) Install kubeadm, kubelet, and kubectl

$ yum install -y kubelet-1.23.0 kubectl-1.23.0 kubeadm-1.23.0
$ systemctl enable kubelet
  • 9. Deploy the Kubernetes master

(1) Run on 172.26.197.206 (the master):

[root@k8s-master ~]# kubeadm init \
--apiserver-advertise-address=172.26.197.206 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.26.197.206]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.26.197.206 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.26.197.206 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 33.562436 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 27smjn.9i4zmj65gmru6aed
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.26.197.206:6443 --token 27smjn.9i4zmj65gmru6aed \
        --discovery-token-ca-cert-hash sha256:12cb2a5841863772a233d7124ab4cf4e447452d4f82cef591dab29d458aed3aa

The Alibaba Cloud mirror registry is specified above because the default image registry, k8s.gcr.io, is not reachable from mainland China.
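The preflight output above mentions that the control-plane images can be pulled ahead of time; running this on the master before `kubeadm init` surfaces registry or mirror problems early instead of mid-init. A sketch using the same repository and version flags as the init command:

```shell
# List, then pre-pull, the control-plane images from the Aliyun mirror
kubeadm config images list --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.0
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.0
```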
(2) Use the kubectl tool:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
  • 10. Install a Pod network add-on (CNI)
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Make sure the quay.io registry is reachable. If a Pod's image pull fails, retry a few times, or use either of the workarounds below.

(1) Look up the IP address of raw.githubusercontent.com at https://www.ipaddress.com (185.199.108.133 at the time of writing) and add it to /etc/hosts:

$ vi /etc/hosts
185.199.108.133 raw.githubusercontent.com

(2) Download the manifest with wget first, then apply it locally:

$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
$ kubectl apply -f [path_to_kube-flannel.yml]
  • 11. Join the worker nodes to the cluster

(1) Run on 172.26.201.116 and 172.26.203.93 (the worker nodes).
To add a new node to the cluster, run the kubeadm join command printed in the kubeadm init output:

[root@k8s-node1 ~]#  kubeadm join 172.26.197.206:6443 --token 27smjn.9i4zmj65gmru6aed \
--discovery-token-ca-cert-hash sha256:12cb2a5841863772a233d7124ab4cf4e447452d4f82cef591dab29d458aed3aa
  • 12. Test the Kubernetes cluster

On the master, create a pod in the cluster and verify that it runs correctly:

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-85b98978db-lr4b6   1/1     Running   0          21m

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        20h
service/nginx        NodePort    10.104.120.119   <none>        80:32215/TCP   20m

Access URL: http://NodeIP:NodePort, in this case:
http://172.26.197.206:32215

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.
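The nginx welcome page above can also be checked from the command line on any machine that can reach the nodes. The port below is the NodePort shown in the `kubectl get svc` output and will differ per cluster:

```shell
# Fetch the page and extract its title; expect "Welcome to nginx!"
curl -s http://172.26.197.206:32215 | grep -o '<title>[^<]*</title>'
```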

------------------------------------------------------------------------------------------------------------------------------------------

Problems you may run into:

  • bridge-nf-call-iptables
[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=172.26.197.206 --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.23.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.0
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

As the message suggests, run:

echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables

Note that this only lasts until the next reboot; the sysctl.d configuration from step 6 makes it permanent.
  • join times out
kubeadm join 172.26.197.206:6443 --token 27smjn.9i4zmj65gmru6aed --discovery-token-ca-cert-hash sha256:12cb2a5841863772a233d7124ab4cf4e447452d4f82cef591dab29d458aed3aa
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'
error execution phase kubelet-start: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher
The usual cause is a cgroup driver mismatch: the kubelet expects systemd while Docker defaults to cgroupfs. Set Docker's cgroup driver to systemd and restart Docker:

$ vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
$ systemctl stop docker
$ systemctl start docker
  • After a server reboot, the kubelet service fails to start
$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor pres...
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Tue 2022-09-27 14:...
     Docs: https://kubernetes.io/docs/
  Process: 7645 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CON...
 Main PID: 7645 (code=exited, status=1/FAILURE)

Sep 27 14:18:50 k8s-master systemd[1]: Unit kubelet.service entered failed state
Sep 27 14:18:50 k8s-master systemd[1]: kubelet.service failed.

$ journalctl -xefu kubelet
Sep 27 15:11:52 k8s-master systemd[1]: kubelet.service holdoff time over, scheduling restart.
Sep 27 15:11:52 k8s-master systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished shutting down.
Sep 27 15:11:52 k8s-master systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Sep 27 15:11:52 k8s-master kubelet[22728]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
Sep 27 15:11:52 k8s-master kubelet[22728]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
Sep 27 15:11:52 k8s-master kubelet[22728]: I0927 15:11:52.565895   22728 server.go:446] "Kubelet version" kubeletVersion="v1.23.0"
Sep 27 15:11:52 k8s-master kubelet[22728]: I0927 15:11:52.566152   22728 server.go:874] "Client rotation is on, will bootstrap in background"
Sep 27 15:11:52 k8s-master kubelet[22728]: I0927 15:11:52.568237   22728 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 27 15:11:52 k8s-master kubelet[22728]: I0927 15:11:52.570169   22728 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 27 15:11:52 k8s-master kubelet[22728]: I0927 15:11:52.657911   22728 server.go:693] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Sep 27 15:11:52 k8s-master kubelet[22728]: E0927 15:11:52.658112   22728 server.go:302] "Failed to run kubelet" err="failed to run Kubelet: running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename\t\t\t\tType\t\tSize\tUsed\tPriority /dev/dm-1                               partition\t4063228\t0\t-2]"
Sep 27 15:11:52 k8s-master systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Sep 27 15:11:52 k8s-master systemd[1]: Unit kubelet.service entered failed state.
Sep 27 15:11:52 k8s-master systemd[1]: kubelet.service failed.

The key line: running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false

As in step 3 above, permanently disable swap.

  • If the token has expired when a node tries to join the master, regenerate the join command with:
$ kubeadm token create --print-join-command

When a node joins the master and the join fails with Unauthorized:

[root@k8s-node2 ~]# kubeadm join 172.26.197.206:6443 --token 27smjn.9i4zmj65gmru6aed \
--discovery-token-ca-cert-hash sha256:12cb2a5841863772a233d7124ab4cf4e447452d4f82cef591dab29d458aed3aa
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
error execution phase kubelet-start: error uploading crisocket: Unauthorized
To see the stack trace of this error execute with --v=5 or higher

The fix is as follows:

[root@k8s-node2 ~]# swapoff -a
[root@k8s-node2 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1112 10:51:20.127298    3527 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@k8s-node2 ~]# rm -rf /etc/cni/net.d
[root@k8s-node2 ~]# systemctl daemon-reload
[root@k8s-node2 ~]# systemctl restart kubelet
[root@k8s-node2 ~]# kubeadm join 172.26.197.206:6443 --token 27smjn.9i4zmj65gmru6aed \
--discovery-token-ca-cert-hash sha256:12cb2a5841863772a233d7124ab4cf4e447452d4f82cef591dab29d458aed3aa
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
