Preparation before deployment

1) Disable swap, otherwise kubelet will fail to start.

Edit /etc/fstab with vim and comment out the following line:

/dev/mapper/cl-swap     swap                    swap    defaults        0 0

Then run:

swapoff -a
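The fstab edit can also be done non-interactively. A minimal sketch, shown here on a scratch copy (the demo file name is illustrative); on a real node run the sed command against /etc/fstab as root:

```shell
# Comment out every active swap entry so swap stays off after a reboot.
FSTAB=./fstab.demo
printf '%s\n' \
  'UUID=1234-ABCD /boot xfs defaults 0 0' \
  '/dev/mapper/cl-swap     swap                    swap    defaults        0 0' > "$FSTAB"
sed -ri 's@^([^#].*[[:space:]]swap[[:space:]].*)@#\1@' "$FSTAB"
grep swap "$FSTAB"   # the swap line is now commented out
```

After running `swapoff -a`, `cat /proc/swaps` should list no active swap devices.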

2) Disable SELinux

  Two ways to disable SELinux:

  A. Without rebooting the server:

  [root@localhost ~]# setenforce 0

  B. With a reboot of Linux:

  Edit /etc/selinux/config with vi and change SELINUX=enforcing to SELINUX=disabled
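The same edit as option B can be scripted with sed. Demonstrated here on a scratch copy (the demo file name is illustrative); on a real node the file is /etc/selinux/config:

```shell
# Flip SELINUX=enforcing to SELINUX=disabled in the SELinux config.
CONF=./selinux-config.demo
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CONF"
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' "$CONF"
grep '^SELINUX=' "$CONF"   # SELINUX=disabled
```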

3) Install the docker service. For details, see the 云深海阔 CSDN column posts on installing docker on CentOS 8 and on CentOS 7.

4) Modify the docker.service configuration file and add the following line:

EnvironmentFile=/etc/flannel/subnet.env

Then restart the docker service:

[root@k8s_Node1 ~]# systemctl daemon-reload
[root@k8s_Node1 ~]# systemctl restart docker
[root@k8s_Node1 ~]# systemctl status docker
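For context, the file referenced by EnvironmentFile is written by flanneld. A typical /etc/flannel/subnet.env looks like the following; the addresses are illustrative, flannel writes the real values for each node:

```shell
# Example /etc/flannel/subnet.env written by flanneld (values illustrative)
FLANNEL_NETWORK=10.254.0.0/16
FLANNEL_SUBNET=10.254.80.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
```

With this file loaded, docker's ExecStart line can pass --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} so containers receive addresses from the node's flannel subnet.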

When kubelet starts, it sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user defined in the bootstrap token file must first be bound to the system:node-bootstrapper cluster role, so that kubelet has permission to create certificate signing requests:

cd /etc/kubernetes
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

In practice, the system:kube-controller-manager message mentioned above makes it easy to see that the real cause is an error in the certificate contents or settings. Even so, it is best to confirm step by step:

# Check the current user/context
[root@k8s_Master ~]# kubectl config current-context
kubernetes
# Check kubectl's config
[root@k8s_Master ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.0.221:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
# Check that the relevant clusterrole exists
[root@k8s_Master ~]# kubectl get clusterrole |grep system:node-bootstrapper
system:node-bootstrapper                                               2020-08-24T17:51:06Z
# Check the clusterrole details
[root@k8s_Master ~]# kubectl describe clusterrole system:node-bootstrapper
Name:         system:node-bootstrapper
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources                                       Non-Resource URLs  Resource Names  Verbs
  ---------                                       -----------------  --------------  -----
  certificatesigningrequests.certificates.k8s.io  []                 []              [create get list watch]
  • --user=kubelet-bootstrap is the username specified in /etc/kubernetes/token.csv; it is also written into /etc/kubernetes/bootstrap.kubeconfig.

After kubelet passes authentication, it sends a register-node request to kube-apiserver. The system:node cluster role must first be bound to the system:nodes group (binding name kubelet-nodes), so that kubelet has permission to create node registration requests:

kubectl create clusterrolebinding kubelet-nodes \
  --clusterrole=system:node \
  --group=system:nodes

Query the related configuration. As before, the system:kube-controller-manager message makes it easy to see that the real cause is an error in the certificate contents or settings, but confirm step by step:

# Check the current user/context
[root@k8s_Node2 ~]# kubectl config current-context
kubernetes
# Check kubectl's config
[root@k8s_Node2 ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.0.221:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
# Check that the relevant clusterroles exist
[root@k8s_Node2 ~]#  kubectl get clusterrole |grep system:node
system:node                                                            2020-08-24T17:51:06Z
system:node-bootstrapper                                               2020-08-24T17:51:06Z
system:node-problem-detector                                           2020-08-24T09:33:10Z
system:node-proxier                                                    2020-08-24T09:34:09Z
# Check the clusterrole details
[root@k8s_Node2 ~]# kubectl describe clusterrole system:node
Name:         system:node
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources                                       Non-Resource URLs  Resource Names  Verbs
  ---------                                       -----------------  --------------  -----
  leases.coordination.k8s.io                      []                 []              [create delete get patch update]
  csinodes.storage.k8s.io                         []                 []              [create delete get patch update]
  nodes                                           []                 []              [create get list watch patch update]
  certificatesigningrequests.certificates.k8s.io  []                 []              [create get list watch]
  events                                          []                 []              [create patch update]
  pods/eviction                                   []                 []              [create]
  serviceaccounts/token                           []                 []              [create]
  tokenreviews.authentication.k8s.io              []                 []              [create]
  localsubjectaccessreviews.authorization.k8s.io  []                 []              [create]
  subjectaccessreviews.authorization.k8s.io       []                 []              [create]
  pods                                            []                 []              [get list watch create delete]
  configmaps                                      []                 []              [get list watch]
  secrets                                         []                 []              [get list watch]
  services                                        []                 []              [get list watch]
  runtimeclasses.node.k8s.io                      []                 []              [get list watch]
  csidrivers.storage.k8s.io                       []                 []              [get list watch]
  persistentvolumeclaims/status                   []                 []              [get patch update]
  endpoints                                       []                 []              [get]
  persistentvolumeclaims                          []                 []              [get]
  persistentvolumes                               []                 []              [get]
  volumeattachments.storage.k8s.io                []                 []              [get]
  nodes/status                                    []                 []              [patch update]
  pods/status                                     []                 []              [patch update]

2. With the binaries in hand, configure the corresponding files on each server

Copy the binaries:

[root@k8s-master01 kubernetes]# scp /root/kubernetes/server/bin/{kube-proxy,kubelet} /usr/local/bin/
[root@k8s-master01 kubernetes]# scp /root/kubernetes/server/bin/{kube-proxy,kubelet} k8s-node01:/usr/local/bin/
kube-proxy                               100%   39MB  53.7MB/s   00:00
kubelet                                  100%  109MB  54.5MB/s   00:02
[root@k8s-master01 kubernetes]# scp /root/kubernetes/server/bin/{kube-proxy,kubelet} k8s-node02:/usr/local/bin/
kube-proxy                               100%   39MB  59.7MB/s   00:00
kubelet                                  100%  109MB  69.1MB/s   00:01
[root@k8s-master01 kubernetes]# scp /root/kubernetes/server/bin/{kube-proxy,kubelet} k8s-node03:/usr/local/bin/
kube-proxy                               100%   39MB  52.3MB/s   00:00
kubelet                                  100%  109MB  66.3MB/s   00:01

Add the kubelet configuration file:

  • Since Kubernetes 1.8, the kubelet configuration no longer takes KUBELET_API_SERVER; the master address is defined in a kubeconfig file instead, so comment out the KUBELET_API_SERVER setting.

vim /etc/kubernetes/kubelet

For k8s 1.18:

###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.0.222"
#
## The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.0.222"
#
## location of the api-server
## COMMENT THIS ON KUBERNETES 1.8+
#KUBELET_API_SERVER="--api-servers=http://192.168.0.221:8080"
#
## pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=pause-amd64:3.0"
#
## Add your own!
KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig  --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"

For k8s 1.25 (--cgroup-driver=systemd, --cluster-dns=10.254.0.2, --cluster-domain=cluster.local, --hairpin-mode promiscuous-bridge and --serialize-image-pulls=false were dropped as command-line flags, and --experimental-bootstrap-kubeconfig was replaced by --bootstrap-kubeconfig):

###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS=""
#KUBELET_ADDRESS="--address=192.168.1.243"
#
## The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.1.243"
#
## location of the api-server
## COMMENT THIS ON KUBERNETES 1.8+
#KUBELET_API_SERVER="--api-servers=http://192.168.1.241:8080"
#
## pod infrastructure container
KUBELET_POD_INFRA_CONTAINER=""
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=pause-amd64:3.0"
#
## Add your own!
KUBELET_ARGS="--bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl"
# Dropped in 1.25: --cgroup-driver=systemd --cluster-dns=10.254.0.2 --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false
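The flags dropped in 1.25 have equivalents in the kubelet config file (KubeletConfiguration), which is supplied to kubelet via --config. A minimal sketch, assuming the same values used above; the file path is illustrative:

```shell
# Write a KubeletConfiguration that replaces the dropped command-line flags.
cat > ./kubelet-config.demo.yaml <<'EOF'
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
clusterDNS:
  - 10.254.0.2
clusterDomain: cluster.local
hairpinMode: promiscuous-bridge
serializeImagePulls: false
EOF
```

kubelet would then be started with --config=/etc/kubernetes/kubelet-config.yaml in addition to the remaining KUBELET_ARGS.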

1) KUBELET_POD_INFRA_CONTAINER specifies the pod infrastructure (pause) image, which must exist. Here I point it at a local image; the image can be pulled from:

[root@k8s_Node1 kubernetes]# docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
3.0: Pulling from google-containers/pause-amd64
a3ed95caeb02: Pull complete
f11233434377: Pull complete
Digest: sha256:3b3a29e3c90ae7762bdf587d19302e62485b6bef46e114b741f7d75dba023bd3
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0

After pulling it, tag the image locally for convenience. You can also use another public pause base image or an online address; just make sure it is reachable from your network.

docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 pause-amd64:3.0

Install the containerd service

[root@k8s-node03 ~]# cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
> overlay
> br_netfilter
> EOF
overlay
br_netfilter
[root@k8s-node03 ~]# systemctl restart systemd-modules-load.service
[root@k8s-node03 ~]# cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
> net.bridge.bridge-nf-call-iptables  = 1
> net.ipv4.ip_forward                 = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> EOF
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@k8s-node03 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/99-docker.conf ...
fs.may_detach_mounts = 1
* Applying /etc/sysctl.d/99-kubernetes-cri.conf ...
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.conf ...
[root@k8s-node03 ~]# yum install -y containerd.io
Loaded plugins: fastestmirror, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
base
docker-ce-stable
epel
extras
updates
(1/3): epel/x86_64/updateinfo
(2/3): epel/x86_64/primary_db
(3/3): updates/7/x86_64/primary_db
Resolving Dependencies
--> Running transaction check
---> Package containerd.io.x86_64 0:1.6.7-3.1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package            Arch      Version           Repository              Size
================================================================================
Installing:
 containerd.io      x86_64    1.6.7-3.1.el7     docker-ce-stable        33 M

Transaction Summary
================================================================================
Install  1 Package

Total download size: 33 M
Installed size: 125 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/docker-ce-stable/packages/containerd.io-1.6.7-3.1.el7.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Public key for containerd.io-1.6.7-3.1.el7.x86_64.rpm is not installed
containerd.io-1.6.7-3.1.el7.x86_64.rpm
Retrieving key from https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
Importing GPG key 0x621E9F35:
 Userid     : "Docker Release (CE rpm) <docker@docker.com>"
 Fingerprint: 060a 61c5 1b55 8a7f 742b 77aa c52f eb6b 621e 9f35
 From       : https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : containerd.io-1.6.7-3.1.el7.x86_64
  Verifying  : containerd.io-1.6.7-3.1.el7.x86_64

Installed:
  containerd.io.x86_64 0:1.6.7-3.1.el7

Complete!
[root@k8s-node03 ~]# mkdir /etc/containerd -p
[root@k8s-node03 ~]# scp k8s-node01:/etc/containerd/config.toml /etc/containerd/
The authenticity of host 'k8s-node01 (192.168.1.243)' can't be established.
ECDSA key fingerprint is SHA256:xmCHhi0DppLmU06mtl9UIQG/8vPs+QkiiClLcwSGlO0.
ECDSA key fingerprint is MD5:b5:49:42:39:bf:83:69:40:25:3b:c5:e6:04:82:f2:2e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'k8s-node01,192.168.1.243' (ECDSA) to the list of known hosts.
root@k8s-node01's password:
config.toml                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   100% 7018     4.9MB/s   00:00
[root@k8s-node03 ~]# systemctl enable containerd
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /usr/lib/systemd/system/containerd.service.
[root@k8s-node03 ~]# systemctl start containerd
[root@k8s-node03 ~]# ctr version
Client:
  Version:  1.6.7
  Revision: 0197261a30bf81f1ee8e6a4dd2dea0ef95d67ccb
  Go version: go1.17.13

Server:
  Version:  1.6.7
  Revision: 0197261a30bf81f1ee8e6a4dd2dea0ef95d67ccb
  UUID: 2d8da6a4-7e2d-4299-b76b-b41a91e4cd90
[root@k8s-node03 ~]# runc -version
runc version 1.1.3
commit: v1.1.3-0-g6724737
spec: 1.0.2-dev
go: go1.17.13
libseccomp: 2.3.1

3. After downloading the corresponding package, do the following

[root@k8s_Node1 ~]# tar -xf kubernetes-server-linux-amd64.tar.gz
[root@k8s_Node1 ~]# cd kubernetes
[root@k8s_Node1 kubernetes]# tar -xf kubernetes-src.tar.gz
[root@k8s_Node1 kubernetes]# cp -r ./server/bin/{kube-proxy,kubelet} /usr/local/bin/

4. Create the systemd unit file

[root@k8s_Node2 kubernetes]# vim /usr/lib/systemd/system/kubelet.service
[root@k8s_Node2 kubernetes]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBELET_API_SERVER \
        $KUBELET_ADDRESS \
        $KUBELET_PORT \
        $KUBELET_HOSTNAME \
        $KUBE_ALLOW_PRIV \
        $KUBELET_POD_INFRA_CONTAINER \
        $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

The kubelet configuration file is /etc/kubernetes/kubelet. Change the IP addresses in it to each node's own IP address.

Note: before starting kubelet, create the /var/lib/kubelet directory manually.

[root@k8s_Node1 kubernetes]# mkdir /var/lib/kubelet

Start the service:

[root@k8s_Node1 ~]# systemctl daemon-reload
[root@k8s_Node1 ~]# systemctl restart kubelet
[root@k8s_Node1 ~]# systemctl status kubelet
[root@k8s_Node1 ~]# netstat -atnpu|grep 6443
tcp        0      0 192.168.0.222:41730     192.168.0.221:6443      ESTABLISHED 11173/kubelet

Approve the kubelet TLS certificate requests

When kubelet starts for the first time, it sends a certificate signing request to kube-apiserver; only after the request is approved will Kubernetes add the node to the cluster.

View the pending (unapproved) CSR requests:

[root@k8s-master01 kubernetes]# kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           REQUESTEDDURATION   CONDITION
node-csr-Iu93kmMioiObIT5K8qbWXmwAYtXR8vIxo--1x_ZBWDw   9m6s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending
node-csr-KmtQnNjp163u76aTU9ePOG9s8DPAp5PxQIQE53EgaoU   9m5s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending
node-csr-wmYLHXMT3FZctwrJSztY-Bm7tiyVZqO6UPZrbsYRk9w   9m6s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending
[root@k8s-master01 kubernetes]# kubectl get nodes
No resources found

Approve the CSR requests:

[root@k8s-master01 kubernetes]# kubectl certificate approve node-csr-Iu93kmMioiObIT5K8qbWXmwAYtXR8vIxo--1x_ZBWDw
certificatesigningrequest.certificates.k8s.io/node-csr-Iu93kmMioiObIT5K8qbWXmwAYtXR8vIxo--1x_ZBWDw approved
[root@k8s-master01 kubernetes]# kubectl certificate approve node-csr-KmtQnNjp163u76aTU9ePOG9s8DPAp5PxQIQE53EgaoU
certificatesigningrequest.certificates.k8s.io/node-csr-KmtQnNjp163u76aTU9ePOG9s8DPAp5PxQIQE53EgaoU approved
[root@k8s-master01 kubernetes]# kubectl certificate approve node-csr-wmYLHXMT3FZctwrJSztY-Bm7tiyVZqO6UPZrbsYRk9w
certificatesigningrequest.certificates.k8s.io/node-csr-wmYLHXMT3FZctwrJSztY-Bm7tiyVZqO6UPZrbsYRk9w approved
[root@k8s-master01 kubernetes]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           REQUESTEDDURATION   CONDITION
node-csr-Iu93kmMioiObIT5K8qbWXmwAYtXR8vIxo--1x_ZBWDw   12m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Approved
node-csr-KmtQnNjp163u76aTU9ePOG9s8DPAp5PxQIQE53EgaoU   12m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Approved
node-csr-wmYLHXMT3FZctwrJSztY-Bm7tiyVZqO6UPZrbsYRk9w   12m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Approved
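Approving each CSR by name gets tedious with many nodes. The Pending names can be extracted and piped to the approve command in one go; on the master that would look like `kubectl get csr | awk '/Pending/ {print $1}' | xargs -r kubectl certificate approve` (a sketch, not run here). The parsing step is demonstrated below on canned `kubectl get csr` output:

```shell
# Extract only the names of Pending CSRs from `kubectl get csr`-style output.
canned='NAME         AGE  SIGNERNAME                                   REQUESTOR          CONDITION
node-csr-aaa 9m   kubernetes.io/kube-apiserver-client-kubelet  kubelet-bootstrap  Pending
node-csr-bbb 9m   kubernetes.io/kube-apiserver-client-kubelet  kubelet-bootstrap  Approved'
printf '%s\n' "$canned" | awk '/Pending/ {print $1}'   # node-csr-aaa
```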

5. Configure kube-proxy

1) Create the kube-proxy systemd unit file (kube-proxy also depends on conntrack; install the conntrack-tools package first if it is missing)

[root@k8s_Node2 kubernetes]# vim /usr/lib/systemd/system/kube-proxy.service
[root@k8s_Node2 kubernetes]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

2) The kube-proxy configuration file /etc/kubernetes/proxy:

[root@k8s_Node1 ~]# cat /etc/kubernetes/proxy
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.0.222 --hostname-override=192.168.0.222 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"

3) Start the service:

[root@k8s_Node1 ~]# systemctl daemon-reload
[root@k8s_Node1 ~]# systemctl enable kube-proxy
Created symlink /etc/systemd/system/multi-user.target.wants/kube-proxy.service → /usr/lib/systemd/system/kube-proxy.service.
[root@k8s_Node1 ~]# systemctl start kube-proxy
[root@k8s_Node1 ~]# systemctl status kube-proxy

6. Verify the services
