The three K8s master components:

  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler

The Kubernetes master node runs the following components: kube-apiserver, kube-scheduler, and kube-controller-manager. kube-scheduler and kube-controller-manager can run in cluster mode: leader election selects one working process while the other processes block, which is how the master achieves high availability.

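As a quick way to see which instance currently holds the lease, the leader-election record can be read from the annotation that v1.13-era components store on their kube-system endpoints objects. The helper below is a hypothetical sketch (the `current_leader` name and the sed parsing are our own, not part of the deployment):

```shell
# Hypothetical sketch: pull the current leader's holderIdentity out of the
# leader-election record (the JSON that v1.13-era kube-scheduler and
# kube-controller-manager store in the control-plane.alpha.kubernetes.io/leader
# annotation on their kube-system endpoints objects).
current_leader() {
  sed -n 's/.*"holderIdentity":"\([^"]*\)".*/\1/p'
}

# Against a live cluster you would feed it the annotation, e.g.:
#   kubectl -n kube-system get endpoints kube-scheduler \
#     -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}' | current_leader
echo '{"holderIdentity":"k8s.master_6b3f","leaseDurationSeconds":15}' | current_leader   # → k8s.master_6b3f
```
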
GitHub repository

https://github.com/kubernetes/kubernetes

Basic flow diagram

Basic function diagram

Installation steps

  • Download the files
  • Create the certificates
  • Create the TLS Bootstrapping Token
  • Deploy the apiserver component
  • Deploy the kube-scheduler component
  • Deploy the kube-controller-manager component
  • Verify the services

Download and extract the files

wget https://dl.k8s.io/v1.13.6/kubernetes-server-linux-amd64.tar.gz
[root@k8s ~]# mkdir /opt/kubernetes/{bin,cfg,ssl} -p
[root@k8s ~]# tar zxf kubernetes-server-linux-amd64.tar.gz
[root@k8s ~]# cd kubernetes/server/bin/
[root@k8s bin]# cp kube-scheduler kube-apiserver kube-controller-manager /opt/kubernetes/bin/
[root@k8s bin]# cp kubectl /usr/bin/
[root@k8s bin]# 

Create the Kubernetes CA certificate

  • Create the CA config files
cd /opt/kubernetes/ssl/
cat << EOF | tee /opt/kubernetes/ssl/ca-config.json
{"signing": {"default": {"expiry": "87600h"},
 "profiles": {"kubernetes": {"expiry": "87600h",
 "usages": ["signing","key encipherment","server auth","client auth"]}}}}
EOF

cat << EOF | tee /opt/kubernetes/ssl/ca-csr.json
{"CN": "kubernetes",
 "key": {"algo": "rsa","size": 2048},
 "names": [{"C": "CN","L": "Beijing","ST": "Beijing","O": "k8s","OU": "System"}]}
EOF
  • Generate the CA certificate
[root@k8s ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2019/05/28 14:47:18 [INFO] generating a new CA key and certificate from CSR
2019/05/28 14:47:18 [INFO] generate received request
2019/05/28 14:47:18 [INFO] received CSR
2019/05/28 14:47:18 [INFO] generating key: rsa-2048
2019/05/28 14:47:18 [INFO] encoded CSR
2019/05/28 14:47:19 [INFO] signed certificate with serial number 34219464473634319112180195944445301722929678647
[root@k8s ssl]# 
  • Create the apiserver certificate
cat << EOF | tee server-csr.json
{"CN": "kubernetes",
 "hosts": ["10.0.0.1","127.0.0.1","10.0.52.13","10.0.52.7","10.0.52.8","10.0.52.9","10.0.52.10","kubernetes","kubernetes.default","kubernetes.default.svc","kubernetes.default.svc.cluster","kubernetes.default.svc.cluster.local"],
 "key": {"algo": "rsa","size": 2048},
 "names": [{"C": "CN","L": "Beijing","ST": "Beijing","O": "k8s","OU": "System"}]}
EOF

[root@k8s ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2019/05/28 15:03:31 [INFO] generate received request
2019/05/28 15:03:31 [INFO] received CSR
2019/05/28 15:03:31 [INFO] generating key: rsa-2048
2019/05/28 15:03:31 [INFO] encoded CSR
2019/05/28 15:03:31 [INFO] signed certificate with serial number 114040551556369232239873744650692828468613738631
2019/05/28 15:03:31 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem
[root@k8s ssl]# 

In the hosts field of server-csr.json, the entries "10.0.52.13", "10.0.52.7", "10.0.52.8", "10.0.52.9", and "10.0.52.10" are IPs you set yourself; the rest are built-in and need no change. The list must include the k8s master's IP; in a high-availability setup, every master IP and the load balancer IP must be listed, otherwise clients will be unable to connect to the apiserver.

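Because a missing IP in hosts only surfaces later as TLS failures, it can help to confirm the SANs baked into server.pem right after signing. This is a minimal sketch assuming openssl is installed; `check_san` is a hypothetical helper name:

```shell
# Minimal sketch (check_san is a hypothetical helper; assumes openssl is installed):
# confirm that a given IP is present in the certificate's Subject Alternative Names.
check_san() {
  openssl x509 -in "$1" -noout -text | grep -q "IP Address:$2"
}

# After signing server.pem, e.g.:
#   check_san server.pem 10.0.52.13 && echo "SAN present" || echo "SAN missing"
```
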
创建TLS Bootstrapping Token

[root@k8s ssl]# BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
[root@k8s ssl]# cat << EOF | tee /opt/kubernetes/cfg/token.csv
> ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
> EOF
7d558bb3a5206cf78f881de7d7b82ca6,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@k8s ssl]# cat /opt/kubernetes/cfg/token.csv
7d558bb3a5206cf78f881de7d7b82ca6,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@k8s ssl]# 

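The token must be exactly 32 lowercase hex characters, so a quick format check after generating it can catch a broken pipeline. A minimal sketch:

```shell
# Sanity-check the bootstrap token: 16 random bytes rendered by od should give
# exactly 32 lowercase hex characters.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "$BOOTSTRAP_TOKEN" | grep -Eq '^[0-9a-f]{32}$' && echo "token format ok"
```
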
Deploy the kube-apiserver component

1. Create the kube-apiserver configuration file

In the apiserver configuration file you need to set the etcd-servers addresses; bind-address and advertise-address are both the current master node's IP address. For token-auth-file and the various .pem certificate files, just fill in the corresponding file locations.

cat << EOF | tee /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379 \\
--bind-address=10.0.52.13 \\
--secure-port=6443 \\
--advertise-address=10.0.52.13 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

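Since a typo in any certificate path makes kube-apiserver exit at startup, it may be worth verifying that every .pem/.csv file the options file references actually exists before starting the unit. `check_cfg_paths` is a hypothetical helper, a sketch only:

```shell
# Hypothetical helper (a sketch, not part of the official tooling): source an
# options file and verify that every .pem/.csv path its flags reference exists,
# so kube-apiserver does not exit at startup over a mistyped path.
check_cfg_paths() {
  . "$1"
  for f in $(printf '%s\n' $KUBE_APISERVER_OPTS | grep -oE '/[^ ]+\.(pem|csv)'); do
    [ -f "$f" ] && echo "ok: $f" || echo "MISSING: $f"
  done
}

# e.g. check_cfg_paths /opt/kubernetes/cfg/kube-apiserver
```
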
2. Create the apiserver systemd unit file

cat << EOF | tee /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

3. Start the service

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver

[root@k8s ssl]# ps -ef |grep kube-apiserver
root     19404     1 89 15:50 ?        00:00:09 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379 --bind-address=10.0.52.13 --secure-port=6443 --advertise-address=10.0.52.13 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root     19418 19122  0 15:50 pts/1    00:00:00 grep --color=auto kube-apiserver
[root@k8s ssl]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-05-28 15:50:21 CST; 26s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 19404 (kube-apiserver)
   Memory: 221.2M
   CGroup: /system.slice/kube-apiserver.service
           └─19404 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379 --bind-address=10.0.52.13 --secure-port=6443 --advertise-address=10.0.52.13 --allow-privileged=true --...
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.057378   19404 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.709711ms) 200 [kube-apiserver/v1.13.6 (linux/amd64) kubernetes/abdda3f 10.0.52.13:41238]
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.076300   19404 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.984796ms) 201 [kube-apiserver/v1.13.6 (linux/amd64) kubernetes/abdda3f 10.0.52.13:41238]
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.076874   19404 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.095073   19404 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.887241ms) 404 [kube-apiserver/v1.13.6 (linux/amd64)...f 10.0.52.13:41238]
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.097100   19404 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.654384ms) 200 [kube-apiserver/v1.13.6 (linux/amd64) kubernetes/abdda3f 10.0.52.13:41238]
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.115586   19404 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.390436ms) 201 [kube-apiserver/v1.13.6 (linux/amd64) kubernetes/abdda3f 10.0.52.13:41238]
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.115766   19404 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.134609   19404 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.458696ms) 404 [kube-apiserver/v1.13.6 (linux/amd64) ...f 10.0.52.13:41238]
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.136356   19404 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.420447ms) 200 [kube-apiserver/v1.13.6 (linux/amd64) kubernetes/abdda3f 10.0.52.13:41238]
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.155628   19404 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.433057ms) 201 [kube-apiserver/v1.13.6 (linux/amd64) kubernetes/abdda3f 10.0.52.13:41238]
Hint: Some lines were ellipsized, use -l to show in full.

[root@k8s ssl]# ps -ef |grep -v grep |grep kube-apiserver
root     19404     1  1 15:50 ?        00:00:25 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379 --bind-address=10.0.52.13 --secure-port=6443 --advertise-address=10.0.52.13 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
[root@k8s ssl]# netstat -tulpn |grep kube-apiserve
tcp        0      0 10.0.52.13:6443         0.0.0.0:*               LISTEN      19404/kube-apiserve
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      19404/kube-apiserve
[root@k8s ssl]# 

Deploy the kube-scheduler component

1. Create the kube-scheduler configuration file

  • --master: the address kube-scheduler uses to connect to kube-apiserver
  • --leader-elect=true: runs in cluster mode with leader election enabled; the node elected leader handles the work while the other nodes block

cat << EOF | tee /opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=127.0.0.1:8080 \\
--leader-elect"
EOF

2. Create the kube-scheduler systemd unit file

cat << EOF | tee /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

3. Start & verify

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
[root@k8s ssl]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-05-28 16:06:21 CST; 9s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 19524 (kube-scheduler)
   Memory: 10.8M
   CGroup: /system.slice/kube-scheduler.service
           └─19524 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
May 28 16:06:22 k8s.master kube-scheduler[19524]: I0528 16:06:22.942604   19524 shared_informer.go:123] caches populated
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.042738   19524 shared_informer.go:123] caches populated
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.142882   19524 shared_informer.go:123] caches populated
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.243024   19524 shared_informer.go:123] caches populated
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.243057   19524 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.343173   19524 shared_informer.go:123] caches populated
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.343195   19524 controller_utils.go:1034] Caches are synced for scheduler controller
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.343249   19524 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-scheduler...
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.351601   19524 leaderelection.go:214] successfully acquired lease kube-system/kube-scheduler
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.451916   19524 shared_informer.go:123] caches populated

Deploy the kube-controller-manager component

1. Create the kube-controller-manager configuration file

cat << EOF | tee /opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=127.0.0.1:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF

2. Create the kube-controller-manager systemd unit file

cat << EOF | tee /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

3. Start & verify

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager

[root@k8s ssl]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-05-28 16:18:52 CST; 10s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 19606 (kube-controller)
   Memory: 31.9M
   CGroup: /system.slice/kube-controller-manager.service
           └─19606 /opt/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/opt/kubernetes/ssl/ca...
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.140091   19606 controller_utils.go:1034] Caches are synced for garbage collector controller
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.140098   19606 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.168470   19606 request.go:530] Throttling request took 1.399047743s, request: GET:http://127.0.0.1:8080/apis/apiextensions.k8s.io/v1beta1?timeout=32s
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.169456   19606 resource_quota_controller.go:427] syncing resource quota controller with updated resources from discovery: map[/v1, Resource=replicationcontrollers:{} extensio...beta1, Resource=eve
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.169593   19606 resource_quota_monitor.go:180] QuotaMonitor unable to use a shared informer for resource "extensions/v1beta1, Resource=networkpolicies": no informer found for ...rce=networkpolicies
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.169632   19606 resource_quota_monitor.go:243] quota synced monitors; added 0, kept 29, removed 0
May 28 16:18:55 k8s.master kube-controller-manager[19606]: E0528 16:18:55.169647   19606 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable ...ce=networkpolicies"
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.225106   19606 shared_informer.go:123] caches populated
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.225138   19606 controller_utils.go:1034] Caches are synced for garbage collector controller
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.225146   19606 garbagecollector.go:245] synced garbage collector
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s ssl]# 

Verify the master service status

[root@k8s ssl]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
[root@k8s ssl]# 

If you see output like the above, the master was installed successfully!

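The `kubectl get cs` check can also be scripted for repeat use; the filter below fails when any component is not Healthy. `check_cs` is a hypothetical helper name, a sketch only:

```shell
# Hypothetical helper: read `kubectl get cs --no-headers` output and fail when
# any component is not Healthy.
check_cs() {
  awk '$2 != "Healthy" { bad = 1; print "unhealthy:", $1 } END { exit bad }'
}

# Usage: kubectl get cs --no-headers | check_cs && echo "all components healthy"
```
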