kubernetes v1.20 Series: Scaling Out to Multiple Masters (Binary Deployment)

The Master node acts as the cluster's control center: it keeps the whole cluster in a healthy working state by continuously communicating with the kubelet and kube-proxy on each worker node. If the Master fails, you can no longer manage the cluster with kubectl or through the API.
The Master runs three main services: kube-apiserver, kube-controller-manager, and kube-scheduler. kube-controller-manager and kube-scheduler already achieve high availability through their built-in leader-election mechanism, so Master HA is mainly about kube-apiserver. Since kube-apiserver serves an HTTP API, making it highly available is much like making a web server highly available: put a load balancer in front of it, and it can also be scaled out horizontally.
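As a minimal, hedged sketch of such a load balancer (the 192.168.100.13 address for master1 and the 16443 listen port are assumptions for illustration; substitute your own), an Nginx TCP `stream` proxy fronting both kube-apiservers could look like this:

```nginx
stream {
    upstream kube-apiserver {
        server 192.168.100.13:6443;   # master1 (assumed IP)
        server 192.168.100.16:6443;   # master2
    }
    server {
        listen 16443;                 # the address kubeconfigs would point at
        proxy_pass kube-apiserver;
    }
}
```

Node kubeconfigs would then set `server:` to the load balancer address rather than a single master.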

Download link for the required resources: https://pan.baidu.com/s/1emtDOy7bzxlR_hUw6vY2GQ (extraction code: a7j4). **Some of the files contain IP addresses and other settings — change them to match your own environment.**

Prepare the master2 node

Add one new server to act as master2, with IP 192.168.100.16.

OS           CPU   Memory   IP
CentOS 7.5   2C    2G       192.168.100.16

Deploy the base environment

See the earlier post in this series on system environment configuration: kubernetes v1.20项目之部署二进制安装_系统环境配置.

Note: the /etc/hosts file on every node must be updated to include the new master.
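On each node the real command would be something like `echo "192.168.100.16 k8s-master2" >> /etc/hosts` (the hostname k8s-master2 matches this series; adjust to your own naming). The entry is demonstrated here on a temporary copy so it is safe to run anywhere:

```shell
# Demo target is a temp file; on the real nodes the target is /etc/hosts
cat > /tmp/hosts-demo <<'EOF'
192.168.100.16 k8s-master2
EOF
grep k8s-master2 /tmp/hosts-demo
```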

Remember to disable the firewall and SELinux.
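Concretely, on master2 that means `systemctl stop firewalld && systemctl disable firewalld`, then `setenforce 0` for the current boot, plus a one-line edit of /etc/selinux/config to make it permanent. The sed for the permanent change is demonstrated here on a temporary copy so it is safe to run anywhere:

```shell
# Real target is /etc/selinux/config; demo uses a temp copy
printf 'SELINUX=enforcing\n' > /tmp/selinux-demo
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /tmp/selinux-demo
cat /tmp/selinux-demo
```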

Install Docker

The Docker setup on master2 is identical to master1, so we only need to copy the relevant files from master1 over to master2.

### Run on master1
[root@k8s-master01 k8s]# scp /usr/bin/docker* root@192.168.100.16:/usr/bin
The authenticity of host '192.168.100.16 (192.168.100.16)' can't be established.
ECDSA key fingerprint is SHA256:o98cQWSKlxj3FYKpIcckFsAsb3+hRJ9w+DQThSbUUks.
ECDSA key fingerprint is MD5:9d:ee:d4:8e:1d:02:be:c9:ba:5f:15:51:99:3a:ed:97.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.100.16' (ECDSA) to the list of known hosts.
root@192.168.100.16's password:
docker                               100%   58MB  35.4MB/s   00:01
dockerd                              100%   66MB  33.8MB/s   00:01
docker-init                          100%  692KB  29.0MB/s   00:00
docker-proxy                         100% 2860KB  31.8MB/s   00:00
[root@k8s-master01 k8s]# scp /usr/bin/runc root@192.168.100.16:/usr/bin
root@192.168.100.16's password:
runc                                 100% 9376KB  47.8MB/s   00:00
[root@k8s-master01 k8s]# scp /usr/bin/containerd* root@192.168.100.16:/usr/bin
root@192.168.100.16's password:
containerd                           100%   31MB  49.8MB/s   00:00
containerd-shim                      100% 5872KB  49.9MB/s   00:00
[root@k8s-master01 k8s]# scp /usr/lib/systemd/system/docker.service root@192.168.100.16:/usr/lib/systemd/system
root@192.168.100.16's password:
docker.service                       100%  460   187.5KB/s   00:00
[root@k8s-master01 k8s]# scp -r /etc/docker root@192.168.100.16:/etc
root@192.168.100.16's password:
daemon.json                          100%   67     2.1KB/s   00:00
key.json                             100%  244    14.4KB/s   00:00

### Start Docker on the new master
[root@k8s-master2 ~]# systemctl daemon-reload
[root@k8s-master2 ~]# systemctl start docker
[root@k8s-master2 ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@k8s-master2 ~]# docker version
Client: Docker Engine - Community
 Version:           19.03.9
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        9d988398e7
 Built:             Fri May 15 00:22:47 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.9
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       9d988398e7
  Built:            Fri May 15 00:28:17 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Create the etcd certificate directory

### Run on the new node
[root@k8s-master2 ~]# mkdir -p /opt/etcd/ssl

Copy files (run on master1)

Copy all the Kubernetes files and the etcd certificates from master1 to master2.
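Most of the transfer time for /opt/kubernetes goes to the logs directory, which master2 does not need. As a hedged alternative to the plain recursive scp, a tar pipe can skip the logs; the filter is demonstrated locally here on stand-in paths under /tmp (the real transfer, an assumption to adapt, might be `tar -C /opt --exclude='kubernetes/logs/*' -cf - kubernetes | ssh root@192.168.100.16 'tar -C /opt -xf -'`):

```shell
# Build a tiny stand-in tree, then copy it while excluding log files
mkdir -p /tmp/k8s-copy-demo/src/kubernetes/bin /tmp/k8s-copy-demo/src/kubernetes/logs /tmp/k8s-copy-demo/dst
touch /tmp/k8s-copy-demo/src/kubernetes/bin/kube-apiserver /tmp/k8s-copy-demo/src/kubernetes/logs/kubelet.INFO
tar -C /tmp/k8s-copy-demo/src --exclude='kubernetes/logs/*' -cf - kubernetes | tar -C /tmp/k8s-copy-demo/dst -xf -
ls /tmp/k8s-copy-demo/dst/kubernetes/logs
```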

### Run on master1
[root@k8s-master01 k8s]# scp -r /opt/kubernetes root@192.168.100.16:/opt
root@192.168.100.16's password:
kube-apiserver                       100%  113MB  63.0MB/s   00:01
kube-scheduler                       100%   42MB  47.4MB/s   00:00
kube-controller-manager              100%  108MB  53.8MB/s   00:02
kubelet                              100%  109MB  30.2MB/s   00:03
kube-proxy                           100%   38MB  28.2MB/s   00:01
token.csv                            100%   84     3.0KB/s   00:00
kube-apiserver.conf                  100% 1709   112.4KB/s   00:00
kube-controller-manager.conf         100%  582   319.0KB/s   00:00
kube-controller-manager.kubeconfig   100% 6344     2.9MB/s   00:00
kube-scheduler.kubeconfig            100% 6306   260.0KB/s   00:00
kube-scheduler.conf                  100%  188     7.0KB/s   00:00
kubelet.conf                         100%  394    23.3KB/s   00:00
kubelet-config.yml                   100%  611    49.9KB/s   00:00
bootstrap.kubeconfig                 100% 2168     1.4MB/s   00:00
kubelet.kubeconfig                   100% 2297   175.8KB/s   00:00
kube-proxy.conf                      100%  132    55.0KB/s   00:00
kube-proxy-config.yml                100%  260   117.2KB/s   00:00
kube-proxy.kubeconfig                100% 6270     2.6MB/s   00:00
ca-key.pem                           100% 1679     1.0MB/s   00:00
ca.pem                               100% 1359     1.1MB/s   00:00
server-key.pem                       100% 1679     1.6MB/s   00:00
server.pem                           100% 1635     1.2MB/s   00:00
kubelet.crt                          100% 2271    13.7KB/s   00:00
kubelet.key                          100% 1679     1.1MB/s   00:00
kubelet-client-2021-11-16-23-34-17.p 100% 1224     1.1MB/s   00:00
kubelet-client-current.pem           100% 1224     1.0MB/s   00:00
[... transfer output for the log files under /opt/kubernetes/logs omitted ...]
[root@k8s-master01 k8s]# scp -r /opt/etcd/ssl root@192.168.100.16:/opt/etcd
root@192.168.100.16's password:
ca-key.pem                           100% 1679    68.7KB/s   00:00
ca.pem                               100% 1265   561.0KB/s   00:00
server-key.pem                       100% 1679     1.3MB/s   00:00
server.pem                           100% 1346   751.0KB/s   00:00
[root@k8s-master01 k8s]# scp /usr/lib/systemd/system/kube* root@192.168.100.16:/usr/lib/systemd/system
root@192.168.100.16's password:
kube-apiserver.service               100%  286    15.2KB/s   00:00
kube-controller-manager.service      100%  321   158.0KB/s   00:00
kubelet.service                      100%  246    12.4KB/s   00:00
kube-proxy.service                   100%  253   174.8KB/s   00:00
kube-scheduler.service               100%  285   203.1KB/s   00:00
[root@k8s-master01 k8s]# scp /usr/bin/kubectl  root@192.168.100.16:/usr/bin
root@192.168.100.16's password:
kubectl                              100%   38MB  17.8MB/s   00:02
[root@k8s-master01 k8s]# scp -r ~/.kube root@192.168.100.16:~
root@192.168.100.16's password:
config                               100% 6276   400.0KB/s   00:00
[... transfer output for the ~/.kube/cache discovery files omitted ...]

Delete the kubelet certificate files

These files were issued for master1's kubelet; master2 must request its own through the TLS bootstrap process, so remove the copies first.

## Run on master2
[root@k8s-master2 ~]# rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
[root@k8s-master2 ~]# rm -f /opt/kubernetes/ssl/kubelet*

Modify the IPs and hostnames in the configuration files

Change the apiserver, kubelet, and kube-proxy configuration files to use the local IP and hostname.
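The edits below boil down to three substitutions: the apiserver bind/advertise addresses become 192.168.100.16, the `server:` lines in the kubeconfigs become https://192.168.100.16:6443, and the hostname overrides become k8s-master2. Instead of hand-editing, a targeted sed can do the address part; a sketch demonstrated on a temporary file (the real target would be /opt/kubernetes/cfg/kube-apiserver.conf, and 192.168.100.13 as master1's old address is an assumption):

```shell
# Write a two-line stand-in, then rewrite only the bind/advertise addresses
printf -- '--bind-address=192.168.100.13 \\\n--advertise-address=192.168.100.13 \\\n' > /tmp/kube-apiserver.conf
sed -i -E 's#^(--(bind|advertise)-address=)[0-9.]+#\1192.168.100.16#' /tmp/kube-apiserver.conf
cat /tmp/kube-apiserver.conf
```

Anchoring on the flag names (rather than a global IP replace) avoids accidentally rewriting the etcd server list, which legitimately contains the other masters' addresses.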

### Run on master2
[root@k8s-master2 ~]# vi /opt/kubernetes/cfg/kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.100.13:2379,https://192.168.100.14:2379,https://192.168.100.15:2379 \
--bind-address=192.168.100.16 \       ## change this line
--secure-port=6443 \
--advertise-address=192.168.100.16 \   ## change this line
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--service-account-issuer=api \
--service-account-signing-key-file=/opt/kubernetes/ssl/server-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \
--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \
--requestheader-allowed-names=kubernetes \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
...(remaining flags unchanged)

[root@k8s-master2 ~]# vim /opt/kubernetes/cfg/kube-controller-manager.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR2akNDQXFhZ0F3SUJBZ0lVYkRROHZIM3NwV1JPanZ1R1NXL2VJVHJQRHJBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEREQUtCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEl4TVRFeE5qRXlNRE13TUZvWERUSTJNVEV4TlRFeU1ETXdNRm93WlRFTE1Ba0cKQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbGFXcHBibWN4RERBSwpCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByZFdKbGNtNWxkR1Z6Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBcUs4VzNRNDdleThjZnhMY3FIRDYKQVJ6dExxdGZvbGdieDZVMXMrNHMrVjQrV3FHbmtBWk05NHA2d1lCUkFQQzJJSTRzSjBJNjd0WEhaUTRhM2Z5VwpGenM2L1p4cXVPVld4T0M5Lzh2QnJvQzV1MmZxdjlULy9FbzFZWmtzNU52VlFSdE9WK1FhV05DKzBuYWI0MzF3CmFST3dMODFhOU5yY0phOUZ3bHN6Sm5XVW9vRzFDTDY0a1dXd2E0LzlwcEVtSW9heFkzRUFwc3YrNmZCYWhKakgKd2dQRThKZjUwRnVPeVpnWUFLb3BMeHlYMmZhVXgwT25rdTRnRWg4Y0FYUHNuWDlib3YwOTVaM2JQMlNZZHJtdwpDSWVRK1M5akVJRW1qS3d6eTRLemI4bFVMSTg2NGduemhjTXN2S1ZacFVXNGlWTzZMdW1oVGNWcDhVRjFXd2tLCm93SURBUUFCbzJZd1pEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0VnWURWUjBUQVFIL0JBZ3dCZ0VCL3dJQkFqQWQKQmdOVkhRNEVGZ1FVTzU1Y3c5MHpkN01vNmlxQm1vUTQ4VUFQcE04d0h3WURWUjBqQkJnd0ZvQVVPNTVjdzkwegpkN01vNmlxQm1vUTQ4VUFQcE04d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFGaS94SUN3SHJGMHUzREFpRkozCkp4UzFaeCtibUZmWW1oMktKRGVZQU9jVXp3RXRJRXIwWGRja3FyOU9EdWFpM0x6dEl0NXZ3cTFFQTBVNktiYUsKSHFiT0l4NXhSeVNnc1NTWkJxdGVrTXdsRTRDT0Q4ZjVBUVVnY3RMOXZqc1B0Z1I1RTBpOCtNaEZKRDQ0dzM4RgpKQkRiTm5yb3VxZHlRT2lhZXRLb2FNNk1oQm5mYW1BbFpFeGVhRlF2cG50alZsS3pDbHprV1JONStkNFZXbnBWClB6UXpDSFlHdFJrSzMyZzRzSkFCWUVlcWp3bGpKTWhKS2duaHNvUTFROXp0NENTa25oaUV2bnp2K3NONkttRlkKb25JdUg2M2tsN2ZXNUQrVjI5aEtmamZKUzg2OE5QQ1lYWEhNbGFRN0VxVnd1d25CL2JWeHpYaWhSaWpXUmhwcgo4Nm89Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.100.16:6443               ## change this line
  name: kubernetes
contexts:
- context:
    cluster: kubernetes

[root@k8s-master2 ~]# vi /opt/kubernetes/cfg/kube-scheduler.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR2akNDQXFhZ0F3SUJBZ0lVYkRROHZIM3NwV1JPanZ1R1NXL2VJVHJQRHJBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEREQUtCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEl4TVRFeE5qRXlNRE13TUZvWERUSTJNVEV4TlRFeU1ETXdNRm93WlRFTE1Ba0cKQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbGFXcHBibWN4RERBSwpCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByZFdKbGNtNWxkR1Z6Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBcUs4VzNRNDdleThjZnhMY3FIRDYKQVJ6dExxdGZvbGdieDZVMXMrNHMrVjQrV3FHbmtBWk05NHA2d1lCUkFQQzJJSTRzSjBJNjd0WEhaUTRhM2Z5VwpGenM2L1p4cXVPVld4T0M5Lzh2QnJvQzV1MmZxdjlULy9FbzFZWmtzNU52VlFSdE9WK1FhV05DKzBuYWI0MzF3CmFST3dMODFhOU5yY0phOUZ3bHN6Sm5XVW9vRzFDTDY0a1dXd2E0LzlwcEVtSW9heFkzRUFwc3YrNmZCYWhKakgKd2dQRThKZjUwRnVPeVpnWUFLb3BMeHlYMmZhVXgwT25rdTRnRWg4Y0FYUHNuWDlib3YwOTVaM2JQMlNZZHJtdwpDSWVRK1M5akVJRW1qS3d6eTRLemI4bFVMSTg2NGduemhjTXN2S1ZacFVXNGlWTzZMdW1oVGNWcDhVRjFXd2tLCm93SURBUUFCbzJZd1pEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0VnWURWUjBUQVFIL0JBZ3dCZ0VCL3dJQkFqQWQKQmdOVkhRNEVGZ1FVTzU1Y3c5MHpkN01vNmlxQm1vUTQ4VUFQcE04d0h3WURWUjBqQkJnd0ZvQVVPNTVjdzkwegpkN01vNmlxQm1vUTQ4VUFQcE04d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFGaS94SUN3SHJGMHUzREFpRkozCkp4UzFaeCtibUZmWW1oMktKRGVZQU9jVXp3RXRJRXIwWGRja3FyOU9EdWFpM0x6dEl0NXZ3cTFFQTBVNktiYUsKSHFiT0l4NXhSeVNnc1NTWkJxdGVrTXdsRTRDT0Q4ZjVBUVVnY3RMOXZqc1B0Z1I1RTBpOCtNaEZKRDQ0dzM4RgpKQkRiTm5yb3VxZHlRT2lhZXRLb2FNNk1oQm5mYW1BbFpFeGVhRlF2cG50alZsS3pDbHprV1JONStkNFZXbnBWClB6UXpDSFlHdFJrSzMyZzRzSkFCWUVlcWp3bGpKTWhKS2duaHNvUTFROXp0NENTa25oaUV2bnp2K3NONkttRlkKb25JdUg2M2tsN2ZXNUQrVjI5aEtmamZKUzg2OE5QQ1lYWEhNbGFRN0VxVnd1d25CL2JWeHpYaWhSaWpXUmhwcgo4Nm89Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.100.16:6443               ## change this line
  name: kubernetes
contexts:
...

[root@k8s-master2 ~]# vi /opt/kubernetes/cfg/kubelet.conf
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=k8s-master2 \              ## change this line
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
[root@k8s-master2 ~]# vi /opt/kubernetes/cfg/kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master2               ## change this line
clusterCIDR: 10.244.0.0/16
[root@k8s-master2 ~]# vi ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR2akNDQXFhZ0F3SUJBZ0lVYkRROHZIM3NwV1JPanZ1R1NXL2VJVHJQRHJBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEREQUtCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEl4TVRFeE5qRXlNRE13TUZvWERUSTJNVEV4TlRFeU1ETXdNRm93WlRFTE1Ba0cKQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbGFXcHBibWN4RERBSwpCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByZFdKbGNtNWxkR1Z6Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBcUs4VzNRNDdleThjZnhMY3FIRDYKQVJ6dExxdGZvbGdieDZVMXMrNHMrVjQrV3FHbmtBWk05NHA2d1lCUkFQQzJJSTRzSjBJNjd0WEhaUTRhM2Z5VwpGenM2L1p4cXVPVld4T0M5Lzh2QnJvQzV1MmZxdjlULy9FbzFZWmtzNU52VlFSdE9WK1FhV05DKzBuYWI0MzF3CmFST3dMODFhOU5yY0phOUZ3bHN6Sm5XVW9vRzFDTDY0a1dXd2E0LzlwcEVtSW9heFkzRUFwc3YrNmZCYWhKakgKd2dQRThKZjUwRnVPeVpnWUFLb3BMeHlYMmZhVXgwT25rdTRnRWg4Y0FYUHNuWDlib3YwOTVaM2JQMlNZZHJtdwpDSWVRK1M5akVJRW1qS3d6eTRLemI4bFVMSTg2NGduemhjTXN2S1ZacFVXNGlWTzZMdW1oVGNWcDhVRjFXd2tLCm93SURBUUFCbzJZd1pEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0VnWURWUjBUQVFIL0JBZ3dCZ0VCL3dJQkFqQWQKQmdOVkhRNEVGZ1FVTzU1Y3c5MHpkN01vNmlxQm1vUTQ4VUFQcE04d0h3WURWUjBqQkJnd0ZvQVVPNTVjdzkwegpkN01vNmlxQm1vUTQ4VUFQcE04d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFGaS94SUN3SHJGMHUzREFpRkozCkp4UzFaeCtibUZmWW1oMktKRGVZQU9jVXp3RXRJRXIwWGRja3FyOU9EdWFpM0x6dEl0NXZ3cTFFQTBVNktiYUsKSHFiT0l4NXhSeVNnc1NTWkJxdGVrTXdsRTRDT0Q4ZjVBUVVnY3RMOXZqc1B0Z1I1RTBpOCtNaEZKRDQ0dzM4RgpKQkRiTm5yb3VxZHlRT2lhZXRLb2FNNk1oQm5mYW1BbFpFeGVhRlF2cG50alZsS3pDbHprV1JONStkNFZXbnBWClB6UXpDSFlHdFJrSzMyZzRzSkFCWUVlcWp3bGpKTWhKS2duaHNvUTFROXp0NENTa25oaUV2bnp2K3NONkttRlkKb25JdUg2M2tsN2ZXNUQrVjI5aEtmamZKUzg2OE5QQ1lYWEhNbGFRN0VxVnd1d25CL2JWeHpYaWhSaWpXUmhwcgo4Nm89Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.100.16:6443               ## change this line
  name: kubernetes
contexts:

Start the services and enable them at boot

### Run on master2
[root@k8s-master2 ~]# systemctl daemon-reload
[root@k8s-master2 ~]# systemctl start kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
[root@k8s-master2 ~]# systemctl enable kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
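After starting everything, it is worth confirming that each unit is actually running. A small read-only loop (a sketch; it prints `unknown` where systemd is not available, so it is safe to run anywhere):

```shell
# Report the state of the five Kubernetes services on this node
for svc in kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy; do
  state=$(systemctl is-active "$svc" 2>/dev/null)
  echo "$svc: ${state:-unknown}"
done
```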

Check the cluster status

### Run on master1
[root@k8s-master01 k8s]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
scheduler            Healthy   ok
controller-manager   Healthy   ok

# Check pending certificate signing requests
[root@k8s-master01 k8s]# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-U5GnB9VorCdEn-Z-Xlt6jJRhhIlep1s-b99u9ZBdVgw   3m29s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve the request
[root@k8s-master01 k8s]# kubectl certificate approve node-csr-U5GnB9VorCdEn-Z-Xlt6jJRhhIlep1s-b99u9ZBdVgw
certificatesigningrequest.certificates.k8s.io/node-csr-U5GnB9VorCdEn-Z-Xlt6jJRhhIlep1s-b99u9ZBdVgw approved

# Check the nodes
[root@k8s-master01 k8s]# kubectl get node
NAME          STATUS     ROLES    AGE   VERSION
k8s-master1   Ready      <none>   15h   v1.20.5
k8s-master2   NotReady   <none>   62s   v1.20.5
k8s-node01    Ready      <none>   13h   v1.20.5

### The NotReady status was because the firewall and SELinux had not been disabled on the master2 node
[root@k8s-master01 k8s]# kubectl get node
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    <none>   15h     v1.20.5
k8s-master2   Ready    <none>   9m27s   v1.20.5
k8s-node01    Ready    <none>   14h     v1.20.5

At this point, the new Master node has been scaled out successfully.
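When several nodes join at once, each pending CSR can be approved in one pass instead of one `kubectl certificate approve` per name. The real command would be `kubectl get csr -o name | xargs -r -n1 kubectl certificate approve` (requires a reachable cluster, and you should only approve CSRs you expect). The pipeline shape is demonstrated here with `printf` and hypothetical CSR names standing in for kubectl:

```shell
# printf stands in for `kubectl get csr -o name`; node-csr-aaa/bbb are made up
printf '%s\n' \
  certificatesigningrequest.certificates.k8s.io/node-csr-aaa \
  certificatesigningrequest.certificates.k8s.io/node-csr-bbb \
  | xargs -r -n1 echo would-approve
```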

Closing words

Keep at it and stay persistent; let's keep working hard together.

