Building and Trying Out Docker Swarm and Kubernetes Clusters
I. Building and Trying Out a Docker Swarm Cluster
Docker Swarm Setup
1. OS Configuration
[root@vm1 ~]# ip -br a | grep enp0s8 | awk '{print $3}'
192.168.50.100/24
[root@vm2 ~]# ip -br a | grep enp0s8 | awk '{print $3}'
192.168.50.120/24
2. Install Docker
[root@vm1 ~]# cat install-docker.sh
yum remove docker* -y
rm -rf /var/lib/docker
yum -y install wget
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io -y
docker --version
systemctl enable docker --now
docker run hello-world
[root@vm1 ~]# bash install-docker.sh
[root@vm1 ~]# curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
[root@vm1 ~]# chmod +x /usr/local/bin/docker-compose
[root@vm1 ~]# docker -v
Docker version 20.10.12, build e91ed57
[root@vm1 ~]# docker-compose -v
docker-compose version 1.29.2, build 5becea4c
[root@vm2 ~]# docker -v
Docker version 20.10.12, build e91ed57
[root@vm2 ~]# docker-compose -v
docker-compose version 1.29.2, build 5becea4c
3. Configure the docker0 Network
[root@vm1 ~]# docker network inspect bridge -f "{{.IPAM.Config}}"
[{192.168.80.0/24 192.168.80.1 map[]}]
[root@vm2 ~]# docker network inspect bridge -f "{{.IPAM.Config}}"
[{192.168.90.0/24 192.168.90.1 map[]}]
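The two hosts' docker0 bridges sit on different subnets (192.168.80.0/24 on vm1, 192.168.90.0/24 on vm2), which avoids address clashes between the hosts. One way to pin the bridge subnet per host — a sketch that assumes the subnets shown above and a live Docker daemon — is the `bip` key in /etc/docker/daemon.json:

```shell
# Sketch: pin the docker0 bridge address on vm1 via daemon.json.
# The "bip" value is an assumption matching the inspect output above.
cat > /etc/docker/daemon.json <<'EOF'
{
  "bip": "192.168.80.1/24"
}
EOF
systemctl restart docker
# Confirm the bridge now uses the configured subnet:
docker network inspect bridge -f "{{.IPAM.Config}}"
```

On vm2 the same file would carry `"bip": "192.168.90.1/24"`.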
Building the Swarm Cluster
1. Initialization
[root@vm1 ~]# docker swarm init --advertise-addr 192.168.50.100
Swarm initialized: current node (kdcrkd6sqteevq9jgy70fd0h0) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-0h1lwim8eh8ygwmh9mg3chuep8lf3dh0z8iv79v3km7itl28ww-4yt8thj4v7a2sj45iab6v8qqj 192.168.50.100:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
[root@vm1 ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-0h1lwim8eh8ygwmh9mg3chuep8lf3dh0z8iv79v3km7itl28ww-4yt8thj4v7a2sj45iab6v8qqj 192.168.50.100:2377
[root@vm1 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0* vm1 Ready Active Leader 20.10.12
2. Add a Worker to the Swarm Cluster
[root@vm2 ~]# docker swarm join --token SWMTKN-1-0h1lwim8eh8ygwmh9mg3chuep8lf3dh0z8iv79v3km7itl28ww-4yt8thj4v7a2sj45iab6v8qqj 192.168.50.100:2377
This node joined a swarm as a worker.
[root@vm2 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 * vm1 Ready Active Leader 20.10.12
4hh92oj2meotbi0etnje15bzq vm2 Ready Active 20.10.12
3. Add Labels
(1) Set labels
[root@vm1 ~]# docker node update --label-add name=swarm-master-1 vm1
[root@vm1 ~]# docker node update --label-add name=swarm-master-2 vm2
[root@vm1 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 * vm1 Ready Active Leader 20.10.12
4hh92oj2meotbi0etnje15bzq vm2 Ready Active 20.10.12
(2) View labels
[root@vm1 ~]# docker node inspect vm1 -f "{{.Spec.Labels}}"
map[name:swarm-master-1]
[root@vm1 ~]# docker node inspect vm2 -f "{{.Spec.Labels}}"
map[HOSTNAME:master-2 name:master-2]
(3) Add a label
[root@vm1 ~]# docker node update --help
Usage: docker node update [OPTIONS] NODE
Update a node

Options:
      --availability string   Availability of the node ("active"|"pause"|"drain")
      --label-add list        Add or update a node label (key=value)
      --label-rm list         Remove a node label if exists
      --role string           Role of the node ("worker"|"manager")
[root@vm1 ~]# docker node update --label-add name=master-2
[root@vm1 ~]# echo $?
[root@vm1 ~]#
[root@vm1 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 * vm1 Ready Active Leader 20.10.12
4hh92oj2meotbi0etnje15bzq vm2 Ready Active 20.10.12
[root@vm1 ~]# docker node promote master-2
[root@vm1 ~]# docker node update --label-add HOSTNAME=master-2 vm2
[root@vm1 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 * vm1 Ready Active Leader 20.10.12
4hh92oj2meotbi0etnje15bzq vm2 Ready Active 20.10.12
[root@vm1 ~]# docker node promote master-2
4. Promote a Worker to Manager
[root@vm1 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 * vm1 Ready Active Leader 20.10.12
4hh92oj2meotbi0etnje15bzq vm2 Ready Active Reachable 20.10.12
[root@vm2 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 vm1 Ready Active Leader 20.10.12
4hh92oj2meotbi0etnje15bzq * vm2 Ready Active Reachable 20.10.12
5. View Node Information
[root@vm1 ~]# docker node inspect vm2 -f "{{.Spec.Labels}}"
map[HOSTNAME:master-2 name:master-2]
[root@vm1 ~]# docker node inspect vm2
6. Create the Network
[root@vm1 ~]# docker network create -d overlay --subnet=192.168.82.0/24 --gateway=192.168.82.1 --attachable swarm-net
xywzrf7ftwenaxbu0zmewh183
[root@vm1 ~]# docker network inspect swarm-net -f "{{.IPAM}}"
{default map[] [{192.168.82.0/24 192.168.82.1 map[]}]}
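Because swarm-net was created with `--attachable`, standalone (non-service) containers on any swarm node can also join it, which gives a quick cross-node connectivity check. A sketch — the container names here are illustrative, and it assumes the cluster above is running:

```shell
# On vm1: start a standalone nginx container attached to the overlay.
docker run -d --name net-test-1 --network swarm-net nginx

# On vm2: a throwaway busybox container on the same overlay can reach
# net-test-1 by name via the overlay's embedded DNS.
docker run --rm --network swarm-net busybox ping -c 2 net-test-1
```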
7. Create a Service and Verify
(1) Create
[root@vm1 ~]# docker service create --replicas 3 -p 10080:80 --network swarm-net --name nginx-cluster nginx
r4v6w094yxl370bynyzghh37a
overall progress: 3 out of 3 tasks
1/3: running [==================================================>]
2/3: running [==================================================>]
3/3: running [==================================================>]
verify: Service converged
[root@vm1 ~]#
(2) View
[root@vm1 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
r4v6w094yxl3 nginx-cluster replicated 3/3 nginx:latest *:10080->80/tcp
[root@vm1 ~]# ss -ntl | grep 10080
LISTEN 0 128 *:10080 *:*
[root@vm1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3ddd0e479de6 nginx:latest "/docker-entrypoint.…" 7 minutes ago Up 7 minutes 80/tcp
nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l
[root@vm1 ~]# docker port nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l
[root@vm1 ~]# echo $?
0
[root@vm1 ~]#
(3) Access
[root@vm1 ~]# curl 192.168.50.100:10080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
8. Check the Load-Balancing Distribution from a Single Host
(1) Modify the default web pages
[root@vm1 ~]# docker exec -it nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l bash
root@3ddd0e479de6:/# echo '#1 in master 1' > /usr/share/nginx/html/index.html
[root@vm2 ~]# docker exec -it nginx-cluster.2.gp7szi6348r0gfakr6v1i42ga bash
root@0d6709372322:/# echo '#2 in master 2' > /usr/share/nginx/html/index.html
root@0d6709372322:/#
[root@vm2 ~]# docker exec -it nginx-cluster.3.yofvioldzci3k4geve7lykyrs bash
root@6b1e246bdc34:/# echo '#3 in master 2' > /usr/share/nginx/html/index.html
(2) Access test
Step 1
[root@vm2 ~]# curl 192.168.50.120:10080
#3 in master 2
[root@vm2 ~]# curl 192.168.50.120:10080
#1 in master 1
[root@vm2 ~]# curl 192.168.50.120:10080
#2 in master 2
[root@vm2 ~]# curl 192.168.50.120:10080
#3 in master 2
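The alternating responses above come from the swarm routing mesh, which rotates requests across all three replicas regardless of which node receives them. A small loop against the same endpoint makes the rotation easier to see (assumes the service above is still running):

```shell
# Hit the published port repeatedly; the routing mesh should cycle
# through "#1 in master 1", "#2 in master 2", and "#3 in master 2".
for i in $(seq 1 6); do
  curl -s 192.168.50.120:10080
done
```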
9. Verify HA
[root@vm2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6b1e246bdc34 nginx:latest "/docker-entrypoint.…" 18 minutes ago Up 18 minutes 80/tcp nginx-cluster.3.yofvioldzci3k4geve7lykyrs
0d6709372322 nginx:latest "/docker-entrypoint.…" 18 minutes ago Up 18 minutes 80/tcp nginx-cluster.2.gp7szi6348r0gfakr6v1i42ga
[root@vm2 ~]#
[root@vm2 ~]# systemctl stop docker
Warning: Stopping docker.service, but it can still be activated by: docker.socket
[root@vm2 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
[root@vm1 ~]# docker node ls
Error response from daemon: rpc error: code = DeadlineExceeded desc = context deadline exceeded
[root@vm2 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2022-01-23 13:22:38 JST; 28s ago
[root@vm1 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 * vm1 Ready Active Leader 20.10.12
4hh92oj2meotbi0etnje15bzq vm2 Ready Active Reachable 20.10.12
[root@vm2 ~]# shutdown -h now
[root@vm1 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 * vm1 Ready Active Leader 20.10.12
4hh92oj2meotbi0etnje15bzq vm2 Ready Active Unreachable 20.10.12
[root@vm1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
acc40b31df3f nginx:latest "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 80/tcp nginx-cluster.3.ei48wkjvtmn53mfsjthlb52ef
d4635d8f2322 nginx:latest "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 80/tcp nginx-cluster.2.yyc2fmh73p23adbu44auuzf7r
3ddd0e479de6 nginx:latest "/docker-entrypoint.…" 31 minutes ago Up 31 minutes 80/tcp nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l
[root@vm1 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 * vm1 Ready Active Leader 20.10.12
4hh92oj2meotbi0etnje15bzq vm2 Ready Active Reachable 20.10.12
lp4i21pj0sij9yz81f7u8dzy7 vm3 Ready Active 20.10.12
[root@vm1 ~]# ip -br a | grep enp0s8 | awk '{print $3}'
192.168.50.100/24
[root@vm2 ~]# ip -br a | grep enp0s8 | awk '{print $3}'
192.168.50.120/24
[root@vm3 ~]# ip -br a | grep enp0s8 | awk '{print $3}'
192.168.50.130/24
[root@vm1 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 * vm1 Ready Active Leader 20.10.12
4hh92oj2meotbi0etnje15bzq vm2 Ready Active Reachable 20.10.12
lp4i21pj0sij9yz81f7u8dzy7 vm3 Ready Active 20.10.12
[root@vm1 ~]# docker service rm r4v6w094yxl3
r4v6w094yxl3
[root@vm1 ~]# docker service create --replicas 6 -p 10080:80 --network swarm-net --name nginx-cluster nginx
kzdm5zhgt1eo9goxy0rjwmklm
overall progress: 6 out of 6 tasks
1/6: running [==================================================>]
2/6: running [==================================================>]
3/6: running [==================================================>]
4/6: running [==================================================>]
5/6: running [==================================================>]
6/6: running [==================================================>]
verify: Service converged
[root@vm1 ~]#
[root@vm1 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
kzdm5zhgt1eo nginx-cluster replicated 6/6 nginx:latest *:10080->80/tcp
[root@vm3 ~]# docker service ls
Error response from daemon: This node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager.
[root@vm1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
31177737f781 nginx:latest "/docker-entrypoint.…" 28 seconds ago Up 25 seconds 80/tcp nginx-cluster.5.9a7wd7yqo4lweayw3352kssru
791b93b799d8 nginx:latest "/docker-entrypoint.…" 28 seconds ago Up 25 seconds 80/tcp nginx-cluster.2.bcrt9chkfrmbjyy30n4g3il71
[root@vm2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4a59afd4d2d2 nginx:latest "/docker-entrypoint.…" 3 seconds ago Up 2 seconds 80/tcp nginx-cluster.1.c46oo5nzbchhbn6v07rft9d5b
bac746d67e4f nginx:latest "/docker-entrypoint.…" 4 seconds ago Up 2 seconds 80/tcp nginx-cluster.4.71wn1fz6yojbzunktws2qqslg
[root@vm3 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f0985bc21d07 nginx:latest "/docker-entrypoint.…" 7 seconds ago Up 5 seconds 80/tcp nginx-cluster.6.vvykeb5g04gihc5jg4la26903
d14c20577a85 nginx:latest "/docker-entrypoint.…" 7 seconds ago Up 5 seconds 80/tcp nginx-cluster.3.qwccv5u0jlwfp5txp5593d8cc
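Above, the replica count was changed by removing and recreating the service. An existing service can also be resized in place, which avoids downtime; a sketch using the same service name:

```shell
# Scale the running service to 6 replicas without recreating it:
docker service scale nginx-cluster=6

# Equivalent form via service update:
docker service update --replicas 6 nginx-cluster
```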
10. Delete a Container
[root@vm1 ~]# docker rm $(docker stop nginx-cluster-2.2.qs4tlxp1ilbtyd95xhkusjg48)
nginx-cluster-2.2.qs4tlxp1ilbtyd95xhkusjg48
[root@vm2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
719e0941efa4 nginx:latest "/docker-entrypoint.…" 20 seconds ago Up 15 seconds 80/tcp nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
11. Delete a Service
[root@vm1 ~]# docker service --help
Usage: docker service COMMAND
Manage services
Commands:
  create      Create a new service
  inspect     Display detailed information on one or more services
  logs        Fetch the logs of a service or task
  ls          List services
  ps          List the tasks of one or more services
  rm          Remove one or more services
  rollback    Revert changes to a service's configuration
  scale       Scale one or multiple replicated services
  update      Update a service
[root@vm1 ~]# docker service rm nginx-cluster
nginx-cluster
[root@vm1 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
sfcq2vc5orxs nginx-cluster-2 replicated 2/2 nginx:latest *:10081->80/tcp
[root@vm1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@vm1 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@vm1 ~]#
[root@vm2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
719e0941efa4 nginx:latest "/docker-entrypoint.…" 2 minutes ago Up About a minute 80/tcp nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
[root@vm2 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
719e0941efa4 nginx:latest "/docker-entrypoint.…" 2 minutes ago Up About a minute 80/tcp nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
12. Stop a Container Manually
[root@vm2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
719e0941efa4 nginx:latest "/docker-entrypoint.…" 4 minutes ago Up 4 minutes 80/tcp nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
[root@vm2 ~]# docker stop nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
[root@vm2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@vm2 ~]#
[root@vm2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
598c665b7a8e nginx:latest "/docker-entrypoint.…" 19 seconds ago Up 13 seconds 80/tcp nginx-cluster-2.2.qmgbury0aixey2sozive6377l
[root@vm2 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
598c665b7a8e nginx:latest "/docker-entrypoint.…" 24 seconds ago Up 18 seconds 80/tcp nginx-cluster-2.2.qmgbury0aixey2sozive6377l
719e0941efa4 nginx:latest "/docker-entrypoint.…" 5 minutes ago Exited (0) 24 seconds ago nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
[root@vm2 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
sfcq2vc5orxs nginx-cluster-2 replicated 2/2 nginx:latest *:10081->80/tcp
[root@vm2 ~]# docker start nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
[root@vm2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
598c665b7a8e nginx:latest "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 80/tcp nginx-cluster-2.2.qmgbury0aixey2sozive6377l
719e0941efa4 nginx:latest "/docker-entrypoint.…" 7 minutes ago Up 1 second 80/tcp nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
[root@vm2 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
sfcq2vc5orxs nginx-cluster-2 replicated 2/2 nginx:latest *:10081->80/tcp
[root@vm2 ~]# docker service rm $(docker service ls -q)
sfcq2vc5orxs
[root@vm2 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
[root@vm2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@vm2 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@vm2 ~]#
13. Leave the Swarm
[root@vm3 ~]# docker swarm leave
Error response from daemon: You are attempting to leave the swarm on a node that is participating as a manager. The only way to restore a swarm that has lost consensus is to reinitialize it with `--force-new-cluster`. Use `--force` to suppress this message.
[root@vm3 ~]# docker swarm leave --force
[root@vm3 ~]# docker node ls
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
14. Dismantle the Swarm
[root@vm1 ~]# docker swarm leave --force
Node left the swarm.
[root@vm1 ~]# docker node ls
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
II. Building and Trying Out a Kubernetes Cluster
1. Install Docker
wget -P /etc/yum.repos.d/ https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-18.06.0.ce-3.el7.x86_64
systemctl start docker.service
systemctl enable docker.service
2. Install Kubernetes
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.12.3
yum install -y kubeadm-1.12.3
yum install -y kubectl-1.12.3
3. Obtain the Images
docker save -o k8s-1.12.3.tar k8s.gcr.io/kube-proxy:v1.12.3 k8s.gcr.io/kube-apiserver:v1.12.3 k8s.gcr.io/kube-controller-manager:v1.12.3 k8s.gcr.io/kube-scheduler:v1.12.3 k8s.gcr.io/etcd:3.2.24 k8s.gcr.io/coredns:1.2.2 quay.io/coreos/flannel:v0.10.0-amd64 k8s.gcr.io/pause:3.1
docker load -i k8s-1.12.3.tar
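The save/load step assumes the images were already pulled on a machine that can reach k8s.gcr.io. When that registry is unreachable, a common workaround is to pull from a mirror and retag; a sketch, where the mirror repository name is an assumption to be adjusted for whichever mirror you use:

```shell
# Hypothetical mirror repository; substitute your own.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers

for img in kube-proxy:v1.12.3 kube-apiserver:v1.12.3 \
           kube-controller-manager:v1.12.3 kube-scheduler:v1.12.3 \
           etcd:3.2.24 coredns:1.2.2 pause:3.1; do
  docker pull "$MIRROR/$img"               # fetch from the mirror
  docker tag  "$MIRROR/$img" "k8s.gcr.io/$img"  # retag to the name kubeadm expects
  docker rmi  "$MIRROR/$img"               # drop the mirror tag
done
```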
4. Disable Swap on the Nodes
Step 1
swapoff -a
Step 2
sysctl -p
Step 3
vim /etc/fstab
5. Enable IP Forwarding and iptables Bridge Filtering
vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
6. Initialize the Master Node
kubeadm init --kubernetes-version=v1.12.3 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.6.6.110
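On success, `kubeadm init` prints a few follow-up commands that let kubectl talk to the new cluster as a regular user; roughly:

```shell
# Copy the admin kubeconfig into the current user's home (as printed
# by kubeadm init) and verify the API server is reachable.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
```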
7. Join the Worker Nodes
kubeadm join 10.6.6.192:6443 --token afbkdo.6335xh1w0lv7odbh --discovery-token-ca-cert-hash sha256:b9abe5a668609f0225c8bb3ecba3a70a0be370f90905fcce79a6d783bbd0aeef
8. Configure Whether the Master Participates in Scheduling (the first command removes the master taint so it can run workloads; the second restores the NoSchedule taint)
kubectl taint nodes master.k8s node-role.kubernetes.io/master-
kubectl taint nodes master.k8s node-role.kubernetes.io/master=:NoSchedule
9. Enable Insecure Port Access (kube-apiserver flags)
--secure-port=6443
--insecure-bind-address=0.0.0.0
--insecure-port=8080
10. Configure Certificate Renewal (kube-controller-manager flags)
--kubeconfig=/etc/kubernetes/controller-manager.conf
--experimental-cluster-signing-duration=87600h0m0s
--feature-gates=RotateKubeletServerCertificate=true