Docker-1.12 Swarm Mode
Official documentation:
https://docs.docker.com/engine/swarm/
Environment:
CentOS 7.2
docker-1.12.0

Manager1: 192.168.8.201
Manager2: 192.168.8.202
Manager3: 192.168.8.203
Worker1: 192.168.8.101
Worker2: 192.168.8.102
Worker3: 192.168.8.103

This setup uses a swarm cluster of 3 managers and 3 workers.
Note: more managers does not mean better performance; Docker recommends at most seven manager nodes:
  • A three-manager swarm tolerates a maximum loss of one manager.
  • A five-manager swarm tolerates a maximum simultaneous loss of two manager nodes.
  • An N manager cluster will tolerate the loss of at most (N-1)/2 managers.
  • Docker recommends a maximum of seven manager nodes for a swarm.

    Important Note: Adding more managers does NOT mean increased scalability or higher performance. In general, the opposite is true.
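A quick worked example of the quorum math: with N = 3 managers, a majority of 2 must stay up, so the swarm tolerates floor((3-1)/2) = 1 manager failure; with N = 7 it tolerates floor((7-1)/2) = 3.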

https://docs.docker.com/engine/swarm/how-swarm-mode-works/nodes/
https://docs.docker.com/engine/swarm/admin_guide/
https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/
I. Docker
See the separate article on installing Docker on CentOS 6/7.
II. Create the swarm cluster
1. Initialize the swarm
https://docs.docker.com/engine/swarm/swarm-tutorial/

[root@node4 ~]# docker swarm init --advertise-addr 192.168.8.201
Swarm initialized: current node (avex4e0pezsywuzb4aqjm5zf1) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-4o56a1xdy98sbpy18rtbxezsb2olre9l1im95q2tbepkvc75u2-7mk2byc266iibj36ev64ljtla \
    192.168.8.201:2377

To add a manager to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-4o56a1xdy98sbpy18rtbxezsb2olre9l1im95q2tbepkvc75u2-dte6sh24rjwqcoq34w1uzplaa \
    192.168.8.201:2377

2. Add swarm nodes
https://docs.docker.com/engine/swarm/swarm-tutorial/add-nodes/
Adding swarm nodes is very simple: the init output above conveniently prints the exact join commands, and they can be printed again at any time with docker swarm join-token worker|manager, as shown next.
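For example, run the following on a manager to reprint the worker join command; it emits the same docker swarm join command shown in the init output above:

docker swarm join-token worker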
Add manager nodes: run the manager join command printed above on node5 and node6.
Add worker nodes: run the worker join command printed above on node1, node2, and node3.

[root@node4 ~]# docker node ls
ID                           HOSTNAME           STATUS  AVAILABILITY  MANAGER STATUS
3cpke3p5dugg1xkvrp2y5719q    node6.example.com  Ready   Active        Reachable
4zqj46vjvr2htekleqha5f45s    node2.example.com  Ready   Active
82v5xgvmvwsecqb4hz6jrr3ci    node1.example.com  Ready   Active
8x4acgqf2h3uq70ulwcs5fs1f    node5.example.com  Ready   Active        Reachable
8y9fut6xh4k85bpp47owi1eol    node3.example.com  Ready   Active
avex4e0pezsywuzb4aqjm5zf1 *  node4.example.com  Ready   Active        Leader

Note: docker node ls can only be run on a manager node.

III. Swarm network
https://docs.docker.com/engine/reference/commandline/network_create/
https://docs.docker.com/engine/userguide/networking/get-started-overlay/
https://docs.docker.com/engine/userguide/networking/work-with-networks/
Cross-host networking requires an overlay network. Starting with docker-1.12.0, initializing a swarm creates the default overlay network, ingress, and newly joined swarm nodes automatically sync all networks, including overlay networks.

[root@node6 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
5dafc94f57c3        bridge              bridge              local
6f60b175cf00        docker_gwbridge     bridge              local
c039662c7673        host                host                local
1mhcme8jygd4        ingress             overlay             swarm
7dbc1f6966e3        none                null                local

[root@node6 ~]# docker network inspect ingress
[
    {
        "Name": "ingress",
        "Id": "1mhcme8jygd43nf82yhbdda4x",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.255.0.0/16",
                    "Gateway": "10.255.0.1"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "ingress-sbox": {
                "Name": "ingress-endpoint",
                "EndpointID": "8258a07deb8387f9d7b0f86512c0f7df42166e9502c1433a8f4293bd2fcd517b",
                "MacAddress": "02:42:0a:ff:00:05",
                "IPv4Address": "10.255.0.5/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "256"
        },
        "Labels": {}
    }
]

Of course, you can also create a custom overlay network:

https://docs.docker.com/engine/userguide/networking/get-started-overlay/

docker network create --driver overlay --subnet 10.0.9.0/24 swarm-network
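A service can then be attached to the custom network when it is created; a minimal sketch (same private-registry image as the rest of this article):

docker service create --name redis --network swarm-network 192.168.8.254:5000/redis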

IV. Manage the swarm

1. Create a service
https://docs.docker.com/engine/swarm/swarm-tutorial/deploy-service/
https://docs.docker.com/engine/swarm/services/

[root@node4 ~]# docker service create --replicas=1 --name redis --network ingress --endpoint-mode vip --publish 6379:6379 192.168.8.254:5000/redis
6r7th5xvkuvetsy6wjgbfs00h

2. Update (scale) a service
https://docs.docker.com/engine/swarm/swarm-tutorial/scale-service/

[root@node4 ~]# docker service scale redis=3
redis scaled to 3
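docker service scale is shorthand for setting the replica count through docker service update; the following sketch is equivalent:

docker service update --replicas 3 redis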

[root@node4 ~]# docker service ps redis
ID                         NAME     IMAGE                     NODE               DESIRED STATE  CURRENT STATE            ERROR
31pjo1yrxygkq10i8ep639eat  redis.1  192.168.8.254:5000/redis  node4.example.com  Running        Running 34 seconds ago
alfiv9kyo4mxkm6h5j1ggvn93  redis.2  192.168.8.254:5000/redis  node6.example.com  Running        Preparing 5 seconds ago
94yf3q1sqwviazu69nw2fo5j6  redis.3  192.168.8.254:5000/redis  node3.example.com  Running        Running 4 seconds ago

3. Delete a service

[root@node4 ~]# docker service rm redis
redis

4. Rolling update
https://docs.docker.com/engine/swarm/swarm-tutorial/rolling-update/

[root@node4 ~]# docker service create --replicas 6 --name redis --update-parallelism 2 --update-delay 10s --update-failure-action continue --network ingress --endpoint-mode vip --publish 6379:6379 192.168.8.254:5000/redis
7w0f6dcjdwtbonxe180rdfbzp

[root@node4 ~]# docker service ls
ID            NAME   REPLICAS  IMAGE                     COMMAND
7w0f6dcjdwtb  redis  6/6       192.168.8.254:5000/redis

[root@node4 ~]# docker service ps redis
ID                         NAME     IMAGE                     NODE               DESIRED STATE  CURRENT STATE           ERROR
0cqgbrpk7cpcwsute6swnnxb6  redis.1  192.168.8.254:5000/redis  node6.example.com  Running        Running 14 seconds ago
b89fh3gslxb4031erpisqgk52  redis.2  192.168.8.254:5000/redis  node4.example.com  Running        Running 15 seconds ago
dmsxpmbrujyhpf468fiw04789  redis.3  192.168.8.254:5000/redis  node5.example.com  Running        Running 15 seconds ago
2rnh7gloduoux8fcuubthshyz  redis.4  192.168.8.254:5000/redis  node3.example.com  Running        Running 15 seconds ago
18z4ogihy5xaaki3li13vfrbi  redis.5  192.168.8.254:5000/redis  node1.example.com  Running        Running 15 seconds ago
7d56gdx74gzdwcsvs7s6qksy9  redis.6  192.168.8.254:5000/redis  node2.example.com  Running        Running 15 seconds ago

[root@node4 ~]# docker service inspect --pretty redis
ID:           7w0f6dcjdwtbonxe180rdfbzp
Name:         redis
Mode:         Replicated
 Replicas:    6
Placement:
UpdateConfig:
 Parallelism: 2
 Delay:       10s
 On failure:  pause
ContainerSpec:
 Image:       192.168.8.254:5000/redis
Resources:

Tip: the image itself can also be rolled out in the same rolling fashion, for example:
docker service update --image 192.168.8.254:5000/redis:3.0.7 redis
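During such an update the scheduler replaces tasks in batches of --update-parallelism, waiting --update-delay between batches; the batches can be watched with the commands already shown above, a sketch:

docker service ps redis
docker service inspect --pretty redis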
5. Zero-downtime maintenance of a docker node

[root@node4 ~]# docker service ps nginx
ID                         NAME      IMAGE                     NODE               DESIRED STATE  CURRENT STATE               ERROR
bfx5sv59nzsfutfjlcaz98gtt  nginx.1   192.168.8.254:5000/nginx  node4.example.com  Running        Running about a minute ago
c4i1sepzfd0v6p9rgncxyud6t  nginx.2   192.168.8.254:5000/nginx  node3.example.com  Running        Running 2 minutes ago
1zm2fpa4ufh0q2jc8iywor4iw  nginx.3   192.168.8.254:5000/nginx  node6.example.com  Running        Running about a minute ago
f2u8ywqoccekr8qz5pvle2kt2  nginx.4   192.168.8.254:5000/nginx  node1.example.com  Running        Running 2 minutes ago
f2w57lnt1495grq366b34eypd  nginx.5   192.168.8.254:5000/nginx  node4.example.com  Running        Running about a minute ago
7nu5a435xch7jux3lqlyoz095  nginx.6   192.168.8.254:5000/nginx  node2.example.com  Running        Running 2 minutes ago
2v10rpt7bzttjc6q6wpjf32d8  nginx.7   192.168.8.254:5000/nginx  node5.example.com  Running        Running about a minute ago
d5i7uxtv2hen5nu06hagrjflt  nginx.8   192.168.8.254:5000/nginx  node3.example.com  Running        Running 2 minutes ago
cch34fphg8coce9gn0umetkud  nginx.9   192.168.8.254:5000/nginx  node2.example.com  Running        Running 2 minutes ago
9afmuacg5d9w53vokwup4wb0i  nginx.10  192.168.8.254:5000/nginx  node5.example.com  Running        Running about a minute ago
3mwnc4sb61t94bqcbw55xfuqi  nginx.11  192.168.8.254:5000/nginx  node4.example.com  Running        Running 59 seconds ago
amx18cuwwhsei4j19jsjsuutq  nginx.12  192.168.8.254:5000/nginx  node3.example.com  Running        Running about a minute ago
3tjgqroqhw8ty6o5yr1zyh8e8  nginx.13  192.168.8.254:5000/nginx  node6.example.com  Running        Running 59 seconds ago
7y2jtgvyg9shs1lvx6ku8se46  nginx.14  192.168.8.254:5000/nginx  node1.example.com  Running        Running 59 seconds ago
6tpetexucqv3li2rtou6vh6dr  nginx.15  192.168.8.254:5000/nginx  node1.example.com  Running        Running 58 seconds ago
i. Drain the node that needs maintenance

[root@node4 ~]# docker node update --availability drain node1.example.com
node1.example.com

[root@node4 ~]# docker service ps nginx
ID                         NAME          IMAGE                     NODE               DESIRED STATE  CURRENT STATE               ERROR
bfx5sv59nzsfutfjlcaz98gtt  nginx.1       192.168.8.254:5000/nginx  node4.example.com  Running        Running 2 minutes ago
c4i1sepzfd0v6p9rgncxyud6t  nginx.2       192.168.8.254:5000/nginx  node3.example.com  Running        Running 3 minutes ago
1zm2fpa4ufh0q2jc8iywor4iw  nginx.3       192.168.8.254:5000/nginx  node6.example.com  Running        Running 2 minutes ago
02qmxvoa9tjeyvtvetantm9ki  nginx.4       192.168.8.254:5000/nginx  node5.example.com  Running        Running 10 seconds ago
f2u8ywqoccekr8qz5pvle2kt2   \_ nginx.4   192.168.8.254:5000/nginx  node1.example.com  Shutdown       Shutdown 11 seconds ago
f2w57lnt1495grq366b34eypd  nginx.5       192.168.8.254:5000/nginx  node4.example.com  Running        Running 2 minutes ago
7nu5a435xch7jux3lqlyoz095  nginx.6       192.168.8.254:5000/nginx  node2.example.com  Running        Running 3 minutes ago
2v10rpt7bzttjc6q6wpjf32d8  nginx.7       192.168.8.254:5000/nginx  node5.example.com  Running        Running 2 minutes ago
d5i7uxtv2hen5nu06hagrjflt  nginx.8       192.168.8.254:5000/nginx  node3.example.com  Running        Running 3 minutes ago
cch34fphg8coce9gn0umetkud  nginx.9       192.168.8.254:5000/nginx  node2.example.com  Running        Running 3 minutes ago
9afmuacg5d9w53vokwup4wb0i  nginx.10      192.168.8.254:5000/nginx  node5.example.com  Running        Running 2 minutes ago
3mwnc4sb61t94bqcbw55xfuqi  nginx.11      192.168.8.254:5000/nginx  node4.example.com  Running        Running about a minute ago
amx18cuwwhsei4j19jsjsuutq  nginx.12      192.168.8.254:5000/nginx  node3.example.com  Running        Running about a minute ago
3tjgqroqhw8ty6o5yr1zyh8e8  nginx.13      192.168.8.254:5000/nginx  node6.example.com  Running        Running about a minute ago
34xw07c0bsptc81gv5trt5uv7  nginx.14      192.168.8.254:5000/nginx  node2.example.com  Running        Running 10 seconds ago
7y2jtgvyg9shs1lvx6ku8se46   \_ nginx.14  192.168.8.254:5000/nginx  node1.example.com  Shutdown       Shutdown 11 seconds ago
5q5upnii1tgypmml2ebjr6hcu  nginx.15      192.168.8.254:5000/nginx  node5.example.com  Running        Running 10 seconds ago
6tpetexucqv3li2rtou6vh6dr   \_ nginx.15  192.168.8.254:5000/nginx  node1.example.com  Shutdown       Shutdown 11 seconds ago

[root@node4 ~]# docker node ls
ID                           HOSTNAME           STATUS  AVAILABILITY  MANAGER STATUS
3cpke3p5dugg1xkvrp2y5719q    node6.example.com  Ready   Active        Reachable
4zqj46vjvr2htekleqha5f45s    node2.example.com  Ready   Active
82v5xgvmvwsecqb4hz6jrr3ci    node1.example.com  Ready   Drain
8x4acgqf2h3uq70ulwcs5fs1f    node5.example.com  Ready   Active        Reachable
8y9fut6xh4k85bpp47owi1eol    node3.example.com  Ready   Active
avex4e0pezsywuzb4aqjm5zf1 *  node4.example.com  Ready   Active        Leader

ii. Set the node back to active after maintenance

[root@node4 ~]# docker node update --availability active node1.example.com
node1.example.com

[root@node4 ~]# docker node ls
ID                           HOSTNAME           STATUS  AVAILABILITY  MANAGER STATUS
3cpke3p5dugg1xkvrp2y5719q    node6.example.com  Ready   Active        Reachable
4zqj46vjvr2htekleqha5f45s    node2.example.com  Ready   Active
82v5xgvmvwsecqb4hz6jrr3ci    node1.example.com  Ready   Active
8x4acgqf2h3uq70ulwcs5fs1f    node5.example.com  Ready   Active        Reachable
8y9fut6xh4k85bpp47owi1eol    node3.example.com  Ready   Active
avex4e0pezsywuzb4aqjm5zf1 *  node4.example.com  Ready   Active        Leader

For more options, see man docker-node-update; node labels can also be managed this way, as sketched below.
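For example, docker node update can attach labels to a node, which service placement constraints can then match; a sketch (the storage=ssd label is purely illustrative):

docker node update --label-add storage=ssd node1.example.com
docker service create --name redis --constraint 'node.labels.storage == ssd' 192.168.8.254:5000/redis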

6. Converting between Worker and Manager
i. Promote a Worker to Manager

[root@node4 ~]# docker node promote node{2,3}.example.com
Node node2.example.com promoted to a manager in the swarm.
Node node3.example.com promoted to a manager in the swarm.

This is more convenient than running docker node update --role manager for each node.

[root@node4 ~]# docker node ls
ID                           HOSTNAME           STATUS  AVAILABILITY  MANAGER STATUS
3cpke3p5dugg1xkvrp2y5719q    node6.example.com  Ready   Active        Reachable
4zqj46vjvr2htekleqha5f45s    node2.example.com  Ready   Active        Reachable
82v5xgvmvwsecqb4hz6jrr3ci    node1.example.com  Ready   Active
8x4acgqf2h3uq70ulwcs5fs1f    node5.example.com  Ready   Active        Reachable
8y9fut6xh4k85bpp47owi1eol    node3.example.com  Ready   Active        Reachable
avex4e0pezsywuzb4aqjm5zf1 *  node4.example.com  Ready   Active        Leader

After promotion, the former worker nodes become manager nodes.
ii. Demote a Manager to Worker

[root@node4 ~]# docker node demote node{2,3}.example.com
Manager node2.example.com demoted in the swarm.
Manager node3.example.com demoted in the swarm.

Likewise, this is more convenient than running docker node update --role worker for each node.

7. Remove swarm nodes
i. Leave the swarm

[root@node6 ~]# docker swarm leave --force
Node left the swarm.

A manager node must leave with the --force flag; a worker node can leave directly.

[root@node4 ~]# docker node ls
ID                           HOSTNAME           STATUS  AVAILABILITY  MANAGER STATUS
2jw9hn8cmm701ewnsdxrhpfgo    node1.example.com  Ready   Active
3cpke3p5dugg1xkvrp2y5719q    node6.example.com  Down    Active        Unreachable
4zqj46vjvr2htekleqha5f45s    node2.example.com  Ready   Active
8x4acgqf2h3uq70ulwcs5fs1f    node5.example.com  Ready   Active        Reachable
8y9fut6xh4k85bpp47owi1eol    node3.example.com  Ready   Active
avex4e0pezsywuzb4aqjm5zf1 *  node4.example.com  Ready   Active        Leader

Note: after a node leaves the swarm, its status changes to Down. A worker node can then be removed directly with docker node rm; a manager node must first be demoted to a worker before it can be removed.

[root@node4 ~]# docker node rm node6.example.com
Error response from daemon: rpc error: code = 9 desc = node 3cpke3p5dugg1xkvrp2y5719q is a cluster manager and is a member of the raft cluster. It must be demoted to worker before removal

ii. Demote to worker

[root@node4 ~]# docker node demote node6.example.com
Manager node6.example.com demoted in the swarm.

iii. Remove the node

[root@node4 ~]# docker node rm node6.example.com
node6.example.com

Note that a node that is not down cannot be removed without --force:

$ docker node rm node9
Error response from daemon: rpc error: code = 9 desc = node node9 is not down and can't be removed
$ docker node rm --force node9
Node node9 removed from swarm
8. Volume mounts
https://docs.docker.com/engine/swarm/services/
Swarm supports two mount types, volume and bind. Because a bind mount maps a host directory into the container, a container that "drifts" to another node only works if every swarm node has the identical directory, so Docker does not recommend the bind type. The volume type is demonstrated here; a bind-mount sketch follows for reference.
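A bind mount would look like this (a sketch; /data/web is an illustrative host path that would have to exist on every node):

docker service create --name nginx --mount type=bind,src=/data/web,dst=/usr/share/nginx/html 192.168.8.254:5000/nginx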
i. Create a volume

[root@node4 ~]# docker volume create --name data
data

[root@node4 ~]# docker volume ls
DRIVER              VOLUME NAME
local               b555e4b4cdc8a96d657f3f714b180324f8a69fa9152f305ff6b5c5e64c58044d
local               data
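The volume's details, including its path on the host, can be checked with docker volume inspect (for the default local driver the Mountpoint will resemble /var/lib/docker/volumes/data/_data):

docker volume inspect data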

ii. Attach the volume to a service

[root@node4 ~]# docker service create --replicas=3 --name nginx --network ingress --endpoint-mode vip --publish 80:80 --publish 443:443 --mount type=volume,src=data,dst=/opt 192.168.8.254:5000/nginx
978a4hagibhj7jp969th5apvs

[root@node4 ~]# docker service inspect nginx
[
    {
        "ID": "978a4hagibhj7jp969th5apvs",
        "Version": {
            "Index": 407
        },
        "CreatedAt": "2016-09-23T14:44:25.861546182Z",
        "UpdatedAt": "2016-09-23T14:44:25.867551569Z",
        "Spec": {
            "Name": "nginx",
            "TaskTemplate": {
                "ContainerSpec": {
                    "Image": "192.168.8.254:5000/nginx",
                    "Mounts": [
                        {
                            "Type": "volume",
                            "Source": "data",
                            "Target": "/opt",
                            "VolumeOptions": {
                                "DriverConfig": {
                                    "Name": "fake",
                                    "Options": {
                                        "size": "100m",
                                        "uid": "1000"
                                    }
                                }
... ...

9. Access the service via VIP (or DNS)
https://docs.docker.com/engine/swarm/networking/
https://docs.docker.com/engine/swarm/ingress/
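The services in this article use the default VIP endpoint mode. DNS round-robin is the alternative; a sketch (note that in 1.12, dnsrr cannot be combined with --publish, so it is mainly for service discovery inside an overlay network):

docker service create --name redis --endpoint-mode dnsrr 192.168.8.254:5000/redis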

[root@node4 ~]# docker service ps nginx
ID                         NAME     IMAGE                     NODE               DESIRED STATE  CURRENT STATE          ERROR
4sp8inl597sz4ath6nk1ru1e3  nginx.1  192.168.8.254:5000/nginx  node3.example.com  Running        Running 7 minutes ago
8xx1ehn9cawxiudb3su5orl1y  nginx.2  192.168.8.254:5000/nginx  node4.example.com  Running        Running 6 minutes ago
3lbdc4hvf8vq2l12hcaf23wly  nginx.3  192.168.8.254:5000/nginx  node2.example.com  Running        Running 6 minutes ago

[root@node4 ~]# curl node2.example.com -I
HTTP/1.1 200 OK
Server: nginx/1.11.3
Date: Fri, 23 Sep 2016 15:08:37 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 26 Jul 2016 14:54:48 GMT
Connection: keep-alive
ETag: "579779b8-264"
Accept-Ranges: bytes

[root@node4 ~]# curl node4.example.com -I
HTTP/1.1 200 OK
Server: nginx/1.11.3
Date: Fri, 23 Sep 2016 15:08:52 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 26 Jul 2016 14:54:48 GMT
Connection: keep-alive
ETag: "579779b8-264"
Accept-Ranges: bytes

As shown, the 3 containers run on node2, node3, and node4, and thanks to the routing mesh the service answers on every node directly. There is still no single entry point, though, so an external load balancer is needed; the official documentation uses HAProxy as its example (that part of the docs hasn't been kept up to date; what happened to the promised LVS?). A minimal sketch follows.
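A minimal HAProxy configuration for this setup might look like the following (an illustrative sketch, not from the original article; HAProxy is assumed to run on a separate host, and any swarm node IPs would work because the routing mesh forwards published ports cluster-wide):

# haproxy.cfg (sketch)
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http_in
    bind *:80
    default_backend swarm_nginx

backend swarm_nginx
    balance roundrobin
    server node2 192.168.8.102:80 check
    server node3 192.168.8.103:80 check
    server node4 192.168.8.201:80 check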

[root@node4 ~]# docker service inspect nginx
[
    {
        "ID": "a0jv3kz45qdvkq6qp70w6vu69",
        "Version": {
            "Index": 743
        },
        "CreatedAt": "2016-09-23T15:00:59.368270075Z",
        "UpdatedAt": "2016-09-23T15:01:25.350289424Z",
        "Spec": {
            "Name": "nginx",
            "TaskTemplate": {
                "ContainerSpec": {
                    "Image": "192.168.8.254:5000/nginx",
                    "Mounts": [
                        {
                            "Type": "volume",
                            "Source": "data",
                            "Target": "/opt"
                        }
                    ]
                },
                "Resources": {
                    "Limits": {},
                    "Reservations": {}
                },
                "RestartPolicy": {
                    "Condition": "any",
                    "MaxAttempts": 0
                },
                "Placement": {}
            },
            "Mode": {
                "Replicated": {
                    "Replicas": 3
                }
            },
            "UpdateConfig": {
                "Parallelism": 1,
                "FailureAction": "pause"
            },
            "Networks": [
                {
                    "Target": "1mhcme8jygd43nf82yhbdda4x"
                }
            ],
            "EndpointSpec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 443,
                        "PublishedPort": 443
                    },
                    {
                        "Protocol": "tcp",
                        "TargetPort": 80,
                        "PublishedPort": 80
                    }
                ]
            }
        },
        "Endpoint": {
            "Spec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 443,
                        "PublishedPort": 443
                    },
                    {
                        "Protocol": "tcp",
                        "TargetPort": 80,
                        "PublishedPort": 80
                    }
                ]
            },
            "Ports": [
                {
                    "Protocol": "tcp",
                    "TargetPort": 443,
                    "PublishedPort": 443
                },
                {
                    "Protocol": "tcp",
                    "TargetPort": 80,
                    "PublishedPort": 80
                }
            ],
            "VirtualIPs": [
                {
                    "NetworkID": "1mhcme8jygd43nf82yhbdda4x",
                    "Addr": "10.255.0.2/16"
                }
            ]
        },
        "UpdateStatus": {
            "StartedAt": "0001-01-01T00:00:00Z",
            "CompletedAt": "0001-01-01T00:00:00Z"
        }
    }
]
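Tip: the service VIP can be extracted directly with a Go template (a sketch; .Endpoint.VirtualIPs matches the inspect output above):

docker service inspect -f '{{json .Endpoint.VirtualIPs}}' nginx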

Reposted from: https://www.cnblogs.com/lixuebin/p/10814020.html
