Deploying an etcd + Calico Cluster
>> from zhuhaiqing.info
etcd in single-node mode
Set the host IP as an environment variable:
export HostIP="192.168.12.50"
Run the following command to expose etcd's client ports (4001 and 2379) and its peer port (2380).
If this is the first run, Docker will pull the latest official etcd image.
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs \
  -p 4001:4001 -p 2380:2380 -p 2379:2379 \
  --name etcd quay.io/coreos/etcd \
  -name etcd0 \
  -advertise-client-urls http://${HostIP}:2379,http://${HostIP}:4001 \
  -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
  -initial-advertise-peer-urls http://${HostIP}:2380 \
  -listen-peer-urls http://0.0.0.0:2380 \
  -initial-cluster-token etcd-cluster-1 \
  -initial-cluster etcd0=http://${HostIP}:2380 \
  -initial-cluster-state new
Query either of the two client ports to check the node:
curl -L http://127.0.0.1:2379/v2/members
Multi-node etcd cluster
Configuring a multi-node etcd cluster is similar to the single-node case. The key difference is the -initial-cluster parameter, which lists the peer URLs of every member:
Run the following on node 01:
docker run -d -p 4001:4001 -p 2380:2380 -p 2379:2379 \
  --restart=always \
  --name etcd quay.io/coreos/etcd \
  -name etcd01 \
  -advertise-client-urls http://192.168.73.140:2379,http://192.168.73.140:4001 \
  -listen-client-urls http://0.0.0.0:2379 \
  -initial-advertise-peer-urls http://192.168.73.140:2380 \
  -listen-peer-urls http://0.0.0.0:2380 \
  -initial-cluster-token etcd-cluster \
  -initial-cluster "etcd01=http://192.168.73.140:2380,etcd02=http://192.168.73.137:2380" \
  -initial-cluster-state new
Run the following on node 02:
docker run -d -p 4001:4001 -p 2380:2380 -p 2379:2379 \
  --restart=always \
  --name etcd quay.io/coreos/etcd \
  -name etcd02 \
  -advertise-client-urls http://192.168.73.137:2379,http://192.168.73.137:4001 \
  -listen-client-urls http://0.0.0.0:2379 \
  -initial-advertise-peer-urls http://192.168.73.137:2380 \
  -listen-peer-urls http://0.0.0.0:2380 \
  -initial-cluster-token etcd-cluster \
  -initial-cluster "etcd01=http://192.168.73.140:2380,etcd02=http://192.168.73.137:2380" \
  -initial-cluster-state new
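The -initial-cluster value must be byte-identical on every member. As an illustrative sketch (Python is not part of the original deployment; the helper name is hypothetical), it can be generated from a single name-to-IP map so the two node commands never drift apart:

```python
# Build the shared -initial-cluster value from a member-name -> host-IP map.
# Mirrors the two-node example above; 2380 is etcd's default peer port.
def initial_cluster(members, peer_port=2380):
    """members: dict mapping etcd member name -> host IP."""
    return ",".join(
        f"{name}=http://{ip}:{peer_port}" for name, ip in members.items()
    )

cluster = initial_cluster({
    "etcd01": "192.168.73.140",
    "etcd02": "192.168.73.137",
})
print(cluster)
# etcd01=http://192.168.73.140:2380,etcd02=http://192.168.73.137:2380
```

The printed value is what both docker run commands above pass to -initial-cluster.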
Check cluster connectivity by running the following on each node:
curl -L http://127.0.0.1:2379/v2/members
If everything is healthy, you will see both members, and the result should be identical on every node:
{
  "members": [
    {
      "id": "2bd5fcc327f74dd5",
      "name": "etcd01",
      "peerURLs": ["http://192.168.73.140:2380"],
      "clientURLs": ["http://192.168.73.140:2379", "http://192.168.73.140:4001"]
    },
    {
      "id": "c8a9cac165026b12",
      "name": "etcd02",
      "peerURLs": ["http://192.168.73.137:2380"],
      "clientURLs": ["http://192.168.73.137:2379", "http://192.168.73.137:4001"]
    }
  ]
}
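Instead of eyeballing the JSON, the member list can be checked in a script. A sketch (assuming Python 3 is available on the host; the response string below is the example output above):

```python
import json

# Sample response body from GET /v2/members, as shown above.
raw = ('{"members":[{"id":"2bd5fcc327f74dd5","name":"etcd01",'
       '"peerURLs":["http://192.168.73.140:2380"],'
       '"clientURLs":["http://192.168.73.140:2379","http://192.168.73.140:4001"]},'
       '{"id":"c8a9cac165026b12","name":"etcd02",'
       '"peerURLs":["http://192.168.73.137:2380"],'
       '"clientURLs":["http://192.168.73.137:2379","http://192.168.73.137:4001"]}]}')

members = json.loads(raw)["members"]
names = sorted(m["name"] for m in members)
assert names == ["etcd01", "etcd02"], names

for m in members:
    print(m["name"], m["peerURLs"][0])
```

Running the same parse against each node's /v2/members response and comparing the results confirms that every member sees the same cluster.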
Growing the etcd cluster
On any existing etcd node, register the new member with the cluster:
curl http://127.0.0.1:2379/v2/members -XPOST -H "Content-Type: application/json" -d '{"peerURLs": ["http://192.168.73.172:2380"]}'
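The same registration can be issued without curl. A hedged sketch using Python's standard library (the endpoint and peer URL are the example values above; the helper name is illustrative):

```python
import json
import urllib.request

def member_add_request(endpoint, peer_url):
    """Build the POST that registers a new peer URL with the cluster,
    mirroring the curl command above. Returns the prepared Request."""
    body = json.dumps({"peerURLs": [peer_url]}).encode()
    return urllib.request.Request(
        f"{endpoint}/v2/members",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = member_add_request("http://127.0.0.1:2379", "http://192.168.73.172:2380")
# urllib.request.urlopen(req)  # uncomment on a host that can reach the cluster
```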
Start the etcd container on the new node. Note that -initial-cluster-state is now existing:
docker run -d -p 4001:4001 -p 2380:2380 -p 2379:2379 \
  --restart=always \
  --name etcd quay.io/coreos/etcd \
  -name etcd03 \
  -advertise-client-urls http://192.168.73.150:2379,http://192.168.73.150:4001 \
  -listen-client-urls http://0.0.0.0:2379 \
  -initial-advertise-peer-urls http://192.168.73.150:2380 \
  -listen-peer-urls http://0.0.0.0:2380 \
  -initial-cluster-token etcd-cluster \
  -initial-cluster "etcd01=http://192.168.73.140:2380,etcd02=http://192.168.73.137:2380,etcd03=http://192.168.73.150:2380" \
  -initial-cluster-state existing
Run a health check from any node:
[root@docker01 ~]# etcdctl cluster-health
member 2bd5fcc327f74dd5 is healthy: got healthy result from http://192.168.73.140:2379
member c8a9cac165026b12 is healthy: got healthy result from http://192.168.73.137:2379
cluster is healthy
Deploying Calico
On each physical host, download calicoctl from the release page:
https://github.com/projectcalico/calico-containers/releases
and copy the downloaded calicoctl into /usr/local/bin.
Run the following on the first etcd node:
[root@docker01 ~]# calicoctl node   # the first run pulls the calico/node image and starts it
Running Docker container with the following command:
docker run -d --restart=always --net=host --privileged --name=calico-node -e HOSTNAME=docker01 -e IP= -e IP6= -e CALICO_NETWORKING=true -e AS= -e NO_DEFAULT_POOLS= -e ETCD_AUTHORITY=127.0.0.1:2379 -e ETCD_SCHEME=http -v /var/log/calico:/var/log/calico -v /var/run/calico:/var/run/calico calico/node:v0.18.0
Calico node is running with id: 60b284221a94b418509f86d3c8d7073e11ab3c2a3ca17e4efd2568e97791ff33
Waiting for successful startup
No IP provided. Using detected IP: 192.168.73.140
Calico node started successfully
Run the same on the second etcd node:
[root@Docker01 ~]# calicoctl node   # the first run pulls the calico/node image
Running Docker container with the following command:
docker run -d --restart=always --net=host --privileged --name=calico-node -e HOSTNAME=docker01 -e IP= -e IP6= -e CALICO_NETWORKING=true -e AS= -e NO_DEFAULT_POOLS= -e ETCD_AUTHORITY=127.0.0.1:2379 -e ETCD_SCHEME=http -v /var/log/calico:/var/log/calico -v /var/run/calico:/var/run/calico calico/node:v0.18.0
Calico node is running with id: 72e7213852e529a3588249d85f904e38a92d671add3cdfe5493687aab129f5e2
Waiting for successful startup
No IP provided. Using detected IP: 192.168.73.137
Calico node started successfully
On any Calico node, configure the IP address pool:
[root@Docker01 ~]# calicoctl pool remove 192.168.0.0/16   # remove the default pool
[root@Docker01 ~]# calicoctl pool add 10.0.238.0/24 --nat-outgoing --ipip   # add a new pool; --ipip enables container-to-container traffic between hosts on different subnets, --nat-outgoing lets containers reach external networks
[root@docker01 ~]# calicoctl pool show   # verify the result
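Before swapping pools, it is worth confirming that the new CIDR does not overlap the hosts' own subnet, while the removed default did; that overlap is a common motivation for replacing it. A quick check (illustrative only, assuming Python 3 on the host) with the stdlib ipaddress module:

```python
import ipaddress

host_subnet = ipaddress.ip_network("192.168.73.0/24")  # the example hosts above
new_pool = ipaddress.ip_network("10.0.238.0/24")       # the pool being added
old_pool = ipaddress.ip_network("192.168.0.0/16")      # the removed default pool

print(new_pool.overlaps(host_subnet))  # False: the new pool is safe
print(old_pool.overlaps(host_subnet))  # True: the default pool contained the host subnet
```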
On any Calico node, check Calico's status:
[root@docker01 ~]# calicoctl status
calico-node container is running. Status: Up 3 hours
Running felix version 1.4.0rc1

IPv4 BGP status
IP: 192.168.73.140    AS Number: 64511 (inherited)
+----------------+-------------------+-------+----------+-------------+
|  Peer address  |     Peer type     | State |  Since   |    Info     |
+----------------+-------------------+-------+----------+-------------+
| 192.168.73.137 | node-to-node mesh |  up   | 09:18:51 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 address configured.
Configuring the Docker container network
On each of the two nodes, start a workload container with no network driver; Calico will configure its networking afterwards:
[root@docker01 ~]# docker run --name test01 -itd --log-driver none --net none daocloud.io/library/centos:6.6 /bin/bash
[root@docker02 ~]# docker run --name test02 -itd --log-driver none --net none daocloud.io/library/centos:6.6 /bin/bash
On any Calico node, create a Calico profile:
[root@docker01 ~]# calicoctl profile add starboss
Manually assign an IP to each container through Calico. Note that the IPs must fall within the configured Calico pool:
[root@docker01 ~]# calicoctl container add test01 10.0.238.10
IP 10.0.238.10 added to test01
[root@docker02 ~]# calicoctl container add test02 10.0.238.11
IP 10.0.238.11 added to test02
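Each assigned address must lie inside the pool configured earlier. A small validation sketch (not part of the original procedure; assumes Python 3) that checks the assignments before running the calicoctl commands:

```python
import ipaddress

pool = ipaddress.ip_network("10.0.238.0/24")  # the Calico pool added earlier

def in_pool(ip):
    """True if the address falls inside the configured Calico pool."""
    return ipaddress.ip_address(ip) in pool

assignments = {"test01": "10.0.238.10", "test02": "10.0.238.11"}
for name, ip in assignments.items():
    assert in_pool(ip), f"{ip} for {name} is outside {pool}"
    print(name, ip, "OK")
```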
On each Calico node, add the containers that need to reach one another to the same profile:
[root@docker01 ~]# calicoctl container test01 profile set starboss
Profile(s) set to starboss.
[root@docker02 ~]# calicoctl container test02 profile set starboss
Profile(s) set to starboss.
Check the endpoint configuration from any node:
[root@docker01 ~]# calicoctl endpoint show --detailed
+----------+-----------------+------------------------------------------------------------------+----------------------------------+-----------------+-------------------+----------+--------+
| Hostname | Orchestrator ID |                           Workload ID                            |           Endpoint ID            |    Addresses    |        MAC        | Profiles | State  |
+----------+-----------------+------------------------------------------------------------------+----------------------------------+-----------------+-------------------+----------+--------+
| docker01 | docker          | 8f935b0441739f52334e9f16099a2b52e2c982e3aef3190e02dd7ce67e61a853 | 75b0e79a022211e6975c000c29308ed8 | 192.168.0.10/32 | 1e:14:2d:bf:51:f5 | starboss | active |
| docker02 | docker          | 3d0a8f39753537592f3e38d7604b0b6312039f3bf57cf13d91e953e7e058263e | 8efb263e022211e6a180000c295008af | 192.168.0.11/32 | ee:2b:c2:5e:b6:c5 | starboss | active |
+----------+-----------------+------------------------------------------------------------------+----------------------------------+-----------------+-------------------+----------+--------+
Test: from a container on one physical host, ping the container on the other host:
[root@docker01 ~]# docker exec test01 ping 192.168.0.11
PING 192.168.0.11 (192.168.0.11) 56(84) bytes of data.
64 bytes from 192.168.0.11: icmp_seq=1 ttl=62 time=0.557 ms
64 bytes from 192.168.0.11: icmp_seq=2 ttl=62 time=0.603 ms
64 bytes from 192.168.0.11: icmp_seq=3 ttl=62 time=0.656 ms
64 bytes from 192.168.0.11: icmp_seq=4 ttl=62 time=0.386 ms
Reposted from: https://www.cnblogs.com/zhuhaiqing/p/5393548.html