Etcd Tutorial — Chapter 2: Static Discovery for Etcd Clusters

  • I. Etcd Cluster Installation Methods
  • II. Etcd Cluster Static Discovery
    • 2.1 Static Startup Options
    • ※2.2 Building an Etcd Cluster on a Single Machine
      • 2.2.1 Installing the goreman Tool
      • 2.2.2 Writing the Procfile
      • 2.2.3 Cluster Configuration Parameters
      • 2.2.4 Running the goreman Start Command
      • 2.2.5 Checking That Etcd Started
    • ※2.3 Building an Etcd Cluster Across Multiple Machines
      • 2.3.1 Opening the Ports
      • 2.3.2 Downloading and Installing Etcd
      • 2.3.3 Adjusting the Setup
      • 2.3.4 Creating the etcd Directories (Where Data and Config Files Are Stored)
      • 2.3.5 Creating the etcd Configuration File
      • 2.3.6 Creating the systemd Unit File
      • 2.3.7 Starting and Checking the etcd Service
      • 2.3.8 Adding Cluster Information on Each Machine
      • 2.3.9 Clearing Old Data and Restarting the Service
      • 2.3.10 Creating the etcd Service on Each Node: etcd.service
      • 2.3.11 Testing the Cluster
    • 2.4 Building an Etcd Cluster with Docker
      • 2.4.1 Pulling Etcd with Docker
      • 2.4.2 Configuring the Cluster

I. Etcd Cluster Installation Methods

In production, etcd is normally deployed as a cluster to keep the whole system highly available and avoid a single point of failure. This chapter explains how to deploy an etcd cluster. There are three mechanisms for bootstrapping one:

Static discovery: the cluster membership is known in advance, and the addresses of all etcd nodes are passed directly at startup through the --initial-cluster parameter.

Etcd dynamic discovery: static discovery assumes the member information is known before the cluster is built, but in practice the node IPs may not be known ahead of time. In that case an already running etcd cluster can help bootstrap the new one.

The existing cluster acts as the data exchange point: when the new cluster is brought up, it uses the existing cluster for service discovery. The public service discovery.etcd.io provided by the etcd project is one example.

DNS discovery: the addresses of the other nodes are obtained through DNS queries.

II. Etcd Cluster Static Discovery

Clusters are usually deployed with 3, 5, 7, or 9 nodes. Why not an even number of nodes?

1. A cluster with an even number of nodes runs a higher risk of being unavailable: during leader election there is a greater chance of a tied vote, which triggers another round of elections.
2. A cluster with an even number of nodes cannot operate under certain network partitions. If a partition splits the nodes exactly in half, the cluster stops working: under the Raft protocol a write can no longer be acknowledged by a majority of nodes, so writes fail and the cluster cannot function normally (see the quorum table below).
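
The quorum arithmetic makes this concrete (the small table below is added here for illustration; it follows directly from the majority rule floor(n/2)+1):

Nodes (n)   Quorum (floor(n/2)+1)   Failures tolerated
3           2                       1
4           3                       1
5           3                       2
6           4                       2

Going from 3 to 4 nodes (or from 5 to 6) raises the quorum without increasing the number of failures the cluster can survive, so the extra node only adds cost and election risk.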

2.1 Static Startup Options

  1. Build an Etcd cluster on a single machine
  2. Build an Etcd cluster across multiple machines
  3. Start the cluster with Docker

※2.2 Building an Etcd Cluster on a Single Machine

To build an etcd cluster on a single machine, the goreman tool can be used.

Note: install the Etcd binaries on the machine before using goreman to build the single-machine cluster; otherwise startup will report that the etcd command cannot be found.

2.2.1 Installing the goreman Tool

goreman installation and configuration (a minimal sketch follows).
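
A minimal installation sketch, assuming a working Go toolchain is already present on the machine (goreman is a Go program hosted at github.com/mattn/goreman):

# install goreman into $GOPATH/bin (typically $HOME/go/bin); make sure that directory is on PATH
go install github.com/mattn/goreman@latest
# print goreman's usage to confirm the install worked
goreman help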

2.2.2 Writing the Procfile

For this article a gore directory was created under /root/, and the file Procfile.learner was written inside it:

# Use goreman to run `go get github.com/mattn/goreman`
# Change the path of bin/etcd if etcd is located elsewhere
etcd1: etcd --name infra1 --listen-client-urls http://127.0.0.1:12379 --advertise-client-urls http://127.0.0.1:12379 --listen-peer-urls http://127.0.0.1:12380 --initial-advertise-peer-urls http://127.0.0.1:12380 --initial-cluster-token etcd-cluster-1 --initial-cluster 'infra1=http://127.0.0.1:12380,infra2=http://127.0.0.1:22380,infra3=http://127.0.0.1:32380' --initial-cluster-state new --enable-pprof
etcd2: etcd --name infra2 --listen-client-urls http://127.0.0.1:22379 --advertise-client-urls http://127.0.0.1:22379 --listen-peer-urls http://127.0.0.1:22380 --initial-advertise-peer-urls http://127.0.0.1:22380 --initial-cluster-token etcd-cluster-1 --initial-cluster 'infra1=http://127.0.0.1:12380,infra2=http://127.0.0.1:22380,infra3=http://127.0.0.1:32380' --initial-cluster-state new --enable-pprof
etcd3: etcd --name infra3 --listen-client-urls http://127.0.0.1:32379 --advertise-client-urls http://127.0.0.1:32379 --listen-peer-urls http://127.0.0.1:32380 --initial-advertise-peer-urls http://127.0.0.1:32380 --initial-cluster-token etcd-cluster-1 --initial-cluster 'infra1=http://127.0.0.1:12380,infra2=http://127.0.0.1:22380,infra3=http://127.0.0.1:32380' --initial-cluster-state new --enable-pprof
#proxy: bin/etcd grpc-proxy start --endpoints=127.0.0.1:2379,127.0.0.1:22379,127.0.0.1:32379 --listen-addr=127.0.0.1:23790 --advertise-client-url=127.0.0.1:23790 --enable-pprof
# A learner node can be started using Procfile.learner

2.2.3 Cluster Configuration Parameters

--name
The node name within the etcd cluster; any value works as long as it is distinct and not repeated.
--listen-peer-urls
URLs to listen on for communication between cluster members; several may be given. The members exchange data (elections, replication, and so on) over these URLs.
--initial-advertise-peer-urls
The peer URLs advertised to the rest of the cluster; the other members use these values to reach this node.
--listen-client-urls
URLs to listen on for client traffic; again, several may be given.
--advertise-client-urls
The client URLs advertised for this member; etcd proxies or other etcd members use this value to talk to the node.
--initial-cluster-token etcd-cluster-1
The cluster token. From it etcd derives a unique cluster ID and a unique ID for every member, so a second cluster started from the same configuration will not interfere with this one as long as its token differs.
--initial-cluster
The union of all members' initial-advertise-peer-urls.
--initial-cluster-state new
Marks a freshly bootstrapped cluster. Use new for the initial start and change the value to existing afterwards.

2.2.4 Running the goreman Start Command

Run the following from /root/gore:

goreman -f ./Procfile.learner start

Note: if the error below appears, Etcd must be installed first before using goreman to configure the cluster:

07:29:58 etcd1 | Starting etcd1 on port 5000
07:29:58 etcd2 | Starting etcd2 on port 5100
07:29:58 etcd3 | Starting etcd3 on port 5200
07:29:58 etcd1 | /bin/sh: etcd: command not found
07:29:58 etcd2 | /bin/sh: etcd: command not found
07:29:58 etcd3 | /bin/sh: etcd: command not found
07:29:58 etcd1 | Terminating etcd1
07:29:58 etcd2 | Terminating etcd2
07:29:58 etcd3 | Terminating etcd3

2.2.5 Checking That Etcd Started

Command:

etcdctl --endpoints=http://localhost:22379  member list

The members in the cluster:

8211f1d0f64f3269, started, infra1, http://127.0.0.1:12380, http://127.0.0.1:12379, false
91bc3c398fb3c146, started, infra2, http://127.0.0.1:22380, http://127.0.0.1:22379, false
fd422379fda50e48, started, infra3, http://127.0.0.1:32380, http://127.0.0.1:32379, false
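
The cluster can now be exercised with a simple write and read through the three client endpoints (a minimal sketch; the key name hello is arbitrary):

etcdctl --endpoints=http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 put hello world
etcdctl --endpoints=http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 get hello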

That completes the single-machine Etcd cluster.

※2.3 Building an Etcd Cluster Across Multiple Machines

When building an Etcd cluster across machines, ports 2379 and 2380 must be open on every machine; otherwise the cluster will fail to come up.

Host information (the examples below use 192.168.1.221 / etcd01, 192.168.1.222 / etcd02 and 192.168.1.223 / etcd03).

2.3.1 Opening the Ports

Commands to open ports 2379 and 2380 on each machine:
2379:

firewall-cmd --zone=public --add-port=2379/tcp --permanent

2380:

firewall-cmd --zone=public --add-port=2380/tcp --permanent

Reload the firewall:

firewall-cmd --reload

List the open ports:

firewall-cmd --list-port

Or disable firewalld entirely:

systemctl stop firewalld.service      # stop firewalld
systemctl disable firewalld.service   # keep firewalld from starting at boot

2.3.2 Downloading and Installing Etcd

Here we build on Chapter 1, section 2.2 (installing Etcd on Linux) and turn that setup into a multi-machine cluster, so Etcd is not downloaded and installed separately on every machine again.
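
For a machine that does not have etcd yet, a minimal download sketch (assuming the v3.5.4 linux-amd64 release from GitHub and the /tmp/etcd-download-test directory used in Chapter 1):

ETCD_VER=v3.5.4
curl -L https://github.com/etcd-io/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}.tar.gz
mkdir -p /tmp/etcd-download-test
tar xzvf /tmp/etcd-${ETCD_VER}.tar.gz -C /tmp/etcd-download-test --strip-components=1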

2.3.3 Adjusting the Setup

1. On each machine, switch to the /tmp/etcd-download-test/ directory and copy the two binaries etcd and etcdctl into /usr/local/bin, so that both programs can be called directly from anywhere on the system.

cp etcd etcdctl /usr/local/bin
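
A quick check that the copy worked and the binaries are on the PATH:

etcd --version
etcdctl version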

2. Typing the command etcd starts a single-node etcd service; Ctrl+C stops it. The meaning of the fields that etcd prints to the console after starting is explained below:

{"level":"info","ts":"2022-06-05T22:46:31.751+0800","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd"]}
{"level":"warn","ts":"2022-06-05T22:46:31.751+0800","caller":"etcdmain/etcd.go:105","msg":"'data-dir' was empty; using default","data-dir":"default.etcd"}
{"level":"info","ts":"2022-06-05T22:46:31.751+0800","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"default.etcd","dir-type":"member"}
{"level":"info","ts":"2022-06-05T22:46:31.751+0800","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["http://localhost:2380"]}
{"level":"info","ts":"2022-06-05T22:46:31.752+0800","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["http://localhost:2379"]}
{"level":"info","ts":"2022-06-05T22:46:31.752+0800","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.4","git-sha":"08407ff76","go-version":"go1.16.15","go-os":"linux","go-arch":"amd64","max-cpu-set":1,"max-cpu-available":1,"member-initialized":true,"name":"default","data-dir":"default.etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"default.etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":100000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://localhost:2379"],"listen-client-urls":["http://localhost:2379"],"listen-metrics-urls":[],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2022-06-05T22:46:31.752+0800","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"default.etcd/member/snap/db","took":"75.889µs"}
{"level":"info","ts":"2022-06-05T22:46:31.752+0800","caller":"etcdserver/server.go:529","msg":"No snapshot found. Recovering WAL from scratch!"}
{"level":"info","ts":"2022-06-05T22:46:31.753+0800","caller":"etcdserver/raft.go:483","msg":"restarting local member","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","commit-index":6}
{"level":"info","ts":"2022-06-05T22:46:31.753+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=()"}
{"level":"info","ts":"2022-06-05T22:46:31.753+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became follower at term 3"}
{"level":"info","ts":"2022-06-05T22:46:31.753+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8e9e05c52164694d [peers: [], term: 3, commit: 6, applied: 0, lastindex: 6, lastterm: 3]"}
{"level":"warn","ts":"2022-06-05T22:46:31.754+0800","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2022-06-05T22:46:31.754+0800","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":1}
{"level":"info","ts":"2022-06-05T22:46:31.755+0800","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2022-06-05T22:46:31.755+0800","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8e9e05c52164694d","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
{"level":"info","ts":"2022-06-05T22:46:31.757+0800","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8e9e05c52164694d","initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://localhost:2379"],"listen-client-urls":["http://localhost:2379"],"listen-metrics-urls":[]}
{"level":"info","ts":"2022-06-05T22:46:31.757+0800","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2022-06-05T22:46:31.758+0800","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"127.0.0.1:2380"}
{"level":"info","ts":"2022-06-05T22:46:31.758+0800","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"127.0.0.1:2380"}
{"level":"info","ts":"2022-06-05T22:46:31.758+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=(10276657743932975437)"}
{"level":"info","ts":"2022-06-05T22:46:31.758+0800","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","added-peer-id":"8e9e05c52164694d","added-peer-peer-urls":["http://localhost:2380"]}
{"level":"info","ts":"2022-06-05T22:46:31.758+0800","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-05T22:46:31.758+0800","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-05T22:46:33.453+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d is starting a new election at term 3"}
{"level":"info","ts":"2022-06-05T22:46:33.453+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became pre-candidate at term 3"}
{"level":"info","ts":"2022-06-05T22:46:33.454+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d received MsgPreVoteResp from 8e9e05c52164694d at term 3"}
{"level":"info","ts":"2022-06-05T22:46:33.454+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became candidate at term 4"}
{"level":"info","ts":"2022-06-05T22:46:33.454+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 4"}
{"level":"info","ts":"2022-06-05T22:46:33.454+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became leader at term 4"}
{"level":"info","ts":"2022-06-05T22:46:33.454+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 4"}
{"level":"info","ts":"2022-06-05T22:46:33.474+0800","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8e9e05c52164694d","local-member-attributes":"{Name:default ClientURLs:[http://localhost:2379]}","request-path":"/0/members/8e9e05c52164694d/attributes","cluster-id":"cdf818194e3a8c32","publish-timeout":"7s"}
{"level":"info","ts":"2022-06-05T22:46:33.475+0800","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-06-05T22:46:33.475+0800","caller":"embed/serve.go:140","msg":"serving client traffic insecurely; this is strongly discouraged!","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-06-05T22:46:33.475+0800","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-06-05T22:46:33.475+0800","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}1.etcd-version:etcd的版本。
2.git-sha。
3.go-version:基于的go语言版本。
4.go-os:运行的系统。
5.go-arch:运行的系统架构。
6.max-cpu-set:设置的CPU数量。
7.max-cpu-available:最多可用的CPU数量。
8.member-initialized:集群成员是否初始化,默认false。9.name表示节点名称,默认为default。
2.data-dir 保存日志和快照的数据目录,默认为当前工作目录default.etcd/目录下。
3.在http://localhost:2380和集群中其他节点通信。
4.在http://localhost:2379提供和客户端交互。
5.heartbeat-interval:为100ms,该参数的作用是leader多久发送一次心跳到followers,默认值是100ms。
6.election-timeout:为1000ms,该参数的作用是重新投票的超时时间,如果follow在该时间间隔没有收到心跳包,会触发重新投票,默认为1000ms。
7.snapshot-count:为10000,该参数的作用是指定有多少事务被提交时,触发截取快照保存到磁盘。
8.集群和每个节点都会生成一个uuid,且固定不变,`cluster-id`:集群UUID,`local-member-id`:本机UUID。{"level":"info","ts":"2022-06-05T22:46:31.758+0800","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","added-peer-id":"8e9e05c52164694d","added-peer-peer-urls":["http://localhost:2380"]}9.启动的时候会运行raft,选举出leader

2.3.4 Creating the etcd Directories (Where Data and Config Files Are Stored)

mkdir -p /var/lib/etcd/ && mkdir -p /etc/etcd/

2.3.5 Creating the etcd Configuration File

Create it under /etc/etcd/:

vim /etc/etcd/etcd.conf

Contents:

# Node name; note that it must be different on every machine
ETCD_NAME="etcd01"
# Where the data files are stored
ETCD_DATA_DIR="/var/lib/etcd/"

2.3.6 Creating the systemd Unit File

vim /etc/systemd/system/etcd.service

Contents:

[Unit]
Description=Etcd Server
After=network.target
Wants=network-online.target

[Service]
User=root
Type=notify
## Adjust WorkingDirectory, EnvironmentFile and ExecStart to match your setup
## 1. WorkingDirectory: where etcd stores its data
WorkingDirectory=/var/lib/etcd/
## 2. EnvironmentFile: the configuration file; keep the leading "-" (it tells systemd a missing file is not fatal)
EnvironmentFile=-/etc/etcd/etcd.conf
## 3. ExecStart: the location of the etcd binary
ExecStart=/usr/local/bin/etcd
Restart=always
RestartSec=10s
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

2.3.7 Starting and Checking the etcd Service

systemctl daemon-reload && systemctl enable etcd && systemctl start etcd && systemctl status etcd

The individual commands, for reference:

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
systemctl stop etcd
systemctl restart etcd

2.3.8 Adding Cluster Information on Each Machine

Edit the etcd.conf file on each machine and add the cluster information, making sure to use the correct IP for each node.

vim /etc/etcd/etcd.conf

Contents of etcd.conf on node 1:

#########################################################
###### Adjust these values to each node's actual setup ###
#########################################################
#[Member]
# 1. Node name; it must be unique
ETCD_NAME="etcd01"
# 2. Where the data is stored
ETCD_DATA_DIR="/var/lib/etcd"
# 3. URL this node listens on for the other members; port 2380 on this machine
ETCD_LISTEN_PEER_URLS="http://192.168.1.221:2380"
# 4. Peer URL advertised for this node; it is announced to the rest of the cluster members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.221:2380"
# 5. URLs this node listens on for clients; port 2379 on this machine
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.221:2379,http://127.0.0.1:2379"
#[Clustering]
# 6. Client URL advertised for this node
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.221:2379"
# 7. Information about every node in the cluster
ETCD_INITIAL_CLUSTER="etcd01=http://192.168.1.221:2380,etcd02=http://192.168.1.222:2380,etcd03=http://192.168.1.223:2380"
# 8. Token used to create the cluster; the same value on every node of the cluster
ETCD_INITIAL_CLUSTER_TOKEN="148e7b13-51fc-4cc8-965b-a7c9d58c18f5"
# 9. Initial cluster state: "new" when the cluster is first created; change "new" to "existing" for later restarts
ETCD_INITIAL_CLUSTER_STATE="new"
# 10. flannel uses etcd's v2 API while kubernetes uses the v3 API;
#     to stay compatible with flannel, enable the v2 API in this configuration
ETCD_ENABLE_V2="true"

Contents of etcd.conf on node 2:

#########################################################
###### Adjust these values to each node's actual setup ###
#########################################################
#[Member]
# 1. Node name; it must be unique
ETCD_NAME="etcd02"
# 2. Where the data is stored
ETCD_DATA_DIR="/var/lib/etcd"
# 3. URL this node listens on for the other members; port 2380 on this machine
ETCD_LISTEN_PEER_URLS="http://192.168.1.222:2380"
# 4. Peer URL advertised for this node; it is announced to the rest of the cluster members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.222:2380"
# 5. URLs this node listens on for clients; port 2379 on this machine
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.222:2379,http://127.0.0.1:2379"
#[Clustering]
# 6. Client URL advertised for this node
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.222:2379"
# 7. Information about every node in the cluster
ETCD_INITIAL_CLUSTER="etcd01=http://192.168.1.221:2380,etcd02=http://192.168.1.222:2380,etcd03=http://192.168.1.223:2380"
# 8. Token used to create the cluster; the same value on every node of the cluster
ETCD_INITIAL_CLUSTER_TOKEN="148e7b13-51fc-4cc8-965b-a7c9d58c18f5"
# 9. Initial cluster state: "new" when the cluster is first created; change "new" to "existing" for later restarts
ETCD_INITIAL_CLUSTER_STATE="new"
# 10. flannel uses etcd's v2 API while kubernetes uses the v3 API;
#     to stay compatible with flannel, enable the v2 API in this configuration
ETCD_ENABLE_V2="true"

Contents of etcd.conf on node 3:

#########################################################
###### Adjust these values to each node's actual setup ###
#########################################################
#[Member]
# 1. Node name; it must be unique
ETCD_NAME="etcd03"
# 2. Where the data is stored
ETCD_DATA_DIR="/var/lib/etcd"
# 3. URL this node listens on for the other members; port 2380 on this machine
ETCD_LISTEN_PEER_URLS="http://192.168.1.223:2380"
# 4. Peer URL advertised for this node; it is announced to the rest of the cluster members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.223:2380"
# 5. URLs this node listens on for clients; port 2379 on this machine
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.223:2379,http://127.0.0.1:2379"
#[Clustering]
# 6. Client URL advertised for this node
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.223:2379"
# 7. Information about every node in the cluster
ETCD_INITIAL_CLUSTER="etcd01=http://192.168.1.221:2380,etcd02=http://192.168.1.222:2380,etcd03=http://192.168.1.223:2380"
# 8. Token used to create the cluster; the same value on every node of the cluster
ETCD_INITIAL_CLUSTER_TOKEN="148e7b13-51fc-4cc8-965b-a7c9d58c18f5"
# 9. Initial cluster state: "new" when the cluster is first created; change "new" to "existing" for later restarts
ETCD_INITIAL_CLUSTER_STATE="new"
# 10. flannel uses etcd's v2 API while kubernetes uses the v3 API;
#     to stay compatible with flannel, enable the v2 API in this configuration
ETCD_ENABLE_V2="true"

2.3.9 Clearing Old Data and Restarting the Service

After changing /etc/etcd/etcd.conf, first delete the data stored under /var/lib/etcd; otherwise restarting the service will fail.

cd /var/lib/etcd && rm -rf *

2.3.10 Creating the etcd Service on Each Node: etcd.service

This was already done in 2.3.6 Creating the systemd Unit File, so it is not repeated here.
Reload systemd:

systemctl daemon-reload

2.3.11 Testing the Cluster

Once every node is configured, start the service with systemctl start etcd; then run etcdctl member list on any node to list all members of the cluster.
Useful commands (a cluster-wide status check is sketched after the list):

etcdctl member list
etcdctl member list -w table
etcdctl endpoint health
etcdctl endpoint status
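
Note that endpoint health and endpoint status only query the default local endpoint (127.0.0.1:2379) unless --endpoints is given. To inspect the whole cluster at once, a sketch using the example IPs from this chapter:

etcdctl --endpoints=http://192.168.1.221:2379,http://192.168.1.222:2379,http://192.168.1.223:2379 endpoint status -w table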

Cluster member list:

3f165fc003d39e4, started, etcd02, http://192.168.1.222:2380, http://192.168.1.222:2379, false
5d2b35506710acb7, started, etcd03, http://192.168.1.223:2380, http://192.168.1.223:2379, false
803d149841581109, started, etcd01, http://192.168.1.221:2380, http://192.168.1.221:2379, false
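
To confirm that data replicates across the cluster, write a key on one node and read it back from another (a minimal sketch using the example IPs; the key foo is arbitrary):

etcdctl --endpoints=http://192.168.1.221:2379 put foo bar
etcdctl --endpoints=http://192.168.1.223:2379 get foo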

Service management commands:

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
systemctl stop etcd
systemctl restart etcd

That completes the multi-machine Etcd cluster.

2.4 Building an Etcd Cluster with Docker

To build an Etcd cluster with Docker, the cluster has to be configured on several machines that already have Docker installed. For the Docker installation steps see: installing Docker on CentOS 7.

Server node IPs used in this section (the same as in the scripts below): 192.168.1.221, 192.168.1.222, 192.168.1.223.

2.4.1 Pulling Etcd with Docker

The etcd project publishes its container images primarily to gcr.io/etcd-development/etcd, with quay.io/coreos/etcd as a secondary registry. Both registries may well be unreachable from your network, so run docker search etcd to see which etcd images are available on Docker Hub:

NAME                             DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
bitnami/etcd                     Bitnami etcd Docker Image                       126                  [OK]
elcolio/etcd                     Tiny Etcd Container with TLS support and etc…   88                   [OK]
microbox/etcd                    Trusted Automated etcd image (17MB size)        38                   [OK]
appcelerator/etcd                etcd                                            13                   [OK]
xieyanze/etcd3                   etcd v3.0.9 image                               12                   [OK]
monsantoco/etcd-aws-cluster                                                      12                   [OK]
evildecay/etcdkeeper             web ui client for etcd                          9
nikfoundas/etcd-viewer           etcd key-value store viewer and editor          8                    [OK]
rancher/etcd                                                                     4
ibmcom/etcd                      Docker Image for IBM Cloud private-CE (Commu…   2
pachyderm/etcd                                                                   1
mesosphere/etcd-mesos            etcd framework for mesos                        1
ibmcom/etcd-ppc64le              Docker Image for IBM Cloud Private-CE (Commu…   1
gasparekatapy/etcd               Updated version of splazit/etcd                 0
razorpay/etcd-backup             Container to take backups for etcd cluster w…   0
kope/etcd                                                                        0
docker/desktop-kubernetes-etcd   Mirrors some tags from k8s.gcr.io/etcd          0
etcdigital/node                  Node 10 Prepared                                0
etcdev/fork-geth                                                                 0
anldisr/etcdctl                  Etcdctl                                         0
splazit/etcd-swarm               Etcd cluster in docker swarm                    0                    [OK]
ibmcom/etcd-s390x                                                                0
ibmcom/etcd-amd64                                                                0
rancher/etcd-tools                                                               0
astraw99/etcd                    Image for etcd-operator: https://github.com/…   0

The first result is bitnami/etcd, so bitnami/etcd is used here in place of gcr.io/etcd-development/etcd and quay.io/coreos/etcd, at the same version as the Linux install, 3.5.4:

docker pull bitnami/etcd:3.5.4

Then re-tag the pulled image:

docker image tag bitnami/etcd:3.5.4 quay.io/coreos/etcd:3.5.4
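
A quick check that both the original image and the new tag are present locally:

docker images | grep etcd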

2.4.2 Configuring the Cluster

With the image downloaded, the three-node etcd cluster can be configured. On each server node an etcd script file is created under the /opt directory; the scripts for the three nodes are as follows:

Node 1

# settings shared by every cluster node
REGISTRY=quay.io/coreos/etcd
ETCD_VERSION=3.5.4
TOKEN=my-etcd-token
CLUSTER_STATE=new
NAME_1=etcd-node-0
NAME_2=etcd-node-1
NAME_3=etcd-node-2
HOST_1=192.168.1.221
HOST_2=192.168.1.222
HOST_3=192.168.1.223
CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380
DATA_DIR=/var/lib/etcd

# node 1
THIS_NAME=${NAME_1}
THIS_IP=${HOST_1}
docker run \
  -p 2379:2379 \
  -p 2380:2380 \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd ${REGISTRY}:${ETCD_VERSION} \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name ${THIS_NAME} \
  --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://0.0.0.0:2380 \
  --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://0.0.0.0:2379 \
  --initial-cluster ${CLUSTER} \
  --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}

Node 2

# settings shared by every cluster node
REGISTRY=quay.io/coreos/etcd
ETCD_VERSION=3.5.4
TOKEN=my-etcd-token
CLUSTER_STATE=new
NAME_1=etcd-node-0
NAME_2=etcd-node-1
NAME_3=etcd-node-2
HOST_1=192.168.1.221
HOST_2=192.168.1.222
HOST_3=192.168.1.223
CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380
DATA_DIR=/var/lib/etcd

# node 2
THIS_NAME=${NAME_2}
THIS_IP=${HOST_2}
docker run \
  -p 2379:2379 \
  -p 2380:2380 \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd ${REGISTRY}:${ETCD_VERSION} \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name ${THIS_NAME} \
  --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://0.0.0.0:2380 \
  --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://0.0.0.0:2379 \
  --initial-cluster ${CLUSTER} \
  --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}

Node 3

# settings shared by every cluster node
REGISTRY=quay.io/coreos/etcd
ETCD_VERSION=3.5.4
TOKEN=my-etcd-token
CLUSTER_STATE=new
NAME_1=etcd-node-0
NAME_2=etcd-node-1
NAME_3=etcd-node-2
HOST_1=192.168.1.221
HOST_2=192.168.1.222
HOST_3=192.168.1.223
CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380
DATA_DIR=/var/lib/etcd

# node 3
THIS_NAME=${NAME_3}
THIS_IP=${HOST_3}
docker run \
  -p 2379:2379 \
  -p 2380:2380 \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd ${REGISTRY}:${ETCD_VERSION} \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name ${THIS_NAME} \
  --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://0.0.0.0:2380 \
  --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://0.0.0.0:2379 \
  --initial-cluster ${CLUSTER} \
  --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}

The scripts above are deployed across the three machines; in the /opt directory of each machine run ./etcd. If the permissions are insufficient, run chmod 777 etcd and then execute the script again with ./etcd.
Note: when building the Etcd cluster with Docker, the Q2 problem seen during the single-machine install appeared again. It is set aside for now and will be revisited once a solution is found.

etcd 13:19:53.20 Welcome to the Bitnami etcd container
etcd 13:19:53.20 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-etcd
etcd 13:19:53.20 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-etcd/issues
etcd 13:19:53.20
/opt/bitnami/scripts/etcd/entrypoint.sh: line 26: /usr/local/bin/etcd: No such file or directory

The API version can be specified at run time:

docker exec etcd /bin/sh -c "export ETCDCTL_API=3 && /usr/local/bin/etcdctl put foo bar"
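
And reading the key back through the same container, as a matching sketch:

docker exec etcd /bin/sh -c "export ETCDCTL_API=3 && /usr/local/bin/etcdctl get foo"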

That completes the Etcd cluster built with Docker; the Docker approach is comparatively simple.
