A Complete Record of Installing and Configuring a Corosync High-Availability Cluster on CentOS 7

Part 1: Environment, Topology, and Other Preparations:

1-1: Prepare the network YUM repositories;
All Nodes OS CentOS 7.3 x86_64:
# wget -O /etc/yum.repos.d/CentOS-Base-Ali.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# sed -i 's/$releasever/7/g' /etc/yum.repos.d/CentOS-Base-Ali.repo
# wget -O /etc/yum.repos.d/CentOS-Base-163.repo http://mirrors.163.com/.help/CentOS7-Base-163.repo
# sed -i 's/$releasever/7/g' /etc/yum.repos.d/CentOS-Base-163.repo
# vim /etc/yum.repos.d/epel_ustc.repo
[epel_ustc]
name=epel_ustc
baseurl=https://mirrors.ustc.edu.cn/epel/7Server/x86_64/
enabled=1
gpgcheck=0
# yum groupinstall 'Development tools' -y
# yum install htop glances -y
# yum install wget vim-enhanced openssh-server openssh-clients ntpdate xfsprogs kmod-xfs util-linux util-linux-ng tree diffstat gcc gcc-c++ make cmake zlib-devel pcre pcre-devel openssl-devel glibc-common dos2unix unix2dos acl rsyslog bash-completion man man-pages -y

1-2: Topology:
Web Clients ----->Corosync HA Cluster(Node1、Node2、Node3)----->NFS_Shared_Store(Node4)
There are four test nodes in total. The first three form the Corosync high-availability cluster and provide the Web (httpd) service; the fourth acts as shared storage. The hostnames are node1.cured.net, node2.cured.net, node3.cured.net and node4.cured.net, with IP addresses 192.168.0.71, 192.168.0.72, 192.168.0.73 and 192.168.0.74 respectively; the VIP for the web service is 192.168.0.70.

1-3: Other preparations:
1-3-1: The hosts file on each cluster node:
To turn a Linux host into an HA cluster node, the following preparation is usually required: name resolution between the hostname and IP address of every cluster node must work, and each node's hostname must match the output of the "uname -n" command. Therefore, the /etc/hosts file on all three cluster nodes must contain the entries below:
Node1-Node3:
# vim /etc/hosts
192.168.0.71 node1.cured.net node1
192.168.0.72 node2.cured.net node2
192.168.0.73 node3.cured.net node3

1-3-2: The hostname of each cluster node;
To keep the hostnames above across reboots, run a command like the following on each node (a quick verification sketch follows the commands):
Node1:
# hostnamectl set-hostname node1.cured.net
Node2:
# hostnamectl set-hostname node2.cured.net
Node3:
# hostnamectl set-hostname node3.cured.net
Node4:
# hostnamectl set-hostname node4.cured.net
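
After setting the hostnames, a quick sanity check can be run on each cluster node to confirm that the hostname matches "uname -n" and resolves through the /etc/hosts entries above (a minimal verification, assuming the entries shown earlier):
# uname -n
# getent hosts "$(uname -n)"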

1-3-3: Password-less communication between the cluster nodes;
Set up key-based SSH communication among the three nodes, which can be achieved with commands similar to the following:
Method 1:
Node1:
# ssh-keygen -t rsa -P ''
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2.cured.net
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node3.cured.net
Node2:
# ssh-keygen -t rsa -P ''
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1.cured.net
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node3.cured.net
Node3:
# ssh-keygen -t rsa -P ''
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1.cured.net
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2.cured.net
Method 2:
All Nodes:
# mkdir -pv /root/.ssh
Node1:
# ssh-keygen -t rsa -P ''
# cat .ssh/id_rsa.pub > .ssh/authorized_keys
# chmod 600 .ssh/authorized_keys
# scp -p .ssh/id_rsa .ssh/authorized_keys 192.168.0.72:/root/.ssh
# scp -p .ssh/id_rsa .ssh/authorized_keys 192.168.0.73:/root/.ssh

1-3-4: Time synchronization across the cluster nodes;
# date; ssh node2 'date'; ssh node3 'date'
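
The command above only compares the clocks. If they have drifted apart, a one-off synchronization with ntpdate (installed in 1-1) can be run on every node; the NTP server name here is only an example, substitute one reachable from your network:
All Nodes:
# ntpdate ntp1.aliyun.com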

1-3-5: Prepare the shared storage;
Node4:
# yum install nfs-utils -y
# vim /etc/exports
/www/htdocs/ 192.168.0.0/24(rw)

# mkdir -pv /www/htdocs
# echo "<h1>Share_Store Of NFS On Node4</h1>" > /www/htdocs/index.html
# systemctl start nfs.service
# showmount -e 192.168.0.74
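
Before handing the export over to the cluster, it may be worth mounting it manually from one of the cluster nodes as a throw-away test (nfs-utils is also required on the cluster nodes for the Filesystem resource used later):
Node1:
# yum install nfs-utils -y
# mount -t nfs 192.168.0.74:/www/htdocs /mnt
# cat /mnt/index.html
# umount /mnt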

Part 2: Install, Start, and Configure the Corosync High-Availability Cluster;

2-1: Install and start pcsd;
All Nodes:
# yum install -y pcs pacemaker psmisc policycoreutils-python ansible httpd
Node1:
# vim /etc/ansible/hosts
......
192.168.0.71
[ha]
192.168.0.71
192.168.0.72
192.168.0.73
# scp -p /etc/hosts 192.168.0.72:/etc/
# scp -p /etc/hosts 192.168.0.73:/etc/
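Before pushing any service tasks, a simple connectivity check confirms that ansible can reach all three nodes over the key-based SSH configured in 1-3-3:
# ansible ha -m ping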
# ansible ha -m service -a 'name=pcsd state=started enabled=yes'
# ansible ha -m service -a 'name=pacemaker state=started enabled=yes'
# ansible ha -m service -a 'name=httpd state=started enabled=yes'
# ss -tuanlp
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
udp UNCONN 0 0 192.168.0.71:52695 *:* users:(("corosync",pid=31553,fd=16))
udp UNCONN 0 0 192.168.0.71:42006 *:* users:(("corosync",pid=31553,fd=17))
udp UNCONN 0 0 192.168.0.71:5405 *:* users:(("corosync",pid=31553,fd=9))
udp UNCONN 0 0 *:111 *:* users:(("rpcbind",pid=36758,fd=8))
udp UNCONN 0 0 127.0.0.1:892 *:* users:(("rpc.statd",pid=36756,fd=5))
udp UNCONN 0 0 *:893 *:* users:(("rpcbind",pid=36758,fd=9))
udp UNCONN 0 0 *:55166 *:* users:(("rpc.statd",pid=36756,fd=8))
udp UNCONN 0 0 192.168.0.71:52132 *:* users:(("corosync",pid=31553,fd=15))
udp UNCONN 0 0 :::37939 :::* users:(("rpc.statd",pid=36756,fd=10))
udp UNCONN 0 0 :::111 :::* users:(("rpcbind",pid=36758,fd=10))
udp UNCONN 0 0 :::893 :::* users:(("rpcbind",pid=36758,fd=11))
tcp LISTEN 0 128 *:39118 *:* users:(("rpc.statd",pid=36756,fd=9))
tcp LISTEN 0 128 *:111 *:* users:(("rpcbind",pid=36758,fd=5),("systemd",pid=1,fd=53))
tcp LISTEN 0 128 *:22 *:* users:(("sshd",pid=874,fd=3))
tcp LISTEN 0 128 :::111 :::* users:(("rpcbind",pid=36758,fd=4),("systemd",pid=1,fd=43))
tcp LISTEN 0 128 :::80 :::* users:(("httpd",pid=38778,fd=4),("httpd",pid=38777,fd=4),("httpd",pid=38776,fd=4),("httpd",pid=38775,fd=4),("httpd",pid=38774,fd=4),("httpd",pid=38773,fd=4))
tcp LISTEN 0 128 :::2224 :::* users:(("pcsd",pid=31426,fd=7))
tcp LISTEN 0 128 :::22 :::* users:(("sshd",pid=874,fd=4))
tcp LISTEN 0 128 :::40799 :::* users:(("rpc.statd",pid=36756,fd=11))

# ansible ha -m shell -a 'echo "cured" | passwd --stdin hacluster'

OR:
All Nodes:
# systemctl start pcsd.service
# systemctl enable pcsd.service
# systemctl start httpd.service
# systemctl enable httpd.service
# echo 'cured' | passwd --stdin hacluster

2-2: Create the cluster;
Node1:
# pcs cluster auth node1.cured.net node2.cured.net node3.cured.net -u hacluster
Username: hacluster
Password:
node1.cured.net: Authorized
node2.cured.net: Authorized
node3.cured.net: Authorized

# pcs cluster setup --name corosync_hacluster node1.cured.net node2.cured.net node3.cured.net
Destroying cluster on nodes: node1.cured.net, node2.cured.net, node3.cured.net...
node1.cured.net: Stopping Cluster (pacemaker)...
node2.cured.net: Stopping Cluster (pacemaker)...
node3.cured.net: Stopping Cluster (pacemaker)...
node3.cured.net: Successfully destroyed cluster
node1.cured.net: Successfully destroyed cluster
node2.cured.net: Successfully destroyed cluster
Sending 'pacemaker_remote authkey' to 'node1.cured.net', 'node2.cured.net', 'node3.cured.net'
node1.cured.net: successful distribution of the file 'pacemaker_remote authkey'
node2.cured.net: successful distribution of the file 'pacemaker_remote authkey'
node3.cured.net: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
node1.cured.net: Succeeded
node2.cured.net: Succeeded
node3.cured.net: Succeeded
Synchronizing pcsd certificates on nodes node1.cured.net, node2.cured.net, node3.cured.net...
node3.cured.net: Success
node2.cured.net: Success
node1.cured.net: Success
Restarting pcsd on the nodes in order to reload the certificates...
node3.cured.net: Success
node2.cured.net: Success
node1.cured.net: Success

2-3: Start the cluster;
Node1 (or any cluster node):
# pcs cluster start --all
node1.cured.net: Starting Cluster...
node2.cured.net: Starting Cluster...
node3.cured.net: Starting Cluster...
Note: the command above is equivalent to running the following on each node:
# systemctl start corosync.service
# systemctl start pacemaker.service
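
To confirm that both daemons are actually running on every node, the ansible inventory from 2-1 can be reused (a quick check, assuming the [ha] group defined earlier):
# ansible ha -m shell -a 'systemctl is-active corosync pacemaker'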

2-4: Check the cluster status;
2-4-1: Check the communication status of each node ("no faults" means OK):
# corosync-cfgtool -s
Printing ring status.
Local node ID 1
RING ID 0
id = 192.168.0.71
status = ring 0 active with no faults

2-4-2: Check cluster membership and the Quorum API;
# corosync-cmapctl | grep members
runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(192.168.0.71)
runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.1.status (str) = joined
runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(192.168.0.72)
runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.2.status (str) = joined
runtime.totem.pg.mrp.srp.members.3.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.3.ip (str) = r(0) ip(192.168.0.73)
runtime.totem.pg.mrp.srp.members.3.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.3.status (str) = joined

# pcs status corosync
Membership information
----------------------
Nodeid Votes Name
1 1 node1.cured.net (local)
2 1 node2.cured.net
3 1 node3.cured.net

2-4-3: Check that pacemaker has started;
# ps axf
…… ……
4777 ? Ss 0:00 /usr/sbin/pacemakerd -f
4778 ? Ss 0:00 \_ /usr/libexec/pacemaker/cib
4779 ? Ss 0:00 \_ /usr/libexec/pacemaker/stonithd
4780 ? Ss 0:00 \_ /usr/libexec/pacemaker/lrmd
4781 ? Ss 0:00 \_ /usr/libexec/pacemaker/attrd
4782 ? Ss 0:00 \_ /usr/libexec/pacemaker/pengine
4783 ? Ss 0:00 \_ /usr/libexec/pacemaker/crmd

2-4-4: View the pacemaker cluster status;
# pcs status
Cluster name: corosync_hacluster
WARNING: no stonith devices and stonith-enabled is not false
Stack: corosync
Current DC: node1.cured.net (version 1.1.16-12.el7_4.5-94ff4df) - partition with quorum
Last updated: Sun Dec 17 00:16:51 2017
Last change: Sun Dec 17 00:15:05 2017 by hacluster via crmd on node1.cured.net
3 nodes configured
0 resources configured
Online: [ node1.cured.net node2.cured.net node3.cured.net ]
No resources
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled

Note: the WARNING appears because the cluster has the stonith-enabled property turned on but no stonith devices configured. The same error can also be detected with the crm_verify command.
# crm_verify -L -V
error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity

Note: the cluster's stonith feature can be disabled with the following command; after that, re-running the check no longer reports the error.
# pcs property set stonith-enabled=false
# crm_verify -L -V
# pcs property list
Cluster Properties:
cluster-infrastructure: corosync
cluster-name: corosync_hacluster
dc-version: 1.1.16-12.el7_4.5-94ff4df
have-watchdog: false
stonith-enabled: false

2-5: A brief introduction to the pcs command;
The pcs command manages the full life cycle of a pacemaker cluster; each management function is implemented by a corresponding subcommand.
cluster:Configure cluster options and nodes
resource:Manage cluster resources
stonith:Configure fence devices
constraint:Set resource constraints
property:Set pacemaker properties
acl:Set pacemaker access control lists
status:View cluster status
config:View and manage cluster configuration

2-6: Configure cluster properties;
The pacemaker cluster has a number of global properties; the stonith-enabled property modified above is one of them. The property subcommand of pcs displays or sets these properties. The following command prints its detailed usage help.
# pcs property --help
Its related management commands are:
list|show [<property> | --all | --defaults]
set [--force] [--node <nodename>] <property>=[<value>]
unset [--node <nodename>] <property>
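
As a short usage example of the subcommands listed above (the maintenance-mode property is used here purely for illustration):
# pcs property show stonith-enabled
# pcs property set maintenance-mode=true
# pcs property unset maintenance-mode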

2-6-1: Create and delete resources such as the VIP, the shared storage and the web service;
# pcs status
# pcs resource show
# pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.70 cidr_netmask=32 nic=ens32 op monitor interval=30s
# pcs resource create VIP ocf:heartbeat:IPaddr2 ip=192.168.0.70 op monitor interval=20s timeout=10s
# pcs resource show
VirtualIP (ocf::heartbeat:IPaddr2): Started node1.cured.net
VIP (ocf::heartbeat:IPaddr2): Started node2.cured.net
# pcs resource delete VirtualIP
# pcs resource show
VIP (ocf::heartbeat:IPaddr2): Started node2.cured.net
# pcs resource create shared_store ocf:heartbeat:Filesystem device="192.168.0.74:/www/htdocs" directory="/var/www/html" fstype="nfs" op start timeout=60s op stop timeout=60s op monitor interval=20s timeout=40s
# pcs resource create webserver systemd:httpd op monitor interval=30s timeout=20s

Note: without constraints, the resources are spread evenly across the nodes:
# pcs resource show
VIP (ocf::heartbeat:IPaddr2): Started node1.cured.net
shared_store (ocf::heartbeat:Filesystem): Started node2.cured.net
webserver (systemd:httpd): Started node3.cured.net

2-6-2: Put all resources into one group, so that a single node configures the VIP, mounts the shared storage, and starts the web service (a quick end-to-end check follows the status output below);
# pcs resource group add webservice VIP shared_store webserver
# pcs status
Cluster name: corosync_hacluster
Stack: corosync
Current DC: node1.cured.net (version 1.1.16-12.el7_4.5-94ff4df) - partition with quorum
Last updated: Sun Dec 17 02:56:02 2017
Last change: Sun Dec 17 02:55:52 2017 by root via cibadmin on node1.cured.net
3 nodes configured
3 resources configured
Online: [ node1.cured.net node2.cured.net node3.cured.net ]
Full list of resources:
Resource Group: webservice
VIP (ocf::heartbeat:IPaddr2): Started node1.cured.net
shared_store (ocf::heartbeat:Filesystem): Started node1.cured.net
webserver (systemd:httpd): Started node1.cured.net
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
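
At this point the whole stack can be checked end to end from any host on the 192.168.0.0/24 network; since the document root is the NFS export prepared in 1-3-5, the request should return the page created on Node4:
# curl http://192.168.0.70
<h1>Share_Store Of NFS On Node4</h1>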

2-6-3: Put Node1 into standby and watch the resources fail over;
# pcs cluster standby node1.cured.net
# pcs status
Cluster name: corosync_hacluster
Stack: corosync
Current DC: node1.cured.net (version 1.1.16-12.el7_4.5-94ff4df) - partition with quorum
Last updated: Sun Dec 17 02:59:52 2017
Last change: Sun Dec 17 02:59:10 2017 by root via cibadmin on node1.cured.net
3 nodes configured
3 resources configured
Node node1.cured.net: standby
Online: [ node2.cured.net node3.cured.net ]
Full list of resources:
Resource Group: webservice
VIP (ocf::heartbeat:IPaddr2): Started node2.cured.net
shared_store (ocf::heartbeat:Filesystem): Started node2.cured.net
webserver (systemd:httpd): Started node2.cured.net
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled

2-6-4: Put Node2 into standby as well and watch the resources fail over again;
# pcs cluster standby node2.cured.net
# pcs status
Cluster name: corosync_hacluster
Stack: corosync
Current DC: node1.cured.net (version 1.1.16-12.el7_4.5-94ff4df) - partition with quorum
Last updated: Sun Dec 17 03:01:44 2017
Last change: Sun Dec 17 03:00:35 2017 by root via cibadmin on node1.cured.net
3 nodes configured
3 resources configured
Node node1.cured.net: standby
Node node2.cured.net: standby
Online: [ node3.cured.net ]
Full list of resources:
Resource Group: webservice
VIP (ocf::heartbeat:IPaddr2): Started node3.cured.net
shared_store (ocf::heartbeat:Filesystem): Started node3.cured.net
webserver (systemd:httpd): Started node3.cured.net
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled

2-6-5: Bring the two standby nodes back online;
# pcs cluster unstandby node1.cured.net
# pcs cluster unstandby node2.cured.net
# pcs status
Cluster name: corosync_hacluster
Stack: corosync
Current DC: node1.cured.net (version 1.1.16-12.el7_4.5-94ff4df) - partition with quorum
Last updated: Sun Dec 17 03:02:27 2017
Last change: Sun Dec 17 03:02:24 2017 by root via cibadmin on node1.cured.net
3 nodes configured
3 resources configured
Online: [ node1.cured.net node2.cured.net node3.cured.net ]
Full list of resources:
Resource Group: webservice
VIP (ocf::heartbeat:IPaddr2): Started node3.cured.net
shared_store (ocf::heartbeat:Filesystem): Started node3.cured.net
webserver (systemd:httpd): Started node3.cured.net
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled

2-6-6: Define a location constraint;
For example, if Node1 has better hardware, a location constraint can make the resources prefer Node1, so that once Node1 is repaired and back online the web service automatically moves back to it:
# pcs constraint location show
Location Constraints:
# pcs constraint location add webservice_prefer_node1 webservice node1.cured.net 100
# pcs constraint location show
Location Constraints:
Resource: webservice
Enabled on: node1.cured.net (score:100)

# pcs status
Cluster name: corosync_hacluster
Stack: corosync
Current DC: node1.cured.net (version 1.1.16-12.el7_4.5-94ff4df) - partition with quorum
Last updated: Sun Dec 17 03:09:37 2017
Last change: Sun Dec 17 03:08:48 2017 by root via cibadmin on node1.cured.net
3 nodes configured
3 resources configured
Online: [ node1.cured.net node2.cured.net node3.cured.net ]
Full list of resources:
Resource Group: webservice
VIP (ocf::heartbeat:IPaddr2): Started node1.cured.net
shared_store (ocf::heartbeat:Filesystem): Started node1.cured.net
webserver (systemd:httpd): Started node1.cured.net
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled

# pcs constraint
Location Constraints:
Resource: webservice
Enabled on: node1.cured.net (score:100)
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:

# pcs constraint remove webservice_prefer_node1
# pcs constraint
Location Constraints:
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:

2-6-7: Set the default resource stickiness;
When resource stickiness is in effect and outweighs any location preference, resources stay on the node where they are currently running; after a node that was in standby is brought back online, the resources do not migrate back to it (see the check after the status output below).
Node1:
# pcs property set default-resource-stickiness=100
# pcs cluster standby node1.cured.net
# pcs status
Cluster name: corosync_hacluster
Stack: corosync
Current DC: node1.cured.net (version 1.1.16-12.el7_4.5-94ff4df) - partition with quorum
Last updated: Sun Dec 17 03:39:24 2017
Last change: Sun Dec 17 03:38:23 2017 by root via cibadmin on node1.cured.net
3 nodes configured
3 resources configured
Node node1.cured.net: standby
Online: [ node2.cured.net node3.cured.net ]
Full list of resources:
Resource Group: webservice
VIP (ocf::heartbeat:IPaddr2): Started node2.cured.net
shared_store (ocf::heartbeat:Filesystem): Started node2.cured.net
webserver (systemd:httpd): Started node2.cured.net
......
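
When Node1 is brought back online now, the resources are expected to stay on node2.cured.net: with the location constraint removed in 2-6-6, the stickiness value keeps each resource on the node where it is currently running. This can be confirmed as follows:
# pcs cluster unstandby node1.cured.net
# pcs status | grep Started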

2-6-8: Reference notes on pcs, its subcommands and the configuration file:
pcs:
  cluster
    auth
    setup
  resource
    describe
    list
    create
    delete
  constraint
    colocation
    order
    location
  property
    list
    set
  status
  config

# pcs property list --all
# pcs resource list lsb
# pcs resource list systemd
# pcs resource list ocf
# pcs resource describe ocf:heartbeat:IPaddr

# grep -v '^[[:space:]]*#' /etc/corosync/corosync.conf
totem {
    version: 2

    crypto_cipher: none
    crypto_hash: none

    interface {
        ringnumber: 0
        bindnetaddr: 192.168.0.0
        mcastaddr: 239.255.0.1
        mcastport: 5405
        ttl: 1
    }
}

nodelist {
    node {
        ring0_addr: 192.168.0.71
        nodeid: 1
    }
    node {
        ring0_addr: 192.168.0.72
        nodeid: 2
    }
    node {
        ring0_addr: 192.168.0.73
        nodeid: 3
    }
}

logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: no
    debug: off
    timestamp: on
    logger_subsys {
        subsys: QUORUM
        debug: off
    }
}

quorum {
    provider: corosync_votequorum
}
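
The configuration above runs with crypto_cipher and crypto_hash set to none. If encrypted cluster traffic is wanted, one common variation (shown here only as a sketch) is to generate an authkey, copy it to the other nodes, enable both options in the totem { } section, and restart the cluster:
# corosync-keygen
# scp -p /etc/corosync/authkey node2:/etc/corosync/
# scp -p /etc/corosync/authkey node3:/etc/corosync/
# vim /etc/corosync/corosync.conf
crypto_cipher: aes256
crypto_hash: sha1
# pcs cluster stop --all && pcs cluster start --all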

Part 3: A Brief Introduction to the crmsh Tool:
Name : crmsh
Summary : Pacemaker command line interface
Description :
crm shell, a Pacemaker command line interface.
Pacemaker is an advanced, scalable High-Availability cluster resource manager for Heartbeat and/or Corosync.

All Nodes:
Upload crmsh-2.1.4-1.1.x86_64.rpm, pssh-2.3.1-4.2.x86_64.rpm and python-pssh-2.3.1-4.2.x86_64.rpm to every node
# yum install *.rpm
# scp *.rpm node2:/root/
Note: many subcommands of crmsh-2.1.4 do not yet work on CentOS 7.3 (they do on CentOS 7.1), so its commands are not listed here.

# crm_mon
Stack: corosync
Current DC: node2.cured.net (version 1.1.16-12.el7_4.5-94ff4df) - partition with quorum
Last updated: Sat Dec 16 22:16:51 2017
Last change: Sat Dec 16 21:54:32 2017 by hacluster via crmd on node1.cured.net
3 nodes configured
0 resources configured
Online: [ node1.cured.net node2.cured.net node3.cured.net ]
No active resources

# crm_node -h

Node1:
# crm status
Stack: corosync
Current DC: node2.cured.net (version 1.1.16-12.el7_4.5-94ff4df) - partition with quorum
Last updated: Sat Dec 16 03:47:01 2017
Last change: Sat Dec 16 03:12:11 2017 by root via cibadmin on node1.cured.net
2 nodes configured
0 resources configured
Online: [ node1.cured.net node2.cured.net ]
No active resources

Part 4: Practice Example (detailed steps omitted; a minimal sketch follows the commands below):
web service:
vip: 192.168.0.70
httpd:
Node1-Node3:
# yum install httpd -y
# systemctl stop nginx
# echo "<h1>node1.cured.net</h1>" > /var/www/html/index.html (Node1)
# echo "<h1>node2.cured.net</h1>" > /var/www/html/index.html (Node2)
# echo "<h1>node3.cured.net</h1>" > /var/www/html/index.html (Node3)

Reprinted from: https://www.cnblogs.com/cured/p/8051333.html
