Fence devices come in two kinds: internal and external. Common internal fence devices include the IBM RSA II card, HP's iLO card, and IPMI devices; common external fence devices include UPSes, SAN switches, and network switches.

This example covers virtual machines managed by libvirtd, with a Linux host.

1. OS environment

The virtual machines run openEuler 20.03 LTS SP1 on aarch64:

# cat /etc/openEuler-release
openEuler release 20.03 (LTS-SP1)
# uname -r
4.19.90-2012.4.0.0053.oe1.aarch64

# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

172.16.72.223 hatest1
172.16.72.224 hatest2
172.16.72.229 server

Disable SELinux and firewalld, configure NTP, and set up the everything and EPOL yum repositories.
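
A minimal sketch of those preparation steps (assuming chrony provides NTP; repository configuration depends on your mirror and is omitted):

# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# systemctl disable --now firewalld
# systemctl enable --now chronyd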

2. Host setup

Host: 172.16.72.229
# dnf search fence
# dnf install fence-virt fence-virtd    (install the fence-virt and fence-virtd related packages)
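
On openEuler the listener and backend plugins ship as separate packages; a likely full install line (the exact package names are an assumption based on the fence-virt packaging convention) is:

# dnf install -y fence-virt fence-virtd fence-virtd-libvirt fence-virtd-multicast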

# mkdir /etc/cluster
# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1
    (reads 128 random bytes from /dev/urandom to use as the shared key)
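
A quick sanity check that the key has the expected 128-byte size; tightening its permissions is an extra precaution not in the original steps:

# ls -l /etc/cluster/fence_xvm.key
# chmod 600 /etc/cluster/fence_xvm.key
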
# fence_virtd -c    (interactive configuration; note this step, a full example follows below)
# systemctl restart fence_virtd.service
# systemctl status fence_virtd.service
# systemctl enable fence_virtd.service
# systemctl disable --now firewalld

Example:
# fence_virtd -c
Module search path [/usr/lib64/fence-virt]:

Available backends:
    libvirt 0.3
Available listeners:
    multicast 1.2
    tcp 0.1
    serial 0.4

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]:

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:

Using ipv4 as family.

Multicast IP Port [1229]:

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.

Interface [virbr0]: br0  # the guests' gateway device, i.e. the interface the host uses to communicate with the VMs

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]:

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [libvirt]:

Configuration complete.

=== Begin Configuration ===
backends {
    libvirt {
        uri = "qemu:///system";
    }

}

listeners {
    multicast {
        port = "1229";
        family = "ipv4";
        interface = "br0";
        address = "225.0.0.12";
        key_file = "/etc/cluster/fence_xvm.key";
    }

}

fence_virtd {
    module_path = "/usr/lib64/fence-virt";
    backend = "libvirt";
    listener = "multicast";
}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y

# fence_xvm -h
usage: fence_xvm [args]
  -d                    Specify (stdin) or increment (command line) debug level
  -i <family>           IP Family ([auto], ipv4, ipv6)                         
  -a <address>          Multicast address (default=225.0.0.12 / ff05::3:1)     
  -p <port>             TCP, Multicast, VMChannel, or VM socket port           
                        (default=1229)                                         
  -r <retrans>          Multicast retransmit time (in 1/10sec; default=20)     
  -C <auth>             Authentication (none, sha1, [sha256], sha512)          
  -c <hash>             Packet hash strength (none, sha1, [sha256], sha512)    
  -k <file>             Shared key file (default=/etc/cluster/fence_xvm.key)   
  -H <domain>           Virtual Machine (domain name) to fence                 
  -u                    Treat [domain] as UUID instead of domain name. This is
                        provided for compatibility with older fence_xvmd       
                        installations.                                         
  -o <operation>        Fencing action (null, off, on, [reboot], status, list,
                        list-status, monitor, validate-all, metadata)          
  -t <timeout>          Fencing timeout (in seconds; default=30)               
  -?                    Help (alternate)                                       
  -h                    Help                                                   
  -V                    Display version and exit                               
  -w <delay>            Fencing delay (in seconds; default=0)

With no command line argument, arguments are read from standard input.
Arguments read from standard input take the form of:

    arg1=value1
    arg2=value2

  debug                 Specify (stdin) or increment (command line) debug level
  ip_family             IP Family ([auto], ipv4, ipv6)                         
  multicast_address     Multicast address (default=225.0.0.12 / ff05::3:1)     
  ipport                TCP, Multicast, VMChannel, or VM socket port           
                        (default=1229)                                         
  retrans               Multicast retransmit time (in 1/10sec; default=20)     
  auth                  Authentication (none, sha1, [sha256], sha512)          
  hash                  Packet hash strength (none, sha1, [sha256], sha512)    
  key_file              Shared key file (default=/etc/cluster/fence_xvm.key)   
  port                  Virtual Machine (domain name) to fence                 
  use_uuid              Treat [domain] as UUID instead of domain name. This is
                        provided for compatibility with older fence_xvmd       
                        installations.                                         
  action                Fencing action (null, off, on, [reboot], status, list,
                        list-status, monitor, validate-all, metadata)          
  timeout               Fencing timeout (in seconds; default=30)               
  delay                 Fencing delay (in seconds; default=0)
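
As a minimal example of the stdin form described above, the status of a guest can be queried like this (hatest1 is the domain name used throughout this article):

# fence_xvm << EOF
action=status
port=hatest1
EOF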

3. Guest setup

Guests: 172.16.72.223 / 172.16.72.224

# mkdir /etc/cluster
# scp root@172.16.72.229:/etc/cluster/* /etc/cluster

List the fence agents available on a node:
# stonith_admin -I
# stonith_admin -M -a fence_xvm    (show detailed metadata for fence_xvm)

# which fence_xvm
/usr/sbin/fence_xvm
# rpm -qf /usr/sbin/fence_xvm
fence-virt-1.0.0-1.oe1.aarch64
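
Before wiring the agent into the cluster, verify from each guest that its multicast requests actually reach fence_virtd on the host; if the key and listener are set up correctly, this prints the same domain list as on the host:

# fence_xvm -o list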

4. Cluster configuration

Run on hatest1:

Authenticate the nodes
# systemctl start pcsd
# echo '111111' | passwd --stdin hacluster    (set the same hacluster password on both nodes)
# pcs host auth hatest1 hatest2
Username: hacluster
Password:
hatest2: Authorized
hatest1: Authorized

Create the cluster
# pcs cluster setup hacluster hatest1 addr=172.16.72.223 hatest2 addr=172.16.72.224
Destroying cluster on hosts: 'hatest1', 'hatest2'...
hatest2: Successfully destroyed cluster
hatest1: Successfully destroyed cluster
Requesting remove 'pcsd settings' from 'hatest1', 'hatest2'
hatest1: successful removal of the file 'pcsd settings'
hatest2: successful removal of the file 'pcsd settings'
Sending 'corosync authkey', 'pacemaker authkey' to 'hatest1', 'hatest2'
hatest2: successful distribution of the file 'corosync authkey'
hatest2: successful distribution of the file 'pacemaker authkey'
hatest1: successful distribution of the file 'corosync authkey'
hatest1: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'hatest1', 'hatest2'
hatest1: successful distribution of the file 'corosync.conf'
hatest2: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.

Start the cluster and enable it at boot
# pcs cluster start --all
hatest2: Starting Cluster...
hatest1: Starting Cluster...
# pcs cluster enable --all
hatest1: Cluster Enabled
hatest2: Cluster Enabled

Set cluster properties
# pcs property set no-quorum-policy=ignore
# pcs property --all |grep stonith-enabled
 stonith-enabled: true
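
If the property had been disabled, it would need to be switched back on before the fence resource can act:

# pcs property set stonith-enabled=true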

View information about the fencing resource type
# pcs stonith describe fence_xvm
fence_xvm - Fence agent for virtual machines

fence_xvm is an I/O Fencing agent which can be used with virtual machines.

Stonith options:
  debug: Specify (stdin) or increment (command line) debug level
  ip_family: IP Family ([auto], ipv4, ipv6)
  multicast_address: Multicast address (default=225.0.0.12 / ff05::3:1)
  ipport: TCP, Multicast, VMChannel, or VM socket port (default=1229)
  retrans: Multicast retransmit time (in 1/10sec; default=20)
  auth: Authentication (none, sha1, [sha256], sha512)
  hash: Packet hash strength (none, sha1, [sha256], sha512)
  key_file: Shared key file (default=/etc/cluster/fence_xvm.key)
  port: Virtual Machine (domain name) to fence
  use_uuid: Treat [domain] as UUID instead of domain name. This is provided for compatibility with older fence_xvmd installations.
  timeout: Fencing timeout (in seconds; default=30)
  delay: Fencing delay (in seconds; default=0)
  domain: Virtual Machine (domain name) to fence (deprecated; use port)
  pcmk_host_map: A mapping of host names to ports numbers for devices that do not support host names. Eg. node1:1;node2:2,3 would tell the cluster to use port 1 for node1 and ports 2 and 3 for node2
  pcmk_host_list: A list of machines controlled by this device (Optional unless pcmk_host_check=static-list).
  pcmk_host_check: How to determine which machines are controlled by the device. Allowed values: dynamic-list (query the device via the 'list' command), static-list (check the pcmk_host_list attribute), status
                   (query the device via the 'status' command), none (assume every device can fence every machine)
  pcmk_delay_max: Enable a random delay for stonith actions and specify the maximum of random delay. This prevents double fencing when using slow devices such as sbd. Use this to enable a random delay for
                  stonith actions. The overall delay is derived from this random delay value adding a static delay so that the sum is kept below the maximum delay.
  pcmk_delay_base: Enable a base delay for stonith actions and specify base delay value. This prevents double fencing when different delays are configured on the nodes. Use this to enable a static delay for
                   stonith actions. The overall delay is derived from a random delay value adding this static delay so that the sum is kept below the maximum delay.
  pcmk_action_limit: The maximum number of actions can be performed in parallel on this device Cluster property concurrent-fencing=true needs to be configured first. Then use this to specify the maximum number
                     of actions can be performed in parallel on this device. -1 is unlimited.

Default operations:
  monitor: interval=60s

Add an ordinary cluster resource
# pcs resource create dummy ocf:heartbeat:Dummy
# pcs status resources
  * dummy    (ocf::heartbeat:Dummy):     Started hatest1

Back on the host:
List the guests:
# fence_xvm -o list
hatest1                          8f38adce-fbbf-46ec-be0c-77a88a30a7e9 on
hatest2                          7d8e4f03-9d7e-4177-8e3f-98bf879e8ff3 on
The names shown here are the libvirt domain names.
Verify the on, off, reboot, status, etc. operations:
# fence_xvm -o off -H hatest1
# fence_xvm -o off -H hatest2
(then restart the two virtual machines)
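
The powered-off guests can be brought back either with virsh on the host or with the agent itself, for example:

# fence_xvm -o on -H hatest1
# fence_xvm -o on -H hatest2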

Create a vmfence fencing resource of type fence_xvm

# pcs stonith create vmfence fence_xvm pcmk_host_map="hatest1:1;hatest2:2,3" op monitor interval=30s    (for devices without host names, specify which ports control each host)
or
# pcs stonith create vmfence fence_xvm pcmk_host_map="hatest1:hatest1;hatest2:hatest2" op monitor interval=30s    (the form used in this example)
Note: each mapping entry takes the form "hostname:domain", with entries separated by ';'.
The domain is the VM name as listed by fence_xvm -o list on the host.

# pcs stonith show vmfence
Warning: This command is deprecated and will be removed. Please use 'pcs stonith config' instead.
 Resource: vmfence (class=stonith type=fence_xvm)
  Attributes: pcmk_host_map=hatest1:hatest1;hatest2:hatest2
  Operations: monitor interval=30s (vmfence-monitor-interval-30s)

# pcs status
Cluster name: hacluster
Cluster Summary:
  * Stack: corosync
  * Current DC: hatest1 (version 2.0.4-6.oe1-2deceaa3ae) - partition with quorum
  * Last updated: Mon Feb  1 11:27:14 2021
  * Last change:  Mon Feb  1 11:27:11 2021 by hacluster via crmd on hatest2
  * 2 nodes configured
  * 2 resource instances configured

Node List:
  * Online: [ hatest1 hatest2 ]

Full List of Resources:
  * dummy    (ocf::heartbeat:Dummy):     Started hatest1
  * vmfence    (stonith:fence_xvm):     Starting hatest2

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/disabled

5. Verifying the fence device

At this point the resource is running on hatest1. From hatest2, use fence_xvm to reboot the hatest1 host; the resource then fails over to hatest2.
The -H option takes the virtual machine's domain name.
# fence_xvm -H hatest1 -d -o reboot
-- args @ 0xffffe92dada8 --
  args->domain = hatest1
  args->op = 2
  args->mode = 0
  args->debug = 1
  args->timeout = 30
  args->delay = 0
  args->retr_time = 20
  args->flags = 0
  args->net.addr = 225.0.0.12
  args->net.ipaddr = (null)
  args->net.cid = 0
  args->net.key_file = /etc/cluster/fence_xvm.key
  args->net.port = 1229
  args->net.hash = 2
  args->net.auth = 2
  args->net.family = 2
  args->net.ifindex = 0
  args->serial.device = (null)
  args->serial.speed = 115200,8N1
  args->serial.address = 10.0.2.179
-- end args --

After the hatest1 VM has booted again, the status looks like this:
# pcs status
Cluster name: hacluster
Cluster Summary:
  * Stack: corosync
  * Current DC: hatest2 (version 2.0.4-6.oe1-2deceaa3ae) - partition with quorum
  * Last updated: Mon Feb  1 11:59:54 2021
  * Last change:  Mon Feb  1 11:57:42 2021 by root via cibadmin on hatest1
  * 2 nodes configured
  * 2 resource instances configured

Node List:
  * Online: [ hatest1 hatest2 ]

Full List of Resources:
  * dummy    (ocf::heartbeat:Dummy):     Started hatest2
  * vmfence    (stonith:fence_xvm):     Started hatest1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Killing the service or downing the NIC likewise causes the virtual machine to be rebooted automatically:
Force-kill the service: kill -9 `pidof httpd`
On the node holding the resource, down the NIC: ifconfig enp1s0 down
Crash the kernel: echo "c" > /proc/sysrq-trigger

Result: the node running the workload reboots automatically and the resource fails over.
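
Fencing can also be triggered through the cluster stack rather than by calling the agent directly, which exercises the vmfence resource end to end (run from the surviving node):

# pcs stonith fence hatest1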
