For the HAProxy and fence configuration details, you can refer to my earlier blog posts >_< ~~~

1. Introduction

  • Pacemaker is a cluster resource manager. It uses the messaging and membership capabilities provided by a cluster infrastructure layer (OpenAIS, Heartbeat, or Corosync) to detect node- or resource-level failures and recover from them, maximizing the availability of cluster services (also called resources).
  • Corosync is part of the cluster management suite and is usually combined with a resource manager. It uses a simple configuration file to define how messages are passed and which protocols are used. Although released in 2008, it is not really new software: back in 2002 there was a project called OpenAIS which, having grown too large, was split into two subprojects; the part that implements HA heartbeat messaging became Corosync, and roughly 60% of its code comes from OpenAIS.
    Corosync provides complete HA messaging functionality, but richer and more complex features still require OpenAIS. Corosync is the direction of future development and is generally the choice for new projects. For management, hb_gui provides a graphical HA administration interface; related graphical tools include the RHCS suite (luci + ricci) and the Java-based LCMC cluster manager.

2. Lab Environment

  • haproxy and pacemaker servers:

    • server1: 172.25.2.1/24
    • server2: 172.25.2.2/24
  • Backend servers:

    • server3: 172.25.2.3/24
    • server4: 172.25.2.4/24
  • Physical host: 172.25.2.250/24

Installation packages used in this experiment: https://pan.baidu.com/s/1nCyPkqyomRDHjWG__X0lcw  password: wmxq

3. Experiment

3.1 Pacemaker + Corosync configuration:

The environment on server2 should be identical to server1's; see the previous post for the haproxy configuration parameters.

3.1.1 Configure the same haproxy environment on server2 as on server1.
[root@server2 x86_64]# rpm -ivh haproxy-1.6.11-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:haproxy                ########################################### [100%]
[root@server1 ~]# scp /etc/haproxy/haproxy.cfg server2:/etc/haproxy/haproxy.cfg
root@server2's password:
haproxy.cfg                                                                       100% 1897
[root@server2 x86_64]# /etc/init.d/haproxy start
Starting haproxy:                                          [  OK  ]
3.1.2 Install the pacemaker and corosync packages
[root@server2 ~]# yum install -y pacemaker corosync
[root@server1 ~]# cd /etc/corosync/
[root@server1 corosync]# ls
corosync.conf.example  corosync.conf.example.udpu  service.d  uidgid.d
[root@server1 corosync]# cp corosync.conf.example corosync.conf
// copy the sample config
[root@server1 corosync]# vim corosync.conf
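The key edits in corosync.conf are sketched below (the values are assumptions based on this lab's 172.25.2.0/24 network; adjust bindnetaddr and mcastport to your own setup). The service stanza is what makes corosync launch pacemaker:

totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 172.25.2.0      # network the cluster binds to
                mcastaddr: 226.94.1.1        # multicast address for heartbeats
                mcastport: 5405              # multicast port
                ttl: 1
        }
}

service {
        name: pacemaker                      # start pacemaker from corosync
        ver: 0                               # ver 0 = run as a corosync plugin
}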


[root@server1 corosync]# scp corosync.conf server2:/etc/corosync/  // copy the config to server2
root@server2's password:
corosync.conf                                                    100%  480     0.5KB/s   00:00
[root@server1 ~]# yum install crmsh-1.2.6-0.rc2.2.1.x86_64.rpm  pssh-2.3.1-2.1.x86_64.rpm  -y         // install the management tools
3.1.3 View the pacemaker configuration
[root@server1 ~]# crm      // enter the management shell
crm(live)# configure
crm(live)configure# show    // view the default configuration
node server1
node server2
property $id="cib-bootstrap-options" \
        dc-version="1.1.10-14.el6-368c726" \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes="2"
crm(live)configure# 

The cluster state can also be monitored in real time from another terminal:

Server1:

[root@server1 ~]# crm_mon   // bring up the live monitor
Last updated: Sat Aug  4 15:07:13 2018
Last change: Sat Aug  4 15:00:04 2018 via crmd on server1
Stack: classic openais (with plugin)
Current DC: server1 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
0 Resources configured
// press Ctrl+C to exit the monitor
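crm_mon runs interactively; for a one-shot snapshot (handy in scripts), the -1 flag can be used:

[root@server1 ~]# crm_mon -1   // print the status once and exit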
3.1.4 Disable STONITH
[root@server1 ~]# crm      // enter the management shell
crm(live)# configure
crm(live)configure# property stonith-enabled=false
// corosync enables stonith by default, but this cluster has no stonith device yet, so we disable it for now:
crm(live)configure# commit   // save

Note: every policy change must be committed, otherwise it does not take effect.

3.1.5 Add the VIP
[root@server2 rpmbuild]# crm_verify -VL  // check the syntax
[root@server2 ~]# crm      // enter the management shell
crm(live)# configure
crm(live)configure# primitive vip ocf:heartbeat:IPaddr2 params ip=172.25.2.100 cidr_netmask=24 op monitor interval=1min
// add the VIP as a resource
crm(live)configure# commit    // save

// monitor
Last updated: Sat Aug  4 15:26:06 2018
Last change: Sat Aug  4 15:25:34 2018 via cibadmin on server1
Stack: classic openais (with plugin)
Current DC: server1 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
1 Resources configured

Online: [ server1 server2 ]

vip     (ocf::heartbeat:IPaddr2):   Started server2  // the VIP is now held by server2
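To confirm on the node itself, the address should show up on the interface (a quick check; eth0 is assumed to be the lab NIC, and the output will look roughly like this):

[root@server2 ~]# ip addr show eth0 | grep 172.25.2.100
    inet 172.25.2.100/24 scope global secondary eth0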

[root@server2 ~]# /etc/init.d/corosync stop   // stop the service on server2
Signaling Corosync Cluster Engine (corosync) to terminate:  [  OK  ]

Server1:
Last updated: Sat Aug  4 15:28:31 2018
Last change: Sat Aug  4 15:25:34 2018 via cibadmin on server1
Stack: classic openais (with plugin)
Current DC: server1 - partition WITHOUT quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
1 Resources configured

Online: [ server1 ]
OFFLINE: [ server2 ]

vip     (ocf::heartbeat:IPaddr2):   Started server1
[root@server2 ~]# /etc/init.d/corosync start  // start the service again
Starting Corosync Cluster Engine (corosync):               [  OK  ]

Server1:
Last updated: Sat Aug  4 15:31:27 2018
Last change: Sat Aug  4 15:25:34 2018 via cibadmin on server1
Stack: classic openais (with plugin)
Current DC: server1 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
1 Resources configured

Online: [ server1 server2 ]   // server2 is back online

vip     (ocf::heartbeat:IPaddr2):   Started server1
3.1.6 Set the quorum policy
[root@server2 x86_64]# crm
crm(live)# configure
crm(live)configure# show
node server1
node server2
primitive vip ocf:heartbeat:IPaddr2 \
        params ip="172.25.2.100" cidr_netmask="24" \
        op monitor interval="1min"
property $id="cib-bootstrap-options" \
        dc-version="1.1.10-14.el6-368c726" \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes="2" \
        stonith-enabled="false"
crm(live)configure# property no-quorum-policy=ignore  // in a two-node cluster, losing one node loses quorum; ignore it so resources keep running
crm(live)configure# verify   // check the syntax
crm(live)configure# commit  // save
3.1.7 Add haproxy:
[root@server1 ~]# crm
crm(live)# configure
crm(live)configure# primitive haproxy lsb:haproxy op monitor interval=1min
crm(live)configure# commit
Last updated: Sat Aug  4 15:45:04 2018
Last change: Sat Aug  4 15:44:58 2018 via cibadmin on server1
Stack: classic openais (with plugin)
Current DC: server2 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
2 Resources configured

Online: [ server1 server2 ]

vip     (ocf::heartbeat:IPaddr2):   Started server2
haproxy (lsb:haproxy):  Started server1 // in the monitor, haproxy is now running on server1
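Since pacemaker now controls haproxy through its init script, the system should not also start it at boot (a suggested precaution not shown in the original run; chkconfig is the EL6 tool assumed here):

[root@server1 ~]# chkconfig haproxy off   // let pacemaker, not init, decide where haproxy runs
[root@server2 ~]# chkconfig haproxy off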
3.1.8 Create a group combining vip and haproxy
[root@server1 ~]# crm
crm(live)# configure
crm(live)configure# group hagroup vip haproxy
crm(live)configure# commit 
Last updated: Sat Aug  4 15:46:21 2018
Last change: Sat Aug  4 15:46:02 2018 via cibadmin on server1
Stack: classic openais (with plugin)
Current DC: server2 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
2 Resources configured

Online: [ server1 server2 ]

 Resource Group: hagroup   // the new group
     vip        (ocf::heartbeat:IPaddr2):   Started server2
     haproxy    (lsb:haproxy):  Started server1

3.1.9 Testing
[root@server1 ~]# crm node standby   // put the current node into standby
Last updated: Sat Aug  4 16:05:30 2018
Last change: Sat Aug  4 16:05:26 2018 via crm_attribute on server1
Stack: classic openais (with plugin)
Current DC: server2 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
2 Resources configured

Node server1: standby
Online: [ server2 ]     // only server2 is online

 Resource Group: hagroup
     vip        (ocf::heartbeat:IPaddr2):   Started server2
     haproxy    (lsb:haproxy):  Started server2  // both resources now run on server2

[root@server1 ~]# crm node online   // bring the node back online
Last updated: Sat Aug  4 16:15:27 2018
Last change: Sat Aug  4 16:15:11 2018 via crm_attribute on server1
Stack: classic openais (with plugin)
Current DC: server2 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
2 Resources configured

[root@server2 ~]# crm_mon
Online: [ server1 server2 ]    // both nodes online again

 Resource Group: hagroup
     vip        (ocf::heartbeat:IPaddr2):   Started server2
     haproxy    (lsb:haproxy):  Started server2   // the resources remain on server2

3.2 Configure fencing:

1. On server1 and server2

[root@server1 ~]# yum install fence-virt-0.2.3-15.el6.x86_64 -y  // install the fence agent

Physical host:

[root@foundation2 cluster]# scp  fence_xvm.key server1:/etc/cluster/
root@server1's password:    // copy the key
fence_xvm.key                                                                     100%  128     0.1
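The key must be present on both cluster nodes, and fence_virtd must be running on the physical host. A sketch of the remaining steps (create /etc/cluster first if it does not exist; the systemctl form assumes a systemd host):

[root@server2 ~]# mkdir -p /etc/cluster
[root@foundation2 cluster]# scp fence_xvm.key server2:/etc/cluster/
[root@foundation2 cluster]# systemctl start fence_virtd   // 'service fence_virtd start' on a non-systemd host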

2. Enable stonith for fencing

[root@server1 ~]# crm
crm(live)# configure
crm(live)configure# show
node server1 \
        attributes standby="off"
node server2
primitive haproxy lsb:haproxy \
        op monitor interval="1min"
primitive vip ocf:heartbeat:IPaddr2 \
        params ip="172.25.2.100" cidr_netmask="24" \
        op monitor interval="1min"
group hagroup vip haproxy
property $id="cib-bootstrap-options" \
        dc-version="1.1.10-14.el6-368c726" \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"
crm(live)configure# property stonith-enabled=true  // turn fencing back on
crm(live)configure# commit

3. Create the vmfence resource

[root@server2 ~]# crm
crm(live)# configure
crm(live)configure# primitive vmfence          // pressing Tab shows the available resource classes
lsb:      ocf:      service:  stonith:
crm(live)configure# primitive vmfence stonith:fence_
fence_legacy   fence_pcmk     fence_virt     fence_xvm
crm(live)configure# primitive vmfence stonith:fence_xvm
meta     op       params
crm(live)configure# primitive vmfence stonith:fence_xvm params
action=                ipport=                pcmk_list_action=      pcmk_off_retries=      pcmk_status_timeout=
auth=                  key_file=              pcmk_list_retries=     pcmk_off_timeout=      port=
debug=                 multicast_address=     pcmk_list_timeout=     pcmk_reboot_action=    priority=
delay=                 pcmk_host_argument=    pcmk_monitor_action=   pcmk_reboot_retries=   retrans=
domain=                pcmk_host_check=       pcmk_monitor_retries=  pcmk_reboot_timeout=   stonith-timeout=
hash=                  pcmk_host_list=        pcmk_monitor_timeout=  pcmk_status_action=    timeout=
ip_family=             pcmk_host_map=         pcmk_off_action=       pcmk_status_retries=   use_uuid=
crm(live)configure# primitive vmfence stonith:fence_xvm params pcmk_host_map="server1:westos1;server2:westos2;" op monitor interval=1min
// the name after each node (westos1, westos2) must match the VM's name on the physical host
crm(live)configure# commit


Monitor:

Last updated: Sat Aug  4 16:36:52 2018
Last change: Sat Aug  4 16:36:00 2018 via cibadmin on server2
Stack: classic openais (with plugin)
Current DC: server2 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
3 Resources configured

Online: [ server1 server2 ]

 Resource Group: hagroup
     vip        (ocf::heartbeat:IPaddr2):   Started server2
     haproxy    (lsb:haproxy):  Started server2
vmfence (stonith:fence_xvm):    Started server1   // the fence resource has been added
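Before relying on it, the fence path can be exercised by hand from a cluster node (a sketch; westos1 is the VM name taken from pcmk_host_map above, and this really will power-cycle server1):

[root@server2 ~]# fence_xvm -H westos1   // default action is reboot; server1's VM should restart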

3.3 Testing

[root@server1 ~]# echo c >/proc/sysrq-trigger  // crash the kernel on server1
Last updated: Sat Aug  4 16:39:20 2018
Last change: Sat Aug  4 16:36:00 2018 via cibadmin on server2
Stack: classic openais (with plugin)
Current DC: server2 - partition WITHOUT quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
3 Resources configured

Online: [ server2 ]
OFFLINE: [ server1 ]

 Resource Group: hagroup
     vip        (ocf::heartbeat:IPaddr2):   Started server2
     haproxy    (lsb:haproxy):  Started server2
vmfence (stonith:fence_xvm):    Started server2  // vmfence has moved to server2


server1 is rebooted by the fence device.

[root@server1 ~]# /etc/init.d/corosync start  // start the service after the reboot
Starting Corosync Cluster Engine (corosync):               [  OK  ]

Last updated: Sat Aug  4 16:40:58 2018
Last change: Sat Aug  4 16:36:00 2018 via cibadmin on server2
Stack: classic openais (with plugin)
Current DC: server2 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
3 Resources configured

Online: [ server1 server2 ]

 Resource Group: hagroup
     vip        (ocf::heartbeat:IPaddr2):   Started server2
     haproxy    (lsb:haproxy):  Started server2
vmfence (stonith:fence_xvm):    Started server1  // vmfence moves back to server1

4. Access the VIP
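A minimal check from the physical host (assuming server3 and server4 serve pages that identify themselves, as set up in the earlier haproxy post): repeated requests to the VIP should be balanced across the backends and keep working when a cluster node fails.

[root@foundation2 ~]# curl 172.25.2.100   // answered by server3
[root@foundation2 ~]# curl 172.25.2.100   // answered by server4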
