1. Prepare the environment

1.1. Configure the yum repository file

mount -t auto /dev/cdrom /mnt
rm -rf /etc/yum.repos.d/
mkdir -p /etc/yum.repos.d/
cat >> /etc/yum.repos.d/rhel-Media.repo <<EOF
# rhel-Media.repo
#
#  This repo can be used with mounted DVD media, verify the mount point for
#  rhel-7.  You can use this repo and yum to install items directly off the
#  DVD ISO that we release.
#
# To use this repo, put in your DVD and use it with the other repos too:
#  yum --enablerepo=c7-media [command]
#
# or for ONLY the media repo, do this:
#
#  yum --disablerepo=\* --enablerepo=rhel7-media [command]

[rhel7-media]
name=rhel-$releasever - Media
baseurl=file:///mnt/
gpgcheck=0
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rhel-7
EOF

yum clean all
yum makecache
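The repo stanza above can also be generated for any mount point with a small wrapper — a sketch using a hypothetical `make_media_repo` helper (not part of the original steps), keeping the same `[rhel7-media]` section id:

```shell
# Hypothetical helper: emit a media repo stanza for an arbitrary mount point,
# so the same heredoc content can be reused for other ISO mounts.
make_media_repo() {
    local mountpoint="$1"
    cat <<EOF
[rhel7-media]
name=rhel-\$releasever - Media
baseurl=file://${mountpoint}/
gpgcheck=0
enabled=1
EOF
}

# Print the stanza for the /mnt mount used in this guide.
make_media_repo /mnt
```

Redirect the output into `/etc/yum.repos.d/rhel-Media.repo` to reproduce the file created above.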

1.2. Install dependency packages

mount -t auto /dev/cdrom /mnt
yum -y install gcc make binutils gcc-c++ compat-libstdc++-33 elfutils-libelf-devel elfutils-libelf-devel-static ksh libaio libaio-devel numactl-devel sysstat unixODBC unixODBC-devel pcre-devel libXext* unzip chrony pacemaker pcs fence-agents-all httpd sbd device-mapper device-mapper-multipath
yum install pcs-0.9.162-5.el7.x86_64.rpm pacemaker-1.1.18-11.el7.x86_64.rpm pacemaker-cli-1.1.18-11.el7.x86_64.rpm pacemaker-cluster-libs-1.1.18-11.el7.x86_64.rpm corosync-2.4.3-2.el7.x86_64.rpm corosynclib-2.4.3-2.el7.x86_64.rpm pacemaker-libs-1.1.18-11.el7.x86_64.rpm resource-agents-3.9.5-124.el7.x86_64.rpm python-clufter-0.77.0-2.el7.noarch.rpm clufter-bin-0.77.0-2.el7.x86_64.rpm clufter-common-0.77.0-2.el7.noarch.rpm

1.3. Disable the firewall and SELinux

sed -i 's/=enforcing/=disabled/g' /etc/selinux/config
setenforce 0
getenforce
systemctl stop firewalld
systemctl disable firewalld

1.4. Pin the NIC names

cat >/etc/udev/rules.d/70-persistent-net.rules<<EOF
SUBSYSTEM=="net",ACTION=="add",DRIVERS=="?*",ATTR{address}=="00:0c:29:46:37:a8",ATTR{type}=="1",KERNEL=="eth*",NAME="eth0"
SUBSYSTEM=="net",ACTION=="add",DRIVERS=="?*",ATTR{address}=="00:0c:29:46:37:b2",ATTR{type}=="1",KERNEL=="eth*",NAME="eth1"
SUBSYSTEM=="net",ACTION=="add",DRIVERS=="?*",ATTR{address}=="00:0c:29:46:37:bc",ATTR{type}=="1",KERNEL=="eth*",NAME="eth2"
SUBSYSTEM=="net",ACTION=="add",DRIVERS=="?*",ATTR{address}=="00:0c:29:46:37:c6",ATTR{type}=="1",KERNEL=="eth*",NAME="eth3"
EOF

sed -i 's/quiet/quiet net.ifnames=0 biosdevname=0/g' /etc/default/grub
cat  /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
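When adding more interfaces, each udev rule line differs only in the MAC address and target name. A hypothetical generator (an illustration, not part of the original steps) that emits lines in exactly the format used above:

```shell
# Hypothetical generator for 70-persistent-net.rules entries:
# one rule line per (MAC address, interface name) pair.
udev_net_rule() {
    local mac="$1" name="$2"
    printf 'SUBSYSTEM=="net",ACTION=="add",DRIVERS=="?*",ATTR{address}=="%s",ATTR{type}=="1",KERNEL=="eth*",NAME="%s"\n' "$mac" "$name"
}

# Reproduce the first rule from the file above.
udev_net_rule 00:0c:29:46:37:a8 eth0
```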

1.5. Configure NIC bonding

systemctl stop NetworkManager
systemctl disable NetworkManager

cat >/etc/sysconfig/network-scripts/ifcfg-bond0<<EOF1
BOOTPROTO=static
DEVICE=bond0
IPADDR=192.168.33.10
PREFIX=24
GATEWAY=192.168.33.1
USERCTL=no
ONBOOT=yes
EOF1
cat >/etc/sysconfig/network-scripts/ifcfg-eth0<<EOF2
DEVICE=eth0
PREFIX=24
BOOTPROTO=static
MASTER=bond0
SLAVE=yes
ONBOOT=yes
EOF2
cat >/etc/sysconfig/network-scripts/ifcfg-eth2<<EOF3
DEVICE=eth2
PREFIX=24
BOOTPROTO=static
MASTER=bond0
SLAVE=yes
ONBOOT=yes
EOF3

cat >/etc/sysconfig/network-scripts/ifcfg-bond1<<EOF1
BOOTPROTO=static
DEVICE=bond1
IPADDR=192.168.1.10
PREFIX=24
USERCTL=no
ONBOOT=yes
EOF1
cat >/etc/sysconfig/network-scripts/ifcfg-eth1<<EOF2
DEVICE=eth1
PREFIX=24
BOOTPROTO=static
MASTER=bond1
SLAVE=yes
ONBOOT=yes
EOF2
cat >/etc/sysconfig/network-scripts/ifcfg-eth3<<EOF3
DEVICE=eth3
PREFIX=24
BOOTPROTO=static
MASTER=bond1
SLAVE=yes
ONBOOT=yes
EOF3

cat >/etc/sysconfig/network-scripts/ifcfg-bond2<<EOF1
BOOTPROTO=static
DEVICE=bond2
IPADDR=19.21.68.10
PREFIX=24
USERCTL=no
ONBOOT=yes
EOF1
cat >/etc/sysconfig/network-scripts/ifcfg-eth4<<EOF2
DEVICE=eth4
PREFIX=24
BOOTPROTO=static
MASTER=bond2
SLAVE=yes
ONBOOT=yes
EOF2
cat >/etc/sysconfig/network-scripts/ifcfg-eth5<<EOF3
DEVICE=eth5
PREFIX=24
BOOTPROTO=static
MASTER=bond2
SLAVE=yes
ONBOOT=yes
EOF3

cat >/etc/modprobe.d/bonding.conf<<EOF4
alias bond0 bonding
alias bond1 bonding
alias bond2 bonding
options bond0 miimon=100 mode=1 primary=eth0
options bond1 miimon=100 mode=1 primary=eth1
options bond2 miimon=100 mode=1 primary=eth4
EOF4
echo "ifenslave bond0 eth0 eth2">>/etc/rc.d/rc.local
echo "ifenslave bond1 eth1 eth3">>/etc/rc.d/rc.local
echo "ifenslave bond2 eth4 eth5">>/etc/rc.d/rc.local
echo "/etc/init.d/network restart">>/etc/rc.d/rc.local
chmod 550 /etc/rc.d/rc.local
chmod 550 /etc/rc.local
reboot
cat /proc/net/bonding/bond0
cat /proc/net/bonding/bond1
cat /proc/net/bonding/bond2
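After the reboot, `/proc/net/bonding/bondX` reports the bond state; in mode 1 (active-backup), the key field is "Currently Active Slave". A small hypothetical helper to pull that field out, demonstrated on a sample that mimics the kernel's output format:

```shell
# Hypothetical helper: extract the currently active slave from
# /proc/net/bonding-style output (read from stdin).
active_slave() {
    awk -F': ' '/Currently Active Slave/ {print $2}'
}

# Sample text in the kernel's format for an active-backup bond (assumed, for
# illustration only; on a real host use: active_slave < /proc/net/bonding/bond0).
sample='Ethernet Channel Bonding Driver: v3.7.1
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth0 (primary_reselect always)
Currently Active Slave: eth0
MII Status: up'

printf '%s\n' "$sample" | active_slave
```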

1.6. Configure /etc/hosts

hostnamectl set-hostname xmcs01
cat >>/etc/hosts<<EOF
192.168.33.10 xmcs01
192.168.33.11 xmcs02
192.168.33.12 ha-core-db
19.21.68.10 xmcs01hb
19.21.68.11 xmcs02hb
192.168.1.10 xmcs01fc
192.168.1.11 xmcs02fc
EOF

cat /etc/hosts
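The cluster later relies on all seven names (public, heartbeat, and fc) resolving. A hypothetical sanity check over the hosts entries just added — here run against an inline copy of the block so it is self-contained:

```shell
# Inline copy of the /etc/hosts block added above (for a self-contained check;
# on a real node you could read /etc/hosts instead).
hosts='192.168.33.10 xmcs01
192.168.33.11 xmcs02
192.168.33.12 ha-core-db
19.21.68.10 xmcs01hb
19.21.68.11 xmcs02hb
192.168.1.10 xmcs01fc
192.168.1.11 xmcs02fc'

# Verify every expected hostname appears as the second field of some entry.
missing=0
for h in xmcs01 xmcs02 ha-core-db xmcs01hb xmcs02hb xmcs01fc xmcs02fc; do
    printf '%s\n' "$hosts" | awk -v n="$h" '$2 == n {found=1} END {exit !found}' \
        || { echo "missing: $h"; missing=1; }
done
echo "missing=$missing"
```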

1.7. Configure the shared disk

yum install device-mapper-multipath

Enable the service at boot:
systemctl enable multipathd.service

Add the configuration file. Only the following settings are needed for multipath to work; for the full set of options, see the Multipath documentation.

# vi /etc/multipath.conf
blacklist {
    devnode "^sda"
}
defaults {
    user_friendly_names yes
    path_grouping_policy multibus
    failback immediate
    no_path_retry fail
    find_multipaths yes
    reservation_key 0x1
}

Start the service:
systemctl restart multipathd.service

Partition the shared disk (fdisk /dev/sdb: n, p, 1, t, 8e, w), then create the PV, VG, and LV and mount the filesystem:
fdisk /dev/sdb
pvcreate /dev/sdb1
pvdisplay /dev/sdb1
vgcreate vg01 /dev/sdb1
lvcreate -l 100%VG -n lvol1 vg01
mkfs.xfs /dev/vg01/lvol1
mkdir /data
mount /dev/vg01/lvol1 /data

2. Initialize the cluster

2.1. Start the services

Start the services on every node:

systemctl enable chronyd
systemctl start chronyd
systemctl status chronyd
echo "redhat" | passwd --stdin hacluster
systemctl start pcsd
systemctl enable pcsd
systemctl status pcsd.service

2.2. Authenticate the hacluster user

# Node 1: authenticate the cluster nodes
[root@xmcs01 ~]# pcs cluster auth xmcs01 xmcs02 -u hacluster -p redhat
xmcs01: Authorized
xmcs02: Authorized

2.3. Create the cluster

# Node 1: generate the cluster configuration files; the other node receives them automatically
[root@xmcs01 ~]# pcs cluster setup --name mycluster xmcs01 xmcs02
Destroying cluster on nodes: xmcs01, xmcs02...
xmcs01: Stopping Cluster (pacemaker)...
xmcs02: Stopping Cluster (pacemaker)...
xmcs01: Successfully destroyed cluster
xmcs02: Successfully destroyed cluster
Sending 'pacemaker_remote authkey' to 'xmcs01', 'xmcs02'
xmcs01: successful distribution of the file 'pacemaker_remote authkey'
xmcs02: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
xmcs01: Succeeded
xmcs02: Succeeded
Synchronizing pcsd certificates on nodes xmcs01, xmcs02...
xmcs01: Success
xmcs02: Success
Restarting pcsd on the nodes in order to reload the certificates...
xmcs01: Success
xmcs02: Success

[root@xmcs01 ~]# cat /etc/corosync/corosync.conf
totem {
    version: 2
    cluster_name: mycluster
    secauth: off
    transport: udpu
}
nodelist {
    node {
        ring0_addr: xmcs01
        nodeid: 1
    }
    node {
        ring0_addr: xmcs02
        nodeid: 2
    }
}
quorum {
    provider: corosync_votequorum
    two_node: 1
}
logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}

2.4. Configure web login

vi /usr/lib/pcsd/ssl.rb
webrick_options = {
  :Port               => 2224,
  :BindAddress        => '0.0.0.0',
  :Host               => '0.0.0.0',
  :SSLEnable          => true,
  :SSLVerifyClient    => OpenSSL::SSL::VERIFY_NONE,
  :SSLCertificate     => OpenSSL::X509::Certificate.new(crt),
  :SSLPrivateKey      => OpenSSL::PKey::RSA.new(key),
  :SSLCertName        => [[ "CN", server_name ]],
  :SSLOptions         => get_ssl_options(),
}

scp /usr/lib/pcsd/ssl.rb xmcs02:/usr/lib/pcsd/ssl.rb
pcs cluster start --all
pcs cluster enable --all
systemctl enable pcsd
systemctl restart pcsd
# netstat -tunlp     # check whether port 2224 is listening
# Log in via the web UI: https://192.168.33.10:2224
# Username / password: hacluster / redhat

2.5. Configure multiple heartbeat rings


[root@xmcs01 ~]# cat /etc/corosync/corosync.conf
totem {
    version: 2
    cluster_name: mycluster
    secauth: off
    transport: udpu
    rrp_mode: passive
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.33.0
        broadcast: yes
        mcastport: 5546
    }
    interface {
        ringnumber: 1
        bindnetaddr: 19.21.68.0
        broadcast: yes
        mcastport: 5547
    }
}
nodelist {
    node {
        ring0_addr: xmcs01
        nodeid: 1
    }
    node {
        ring0_addr: xmcs02
        nodeid: 2
    }
}
quorum {
    provider: corosync_votequorum
    two_node: 1
}
logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}

scp /etc/corosync/corosync.conf xmcs02:/etc/corosync/corosync.conf
pcs cluster stop --all
pcs cluster sync
pcs cluster start --all

[root@xmcs01 corosync]# corosync-cfgtool -s
Printing ring status.
Local node ID 1
RING ID 0
        id      = 192.168.68.11
        status  = ring 0 active with no faults
RING ID 1
        id      = 17.21.68.11
        status  = ring 1 active with no faults
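A quick way to confirm both rings are healthy is to count the "active with no faults" lines in `corosync-cfgtool -s` output. A hypothetical check, run here against an inline sample in the tool's format (the sample addresses are assumed for illustration):

```shell
# Sample corosync-cfgtool -s output for a two-ring setup (assumed values;
# on a real node pipe the actual command output instead).
status='Printing ring status.
Local node ID 1
RING ID 0
        id      = 192.168.33.10
        status  = ring 0 active with no faults
RING ID 1
        id      = 19.21.68.10
        status  = ring 1 active with no faults'

# Count rings reported as healthy; for this cluster the expected value is 2.
healthy=$(printf '%s\n' "$status" | grep -c 'active with no faults')
echo "healthy rings: $healthy"
```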

2.6. Enable HA-LVM

[root@xmcs01 ~]# lvmconf --enable-halvm --services --startstopservices
Warning: Stopping lvm2-lvmetad.service, but it can still be activated by: lvm2-lvmetad.socket
Removed symlink /etc/systemd/system/sysinit.target.wants/lvm2-lvmetad.socket.
[root@xmcs02 u01]# lvmconf --enable-halvm --services --startstopservices
Warning: Stopping lvm2-lvmetad.service, but it can still be activated by: lvm2-lvmetad.socket
Removed symlink /etc/systemd/system/sysinit.target.wants/lvm2-lvmetad.socket.

[root@xmcs02 lvm]# cp /etc/lvm/lvm.conf /etc/lvm/lvm.confbak
[root@xmcs02 lvm]# pwd
/etc/lvm

vi /etc/lvm/lvm.conf
volume_list = [ "centos" ]

scp /etc/lvm/lvm.conf xmcs02:/etc/lvm/lvm.conf
cp /boot/initramfs-3.10.0-862.el7.x86_64.img /root/.
dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)
reboot
lvcreate -Zn -n lvdata -l +100%FREE vg01

3. Configure the cluster

pcs resource create ha-core-db ocf:heartbeat:IPaddr2 ip=192.168.33.12 cidr_netmask=32 op monitor interval=30s
pcs resource create oracle-lvm LVM volgrpname=vg01 exclusive=yes
pcs resource create  oracle-fs Filesystem device='/dev/mapper/data-data_lv' directory=/data fstype=xfs op monitor interval=10s
pcs resource create oracledb ocf:heartbeat:oracle sid=oab home=/u01/app/oracle/product/12.2/db clear_backupmode=1 shutdown_method=immediate --group oracle_group op monitor interval=10s
pcs resource create oralsn ocf:heartbeat:oralsnr sid=oab home=/u01/app/oracle/product/12.2/db --group oracle_group op monitor interval=10s
Prevent resources from failing back:
pcs resource defaults resource-stickiness=100
pcs resource defaults

Set the default operation timeout:
pcs resource op defaults timeout=10s
pcs resource op defaults

Quorum policy:
pcs property set no-quorum-policy=ignore

Migrate the service when the cluster detects a failure:
pcs resource defaults migration-threshold=1

Constraints:
pcs constraint colocation set ha-core-db oracle-lvm oralsn oracle-fs oracledb
pcs constraint order oracle-fs then oracledb
pcs constraint

Resource location:
pcs constraint location oracledb prefers xmcs01

# Fencing configuration
Without fencing:
pcs property set stonith-enabled=false
pcs resource defaults resource-stickiness=0
pcs resource defaults

fence_scsi configuration:
pcs stonith create scsi fence_scsi pcmk_host_list="xmcs01 xmcs02" pcmk_reboot_action="off" devices="/dev/mapper/mpathb" meta provides="unfencing" --force

fence_mpath configuration:
fence_mpath --devices="/dev/mapper/mpathb" --key=1 --action=on -v
fence_mpath --devices="/dev/mapper/mpathb" --key=2 --action=on -v
pcs stonith create xmcs01-mpath fence_mpath key=1 pcmk_host_list="xmcs01" pcmk_reboot_action="reboot" devices="/dev/mapper/mpathb" meta provides="unfencing"
pcs stonith create xmcs02-mpath fence_mpath key=2 pcmk_host_list="xmcs02" pcmk_reboot_action="reboot" devices="/dev/mapper/mpathb" meta provides="unfencing"
mpathpersist -i -k -d /dev/mapper/mpathb
pcs stonith show
pcs property set stonith-enabled=true  # important!
pcs resource cleanup   oracle_group
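The `pcs resource create` invocations above all follow the same shape (name, agent, options). A hypothetical builder that assembles the command lines from parameters so the group definition can be reviewed before applying — the output is printed, not executed:

```shell
# Hypothetical command builder: print a pcs resource create invocation
# from its name, agent, and option list (illustration only).
pcs_create() {
    local name="$1" agent="$2"; shift 2
    printf 'pcs resource create %s %s %s\n' "$name" "$agent" "$*"
}

# Reproduce two of the commands used in this section.
pcs_create ha-core-db ocf:heartbeat:IPaddr2 ip=192.168.33.12 cidr_netmask=32 op monitor interval=30s
pcs_create oracle-lvm LVM volgrpname=vg01 exclusive=yes
```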

Manual switchover

pcs resource move oracle_group xmcs01

Testing:

ifdown bond0   # triggers a failover

ifup bond0     # automatic failback
