A previous article covered the basics of LVS load balancing. The notes below record how to deploy an LVS+Keepalived high-availability environment, in both master-backup and master-master mode:

I. LVS+Keepalived master-backup (active/standby) high-availability deployment

1) Environment preparation

LVS_Keepalived_Master      182.148.15.237
LVS_Keepalived_Backup      182.148.15.236
Real_Server1               182.148.15.233
Real_Server2               182.148.15.238
VIP                        182.148.15.239

All machines run CentOS 6.8.

Special note:
The Director Servers and the Real Servers must each have a NIC on the same physical network segment, otherwise LVS forwarding will fail.
A remote "telnet vip port" will then report:
"telnet: connect to address *.*.*.*: No route to host"

Basic network topology: the two Keepalived directors sit in front of the two Real Servers and share the VIP 182.148.15.239 (diagram omitted).

2) Installing and configuring LVS and Keepalived on LVS_Keepalived_Master and LVS_Keepalived_Backup

1) Disable SELinux and configure the firewall (do this on both LVS_Keepalived_Master and LVS_Keepalived_Backup)
[root@LVS_Keepalived_Master ~]# vim /etc/sysconfig/selinux
#SELINUX=enforcing                #comment out
#SELINUXTYPE=targeted             #comment out
SELINUX=disabled                  #add this line
[root@LVS_Keepalived_Master ~]# setenforce 0      #disable SELinux for the current session; the file change above takes effect permanently after a reboot

In the rules below, 182.148.15.0/24 is the servers' public segment and 192.168.1.0/24 is their private segment.
Important: only with the multicast rule in place can the VIP fail over correctly when MASTER or BACKUP goes down, and fail back once the failed node recovers.
[root@LVS_Keepalived_Master ~]# vim /etc/sysconfig/iptables
.......
-A INPUT -s 182.148.15.0/24 -d 224.0.0.18 -j ACCEPT      #allow traffic to the VRRP multicast address
-A INPUT -s 192.168.1.0/24 -d 224.0.0.18 -j ACCEPT
-A INPUT -s 182.148.15.0/24 -p vrrp -j ACCEPT            #allow the VRRP (Virtual Router Redundancy Protocol) protocol
-A INPUT -s 192.168.1.0/24 -p vrrp -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT

[root@LVS_Keepalived_Master ~]# /etc/init.d/iptables restart
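If you would rather not edit the file by hand, the same rules can be added at runtime and then saved. A minimal sketch, assuming the stock CentOS 6 iptables init scripts; adjust the segments to your own environment:

# allow VRRP multicast and protocol 112 on the public and private segments
iptables -I INPUT -s 182.148.15.0/24 -d 224.0.0.18 -j ACCEPT
iptables -I INPUT -s 192.168.1.0/24 -d 224.0.0.18 -j ACCEPT
iptables -I INPUT -s 182.148.15.0/24 -p vrrp -j ACCEPT
iptables -I INPUT -s 192.168.1.0/24 -p vrrp -j ACCEPT
iptables -I INPUT -m state --state NEW -p tcp --dport 80 -j ACCEPT
# write the running rules back to /etc/sysconfig/iptables
service iptables save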
2) Install LVS (on both LVS_Keepalived_Master and LVS_Keepalived_Backup)
The following packages are required:
[root@LVS_Keepalived_Master ~]# yum install -y libnl* popt*

Check whether the ipvs kernel modules are available:
[root@LVS_Keepalived_Master src]# modprobe -l |grep ipvs

Download and install ipvsadm:
[root@LVS_Keepalived_Master ~]# cd /usr/local/src/
[root@LVS_Keepalived_Master src]# wget http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.26.tar.gz
Unpack and build:
[root@LVS_Keepalived_Master src]# ln -s /usr/src/kernels/2.6.32-431.5.1.el6.x86_64/ /usr/src/linux
[root@LVS_Keepalived_Master src]# tar -zxvf ipvsadm-1.26.tar.gz
[root@LVS_Keepalived_Master src]# cd ipvsadm-1.26
[root@LVS_Keepalived_Master ipvsadm-1.26]# make && make install

LVS is now installed. Check the current (still empty) LVS virtual server table:
[root@LVS_Keepalived_Master ipvsadm-1.26]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
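As a quick sanity check (not part of the original steps), you can confirm that the ip_vs kernel module is actually loaded; running ipvsadm once normally loads it automatically:

# load the module explicitly if necessary, then verify
modprobe ip_vs
lsmod | grep ip_vs
# the kernel-side virtual server table and version live here once the module is loaded
cat /proc/net/ip_vs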
3) Write the LVS real-server script /etc/init.d/realserver (do this on both Real_Server1 and Real_Server2; the script content is identical)
[root@Real_Server1 ~]# vim /etc/init.d/realserver
#!/bin/sh
VIP=182.148.15.239
. /etc/rc.d/init.d/functions

case "$1" in
# suppress ARP replies for the VIP and bind it to the local loopback interface
start)
       /sbin/ifconfig lo down
       /sbin/ifconfig lo up
       echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
       echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
       echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
       echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
       /sbin/sysctl -p >/dev/null 2>&1
       /sbin/ifconfig lo:0 $VIP netmask 255.255.255.255 up   #bind the VIP to the loopback alias with a host mask so this real server accepts packets addressed to the VIP
       /sbin/route add -host $VIP dev lo:0
       echo "LVS-DR real server starts successfully."
       ;;
stop)
       /sbin/ifconfig lo:0 down
       /sbin/route del $VIP >/dev/null 2>&1
       echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
       echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
       echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
       echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
       echo "LVS-DR real server stopped."
       ;;
status)
       isLoOn=`/sbin/ifconfig lo:0 | grep "$VIP"`
       isRoOn=`/bin/netstat -rn | grep "$VIP"`
       if [ "$isLoOn" == "" -a "$isRoOn" == "" ]; then
           echo "LVS-DR real server is not running."
       else
           echo "LVS-DR real server is running."
       fi
       exit 3
       ;;
*)
       echo "Usage: $0 {start|stop|status}"
       exit 1
esac
exit 0

Add the script to the boot sequence:
[root@Real_Server1 ~]# chmod +x /etc/init.d/realserver
[root@Real_Server1 ~]# echo "/etc/init.d/realserver start" >> /etc/rc.d/rc.local

Start the script (note: if either real server is rebooted, make sure "service realserver start" has run, i.e. the VIP is bound to lo:0, otherwise LVS forwarding will fail!):
[root@Real_Server1 ~]# service realserver start
LVS-DR real server starts successfully.

Checking Real_Server1, the VIP is now bound to the local loopback alias lo:0:
[root@Real_Server1 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 52:54:00:D1:27:75
          inet addr:182.148.15.233  Bcast:182.148.15.255  Mask:255.255.255.224
          inet6 addr: fe80::5054:ff:fed1:2775/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:309741 errors:0 dropped:0 overruns:0 frame:0
          TX packets:27993954 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:37897512 (36.1 MiB)  TX bytes:23438654329 (21.8 GiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

lo:0      Link encap:Local Loopback
          inet addr:182.148.15.239  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
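The start branch above writes the ARP sysctls straight into /proc, so they only survive a reboot because the script is run from rc.local. As an alternative sketch (not from the original article), the same settings can be made persistent in /etc/sysctl.conf:

cat >> /etc/sysctl.conf <<'EOF'
# LVS-DR real server: do not answer or announce ARP for VIPs bound on lo
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
EOF
sysctl -p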
4) Install Keepalived (on both LVS_Keepalived_Master and LVS_Keepalived_Backup)
[root@LVS_Keepalived_Master ~]# yum install -y openssl-devel
[root@LVS_Keepalived_Master ~]# cd /usr/local/src/
[root@LVS_Keepalived_Master src]# wget http://www.keepalived.org/software/keepalived-1.3.5.tar.gz
[root@LVS_Keepalived_Master src]# tar -zvxf keepalived-1.3.5.tar.gz
[root@LVS_Keepalived_Master src]# cd keepalived-1.3.5
[root@LVS_Keepalived_Master keepalived-1.3.5]# ./configure --prefix=/usr/local/keepalived
[root@LVS_Keepalived_Master keepalived-1.3.5]# make && make install
[root@LVS_Keepalived_Master keepalived-1.3.5]# cp /usr/local/src/keepalived-1.3.5/keepalived/etc/init.d/keepalived /etc/rc.d/init.d/
[root@LVS_Keepalived_Master keepalived-1.3.5]# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@LVS_Keepalived_Master keepalived-1.3.5]# mkdir /etc/keepalived/
[root@LVS_Keepalived_Master keepalived-1.3.5]# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
[root@LVS_Keepalived_Master keepalived-1.3.5]# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
[root@LVS_Keepalived_Master keepalived-1.3.5]# echo "/etc/init.d/keepalived start" >> /etc/rc.local
[root@LVS_Keepalived_Master keepalived-1.3.5]# chmod +x /etc/rc.d/init.d/keepalived      #make the init script executable
[root@LVS_Keepalived_Master keepalived-1.3.5]# chkconfig keepalived on                   #enable at boot
[root@LVS_Keepalived_Master keepalived-1.3.5]# service keepalived start                  #start
[root@LVS_Keepalived_Master keepalived-1.3.5]# service keepalived stop                   #stop
[root@LVS_Keepalived_Master keepalived-1.3.5]# service keepalived restart                #restart
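An optional quick check (not in the original steps) that the freshly built binary is the one found on the PATH rather than an older packaged copy:

keepalived -v        #prints the keepalived version and build options
which keepalived     #should point at the /usr/sbin/keepalived copied above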
5) Configure LVS+Keepalived
First enable ip_forward on both LVS_Keepalived_Master and LVS_Keepalived_Backup:
[root@LVS_Keepalived_Master ~]# echo "1" > /proc/sys/net/ipv4/ip_forward

keepalived.conf on LVS_Keepalived_Master:
[root@LVS_Keepalived_Master ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_Master
}

vrrp_instance VI_1 {
    state MASTER               #initial state of this instance; the actual role is decided by priority. Different on the backup node
    interface eth0             #interface that carries the virtual IP
    virtual_router_id 51       #VRID; nodes with the same VRID form one group and share the VRRP multicast MAC address
    priority 100               #priority; set to 90 on the other node. Different on the backup node
    advert_int 1               #advertisement interval
    authentication {
        auth_type PASS         #authentication type, PASS or AH
        auth_pass 1111         #authentication password
    }
    virtual_ipaddress {
        182.148.15.239         #the VIP
    }
}

virtual_server 182.148.15.239 80 {
    delay_loop 6               #health-check polling interval
    lb_algo wrr                #weighted round robin; LVS scheduling algorithm: rr|wrr|lc|wlc|lblc|sh|dh
    lb_kind DR                 #LVS mode: NAT|DR|TUN; DR requires the director to have a NIC on the same segment as the real servers
    #nat_mask 255.255.255.0
    persistence_timeout 50     #session persistence time
    protocol TCP               #health-check protocol
    ## real server definitions; 80 is the service port
    real_server 182.148.15.233 80 {
        weight 3               #weight
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 182.148.15.238 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

Start keepalived:
[root@LVS_Keepalived_Master ~]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]

[root@LVS_Keepalived_Master ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:68:dc:b6 brd ff:ff:ff:ff:ff:ff
    inet 182.148.15.237/27 brd 182.148.15.255 scope global eth0
    inet 182.148.15.239/32 scope global eth0
    inet6 fe80::5054:ff:fe68:dcb6/64 scope link
       valid_lft forever preferred_lft forever

Note the change on eth0: the VIP has been assigned to the MASTER director.
Now check the LVS virtual server table; it lists the two Real Servers, the scheduling algorithm, the weights and so on. ActiveConn is the number of active connections on each Real Server.
[root@LVS_Keepalived_Master ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  182.148.15.239:80 wrr persistent 50
  -> 182.148.15.233:80            Route   3      0          0
  -> 182.148.15.238:80            Route   3      0          0
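To confirm that VRRP advertisements are really leaving the MASTER and are not being dropped by the firewall, you can watch the multicast traffic on either director. A small sketch; on older tcpdump builds replace the vrrp keyword with 'ip proto 112':

tcpdump -i eth0 -nn vrrp                 #VRRP advertisements for VRID 51 sent to 224.0.0.18
grep -i vrrp /var/log/messages | tail    #keepalived also logs its state transitions here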
keepalived.conf on LVS_Keepalived_Backup:
[root@LVS_Keepalived_Backup ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_Backup
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        182.148.15.239
    }
}

virtual_server 182.148.15.239 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    real_server 182.148.15.233 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 182.148.15.238 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

[root@LVS_Keepalived_Backup ~]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]

On LVS_Keepalived_Backup the VIP is not present: by default it lives on LVS_Keepalived_Master and only moves over to the backup when the master fails.
[root@LVS_Keepalived_Backup ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:7c:b8:f0 brd ff:ff:ff:ff:ff:ff
    inet 182.148.15.236/27 brd 182.148.15.255 scope global eth0
    inet6 fe80::5054:ff:fe7c:b8f0/64 scope link
       valid_lft forever preferred_lft forever
[root@LVS_Keepalived_Backup ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  182.148.15.239:80 wrr persistent 50
  -> 182.148.15.233:80            Route   3      0          0
  -> 182.148.15.238:80            Route   3      0          0
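It is also worth confirming that the backup really is sitting in BACKUP state instead of fighting over the VIP (the classic split-brain symptom when the 224.0.0.18/VRRP firewall rules are missing). A sketch; the exact log wording varies slightly between keepalived versions:

# on LVS_Keepalived_Backup: the instance should have entered BACKUP state
grep "VRRP_Instance(VI_1)" /var/log/messages | tail -n 5
# if both directors show the VIP in "ip addr", VRRP multicast is being blocked
ip addr show eth0 | grep 182.148.15.239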

3) Configuration on the two back-end Real Servers

Install and configure nginx on both Real Servers (the nginx installation itself is omitted here).
Configure two virtual hosts, www.wangshibo.com and www.guohuihui.com, on each Real Server. Both domains must be reachable from LVS_Keepalived_Master and LVS_Keepalived_Backup.
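Since the nginx setup is skipped in the article, here is a minimal sketch of what the two virtual hosts might look like, assuming a source build under /usr/local/nginx as used elsewhere in this article; the document roots, page contents and the extra vhosts.conf file are illustrative only and must be included from the http{} block of nginx.conf:

# create two trivial test pages (hypothetical paths; this is Real_Server1, adjust the text on Real_Server2)
mkdir -p /data/www/wangshibo /data/www/guohuihui
echo "this is page of Real_Server1:182.148.15.233 www.wangshibo.com" > /data/www/wangshibo/index.html
echo "this is page of Real_Server1:182.148.15.233 www.guohuihui.com" > /data/www/guohuihui/index.html

# two minimal server blocks, written to a file that nginx.conf includes inside http{}
cat > /usr/local/nginx/conf/vhosts.conf <<'EOF'
server {
    listen 80;
    server_name www.wangshibo.com;
    root /data/www/wangshibo;
}
server {
    listen 80;
    server_name www.guohuihui.com;
    root /data/www/guohuihui;
}
EOF

# test the configuration and (re)load nginx
/usr/local/nginx/sbin/nginx -t
/usr/local/nginx/sbin/nginx -s reload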
[root@LVS_Keepalived_Master ~]# curl http://www.wangshibo.com
this is page of Real_Server2:182.148.15.238 www.wangshibo.com
[root@LVS_Keepalived_Master ~]# curl http://www.guohuihui.com
this is page of Real_Server2:182.148.15.238 www.guohuihui.com

[root@LVS_Keepalived_Backup ~]# curl http://www.wangshibo.com
this is page of Real_Server2:182.148.15.238 www.wangshibo.com
[root@LVS_Keepalived_Backup ~]# curl http://www.guohuihui.com
this is page of Real_Server2:182.148.15.238 www.guohuihui.com

Now stop nginx on 182.148.15.238 (Real_Server2); requests for both domains are then served by Real_Server1:
[root@Real_Server2 ~]# /usr/local/nginx/sbin/nginx -s stop
[root@Real_Server2 ~]# lsof -i:80
[root@Real_Server2 ~]#

Accessing the two domains again from LVS_Keepalived_Master and LVS_Keepalived_Backup shows the traffic has moved to Real_Server1:
[root@LVS_Keepalived_Master ~]# curl http://www.wangshibo.com
this is page of Real_Server1:182.148.15.233 www.wangshibo.com
[root@LVS_Keepalived_Master ~]# curl http://www.guohuihui.com
this is page of Real_Server1:182.148.15.233 www.guohuihui.com

[root@LVS_Keepalived_Backup ~]# curl http://www.wangshibo.com
this is page of Real_Server1:182.148.15.233 www.wangshibo.com
[root@LVS_Keepalived_Backup ~]# curl http://www.guohuihui.com
this is page of Real_Server1:182.148.15.233 www.guohuihui.com

In addition, set up iptables on the two Real Servers so that port 80 only accepts traffic addressed to the VIPs:
[root@Real_Server1 ~]# vim /etc/sysconfig/iptables
......
-A INPUT -d 182.148.15.239 -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT      #in DR mode client packets arrive with the VIP as destination address, so match on -d rather than -s
[root@Real_Server1 ~]# /etc/init.d/iptables restart

4) Testing
Point the test domains www.wangshibo.com and www.guohuihui.com at the VIP 182.148.15.239; they can then be opened normally in a browser.

1) Test the LVS function (the Keepalived LVS configuration above has health checks built in: when a back-end server fails it is removed from the LVS cluster automatically, and it is added back once it recovers)
First look at the current LVS table; port 80 on both Real Servers is healthy:
[root@LVS_Keepalived_Master ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  182.148.15.239:80 wrr persistent 50
  -> 182.148.15.233:80            Route   3      0          0
  -> 182.148.15.238:80            Route   3      0          0

Now stop one Real Server, e.g. Real_Server2:
[root@Real_Server2 ~]# /usr/local/nginx/sbin/nginx -s stop

After a short while, look at the LVS table again; Real_Server2 has been kicked out of the cluster:
[root@LVS_Keepalived_Master ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  182.148.15.239:80 wrr persistent 50
  -> 182.148.15.233:80            Route   3      0          0

Finally start nginx on Real_Server2 again; LVS adds it back into the cluster:
[root@Real_Server2 ~]# /usr/local/nginx/sbin/nginx

[root@LVS_Keepalived_Master ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  182.148.15.239:80 wrr persistent 50
  -> 182.148.15.233:80            Route   3      0          0
  -> 182.148.15.238:80            Route   3      0          0

Throughout these tests, access to http://www.wangshibo.com and http://www.guohuihui.com is not affected.

2) Test Keepalived failover (the VRRP heartbeat)
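A simple way to watch this from a client while stopping and starting nginx on the real servers is to poll one of the domains in a loop (a sketch; note that with persistence_timeout 50 the requests of a single client stick to one real server for 50 seconds):

# print which real server answers, once per second
while true; do
    curl -s http://www.wangshibo.com
    sleep 1
done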
By default, the VIP lives on LVS_Keepalived_Master:
[root@LVS_Keepalived_Master ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:68:dc:b6 brd ff:ff:ff:ff:ff:ff
    inet 182.148.15.237/27 brd 182.148.15.255 scope global eth0
    inet 182.148.15.239/32 scope global eth0
    inet6 fe80::5054:ff:fe68:dcb6/64 scope link
       valid_lft forever preferred_lft forever

Now stop keepalived on LVS_Keepalived_Master; the VIP moves over to LVS_Keepalived_Backup.
[root@LVS_Keepalived_Master ~]# /etc/init.d/keepalived stop
Stopping keepalived:                                       [  OK  ]
[root@LVS_Keepalived_Master ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:68:dc:b6 brd ff:ff:ff:ff:ff:ff
    inet 182.148.15.237/27 brd 182.148.15.255 scope global eth0
    inet6 fe80::5054:ff:fe68:dcb6/64 scope link
       valid_lft forever preferred_lft forever

The system log on LVS_Keepalived_Master records the change:
[root@LVS_Keepalived_Master ~]# tail -f /var/log/messages
.............
May  8 10:19:36 LVS_Keepalived_Master Keepalived_healthcheckers[20875]: TCP connection to [182.148.15.233]:80 failed.
May  8 10:19:39 LVS_Keepalived_Master Keepalived_healthcheckers[20875]: TCP connection to [182.148.15.233]:80 failed.
May  8 10:19:39 LVS_Keepalived_Master Keepalived_healthcheckers[20875]: Check on service [182.148.15.233]:80 failed after 1 retry.
May  8 10:19:39 LVS_Keepalived_Master Keepalived_healthcheckers[20875]: Removing service [182.148.15.233]:80 from VS [182.148.15.239]:80

On LVS_Keepalived_Backup the VIP is now present:
[root@LVS_Keepalived_Backup ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:7c:b8:f0 brd ff:ff:ff:ff:ff:ff
    inet 182.148.15.236/27 brd 182.148.15.255 scope global eth0
    inet 182.148.15.239/32 scope global eth0
    inet6 fe80::5054:ff:fe7c:b8f0/64 scope link
       valid_lft forever preferred_lft forever

Now start keepalived on LVS_Keepalived_Master again; the VIP moves back:
[root@LVS_Keepalived_Master ~]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]
[root@LVS_Keepalived_Master ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:68:dc:b6 brd ff:ff:ff:ff:ff:ff
    inet 182.148.15.237/27 brd 182.148.15.255 scope global eth0
    inet 182.148.15.239/32 scope global eth0
    inet6 fe80::5054:ff:fe68:dcb6/64 scope link
       valid_lft forever preferred_lft forever

The system log on LVS_Keepalived_Master also records the VIP coming back:
[root@LVS_Keepalived_Master ~]# tail -f /var/log/messages
.............
May  8 10:23:12 LVS_Keepalived_Master Keepalived_vrrp[5863]: Sending gratuitous ARP on eth0 for 182.148.15.239
May  8 10:23:12 LVS_Keepalived_Master Keepalived_vrrp[5863]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth0 for 182.148.15.239
May  8 10:23:12 LVS_Keepalived_Master Keepalived_vrrp[5863]: Sending gratuitous ARP on eth0 for 182.148.15.239
May  8 10:23:12 LVS_Keepalived_Master Keepalived_vrrp[5863]: Sending gratuitous ARP on eth0 for 182.148.15.239
May  8 10:23:12 LVS_Keepalived_Master Keepalived_vrrp[5863]: Sending gratuitous ARP on eth0 for 182.148.15.239
May  8 10:23:12 LVS_Keepalived_Master Keepalived_vrrp[5863]: Sending gratuitous ARP on eth0 for 182.148.15.239
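A small helper for failover drills (a sketch, not from the original article): run it on both directors to see at a glance which node currently owns the VIP:

# prints the address line only on the node that currently holds 182.148.15.239
ip addr show eth0 | grep -w 182.148.15.239 && echo "VIP is on $(hostname)" || echo "VIP is NOT on $(hostname)"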

II. LVS+Keepalived master-master (active/active) high-availability deployment

Compared with the master-backup setup, the master-master setup differs only in the following:
1) The LVS layer needs two VIPs, e.g. 182.148.15.239 and 182.148.15.235.
2) Both VIPs must be bound to the lo loopback interface on the back-end real servers.
3) keepalived.conf differs from the master-backup configuration above.

The concrete master-master configuration:

1) Write the LVS real-server scripts (on both Real_Server1 and Real_Server2; the scripts are identical on both machines).
Because the real servers must bind two VIPs to the local loopback interface (on lo:0 and lo:1 respectively), two start-up scripts are needed.

[root@Real_Server1 ~]# vim /etc/init.d/realserver1
#!/bin/sh
VIP=182.148.15.239
. /etc/rc.d/init.d/functions

case "$1" in
start)
       /sbin/ifconfig lo down
       /sbin/ifconfig lo up
       echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
       echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
       echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
       echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
       /sbin/sysctl -p >/dev/null 2>&1
       /sbin/ifconfig lo:0 $VIP netmask 255.255.255.255 up
       /sbin/route add -host $VIP dev lo:0
       echo "LVS-DR real server starts successfully."
       ;;
stop)
       /sbin/ifconfig lo:0 down
       /sbin/route del $VIP >/dev/null 2>&1
       echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
       echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
       echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
       echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
       echo "LVS-DR real server stopped."
       ;;
status)
       isLoOn=`/sbin/ifconfig lo:0 | grep "$VIP"`
       isRoOn=`/bin/netstat -rn | grep "$VIP"`
       if [ "$isLoOn" == "" -a "$isRoOn" == "" ]; then
           echo "LVS-DR real server is not running."
       else
           echo "LVS-DR real server is running."
       fi
       exit 3
       ;;
*)
       echo "Usage: $0 {start|stop|status}"
       exit 1
esac
exit 0

[root@Real_Server1 ~]# vim /etc/init.d/realserver2
#!/bin/sh
VIP=182.148.15.235
. /etc/rc.d/init.d/functions

case "$1" in
start)
       /sbin/ifconfig lo down
       /sbin/ifconfig lo up
       echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
       echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
       echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
       echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
       /sbin/sysctl -p >/dev/null 2>&1
       /sbin/ifconfig lo:1 $VIP netmask 255.255.255.255 up
       /sbin/route add -host $VIP dev lo:1
       echo "LVS-DR real server starts successfully."
       ;;
stop)
       /sbin/ifconfig lo:1 down
       /sbin/route del $VIP >/dev/null 2>&1
       echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
       echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
       echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
       echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
       echo "LVS-DR real server stopped."
       ;;
status)
       isLoOn=`/sbin/ifconfig lo:1 | grep "$VIP"`
       isRoOn=`/bin/netstat -rn | grep "$VIP"`
       if [ "$isLoOn" == "" -a "$isRoOn" == "" ]; then
           echo "LVS-DR real server is not running."
       else
           echo "LVS-DR real server is running."
       fi
       exit 3
       ;;
*)
       echo "Usage: $0 {start|stop|status}"
       exit 1
esac
exit 0

Add both scripts to the boot sequence:
[root@Real_Server1 ~]# chmod +x /etc/init.d/realserver1
[root@Real_Server1 ~]# chmod +x /etc/init.d/realserver2
[root@Real_Server1 ~]# echo "/etc/init.d/realserver1 start" >> /etc/rc.d/rc.local
[root@Real_Server1 ~]# echo "/etc/init.d/realserver2 start" >> /etc/rc.d/rc.local

Start both scripts:
[root@Real_Server1 ~]# service realserver1 start
LVS-DR real server starts successfully.
[root@Real_Server1 ~]# service realserver2 start
LVS-DR real server starts successfully.

Checking Real_Server1, both VIPs are now bound to the local loopback aliases:
[root@Real_Server1 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 52:54:00:D1:27:75
          inet addr:182.148.15.233  Bcast:182.148.15.255  Mask:255.255.255.224
          inet6 addr: fe80::5054:ff:fed1:2775/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:309741 errors:0 dropped:0 overruns:0 frame:0
          TX packets:27993954 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:37897512 (36.1 MiB)  TX bytes:23438654329 (21.8 GiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

lo:0      Link encap:Local Loopback
          inet addr:182.148.15.239  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:65536  Metric:1

lo:1      Link encap:Local Loopback
          inet addr:182.148.15.235  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
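Instead of maintaining two nearly identical init scripts, the two VIPs could also be handled by one script that loops over them. A sketch under the same assumptions as the scripts above (the script name /etc/init.d/realservers is hypothetical), not part of the original article:

#!/bin/sh
# /etc/init.d/realservers - bind/unbind both LVS-DR VIPs on loopback aliases
VIPS="182.148.15.239 182.148.15.235"

set_arp() {
    echo "$1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "$2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "$1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "$2" >/proc/sys/net/ipv4/conf/all/arp_announce
}

case "$1" in
start)
    set_arp 1 2
    i=0
    for vip in $VIPS; do
        /sbin/ifconfig lo:$i $vip netmask 255.255.255.255 up
        /sbin/route add -host $vip dev lo:$i
        i=$((i+1))
    done
    echo "LVS-DR real server starts successfully."
    ;;
stop)
    i=0
    for vip in $VIPS; do
        /sbin/ifconfig lo:$i down
        /sbin/route del $vip >/dev/null 2>&1
        i=$((i+1))
    done
    echo "LVS-DR real server stopped."
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
exit 0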
2) keepalived.conf configuration
keepalived.conf on LVS_Keepalived_Master.
First enable ip_forward:
[root@LVS_Keepalived_Master ~]# echo "1" > /proc/sys/net/ipv4/ip_forward

[root@LVS_Keepalived_Master ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_Master
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        182.148.15.239
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        182.148.15.235
    }
}

virtual_server 182.148.15.239 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP
    real_server 182.148.15.233 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 182.148.15.238 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

virtual_server 182.148.15.235 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP
    real_server 182.148.15.233 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 182.148.15.238 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

keepalived.conf on LVS_Keepalived_Backup:
[root@LVS_Keepalived_Backup ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_Backup
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        182.148.15.239
    }
}

vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        182.148.15.235
    }
}

virtual_server 182.148.15.239 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP
    real_server 182.148.15.233 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 182.148.15.238 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

virtual_server 182.148.15.235 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP
    real_server 182.148.15.233 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 182.148.15.238 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

All other verification steps are the same as in the master-backup mode above.
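In this master-master setup each director normally owns one VIP (182.148.15.239 on LVS_Keepalived_Master, 182.148.15.235 on LVS_Keepalived_Backup), and the two test domains would typically be spread across the two VIPs in DNS so that both directors carry traffic. To check both VIPs without touching DNS, curl can send the Host header explicitly (a sketch):

# hit each VIP directly while presenting the test domain names
curl -H "Host: www.wangshibo.com" http://182.148.15.239/
curl -H "Host: www.guohuihui.com" http://182.148.15.235/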

Reposted from: https://www.cnblogs.com/kevingrace/p/5574486.html
