Keepalived with LVS-DR
author:JevonWei
Copyright notice: original work
Keepalived in practice: LVS-DR
Goal: build an LVS-DR architecture with a highly available director tier. Keepalived runs on both directors (keepalive-A on Director-A, keepalive-B on Director-B); LVS-RS1 and LVS-RS2 are the two backend web servers. The Keepalived cluster on the director tier is what provides the high availability.
Network topology
Lab environment (each Keepalived node doubles as an LVS director):
keepalive-A (Director-A) 172.16.253.108
keepalive-B (Director-B) 172.16.253.105
LVS-RS1 172.16.250.127
LVS-RS2 172.16.253.193
VIP 172.16.253.150
client 172.16.253.177
LVS-RS web cluster
To make the test results easier to observe, RS1 and RS2 serve different page content, so responses from each real server can be told apart at a glance.
LVS-RS1
[root@LVS-RS1 ~]# systemctl restart chronyd \\ sync time across all servers
[root@LVS-RS1 ~]# iptables -F
[root@LVS-RS1 ~]# setenforce 0
[root@LVS-RS1 ~]# yum -y install nginx
[root@LVS-RS1 ~]# vim /usr/share/nginx/html/index.html
<h1> Web RS1 </h1>
[root@LVS-RS1 ~]# systemctl start nginx
Modify kernel parameters and add the VIP address
[root@LVS-RS1 ~]# vim lvs_dr.sh
#!/bin/bash
#
vip=172.16.253.150
mask=255.255.255.255
iface="lo:0"

case $1 in
start)
    ifconfig $iface $vip netmask $mask broadcast $vip up
    route add -host $vip dev $iface
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
stop)
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig $iface down
    ;;
*)
    echo "Usage: $(basename $0) start|stop"
    exit 1
    ;;
esac
[root@LVS-RS1 ~]# bash lvs_dr.sh start
[root@LVS-RS1 ~]# ifconfig
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 172.16.253.150  netmask 255.255.255.255
        loop  txqueuelen 1  (Local Loopback)
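Before pointing traffic at a real server, it is worth confirming that the ARP kernel parameters actually took effect. A small read-only helper for that (hypothetical, not part of the original post; run it after `lvs_dr.sh start`):

```shell
#!/bin/bash
# Check each DR-related procfs entry against its expected value.
# Read-only: safe to run at any time, no root required.
check_arp() {
    local f=$1 want=$2 got
    got=$(cat "$f" 2>/dev/null || echo missing)
    if [ "$got" = "$want" ]; then
        echo "OK   $f = $got"
    else
        echo "BAD  $f = $got (expected $want)"
    fi
}
check_arp /proc/sys/net/ipv4/conf/all/arp_ignore   1
check_arp /proc/sys/net/ipv4/conf/lo/arp_ignore    1
check_arp /proc/sys/net/ipv4/conf/all/arp_announce 2
check_arp /proc/sys/net/ipv4/conf/lo/arp_announce  2
```

The same check applies verbatim on LVS-RS2.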
LVS-RS2
[root@LVS-RS2 ~]# systemctl restart chronyd \\ sync time across all servers
[root@LVS-RS2 ~]# iptables -F
[root@LVS-RS2 ~]# setenforce 0
[root@LVS-RS2 ~]# yum -y install nginx
[root@LVS-RS2 ~]# vim /usr/share/nginx/html/index.html
<h1> Web RS2 </h1>
[root@LVS-RS2 ~]# systemctl start nginx
Modify kernel parameters and add the VIP address
[root@LVS-RS2 ~]# vim lvs_dr.sh
#!/bin/bash
#
vip=172.16.253.150
mask=255.255.255.255
iface="lo:0"

case $1 in
start)
    ifconfig $iface $vip netmask $mask broadcast $vip up
    route add -host $vip dev $iface
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
stop)
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig $iface down
    ;;
*)
    echo "Usage: $(basename $0) start|stop"
    exit 1
    ;;
esac
[root@LVS-RS2 ~]# bash lvs_dr.sh start
[root@LVS-RS2 ~]# ifconfig
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 172.16.253.150  netmask 255.255.255.255
        loop  txqueuelen 1  (Local Loopback)
Keepalived cluster
Director node setup
keepalive-A
[root@keepaliveA ~]# systemctl restart chronyd \\ sync time across all servers
[root@keepaliveA ~]# yum -y install ipvsadm
keepalive-B
[root@keepaliveB ~]# systemctl restart chronyd \\ sync time across all servers
[root@keepaliveB ~]# yum -y install ipvsadm
Configure a web sorry server on each Keepalived node
keepalive-A
[root@keepaliveA ~]# yum -y install nginx
[root@keepaliveA ~]# vim /usr/share/nginx/html/index.html
<h1> sorry from Director-A(keepalive-A) </h1>
[root@keepaliveA ~]# systemctl start nginx
keepalive-B
[root@keepalive-B ~]# yum -y install nginx
[root@keepalive-B ~]# vim /usr/share/nginx/html/index.html
<h1> sorry from Director-B(keepalive-B) </h1>
[root@keepaliveB ~]# systemctl start nginx
Configure Keepalived on keepalive-A
keepalive-A
[root@keepalive-A ~]# iptables -F
[root@keepalive-A ~]# yum -y install keepalived
[root@keepaliveA ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {                          \\ email notification settings
        jevon@danran.com                          \\ recipient address
    }
    notification_email_from ka_admin@danran.com   \\ sender address
    smtp_server 127.0.0.1                         \\ SMTP server
    smtp_connect_timeout 30                       \\ connect timeout
    router_id keepaliveA                          \\ router ID, user-defined
    vrrp_mcast_group4 224.103.5.5                 \\ multicast group; default is 224.0.0.18
}
vrrp_instance VI_A {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass qr8hQHuL
    }
    virtual_ipaddress {
        172.16.253.150/32 dev ens33
    }
}
virtual_server 172.16.253.150 80 {
    delay_loop 6                 \\ health-check interval
    lb_algo rr                   \\ scheduling algorithm
    lb_kind DR                   \\ cluster type
    protocol TCP                 \\ service protocol; only TCP is supported
    sorry_server 127.0.0.1 80    \\ sorry server: the web page served by this host itself
    real_server 172.16.250.127 80 {
        weight 1
        HTTP_GET {               \\ application-layer health check (HTTP_GET, since the service is plain HTTP)
            url {
                path /           \\ URL to monitor
                #digest ff20ad2481f97b1754ef3e12ecd3a9cc   \\ checksum of a healthy response body
                status_code 200  \\ status code of a healthy response
            }
            connect_timeout 3    \\ connection timeout
            nb_get_retry 3       \\ number of retries
            delay_before_retry 1 \\ delay before each retry
        }
    }
    real_server 172.16.253.193 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                #digest ff20ad2481f97b1754ef3e12ecd3a9cc
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
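Keepalived can also run a notification script on VRRP state transitions, which complements the email settings in global_defs. A minimal sketch (the script path, the use of logger, and the wiring shown in the comments are assumptions, not part of the original config):

```shell
#!/bin/bash
# Hypothetical /etc/keepalived/notify.sh -- hook it into the vrrp_instance with:
#   notify_master "/etc/keepalived/notify.sh master"
#   notify_backup "/etc/keepalived/notify.sh backup"
#   notify_fault  "/etc/keepalived/notify.sh fault"
notify() {
    local msg="$(hostname) transitioned to VRRP state: $1"
    logger -t keepalived-notify "$msg" 2>/dev/null || true  # syslog, if available
    echo "$msg"                                             # echo for visibility
}
state="${1:-master}"    # keepalived passes master/backup/fault as $1
case "$state" in
    master|backup|fault) notify "$state" ;;
    *) echo "Usage: $(basename "$0") {master|backup|fault}" >&2; exit 1 ;;
esac
```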
[root@keepaliveA ~]# systemctl start keepalived
[root@keepaliveA ~]# ip a l
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:75:dc:3c brd ff:ff:ff:ff:ff:ff
    inet 172.16.253.150/32 scope global ens33
       valid_lft forever preferred_lft forever
[root@keepaliveA ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.253.150:80 rr
  -> 172.16.250.127:80            Route   1      0          0
  -> 172.16.253.193:80            Route   1      0          0
Configure Keepalived on keepalive-B
keepalive-B
[root@keepalive-B ~]# iptables -F
[root@keepalive-B ~]# yum -y install keepalived
[root@keepalive-B ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {                          \\ email notification settings
        jevon@danran.com                          \\ recipient address
    }
    notification_email_from ka_admin@danran.com   \\ sender address
    smtp_server 127.0.0.1                         \\ SMTP server
    smtp_connect_timeout 30                       \\ connect timeout
    router_id keepaliveB                          \\ router ID, unique per node
    vrrp_mcast_group4 224.103.5.5                 \\ multicast group; default is 224.0.0.18
}
vrrp_instance VI_A {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass qr8hQHuL
    }
    virtual_ipaddress {
        172.16.253.150/32 dev ens33
    }
}
virtual_server 172.16.253.150 80 {
    delay_loop 6                 \\ health-check interval
    lb_algo rr                   \\ scheduling algorithm
    lb_kind DR                   \\ cluster type
    protocol TCP                 \\ service protocol; only TCP is supported
    sorry_server 127.0.0.1 80    \\ sorry server: the web page served by this host itself
    real_server 172.16.250.127 80 {
        weight 1
        HTTP_GET {               \\ application-layer health check
            url {
                path /
                #digest ff20ad2481f97b1754ef3e12ecd3a9cc
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 172.16.253.193 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                #digest ff20ad2481f97b1754ef3e12ecd3a9cc
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
[root@keepaliveB ~]# systemctl start keepalived
[root@keepalive-B ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.253.150:http rr
  -> 172.16.250.127:http          Route   1      0          0
  -> 172.16.253.193:http          Route   1      0          0
Access tests
Testing from the client
[root@client ~]# for i in {1..10};do curl http://172.16.253.150;done
<h1> Web RS1 </h1>
<h1> Web RS2 </h1>
<h1> Web RS1 </h1>
<h1> Web RS2 </h1>
<h1> Web RS1 </h1>
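With the rr scheduler the two pages should alternate roughly evenly. A small helper (hypothetical, not from the original post) that tallies responses per real server; it is pure text processing, so it also works on a saved transcript:

```shell
#!/bin/bash
# Count how many responses each real server produced in a test round.
tally() {
    sort | uniq -c | sort -rn | sed 's/^ *//'
}
# Example on a captured transcript (three RS1 replies, two RS2 replies):
printf '%s\n' \
    '<h1> Web RS1 </h1>' '<h1> Web RS2 </h1>' \
    '<h1> Web RS1 </h1>' '<h1> Web RS2 </h1>' \
    '<h1> Web RS1 </h1>' | tally
# -> prints the RS1 line (count 3) above the RS2 line (count 2)
```

In the live test, feed it the curl loop directly: `for i in {1..10}; do curl -s http://172.16.253.150; done | tally`.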
When keepalive-A fails
[root@keepaliveA ~]# systemctl stop keepalived
keepalive-B automatically becomes the MASTER node, the LVS director role fails over to keepalive-B, and the web services on LVS-RS1 and LVS-RS2 remain available.
Client access test
[root@client ~]# for i in {1..10};do curl http://172.16.253.150;done
<h1> Web RS2 </h1>
<h1> Web RS1 </h1>
<h1> Web RS2 </h1>
<h1> Web RS1 </h1>
<h1> Web RS2 </h1>
When keepalive-A recovers, it becomes the MASTER node again
[root@keepaliveA ~]# systemctl start keepalived
[root@keepaliveA ~]# ip a l
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:75:dc:3c brd ff:ff:ff:ff:ff:ff
    inet 172.16.253.150/32 scope global ens33
       valid_lft forever preferred_lft forever
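Because keepalive-A carries the higher priority, it preempts the VIP as soon as it recovers, causing a second failover. If that flap is undesirable, a commonly used variant (not part of this lab's config) is to run both nodes with state BACKUP plus nopreempt; sketched for the higher-priority node:

```
vrrp_instance VI_A {
    state BACKUP          \\ both nodes start as BACKUP; priority decides the initial MASTER
    nopreempt             \\ a recovering node does not take the VIP back
    interface ens33
    virtual_router_id 51
    priority 100
    ...
}
```

Note that nopreempt is only valid together with state BACKUP.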
When the web service on LVS-RS1 fails
[root@LVS-RS1 ~]# iptables -A INPUT -p tcp --dport 80 -j REJECT
Client access
[root@client ~]# for i in {1..10};do curl http://172.16.253.150;done
<h1> Web RS2 </h1>
<h1> Web RS2 </h1>
<h1> Web RS2 </h1>
<h1> Web RS2 </h1>
When the web services on both LVS-RS1 and LVS-RS2 fail
[root@LVS-RS1 ~]# iptables -A INPUT -p tcp --dport 80 -j REJECT
[root@LVS-RS2 ~]# iptables -A INPUT -p tcp --dport 80 -j REJECT
The client now reaches the sorry server, which is keepalive-A:
[root@client ~]# for i in {1..10};do curl http://172.16.253.150;done
<h1> sorry from Director-A(keepalive-A) </h1>
<h1> sorry from Director-A(keepalive-A) </h1>
<h1> sorry from Director-A(keepalive-A) </h1>
<h1> sorry from Director-A(keepalive-A) </h1>
<h1> sorry from Director-A(keepalive-A) </h1>
When keepalive-A then fails as well
[root@keepaliveA ~]# systemctl stop keepalived.service
The client now gets the sorry page from keepalive-B:
[root@client ~]# for i in {1..10};do curl http://172.16.253.150;done
<h1> sorry from Director-B(keepalive-B) </h1>
<h1> sorry from Director-B(keepalive-B) </h1>
<h1> sorry from Director-B(keepalive-B) </h1>
<h1> sorry from Director-B(keepalive-B) </h1>
<h1> sorry from Director-B(keepalive-B) </h1>
After the web service on LVS-RS1 recovers
[root@LVS-RS1 ~]# iptables -F
Client access test
[root@client ~]# for i in {1..10};do curl http://172.16.253.150;done
<h1> Web RS1 </h1>
<h1> Web RS1 </h1>
<h1> Web RS1 </h1>
<h1> Web RS1 </h1>
<h1> Web RS1 </h1>
After the web services on both LVS-RS1 and LVS-RS2 recover
[root@LVS-RS1 ~]# iptables -F
[root@LVS-RS2 ~]# iptables -F
Client access test
[root@client ~]# for i in {1..10};do curl http://172.16.253.150;done
<h1> Web RS2 </h1>
<h1> Web RS1 </h1>
<h1> Web RS2 </h1>
<h1> Web RS1 </h1>
<h1> Web RS2 </h1>
Saving and reloading ipvsadm rules
Save (the recommended location is /etc/sysconfig/ipvsadm):
ipvsadm-save > /PATH/TO/IPVSADM_FILE
ipvsadm -S > /PATH/TO/IPVSADM_FILE
systemctl stop ipvsadm.service
Reload:
ipvsadm-restore < /PATH/FROM/IPVSADM_FILE
ipvsadm -R < /PATH/FROM/IPVSADM_FILE
systemctl restart ipvsadm.service
Clients can also be pointed at the Keepalived nodes via DNS name resolution.
Getting the checksum of the web home page
[root@keepaliveA ~]# genhash -s 172.16.250.127 -p 80 -u /
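The hash printed by genhash can tighten the health check from status-code matching to full-content matching: place it in the digest line of each real_server's url block (the value below is illustrative; regenerate it whenever the page content changes):

```
url {
    path /
    digest ff20ad2481f97b1754ef3e12ecd3a9cc   \\ value from genhash -s <RS address> -p 80 -u /
}
```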
Reposted from: https://www.cnblogs.com/JevonWei/p/7482483.html