Notes on keepalived

Keepalived is a high-availability tool built on the VRRP protocol. A Keepalived cluster consists of one master server and one or more backup servers; the same service configuration is deployed on all of them and a single virtual IP address (VIP) is exposed to clients. When the master server fails, the VIP automatically fails over to a backup server.

VRRP (Virtual Router Redundancy Protocol) was designed to provide high availability for statically routed gateways. The basic VRRP architecture:
A virtual router is made up of several physical routers, each with its own IP address and a shared VRID (0-255). One of them is elected MASTER, owns the VIP and forwards traffic; the others become BACKUPs. The MASTER sends VRRP advertisements as IP multicast (group address 224.0.0.18) to keep a heartbeat with the BACKUPs. If the MASTER becomes unavailable (or the BACKUPs stop receiving its advertisements), the BACKUPs elect a new MASTER, which continues to provide the routing service, achieving high availability.

VRRP terminology:

    Virtual Router: the virtual router
    VRID: virtual router identifier (0-255)
    Physical routers:
        master: the active device
        backup: the standby device
    priority: VRRP priority
    VIP: Virtual IP
    VMAC: Virtual MAC (00-00-5e-00-01-VRID)
    Gratuitous ARP

Security authentication:

Simple plain-text authentication, and an HMAC mechanism that authenticates the messages only
MD5 (not supported by keepalived)

Working modes:

Active/standby: a single virtual router
Active/active: active/standby for one virtual router combined with standby/active for a second virtual router

Working types:

Preemptive: when a server with a higher priority than the current master appears, it sends advertisements and takes over the master role
Non-preemptive: the current master keeps the role (and the VIP) even if a node with a higher priority comes back online

keepalived

Core components

    vrrp stack: implements the VRRP protocol
    ipvs wrapper: generates the IPVS rules for all nodes in the cluster
    checkers: health-checks every real server (RS) in the IPVS cluster
    control plane: the configuration-file parser, responsible for parsing and loading the configuration
    I/O multiplexer
    memory management component: handles memory management for keepalived

Notes:

  1. The clocks of all nodes must be synchronized
  2. Make sure the interfaces used for the cluster service on every node support MULTICAST (multicast) communication

Lab

Lab tasks
1. Deploy a NAT-mode load-balancing cluster with keepalived
2. Deploy a DR-mode load-balancing cluster with keepalived
3. Deploy a TUN-mode load-balancing cluster with keepalived
4. Optimize the keepalived deployment so that a node which can no longer reach the real servers gives up the VIP
5. Optimize the keepalived configuration so that VRRP state changes are reported to the system administrator by e-mail
6. Optimize the keepalived configuration so that real-server state changes are reported to the system administrator via WeChat Work messages

Environment

Host            internal IP / MAC                              external IP / MAC
lvs1 ens34:172.16.0.100/24 mac:00:0c:29:52:6e:9f ens33:10.0.0.100/24 mac:00:0c:29:52:6e:8b
lvs2 ens34:172.16.0.101/24 mac:00:0c:29:ad:07:71 ens33:10.0.0.101/24 mac:00:0c:29:ad:07:5d
real_server1 ens34:172.16.0.102/24 mac:00:0c:29:60:f9:ef ens33:10.0.0.102/24 mac:00:0c:29:60:f9:db
real_server2 ens34:172.16.0.103/24 mac:00:0c:29:a7:98:f0 ens33:10.0.0.103/24 mac:00:0c:29:a7:98:dc
client 10.0.0.1/24 mac:00:50:56:c0:00:08
# disable the firewall
[root@lvs1 ~]# systemctl stop firewalld
# set SELinux to permissive mode
[root@lvs1 ~]# setenforce 0
# repeat on all the other hosts

ipvsadm usage

Syntax: ipvsadm [options]
-A/--add-service    add a new virtual service
-E/--edit-service   edit a virtual service
-D/--delete-service delete a virtual service
-C/--clear          clear all virtual service rules
-R/--restore        restore virtual service rules from input
-S/--save           save the virtual service rules
-a/--add-server     add a new real server to a virtual service
-e/--edit-server    edit a real server
-d/--delete-server  delete a real server
-L/-l/--list        list the virtual service rules in the kernel
-Z/--zero           reset the forwarding statistics counters
--set tcp/tcpfin/udp    set the three connection timeouts (tcp/tcpfin/udp)
--start-daemon      start the connection synchronization daemon
--stop-daemon       stop the connection synchronization daemon
-h/--help           show help
-t/--tcp-service service-address    TCP virtual service
-u/--udp-service service-address    UDP virtual service
-f/--fwmark-service fwmark  virtual service identified by an iptables firewall mark
-s/--scheduler scheduler    scheduling algorithm: rr|wrr|lc|wlc|lblc|lblcr|dh|sh|sed|nq; the default is wlc
-p/--persistent [timeout]   persistent service
-M/--netmask        netmask used to group client addresses for persistence
-r/--real-server server-address  real server
-g/--gatewaying     use direct routing (DR) mode for this real server
-i/--ipip           use tunnel (TUN) mode for this real server
-m/--masquerading   use NAT mode for this real server
-w/--weight weight  weight of the real server
--mcast-interface interface multicast interface for connection synchronization
-c/--connection     display the connections currently tracked by IPVS
-6                  required when a fwmark service uses IPv6 addresses
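As a hedged illustration of how these options combine, the NAT-mode service used later in this lab could be created by hand as follows (keepalived will generate equivalent rules from its configuration file, so this sketch only shows the flags):

# define the virtual service on the VIP with round-robin scheduling
ipvsadm -A -t 10.0.0.150:80 -s rr
# attach both real servers in NAT (masquerading) mode with weight 1
ipvsadm -a -t 10.0.0.150:80 -r 172.16.0.102:80 -m -w 1
ipvsadm -a -t 10.0.0.150:80 -r 172.16.0.103:80 -m -w 1
# list the resulting rules numerically
ipvsadm -Ln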

tcpdump usage

tcpdump [ -DenNqvX ] [ -c count ] [ -F file ] [ -i interface ] [ -r file ] [ -s snaplen ] [ -w file ] [ expression ]

Capture options:
-c: number of packets to capture. This is the number of matching packets finally kept; for example "-c 10" returns 10 packets, even though tcpdump may have processed 100 packets of which only 10 matched the filter.
-i interface: interface to listen on. If omitted, tcpdump searches the system interface list for the lowest-numbered configured interface (excluding loopback; use "tcpdump -i lo" to capture on loopback) and stops at the first match. The keyword 'any' means all interfaces.
-n: print addresses numerically instead of resolving them to host names, i.e. -n disables host name resolution.
-nn: in addition to -n, print port numbers instead of service names.
-N: do not print the domain part of host names; for example tcpdump prints 'nic' rather than 'nic.ddn.mil'.
-P: direction of the packets to capture: "in", "out" or "inout"; the default is "inout".
-s len: snapshot length, 65535 bytes by default. If large packets are truncated because len is too small, the output line shows "[|proto]" (proto is the protocol name). A larger len means longer per-packet processing time and fewer packets that tcpdump can buffer, which can lead to packet loss, so keep len as small as possible while still capturing what you need.

Output options:
-e: print the link-layer header on each line, e.g. source and destination MAC addresses.
-q: quick output: print less protocol information, so the lines are shorter.
-X: print the packet contents in both hex and ASCII.
-XX: like -X, but more detailed (the link-layer header is included).
-v: verbose output.
-vv: more verbose than -v.
-vvv: more verbose than -vv.

Other options:
-D: list the interfaces available for capture, with their numeric index and name; either can be used after "-i".
-F: read the capture filter expression from a file; any expression given on the command line is then ignored.
-w: write raw packets to a file instead of printing them. Can be combined with "-G time" to rotate the output file every time seconds; such files can later be loaded with "-r" for analysis and printing.
-r: read packets from a capture file. Use "-" to read from standard input.
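A hedged example of combining these options in this lab: capturing the VRRP advertisements that keepalived multicasts to 224.0.0.18 on ens33 (interface name and group address taken from the sections above):

# print 10 VRRP advertisements on ens33 with link-layer headers and no name resolution
tcpdump -i ens33 -nn -e -c 10 host 224.0.0.18
# or save them to a file and read the capture back later
tcpdump -i ens33 -nn -c 10 -w /tmp/vrrp.pcap host 224.0.0.18
tcpdump -nn -r /tmp/vrrp.pcap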

Install the software:

# software installation on lvs1. keepalived is a management tool that programs IPVS in the Linux kernel, the same facility ipvsadm manipulates
[root@lvs1 ~]# yum -y install ipvsadm
[root@lvs1 ~]# yum install -y gcc.x86_64
[root@lvs1 ~]# yum install -y openssl-devel.x86_64
[root@lvs1 ~]# yum install -y wget
[root@lvs1 ~]# yum install -y tcpdump
[root@lvs1 ~]# wget https://www.keepalived.org/software/keepalived-2.0.18.tar.gz      # download the source tarball
[root@lvs1 ~]# tar -zxvf  keepalived-2.0.18.tar.gz        # extract the source tarball
[root@lvs1 ~]# cd keepalived-2.0.18/
[root@lvs1 keepalived-2.0.18]# ls
aclocal.m4  AUTHOR       build_setup  compile    configure.ac  COPYING  doc      INSTALL     keepalived          lib          Makefile.in  README.md  TODO
ar-lib      bin_install  ChangeLog    configure  CONTRIBUTORS  depcomp  genhash  install-sh  keepalived.spec.in  Makefile.am  missing      snap
[root@lvs1 keepalived-2.0.18]# ./configure --prefix=/    --datarootdir=/usr/local --pdfdir=/usr/local/share/keepalived --with-systemdsystemunitdir=/usr/lib/systemd/system
[root@lvs1 keepalived-2.0.18]# make install
# lvs2: same steps. On real_server1 and real_server2, install httpd and the packet-capture tool
[root@real_server1 ~]# yum -y install httpd
[root@real_server1 ~]# yum -y install tcpdump
[root@real_server2 ~]# yum -y install httpd
[root@real_server2   ~]# yum -y install tcpdump
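Once the build finishes it is worth confirming the installation; a small check, assuming the systemd unit file was installed by the --with-systemdsystemunitdir option used above:

# print the installed keepalived version
keepalived --version
# start keepalived automatically at boot (unit file installed by the configure option above)
systemctl enable keepalived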

Building the NAT-mode cluster

NAT mode overview

NAT working mode
(a) A client request arrives at the Director Server and first hits the PREROUTING chain in kernel space. At this point the source IP is CIP and the destination IP is VIP.
(b) PREROUTING finds that the destination IP is local and passes the packet to the INPUT chain.
(c) IPVS checks whether the requested service is a cluster service; if so, it rewrites the destination IP to the chosen back-end server's IP and sends the packet to the POSTROUTING chain. The source IP is now CIP and the destination IP is RIP.
(d) POSTROUTING routes the packet to the Real Server.
(e) The Real Server sees that the destination IP is its own, builds the response and sends it back towards the Director Server. The source IP is RIP and the destination IP is CIP.
(f) Before replying to the client, the Director Server rewrites the source IP to its own VIP and then responds. The source IP is now VIP and the destination IP is CIP.

Configuration steps

Edit the keepalived configuration file on the LVS nodes:

[root@lvs1 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
# left unchanged for now
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS1              # must be unique on each node
    vrrp_skip_check_adv_addr
    #vrrp_strict                # strict VRRP compliance disabled, otherwise keepalived creates firewall rules automatically
    vrrp_iptables               # with this option keepalived adds no iptables rules
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER                # this node is the master
    interface ens33             # interface the VIP is bound to
    virtual_router_id 51        # virtual router id; must be the same on lvs1 and lvs2
    priority 100                # VRRP priority of this node
    advert_int 1                # advertisement (multicast) interval
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.150              # the VIP
    }
}

virtual_server 10.0.0.150 80 {  # map the virtual server address to the real servers
    delay_loop 6
    lb_algo rr
    lb_kind NAT                 # NAT mode
    persistence_timeout 50
    protocol TCP

    real_server 172.16.0.102 80 {
        weight 1
        HTTP_GET {              # HTTP health check: GET the web root and expect status code 200
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
    real_server 172.16.0.103 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}

[root@lvs2 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
# left unchanged for now
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS2
    vrrp_skip_check_adv_addr
    #vrrp_strict                # strict VRRP compliance disabled, otherwise keepalived creates firewall rules automatically
    vrrp_iptables               # with this option keepalived adds no iptables rules
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP                # this node is the backup
    interface ens33             # interface the VIP is bound to
    virtual_router_id 51        # virtual router id; must be the same on lvs1 and lvs2
    priority 90                 # VRRP priority of this node
    advert_int 1                # advertisement (multicast) interval
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.150              # the VIP
    }
}

virtual_server 10.0.0.150 80 {  # map the virtual server address to the real servers
    delay_loop 6
    lb_algo rr
    lb_kind NAT                 # NAT mode
    persistence_timeout 50
    protocol TCP

    real_server 172.16.0.102 80 {
        weight 1
        HTTP_GET {              # HTTP health check: GET the web root and expect status code 200
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
    real_server 172.16.0.103 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}

Configure httpd on the real servers

[root@real_server1 ~]# cat >/var/www/html/index.html <<END
> we are real_server1
> END
# start the service
[root@real_server1 ~]# systemctl start httpd

[root@real_server2 ~]# cat >/var/www/html/index.html <<END
> we are real_server2
> END
# start the service
[root@real_server2 ~]# systemctl start httpd
# test local access
[root@real_server1 ~]# curl localhost
we are real_server1
[root@real_server2 ~]# curl localhost
we are real_server2

Start the keepalived service

[root@lvs1 ~]# systemctl start keepalived.service
[root@lvs1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:52:6e:8b brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 10.0.0.150/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::8c08:d621:e4c3:f906/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::3b31:ed92:f7df:1c42/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::762a:1b8c:1108:e457/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:52:6e:9f brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.100/24 brd 172.16.1.255 scope global noprefixroute ens34
       valid_lft forever preferred_lft forever
    inet6 fe80::301a:2f1f:223b:ebe9/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::b58f:8285:15b2:3a16/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

[root@lvs2 ~]# systemctl start keepalived
[root@lvs2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ad:07:5d brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.101/24 brd 10.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::8c08:d621:e4c3:f906/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::3b31:ed92:f7df:1c42/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::762a:1b8c:1108:e457/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ad:07:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.101/24 brd 172.16.1.255 scope global noprefixroute ens34
       valid_lft forever preferred_lft forever
    inet6 fe80::301a:2f1f:223b:ebe9/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
# lvs1 has priority 100 and lvs2 priority 90; lvs1 has the higher priority, so it holds the VIP

# test VRRP failover
# stop the keepalived service on lvs1
[root@lvs1 ~]# systemctl stop keepalived.service
[root@lvs1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:52:6e:8b brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::8c08:d621:e4c3:f906/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::3b31:ed92:f7df:1c42/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::762a:1b8c:1108:e457/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:52:6e:9f brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.100/24 brd 172.16.1.255 scope global noprefixroute ens34
       valid_lft forever preferred_lft forever
    inet6 fe80::301a:2f1f:223b:ebe9/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::b58f:8285:15b2:3a16/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

[root@lvs2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ad:07:5d brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.101/24 brd 10.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 10.0.0.150/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::8c08:d621:e4c3:f906/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::3b31:ed92:f7df:1c42/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::762a:1b8c:1108:e457/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ad:07:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.101/24 brd 172.16.1.255 scope global noprefixroute ens34
       valid_lft forever preferred_lft forever
    inet6 fe80::301a:2f1f:223b:ebe9/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
# once keepalived on lvs1 is stopped, lvs2 considers lvs1 dead and automatically takes over the VIP

Check the connection state between the LVS nodes and the real servers

[root@lvs1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.150:80 rr persistent 50
  -> 10.0.0.102:80                Masq    1      0          0
  -> 10.0.0.103:80                Masq    1      0          0

[root@lvs2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.150:80 rr persistent 50
  -> 10.0.0.102:80                Masq    1      0          0
  -> 10.0.0.103:80                Masq    1      0          0

Configure the real servers

[root@real_server1 ~]# route add -net 10.0.0.0/24 gw 10.0.0.150
[root@real_server1 ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         gateway         0.0.0.0         UG    102    0        0 ens33
10.0.0.0        10.0.0.150      255.255.255.0   UG    0      0        0 ens33
10.0.0.0        0.0.0.0         255.255.255.0   U     102    0        0 ens33
172.16.0.0      0.0.0.0         255.255.255.0   U     101    0        0 ens34

[root@real_server2 ~]# route add -net 10.0.0.0/24 gw 10.0.0.150
[root@real_server2 ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         gateway         0.0.0.0         UG    102    0        0 ens33
10.0.0.0        10.0.0.150      255.255.255.0   UG    0      0        0 ens33
10.0.0.0        0.0.0.0         255.255.255.0   U     102    0        0 ens33
172.16.0.0      0.0.0.0         255.255.255.0   U     101    0        0 ens34

Enable packet forwarding on the LVS nodes

[root@lvs1 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@lvs2 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
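Writing into /proc only changes the running kernel; a common way to make forwarding survive a reboot (an addition, not shown in the original) is via sysctl:

# persist IPv4 forwarding across reboots on lvs1 and lvs2
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p        # reload /etc/sysctl.conf and apply the setting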

Run the test

Root-cause analysis

With LVS performing NAT in this topology, the real servers must route their return traffic back through the LVS node.
The problem is the real servers' ens33 interface:
1. Because ens33 is directly connected to the 10.0.0.0/24 network, return traffic from the real server to the LVS node does not leave via ens34 but comes back via ens33; when the LVS node performs reverse-path validation it decides that ens33 is not the best path back to the 172.16.0.0/24 network, so the packets are dropped.
2. When the LVS node forwards the client's HTTP request to a real server out of ens34, the real server's own reverse-path validation decides that its ens34 is not the best path back to 10.0.0.1, so the packet is dropped.
Solution: disable reverse-path validation on all LVS nodes and real servers.

Disable reverse-path validation (run the same commands on every server)

[root@lvs2 ~]# sysctl -a |grep rp_filter
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.ens33.arp_filter = 0
net.ipv4.conf.ens33.rp_filter = 1
net.ipv4.conf.ens34.arp_filter = 0
net.ipv4.conf.ens34.rp_filter = 1
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.rp_filter = 0

[root@lvs2 ~]# sysctl -w net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.all.rp_filter = 0
[root@lvs2 ~]# sysctl -w net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.rp_filter = 0
[root@lvs2 ~]# sysctl -w net.ipv4.conf.ens33.rp_filter=0
net.ipv4.conf.ens33.rp_filter = 0
[root@lvs2 ~]# sysctl -w net.ipv4.conf.ens34.rp_filter=0
net.ipv4.conf.ens34.rp_filter = 0

[root@lvs2 ~]# sysctl -a |grep rp_filter
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.ens33.arp_filter = 0
net.ipv4.conf.ens33.rp_filter = 0
net.ipv4.conf.ens34.arp_filter = 0
net.ipv4.conf.ens34.rp_filter = 0
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.rp_filter = 0
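The same caveat applies here: sysctl -w only affects the running kernel. A sketch of persisting the rp_filter settings (the drop-in file name is an assumption, not part of the original lab):

cat >> /etc/sysctl.d/90-lvs-rp_filter.conf <<EOF
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.ens33.rp_filter = 0
net.ipv4.conf.ens34.rp_filter = 0
EOF
sysctl --system      # reload every sysctl configuration file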

Test the result

# The complete flow: when a client accesses the VIP, the LVS node rewrites the destination IP to the chosen real server's address; the source MAC becomes the MAC of the LVS node's ens34 interface and the destination MAC is that of the real server's ens34 interface. After the real server receives and processes the request it replies with the client's IP as destination and its own ens34 address as source; because of the static route, the reply is sent from the real server's ens34 towards the VIP. When the LVS node's ens34 receives it, the source IP is NATted back to the VIP (the destination IP stays unchanged), and the packet finally leaves through the LVS node's ens33 interface.

Access the VIP from a browser

About rp_filter

The rp_filter parameter controls whether the kernel validates the source address of incoming packets. The kernel documentation (Documentation/networking/ip-sysctl.txt) describes it as follows:

rp_filter - INTEGER
    0 - No source validation.
    1 - Strict mode as defined in RFC3704 Strict Reverse Path
        Each incoming packet is tested against the FIB and if the interface
        is not the best reverse path the packet check will fail.
        By default failed packets are discarded.
    2 - Loose mode as defined in RFC3704 Loose Reverse Path
        Each incoming packet's source address is also tested against the FIB
        and if the source address is not reachable via any interface
        the packet check will fail.

    Current recommended practice in RFC3704 is to enable strict mode
    to prevent IP spoofing from DDos attacks. If using asymmetric routing
    or other complicated routing, then loose mode is recommended.

    The max value from conf/{all,interface}/rp_filter is used
    when doing source validation on the {interface}.

    Default value is 0. Note that some distributions enable it
    in startup scripts.

In other words, rp_filter takes three values, 0, 1 and 2:
0: no source-address validation.
1: strict reverse-path validation. For every incoming packet the kernel checks whether it arrived on the interface that is the best reverse path to its source; if not, the packet is dropped.
2: loose reverse-path validation. For every incoming packet the kernel only checks whether the source address is reachable via any interface; if it is not, the packet is dropped.

Building the DR-mode (direct routing) cluster

DR mode overview

DR mode forwards a request by rewriting its destination MAC address to that of the chosen real server; the real server then sends its response directly back to the client. Like TUN mode, DR mode greatly improves the scalability of the cluster, but without the overhead of an IP tunnel and without requiring the real servers to support a tunnelling protocol. It does require that the director (LB) and the real servers (RS) each have a network interface on the same physical segment, i.e. they must be in the same LAN. DR is the mode most widely used on the Internet.

Configuration steps

Modify the keepalived configuration on the LVS nodes

[root@lvs1 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS1
    vrrp_skip_check_adv_addr
    vrrp_strict
    vrrp_iptables
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.150
    }
}

virtual_server 10.0.0.150 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR                  # switch the mode to direct routing
    persistence_timeout 50
    protocol TCP

    real_server 10.0.0.102 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
    real_server 10.0.0.103 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}
# lvs2 is configured the same way

Adjust the kernel parameters on the real servers

[root@real_server1 ~]# echo 1 > /proc/sys/net/ipv4/conf/ens33/arp_ignore
[root@real_server2 ~]# echo 1 > /proc/sys/net/ipv4/conf/ens33/arp_ignore

About the arp_ignore parameter

arp_ignore - INTEGER
    Define different modes for sending replies in response to
    received ARP requests that resolve local target IP addresses:
    0 - (default): reply for any local target IP address, configured
        on any interface
    1 - reply only if the target IP address is local address
        configured on the incoming interface
    2 - reply only if the target IP address is local address
        configured on the incoming interface and both with the
        sender's IP address are part from same subnet on this interface
    3 - do not reply for local addresses configured with scope host,
        only resolutions for global and link addresses are replied
    4-7 - reserved
    8 - do not reply for all local addresses

    The max value from conf/{all,interface}/arp_ignore is used
    when ARP request is received on the {interface}

arp_ignore controls whether the system answers ARP requests received from outside. Its common values are 0, 1 and 2 (3-8 are rarely used):
0: answer an ARP request for any local IP address received on any interface (including addresses on loopback), regardless of whether the requested address is configured on the receiving interface.
1: answer only if the target IP address is configured on the interface that received the request.
2: answer only if the target IP address is configured on the receiving interface and the request's source IP is in the same subnet as that interface.
3: if the local address matching the ARP request has scope host, do not answer; answer only for addresses with global or link scope.
4-7: reserved.
8: do not answer any ARP requests.
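The lab only sets arp_ignore on ens33. Many LVS-DR guides additionally set arp_announce=2 so the real server never uses the VIP as the source of its own ARP traffic; the sketch below shows that common variant as an addition, not something the original lab configures:

# optional hardening commonly paired with arp_ignore in DR setups (not part of the original lab)
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce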

Disable reverse-path validation on real_server1 and real_server2

[root@real_server1 ~]# sysctl -a |grep rp_filter
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.ens33.arp_filter = 0
net.ipv4.conf.ens33.rp_filter = 1
net.ipv4.conf.ens34.arp_filter = 0
net.ipv4.conf.ens34.rp_filter = 1
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.rp_filter = 0

[root@real_server1 ~]# sysctl -w net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.all.rp_filter = 0
[root@real_server1 ~]# sysctl -w net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.rp_filter = 0
[root@real_server1 ~]# sysctl -w net.ipv4.conf.ens33.rp_filter=0
net.ipv4.conf.ens33.rp_filter = 0
[root@real_server1 ~]# sysctl -w net.ipv4.conf.ens34.rp_filter=0
net.ipv4.conf.ens34.rp_filter = 0

[root@real_server1 ~]# sysctl -a |grep rp_filter
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.ens33.arp_filter = 0
net.ipv4.conf.ens33.rp_filter = 0
net.ipv4.conf.ens34.arp_filter = 0
net.ipv4.conf.ens34.rp_filter = 0
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.rp_filter = 0

Configure the real servers

# add the VIP
[root@real_server1 ~]# ip addr add 10.0.0.150/32 dev lo
# remove the static route that was added for NAT mode
[root@real_server1 ~]# route del -net 10.0.0.0/24 gw 10.0.0.150
[root@real_server1 ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         gateway         0.0.0.0         UG    102    0        0 ens33
10.0.0.0        0.0.0.0         255.255.255.0   U     102    0        0 ens33
172.16.0.0      0.0.0.0         255.255.255.0   U     101    0        0 ens34
# do the same on real_server2
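For completeness, the equivalent commands on real_server2 (a sketch mirroring what was just done on real_server1):

# real_server2: bind the VIP to lo and drop the NAT-mode static route
[root@real_server2 ~]# ip addr add 10.0.0.150/32 dev lo
[root@real_server2 ~]# route del -net 10.0.0.0/24 gw 10.0.0.150
[root@real_server2 ~]# route -n          # verify the routing table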

Access test

Packet capture


# This shows that when the client accesses the VIP, the LVS node forwards the packet with source and destination IP unchanged; the source MAC is that of the ens33 interface holding the VIP and the destination MAC is that of the ens34 interface on the directly connected real server. When the real server receives the frame it sees its own MAC as the destination and the VIP on its lo interface as the destination IP, so it processes the request. The reply carries the VIP on lo as source IP and the client's address as destination IP; since the 10.0.0.0/24 network is reachable via ens33, the reply leaves through ens33 and reaches the client via the switch.

TUN (tunnel) mode

Tunnel mode overview

1. The client sends a request to the director (VIP); the source address of the request is CIP and the destination is VIP.
2. The LVS director receives the request; the packet passes the PREROUTING check, is found to be addressed to the local host and is handed to the INPUT chain. The ipvs kernel module determines that the requested service is a configured LVS cluster service, selects a back-end RS according to the configured scheduling policy, encapsulates the original packet in a new IP header (IPIP tunnel) whose destination is the RIP, and forwards it to the chosen RS.
3. The RS strips the outer IP header and finds that the inner destination is the VIP configured on its tunnel interface, so it accepts the packet, processes the request and sends the response directly to the client; the source IP of the response is the VIP and the destination IP is the CIP.

Configuration steps

Real server configuration

[root@real_server1 ~]# modprobe ipip     # load the ipip module
[root@real_server1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:60:f9:db brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.102/24 brd 10.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::8c08:d621:e4c3:f906/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::3b31:ed92:f7df:1c42/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:60:f9:ef brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.102/24 brd 172.16.0.255 scope global noprefixroute ens34
       valid_lft forever preferred_lft forever
    inet6 fe80::70c8:1e3e:d0e6:e2ad/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::6906:4baa:6f20:ac62/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000    # interface created automatically when the ipip module is loaded
    link/ipip 0.0.0.0 brd 0.0.0.0
[root@real_server1 ~]# ip addr add 10.0.0.150/32 dev tunl0    # add an IP address to tunl0: the VIP
[root@real_server1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:60:f9:db brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.102/24 brd 10.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::8c08:d621:e4c3:f906/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::3b31:ed92:f7df:1c42/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:60:f9:ef brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.102/24 brd 172.16.0.255 scope global noprefixroute ens34
       valid_lft forever preferred_lft forever
    inet6 fe80::70c8:1e3e:d0e6:e2ad/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::6906:4baa:6f20:ac62/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.0.0.150/32 scope global tunl0
       valid_lft forever preferred_lft forever
[root@real_server1 ~]# ip link set up tunl0       # bring the interface up

# disable reverse-path validation
[root@real_server1 ~]# sysctl -a |grep rp_filter
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.ens33.arp_filter = 0
net.ipv4.conf.ens33.rp_filter = 1
net.ipv4.conf.ens34.arp_filter = 0
net.ipv4.conf.ens34.rp_filter = 1
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.rp_filter = 0
net.ipv4.conf.tunl0.arp_filter = 0
net.ipv4.conf.tunl0.rp_filter = 1
sysctl: reading key "net.ipv6.conf.all.stable_secret"
sysctl: reading key "net.ipv6.conf.default.stable_secret"
sysctl: reading key "net.ipv6.conf.ens33.stable_secret"
sysctl: reading key "net.ipv6.conf.ens34.stable_secret"
sysctl: reading key "net.ipv6.conf.lo.stable_secret"
sysctl: reading key "net.ipv6.conf.tunl0.stable_secret"
[root@real_server1 ~]# sysctl -w net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.all.rp_filter = 0
[root@real_server1 ~]# sysctl -w net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.rp_filter = 0
[root@real_server1 ~]# sysctl -w net.ipv4.conf.ens33.rp_filter=0
net.ipv4.conf.ens33.rp_filter = 0
[root@real_server1 ~]# sysctl -w net.ipv4.conf.ens34.rp_filter=0
net.ipv4.conf.ens34.rp_filter = 0
# real_server2: same steps
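A condensed sketch of those same steps on real_server2; the extra tunl0 rp_filter line is an assumption based on the kernel documentation quoted earlier (the max of conf/all and the per-interface value is used), since the original commands leave tunl0 at 1:

[root@real_server2 ~]# modprobe ipip                         # load the ipip module
[root@real_server2 ~]# ip addr add 10.0.0.150/32 dev tunl0   # put the VIP on tunl0
[root@real_server2 ~]# ip link set tunl0 up                  # bring the interface up
[root@real_server2 ~]# sysctl -w net.ipv4.conf.all.rp_filter=0
[root@real_server2 ~]# sysctl -w net.ipv4.conf.default.rp_filter=0
[root@real_server2 ~]# sysctl -w net.ipv4.conf.ens33.rp_filter=0
[root@real_server2 ~]# sysctl -w net.ipv4.conf.ens34.rp_filter=0
[root@real_server2 ~]# sysctl -w net.ipv4.conf.tunl0.rp_filter=0   # assumption: also clear the tunnel interface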

LVS configuration (lvs1 and lvs2)

[root@lvs1 ~]# modprobe ipip
[root@lvs1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:52:6e:8b brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::8c08:d621:e4c3:f906/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::3b31:ed92:f7df:1c42/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::762a:1b8c:1108:e457/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:52:6e:9f brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.100/24 brd 172.16.0.255 scope global noprefixroute ens34
       valid_lft forever preferred_lft forever
    inet6 fe80::b58f:8285:15b2:3a16/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::301a:2f1f:223b:ebe9/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::5a8e:7498:4bc7:f708/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
[root@lvs1 ~]# ip addr add 10.0.0.150/32 dev tunl0
[root@lvs1 ~]# ip link set up tunl0
[root@lvs1 ~]# ip addr del 10.0.0.150/32 dev ens33

LVS configuration (via the keepalived configuration file)

# configure tunnel mode by editing the keepalived configuration file
[root@lvs1 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS1
    vrrp_iptables
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

# the vrrp_instance block has been removed
virtual_server 10.0.0.150 80 {
    delay_loop 6
    lb_algo rr
    lb_kind TUN                 # switch the mode to tunnelling
    persistence_timeout 50
    protocol TCP

    real_server 172.16.0.102 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
    real_server 172.16.0.103 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}
[root@lvs1 ~]# systemctl restart keepalived.service     # restart the service
[root@lvs1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:52:6e:8b brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::8c08:d621:e4c3:f906/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::3b31:ed92:f7df:1c42/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::762a:1b8c:1108:e457/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:52:6e:9f brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.100/24 brd 172.16.0.255 scope global noprefixroute ens34
       valid_lft forever preferred_lft forever
    inet6 fe80::b58f:8285:15b2:3a16/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::301a:2f1f:223b:ebe9/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::5a8e:7498:4bc7:f708/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.0.0.150/32 scope global tunl0
       valid_lft forever preferred_lft forever

Connectivity test

LVS configuration (using the ipvsadm command)

[root@lvs1 ~]# ipvsadm -C      # clear the existing IPVS rules
[root@lvs1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@lvs1 ~]# ipvsadm -A -t 10.0.0.150:80 -s rr      # add the virtual server
[root@lvs1 ~]# ipvsadm -a -t 10.0.0.150:80 -r 172.16.0.102 -i     # add a real server in tunnel mode
[root@lvs1 ~]# ipvsadm -a -t 10.0.0.150:80 -r 172.16.0.103 -i
[root@lvs1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.150:80 rr
  -> 172.16.0.102:80              Tunnel  1      0          0
  -> 172.16.0.103:80              Tunnel  1      0          0
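Rules added by hand with ipvsadm are lost on reboot. The -S and -R options listed in the ipvsadm section can save and restore them; the file path below is an assumption (it is the location the ipvsadm service conventionally uses):

[root@lvs1 ~]# ipvsadm -S -n > /etc/sysconfig/ipvsadm   # save the current rules in numeric form
[root@lvs1 ~]# ipvsadm -C                               # wipe them
[root@lvs1 ~]# ipvsadm -R < /etc/sysconfig/ipvsadm      # restore them from the saved file
[root@lvs1 ~]# ipvsadm -Ln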

Connectivity test

Packet capture


# This shows that the client's requests to 10.0.0.150 are forwarded by the LVS node to real_server1 for processing, and the response is sent back to the client from real_server1's ens33 interface

Optimizing the keepalived deployment (VRRP failover + e-mail alerts + WeChat Work alerts)

VRRP master/backup switching (same configuration on lvs1 and lvs2)

Create the health-check script on lvs1 and lvs2

[root@lvs1 ~]# cat /etc/keepalived/scripts/check_run.sh
#!/bin/bash
alive=`ipvsadm -Ln | grep "80" |grep -E -v "^(TCP|UDP)"|wc -l`
if [ ${alive} -eq 0 ];then
    exit 1
else
    exit 0
fi
[root@lvs1 ~]# chmod +x  /etc/keepalived/scripts/check_run.sh       # make the script executable
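The only contract between this script and keepalived is its exit status (0 = healthy, non-zero = failed), so it can be exercised by hand before it is wired into vrrp_script:

# run the health check manually and print its exit code
[root@lvs1 ~]# /etc/keepalived/scripts/check_run.sh; echo "exit code: $?"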

Configure the mail service on lvs1 and lvs2 (same configuration on both)

Continue as prompted

# install the packages
[root@lvs1 ~]# yum -y install sendmail
[root@lvs1 ~]# yum install -y mailx
# start the service
[root@lvs1 ~]# systemctl start sendmail
# if it fails to start, postfix may be installed; stop postfix and restart sendmail:
systemctl stop postfix
systemctl restart sendmail

# edit the mailx configuration file and add the following settings
set from=xxxxxx@163.com        (the sending account)
set smtp=smtp.163.com
set smtp-auth-user=xxxxxx@163.com        (the sending account)
set smtp-auth-password=xxxxxx        (the SMTP authorization code)
set smtp-auth=login
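Before relying on the notify scripts, it helps to confirm that mailx can actually deliver through the configured relay; a minimal test (the recipient address is a placeholder):

# send a test message; if nothing arrives, check the mail log on the sending host
[root@lvs1 ~]# echo "keepalived mail test from $(hostname)" | mail -s "mail test" someone@example.com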

Create the e-mail notification scripts

[root@lvs1 ~]# cat /etc/keepalived/scripts/master.sh
#!/bin/bash
name=`hostname`
ip=`ip a |sed -n '/ens33/p' |grep "^ " |head -n1|awk '{print $2}'`
`echo "${name}: ${ip} switch from backup to master " | mail -s 'vrrp_switch' 1502975943@qq.com`
[root@lvs1 ~]# chmod +x master.sh
[root@lvs1 ~]# cat /etc/keepalived/scripts/backup.sh
#!/bin/bash
name=`hostname`
ip=`ip a |sed -n '/ens33/p' |grep "^ " |head -n1|awk '{print $2}'`
`echo "${name}: ${ip} switch from master to backup " | mail -s 'vrrp_switch' 1502975943@qq.com`
[root@lvs1 ~]# chmod +x backup.sh

WeChat Work alerts (same steps on lvs1 and lvs2)

Register a WeChat Work (enterprise WeChat) account (not demonstrated here)

Log in to WeChat Work

In the Contacts tab, add a group or a member; members can be invited via WeChat or SMS

Under Enterprise Apps, create a new application

Configure the application

# add the notification script
[root@lvs1 scripts]# pwd
/etc/keepalived/scripts
[root@lvs1 scripts]# vim wechat.py
#!/usr/bin/python
#_*_coding:utf-8 _*_
import urllib,urllib2
import json
import sys
import simplejson

reload(sys)
sys.setdefaultencoding('utf-8')

def gettoken(corpid,corpsecret):
    gettoken_url = 'https://qyapi.weixin.qq.com/cgi-bin/gettoken?corpid=' + corpid + '&corpsecret=' + corpsecret
#    print  gettoken_url
    try:
        token_file = urllib2.urlopen(gettoken_url)
    except urllib2.HTTPError as e:
        print e.code
        print e.read().decode("utf8")
        sys.exit()
    token_data = token_file.read().decode('utf-8')
    token_json = json.loads(token_data)
    token_json.keys()
    token = token_json['access_token']
    return token

def senddata(access_token,user,subject,content):
    send_url = 'https://qyapi.weixin.qq.com/cgi-bin/message/send?access_token=' + access_token
    send_values = {
        "touser":'yanmb',        # user account in the enterprise directory
        "toparty":"2",           # department id in the enterprise directory      ## fill in
        "msgtype":"text",        # message type
        "agentid":"1000006",     # application id in the enterprise directory     ## fill in
        "text":{
            "content":subject + '\n' + content
        },
        "safe":"0"
    }
#    send_data = json.dumps(send_values, ensure_ascii=False)
    send_data = simplejson.dumps(send_values, ensure_ascii=False).encode('utf-8')
    print(send_data)
    send_request = urllib2.Request(send_url, send_data)
    response = json.loads(urllib2.urlopen(send_request).read())
    print str(response)

if __name__ == '__main__':
    user = str(sys.argv[1])        # first command-line argument
    subject = str(sys.argv[2])     # second command-line argument
    content = str(sys.argv[3])     # third command-line argument
    corpid =  'wXXXXXXXX'          # CorpID identifying the enterprise            ## fill in
    corpsecret = 'XXXXXXXXX'       # Secret of the application management group   ## fill in
    accesstoken = gettoken(corpid,corpsecret)
    senddata(accesstoken,user,subject,content)

The script above requires simplejson to be installed before it can run

[root@lvs1 scripts]# yum install python-simplejson -y
[root@lvs1 scripts]# chmod +x wechat.py     # make it executable
[root@lvs1 scripts]# ./wechat.py 1 2 3      # test run
{"text": {"content": "2\n3"}, "safe": "0", "msgtype": "text", "touser": "LiJun", "agentid": "1000005", "toparty": "3"}
{u'invaliduser': u'', u'errcode': 0, u'errmsg': u'ok'}
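For troubleshooting, the same two API calls the script makes (gettoken, then message/send, both URLs taken from the script above) can be reproduced with curl; CORPID, SECRET, touser, toparty and agentid are placeholders to fill in exactly as in wechat.py:

# 1. obtain an access token from the WeChat Work API
TOKEN=$(curl -s "https://qyapi.weixin.qq.com/cgi-bin/gettoken?corpid=CORPID&corpsecret=SECRET" \
        | python -c 'import sys,json; print(json.load(sys.stdin)["access_token"])')
# 2. send a text message through the application
curl -s -H 'Content-Type: application/json' -d '{
  "touser": "yanmb", "toparty": "2", "msgtype": "text",
  "agentid": "1000006", "text": {"content": "keepalived test"}, "safe": "0"
}' "https://qyapi.weixin.qq.com/cgi-bin/message/send?access_token=${TOKEN}"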

Modify the keepalived configuration file

[root@lvs1 scripts]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS1
    vrrp_iptables
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_script check_run {                 # define the check script
    script "/etc/keepalived/scripts/check_run.sh"
    interval 5                          # run every 5 seconds
    weight -20                          # subtract 20 from the VRRP priority while the script returns non-zero
    rise 2                              # two consecutive successes count as success
    fall 2                              # two consecutive failures count as failure
}

vrrp_instance VI_1 {
#   vrrp_iptables
#   strict_mode off
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.150
    }
    track_script {                      # call the check script
        check_run
    }
    notify_master "/etc/keepalived/scripts/master.sh"     # run when this node becomes master
    notify_backup "/etc/keepalived/scripts/backup.sh"     # run when this node becomes backup
}

virtual_server 10.0.0.150 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 172.16.0.102 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
        notify_up "/etc/keepalived/scripts/wechat.py 1 from_lvs1_info--real_server1:status up"      # run when the check succeeds
        notify_down "/etc/keepalived/scripts/wechat.py 2 from_lvs1_info--real_server1:status down"  # run when the check fails
    }
    real_server 172.16.0.103 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
        notify_up "/etc/keepalived/scripts/wechat.py 1 from_lvs1_info--real_server2:status up"
        notify_down "/etc/keepalived/scripts/wechat.py 2 from_lvs1_info--real_server2:status down"
    }
}

# lvs2 configuration
[root@lvs2 scripts]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS2
    vrrp_iptables
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_script check_run {                 # define the check script
    script "/etc/keepalived/scripts/check_run.sh"
    interval 5                          # run every 5 seconds
    weight -20                          # subtract 20 from the VRRP priority while the script returns non-zero
    rise 2                              # two consecutive successes count as success
    fall 2                              # two consecutive failures count as failure
}

vrrp_instance VI_1 {
#   vrrp_iptables
#   strict_mode off
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.150
    }
    track_script {                      # call the check script
        check_run
    }
    notify_master "/etc/keepalived/scripts/master.sh"     # run when this node becomes master
    notify_backup "/etc/keepalived/scripts/backup.sh"     # run when this node becomes backup
}

virtual_server 10.0.0.150 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 172.16.0.102 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
        notify_up "/etc/keepalived/scripts/wechat.py 1 from_lvs2_info--real_server1:status up"      # run when the check succeeds
        notify_down "/etc/keepalived/scripts/wechat.py 2 from_lvs2_info--real_server1:status down"  # run when the check fails
    }
    real_server 172.16.0.103 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
        notify_up "/etc/keepalived/scripts/wechat.py 1 from_lvs2_info--real_server2:status up"
        notify_down "/etc/keepalived/scripts/wechat.py 2 from_lvs2_info--real_server2:status down"
    }
}
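After edits this extensive it is worth validating the file before reloading; recent keepalived 2.0.x builds provide a configuration-test option (treat the exact flag as an assumption if your build differs):

# syntax-check the configuration, then reload keepalived without a full restart
[root@lvs1 scripts]# keepalived -t -f /etc/keepalived/keepalived.conf && systemctl reload keepalived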

Start the services

[root@lvs1 ~]# rmmod ipip    # unload the ipip module
[root@lvs1 ~]# systemctl start keepalived

[root@lvs2 ~]# rmmod ipip
[root@lvs2 ~]# systemctl start keepalived

Test automatic VRRP failover

[root@lvs1 ~]# ip link set down ens34     # take the ens34 interface down
[root@lvs1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.150:80 rr persistent 50

Check the lvs1 log file

Test the real-server state-change notifications

[root@real_server1 ~]# systemctl stop httpd      # stop httpd on real_server1
# check on lvs2
[root@lvs2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.150:80 rr persistent 50
  -> 172.16.0.103:80              Route   1      0          0

