Dual-Server High Availability, Load Balancing, and MySQL (Read/Write Splitting, Automatic Master-Slave Failover) Architecture Design
A few days ago a reader wrote in asking for help implementing this setup: with only two machines, when one dies the other must take over its services, and while both machines are healthy, both should carry traffic. The architecture below is the resulting design.
Architecture Overview
The architecture uses keepalived for dual-server high availability, maintaining one external (public) VIP and one internal VIP. Under normal operation both VIPs are bound to server1. Web requests arrive at Nginx on server1; Nginx serves static resources directly from the local machine and load-balances dynamic PHP requests across server1 and server2. SQL requests go to the Atlas MySQL middleware, which sends write operations to the internal VIP and read operations to server2, thereby implementing read/write splitting.
When the primary server server1 goes down, keepalived detects the failure, immediately binds both the external and internal VIPs to server2, and server2's MySQL is promoted to master. Because the external VIP has moved, web requests now reach server2's Nginx; Nginx detects that server1 is down and stops forwarding requests to server1's php-fpm. Subsequent SQL requests go to the local Atlas as before; Atlas sends writes to the internal VIP and reads to server2's MySQL, and since the internal VIP is now bound to server2, server2's MySQL handles both writes and reads.
When server1 recovers, keepalived does not preempt server2's VIPs and service continues normally. server1's MySQL can then be reintroduced as either master or slave.
Architecture Requirements
Three conditions must be met to implement this architecture:
- the servers can be assigned internal IPs, and those internal IPs can reach each other;
- the external IPs allocated by the IDC can be bound to either server freely, i.e. the external IPs are not tied to a MAC address;
- the MySQL servers support GTID, i.e. MySQL 5.6.5 or later.
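The GTID requirement can be checked up front. Below is a hypothetical pre-flight helper (the function name and parsing are our own, not part of the original setup) that tests whether a plain x.y.z MySQL version string is at least 5.6.5:

```shell
# Hypothetical check: does this MySQL version support GTID (>= 5.6.5)?
# Assumes a plain x.y.z version string (e.g. extracted from `mysql -V`);
# suffixed strings like "5.6.22-log" would need trimming first.
gtid_capable() {
  major=$(echo "$1" | cut -d. -f1)
  minor=$(echo "$1" | cut -d. -f2)
  patch=$(echo "$1" | cut -d. -f3)
  if [ "$major" -gt 5 ]; then return 0; fi
  if [ "$major" -eq 5 ] && [ "$minor" -gt 6 ]; then return 0; fi
  if [ "$major" -eq 5 ] && [ "$minor" -eq 6 ] && [ "$patch" -ge 5 ]; then return 0; fi
  return 1
}
```

For example, `gtid_capable 5.6.22` succeeds while `gtid_capable 5.5.40` fails.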
Environment
server1
eth0: 10.96.153.110 (external IP)  eth1: 192.168.3.100 (internal IP)
server2
eth0: 10.96.153.114 (external IP)  eth1: 192.168.3.101 (internal IP)
Both servers run CentOS 6.
External VIP: 10.96.153.239  Internal VIP: 192.168.3.150
hosts Configuration
/etc/hosts:
192.168.3.100 server1
192.168.3.101 server2
Installing Nginx, PHP, and MySQL
We recommend using EZHTTP to install these packages.
Nginx Configuration
server1:
http {
    [...]
    upstream php-server {
        server 192.168.3.101:9000;
        server 127.0.0.1:9000;
        keepalive 100;
    }
    [...]
    server {
        [...]
        location ~ \.php$ {
            fastcgi_pass php-server;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
        [...]
    }
    [...]
}
server2:
http {
    [...]
    upstream php-server {
        server 192.168.3.100:9000;
        server 127.0.0.1:9000;
        keepalive 100;
    }
    [...]
    server {
        [...]
        location ~ \.php$ {
            fastcgi_pass php-server;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
        [...]
    }
    [...]
}
The purpose of these two configurations is to load-balance PHP requests across the two servers.
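How quickly Nginx stops sending traffic to a dead php-fpm is governed by the upstream health parameters; by default a peer is marked failed after one failed attempt and retried after 10 seconds. If different behavior is wanted, the standard max_fails and fail_timeout parameters can be tuned, e.g. (illustrative values, not from the original configuration):

```nginx
upstream php-server {
    # mark a peer down after 3 failures, retry it after 30s
    server 192.168.3.101:9000 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:9000 max_fails=3 fail_timeout=30s;
    keepalive 100;
}
```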
MySQL Configuration
Installing MySQL Utilities
We need the replication administration tools from MySQL Utilities to perform master-slave switchover.
cd /tmp
wget http://dev.mysql.com/get/Downloads/MySQLGUITools/mysql-utilities-1.5.3.tar.gz
tar xzf mysql-utilities-1.5.3.tar.gz
cd mysql-utilities-1.5.3
python setup.py build
python setup.py install
my.cnf Configuration
server1:
[mysql]
[...]
protocol=tcp
[...]
[mysqld]
[...]
# BINARY LOGGING #
log-bin = /usr/local/mysql/data/mysql-bin
expire-logs-days = 14
sync-binlog = 1
binlog-format = ROW
log-slave-updates = true
gtid-mode = on
enforce-gtid-consistency = true
master-info-repository = TABLE
relay-log-info-repository = TABLE
sync-master-info = 1
server-id = 1
report-host = server1
report-port = 3306
[...]
server2:
[mysql]
[...]
protocol=tcp
[...]
[mysqld]
[...]
# BINARY LOGGING #
log-bin = /usr/local/mysql/data/mysql-bin
expire-logs-days = 14
sync-binlog = 1
binlog-format = ROW
log-slave-updates = true
gtid-mode = on
enforce-gtid-consistency = true
master-info-repository = TABLE
relay-log-info-repository = TABLE
sync-master-info = 1
server-id = 2
report-host = server2
report-port = 3306
[...]
These two configurations enable the binary log and GTID mode; note that server-id and report-host must differ between the two servers.
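After restarting mysqld with these settings, the GTID configuration can be verified from any client using standard MySQL introspection statements (a quick sanity check, not part of the original walkthrough):

```sql
-- Expect gtid_mode = ON and enforce_gtid_consistency = ON on both servers
SHOW VARIABLES LIKE 'gtid_mode';
SHOW VARIABLES LIKE 'enforce_gtid_consistency';
-- server_id must differ: 1 on server1, 2 on server2
SHOW VARIABLES LIKE 'server_id';
```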
Granting root remote access:
We need to grant the root account remote access on both MySQL servers.
mysql> grant all on *.* to 'root'@'192.168.3.%' identified by 'Xp29at5F37' with grant option;
mysql> grant all on *.* to 'root'@'server1' identified by 'Xp29at5F37' with grant option;
mysql> grant all on *.* to 'root'@'server2' identified by 'Xp29at5F37' with grant option;
mysql> flush privileges;
Setting Up MySQL Replication
Run the following command on either server:
mysqlreplicate --master=root:Xp29at5F37@server1:3306 --slave=root:Xp29at5F37@server2:3306 --rpl-user=rpl:o67DhtaW
# master on server1: ... connected.
# slave on server2: ... connected.
# Checking for binary logging on master...
# Setting up replication...
# ...done.
Display the replication topology:
mysqlrplshow --master=root:Xp29at5F37@server1 --discover-slaves-login=root:Xp29at5F37
# master on server1: ... connected.
# Finding slaves for master: server1:3306
# Replication Topology Graph
server1:3306 (MASTER)
   |
   +--- server2:3306 - (SLAVE)
Check the replication status:
mysqlrplcheck --master=root:Xp29at5F37@server1 --slave=root:Xp29at5F37@server2
# master on server1: ... connected.
# slave on server2: ... connected.
Test Description                                                     Status
---------------------------------------------------------------------------
Checking for binary logging on master                                [pass]
Are there binlog exceptions?                                         [pass]
Replication user exists?                                             [pass]
Checking server_id values                                            [pass]
Checking server_uuid values                                          [pass]
Is slave connected to master?                                        [pass]
Check master information file                                        [pass]
Checking InnoDB compatibility                                        [pass]
Checking storage engines compatibility                               [pass]
Checking lower_case_table_names settings                             [pass]
Checking slave delay (seconds behind master)                         [pass]
# ...done.
Create the failover script on server2
vi /data/sh/mysqlfailover.sh

#!/bin/bash
mysqlrpladmin --slave=root:Xp29at5F37@server2:3306 failover

chmod +x /data/sh/mysqlfailover.sh
Keepalived Configuration
Installing keepalived (on both servers)
yum -y install keepalived
chkconfig keepalived on
keepalived configuration (server1)
vi /etc/keepalived/keepalived.conf

vrrp_sync_group VG_1 {
    group {
        inside_network
        outside_network
    }
}
vrrp_instance inside_network {
    state BACKUP
    interface eth1
    virtual_router_id 51
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 3489
    }
    virtual_ipaddress {
        192.168.3.150/24
    }
    nopreempt
}
vrrp_instance outside_network {
    state BACKUP
    interface eth0
    virtual_router_id 50
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 3489
    }
    virtual_ipaddress {
        10.96.153.239/24
    }
    nopreempt
}
keepalived configuration (server2)
vrrp_sync_group VG_1 {
    group {
        inside_network
        outside_network
    }
}
vrrp_instance inside_network {
    state BACKUP
    interface eth1
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 3489
    }
    virtual_ipaddress {
        192.168.3.150
    }
    notify_master /data/sh/mysqlfailover.sh
}
vrrp_instance outside_network {
    state BACKUP
    interface eth0
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 3489
    }
    virtual_ipaddress {
        10.96.153.239/24
    }
}
Two points about this keepalived configuration deserve attention:
- both servers set state BACKUP; server1 adds nopreempt and has a higher priority than server2, which is what prevents server1 from taking the VIPs back when it recovers from a crash;
- server2 sets notify_master /data/sh/mysqlfailover.sh, which means that when server2 takes over from server1, this script runs and promotes server2's MySQL to master.
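One caveat with notify_master: keepalived invokes the script on every transition to MASTER, so a flapping VRRP state could trigger mysqlrpladmin failover repeatedly. A hypothetical hardening (not in the original article) is a lock-file guard; run_once below is such a sketch, with the real failover command passed as the guarded command:

```shell
# Hypothetical guard: run a command at most once, keyed by a lock file.
# Usage idea (paths and command mirror the failover script above):
#   run_once /var/run/mysqlfailover.done \
#       mysqlrpladmin --slave=root:Xp29at5F37@server2:3306 failover
run_once() {
  lock=$1
  shift
  [ -e "$lock" ] && return 0   # already ran for this transition
  "$@" && touch "$lock"        # create the lock only on success
}
```

The lock file would need to be removed as part of restoring server1 to master, so a later genuine failover can run again.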
Atlas Setup
Installing Atlas
Download the latest release from https://github.com/Qihoo360/Atlas/releases
cd /tmp
wget https://github.com/Qihoo360/Atlas/releases/download/2.2.1/Atlas-2.2.1.el6.x86_64.rpm
rpm -i Atlas-2.2.1.el6.x86_64.rpm
Configuring Atlas
cd /usr/local/mysql-proxy/conf
cp test.cnf my.cnf
vi my.cnf
Adjust the following parameters:
proxy-backend-addresses = 192.168.3.150:3306
proxy-read-only-backend-addresses = 192.168.3.101:3306
pwds = root:qtyU1btXOo074Itvx0UR9Q==
event-threads = 8
Note:
- proxy-backend-addresses is set to the internal VIP;
- proxy-read-only-backend-addresses is set to server2's IP;
- in pwds, root:qtyU1btXOo074Itvx0UR9Q== is the database user and its encrypted password; the encrypted value is generated with /usr/local/mysql-proxy/bin/encrypt Xp29at5F37.
For a more detailed explanation of the parameters, see the Atlas configuration documentation.
Starting Atlas
/usr/local/mysql-proxy/bin/mysql-proxy --defaults-file=/usr/local/mysql-proxy/conf/my.cnf
After this, applications simply use 127.0.0.1:1234 as their MySQL address.
Failure Test: server1 Goes Down
To verify that keepalived works, we simulate a server1 failure by running the shutdown command on server1. Then log in to server2 and run ip addr; the output is as follows:
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:81:9d:42 brd ff:ff:ff:ff:ff:ff
    inet 10.96.153.114/24 brd 10.96.153.255 scope global eth0
    inet 10.96.153.239/24 scope global secondary eth0
    inet6 fe80::20c:29ff:fe81:9d42/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:81:9d:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.101/24 brd 192.168.3.255 scope global eth1
    inet 192.168.3.150/32 scope global eth1
    inet6 fe80::20c:29ff:fe81:9d4c/64 scope link
       valid_lft forever preferred_lft forever

The external VIP 10.96.153.239 and the internal VIP 192.168.3.150 have both moved to server2, confirming that keepalived is working correctly.
Next, verify that the master-slave switchover happened automatically: log in to server2's MySQL and run show slave status\G:
mysql> show slave status\G
Empty set (0.00 sec)
The slave status is empty, which confirms that server2 has been promoted to master.
Next, test that server1 does not take back the VIPs after recovering. Why test this? Atlas's write backend is the internal VIP; if server1 reclaimed the VIPs on recovery, SQL writes would be sent to server1's MySQL even though its data is older than server2's, causing data inconsistency. This makes it a very important test.
Start server1, then run ip addr; the output is:
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:f1:4f:4e brd ff:ff:ff:ff:ff:ff
    inet 10.96.153.110/24 brd 10.96.153.255 scope global eth0
    inet6 fe80::20c:29ff:fef1:4f4e/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:f1:4f:58 brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.100/24 brd 192.168.3.255 scope global eth1
    inet6 fe80::20c:29ff:fef1:4f58/64 scope link
       valid_lft forever preferred_lft forever
As shown, server1 did not take back the VIPs, so the test passes. One frustrating note: this behavior could not be reproduced in a virtual-machine environment, for reasons unknown.
Recovering server1
Re-add server1's MySQL as a slave. After server1 recovers from the crash, its data is stale relative to server2's, so we first configure server1's MySQL as a slave of server2.
mysqlreplicate --master=root:Xp29at5F37@server2:3306 --slave=root:Xp29at5F37@server1:3306 --rpl-user=rpl:o67DhtaW
# master on server2: ... connected.
# slave on server1: ... connected.
# Checking for binary logging on master...
# Setting up replication...
# ...done.
The output shows that replication was set up successfully.
Check how far server1's MySQL has caught up. Since server1's MySQL has just recovered from a crash, its data may lag far behind server2's, so we first check how replication is progressing. Log in to server1's MySQL and run the following SQL:
mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: server2
                  Master_User: rpl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000004
          Read_Master_Log_Pos: 2894
               Relay_Log_File: mysql-relay-bin.000002
                Relay_Log_Pos: 408
        Relay_Master_Log_File: mysql-bin.000004
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
Note that Read_Master_Log_Pos is 2894. Now log in to server2's MySQL and run:
mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000004
         Position: 2894
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set: 9347e042-9044-11e4-b4f0-000c29f14f4e:1-7,
f5bbfc15-904a-11e4-b519-000c29819d42:1-6
1 row in set (0.00 sec)
Compare the Position value with Read_Master_Log_Pos. If the two values are very close or equal, the data is almost fully synchronized and the switchover can proceed; if they are far apart, wait for replication to catch up.
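This comparison can be scripted; the helper below is a hypothetical sketch (the name and the 1000-byte threshold are our own) that treats the two positions as byte offsets and accepts the switchover when the gap is small. It assumes master and slave are on the same binlog file, as in the output above:

```shell
# Hypothetical check: is the slave close enough to the master to switch over?
# master_pos = Position from `show master status` on server2,
# slave_pos  = Read_Master_Log_Pos from `show slave status` on server1.
lag_ok() {
  master_pos=$1
  slave_pos=$2
  threshold=${3:-1000}   # maximum acceptable gap in bytes
  gap=$((master_pos - slave_pos))
  [ "$gap" -ge 0 ] && [ "$gap" -le "$threshold" ]
}
```

For the values above (2894 vs 2894) the gap is zero, so the switchover can proceed.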
Blocking MySQL Writes
SQL writes must be blocked before the switchover; otherwise the switch would cause data inconsistency. We block writes in Atlas. On server2, log in to the Atlas admin interface:
mysql -h127.0.0.1 -P2345 -uuser -ppwd

mysql> SELECT * FROM backends;
+-------------+--------------------+-------+------+
| backend_ndx | address            | state | type |
+-------------+--------------------+-------+------+
|           1 | 192.168.3.150:3306 | up    | rw   |
|           2 | 192.168.3.101:3306 | up    | ro   |
+-------------+--------------------+-------+------+
2 rows in set (0.00 sec)
The SELECT * FROM backends; output shows that the read/write backend has backend_ndx 1, so we take it offline with SET OFFLINE 1;.
mysql> SET OFFLINE 1;
+-------------+--------------------+---------+------+
| backend_ndx | address            | state   | type |
+-------------+--------------------+---------+------+
|           1 | 192.168.3.150:3306 | offline | rw   |
+-------------+--------------------+---------+------+
1 row in set (0.00 sec)

mysql> SELECT * FROM backends;
+-------------+--------------------+---------+------+
| backend_ndx | address            | state   | type |
+-------------+--------------------+---------+------+
|           1 | 192.168.3.150:3306 | offline | rw   |
|           2 | 192.168.3.101:3306 | up      | ro   |
+-------------+--------------------+---------+------+
2 rows in set (0.00 sec)
At this point clients can no longer write data.
Promoting server1's MySQL Back to Master
mysqlrpladmin --master=root:Xp29at5F37@server2:3306 --new-master=root:Xp29at5F37@server1:3306 --demote-master --discover-slaves-login=root:Xp29at5F37 switchover
# Discovering slaves for master at server2:3306
# Discovering slave at server1:3306
# Found slave: server1:3306
# Checking privileges.
# Performing switchover from master at server2:3306 to slave at server1:3306.
# Checking candidate slave prerequisites.
# Checking slaves configuration to master.
# Waiting for slaves to catch up to old master.
# Stopping slaves.
# Performing STOP on all slaves.
# Demoting old master to be a slave to the new master.
# Switching slaves to new master.
# Starting all slaves.
# Performing START on all slaves.
# Checking slaves for errors.
# Switchover complete.
Check again that the switchover succeeded:
mysqlrplcheck --master=root:Xp29at5F37@server1 --slave=root:Xp29at5F37@server2
# master on server1: ... connected.
# slave on server2: ... connected.
Test Description                                                     Status
---------------------------------------------------------------------------
Checking for binary logging on master                                [pass]
Are there binlog exceptions?                                         [pass]
Replication user exists?                                             [pass]
Checking server_id values                                            [pass]
Checking server_uuid values                                          [pass]
Is slave connected to master?                                        [pass]
Check master information file                                        [pass]
Checking InnoDB compatibility                                        [pass]
Checking storage engines compatibility                               [pass]
Checking lower_case_table_names settings                             [pass]
Checking slave delay (seconds behind master)                         [pass]
# ...done.
To move the VIPs back to server1, run the following on server2:
/etc/init.d/keepalived restart
Then run ip addr on both machines to check where the VIPs are bound.
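Eyeballing ip addr output is error-prone; a small hypothetical helper (not from the original article) makes the check scriptable. vip_in matches an address in a given ip addr dump, and has_vip applies it to the live output (assuming the iproute2 ip tool is installed):

```shell
# Hypothetical helpers for VIP checks.
# vip_in ADDR TEXT: succeed if TEXT contains an `inet ADDR/...` entry.
vip_in() {
  printf '%s\n' "$2" | grep -q "inet $1[/ ]"
}
# has_vip ADDR: succeed if ADDR is currently bound on this host.
has_vip() {
  vip_in "$1" "$(ip addr)"
}
```

For example, `has_vip 10.96.153.239` should succeed only on the machine currently holding the external VIP.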
Bringing server2's Atlas Backend Back Online
On server2, log in to Atlas with mysql -h127.0.0.1 -P2345 -uuser -ppwd, then run SET ONLINE 1; to bring the backend back up (here 1 is the backend id, which can be found with SELECT * FROM backends;).
mysql> SET ONLINE 1;
+-------------+--------------------+---------+------+
| backend_ndx | address            | state   | type |
+-------------+--------------------+---------+------+
|           1 | 192.168.3.150:3306 | unknown | rw   |
+-------------+--------------------+---------+------+
1 row in set (0.00 sec)
mysql> SELECT * FROM backends;
+-------------+--------------------+-------+------+
| backend_ndx | address            | state | type |
+-------------+--------------------+-------+------+
|           1 | 192.168.3.150:3306 | up    | rw   |
|           2 | 192.168.3.101:3306 | up    | ro   |
+-------------+--------------------+-------+------+
2 rows in set (0.00 sec)
At this point server1 is the master again.