MySQL High Availability with MMM

  • I. MMM Overview
  • II. Installing CPAN Modules
  • III. MySQL Host Configuration
  • IV. mysql-mmm Configuration
  • V. MMM High Availability Testing

I. MMM Overview

MMM (Multi-Master Replication Manager for MySQL) is a scalable suite of Perl scripts for monitoring, failover, and management of MySQL master-master replication (only one node accepts writes at any given time). MMM can also load-balance reads across the slaves, and it manages a set of virtual IPs on a group of replicating servers. In addition, it ships with scripts for data backup and for resynchronizing nodes.
MySQL itself does not provide a replication failover solution. MMM supplies that failover capability and thereby makes MySQL highly available. Beyond managing the floating IPs, when the current master fails MMM automatically repoints the remaining slaves at the new master, so the replication configuration does not have to be changed by hand. It is a relatively mature solution.
See the official site for details:

http://mysql-mmm.org


Advantages: high availability and good scalability, with automatic failover. With master-master replication, only one node accepts writes at any time, which keeps the data consistent. When the active master fails, the other master takes over immediately and the slaves switch automatically, with no manual intervention.
Disadvantages: the monitor node is a single point of failure (though it can be made highly available with keepalived or heartbeat); at least three nodes are required, so there is a minimum host count; read/write splitting is required, which means writing read/write-splitting logic in front of the database. Under very read/write-heavy workloads it is not particularly stable and may suffer from replication lag or failed switchovers.
MMM is therefore not well suited to environments that demand strict data safety and have heavy read and write traffic.
Suitable scenarios: databases with heavy traffic where read/write splitting can be implemented.
MMM's main functionality is provided by the following three scripts:

mmm_mond: the monitoring daemon that performs all monitoring work and decides when to remove a node (it runs periodic heartbeat checks and, on failure, floats the writer IP to the other master).
mmm_agentd: the agent daemon that runs on each MySQL server and exposes a simple set of remote services to the monitor node.
mmm_control: a command-line tool for managing the mmm_mond process.

For the whole monitoring setup, the relevant users must be created in MySQL: an mmm_monitor user and an mmm_agent user, plus an mmm_tools user if you want to use MMM's backup tools.

II. Installing CPAN Modules

1. Environment
Disable SELinux and the firewall:

systemctl stop firewalld
setenforce 0

2. Configure /etc/hosts on all hosts

[root@master1 ~]# vim /etc/hosts
192.168.229.5 master1
192.168.229.6 master2
192.168.229.7 slave1
192.168.229.8 slave2
192.168.229.9 monitor
[root@master1 ~]# for i in master2 slave1 slave2 monitor;do scp /etc/hosts $i:/etc/;done
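
To confirm that name resolution works everywhere, a quick check such as the following can be run from each node (an optional sanity check, not part of the original steps):

# Ping every peer once by hostname to verify that /etc/hosts took effect
for i in master1 master2 slave1 slave2 monitor; do
    ping -c 1 -W 1 $i > /dev/null && echo "$i: OK" || echo "$i: unreachable"
done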

3. Configure NTP time synchronization

[root@master1 ~]# vim /etc/chrony.conf
Modify as follows:
server cn.pool.ntp.org iburst
allow 192.168.229.0/24
[root@master1 ~]# for i in master2 slave1 slave2 monitor;do scp /etc/chrony.conf $i:/etc/;done
[root@master1 ~]# systemctl restart chronyd
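
To verify that time synchronization is working, chrony's own tools can be queried on any node (a suggested check, not part of the original steps):

# Confirm that the configured NTP source is reachable and selected
chronyc sources -v
# Show the current offset and synchronization state
chronyc tracking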

Host plan

Role      IP              hostname   server_id   write VIP        read VIP
master1   192.168.229.5   master1    1           192.168.229.50   -
master2   192.168.229.6   master2    2           -                192.168.229.60
slave1    192.168.229.7   slave1     3           -                192.168.229.70
slave2    192.168.229.8   slave2     4           -                192.168.229.80
monitor   192.168.229.9   monitor    -           -                -

4. Install the required packages on all hosts
Required packages: perl, perl-devel, perl-CPAN, libart_lgpl.x86_64, rrdtool.x86_64, rrdtool-perl.x86_64.

[root@master1 ~]# yum -y install perl-* libart_lgpl.x86_64 rrdtool.x86_64 rrdtool-perl.x86_64

5. Switch the CPAN mirrors to domestic (China) mirrors
(1) View the current mirror configuration

[root@master1 ~]# cpan
Terminal does not support AddHistory.
cpan shell -- CPAN exploration and modules installation (v1.9800)
Enter 'h' for help.
cpan[1]> o conf urllist
urllist
    0 [http://cpan.rinet.ru/]
    1 [http://cpan.pesat.net.id/]
    2 [http://cpan.mirror.choon.net/]
Type 'o conf' to view all configuration items

(2) Remove the existing mirrors

cpan[2]> o conf urllist pop http://cpan.mirror.choon.net/
Please use 'o conf commit' to make the config permanent!
cpan[3]> o conf urllist pop http://cpan.pesat.net.id/
Please use 'o conf commit' to make the config permanent!
cpan[4]> o conf urllist pop http://cpan.rinet.ru/
Please use 'o conf commit' to make the config permanent!

(3) Add the new mirrors

cpan[5]> o conf urllist
urllist
Type 'o conf' to view all configuration items
cpan[6]> o conf urllist push http://mirrors.aliyun.com/CPAN/
Please use 'o conf commit' to make the config permanent!
cpan[7]> o conf urllist push http://mirrors.163.com/cpan/
Please use 'o conf commit' to make the config permanent!
cpan[8]> o conf urllist push ftp://mirrors.sohu.com/CPAN/
Please use 'o conf commit' to make the config permanent!
cpan[9]> o conf commit
commit: wrote '/root/.cpan/CPAN/MyConfig.pm'
cpan[10]> o conf urllist
urllist
    0 [http://mirrors.aliyun.com/CPAN/]
    1 [http://mirrors.163.com/cpan/]
    2 [ftp://mirrors.sohu.com/CPAN/]
Type 'o conf' to view all configuration items

6. Install the required Perl modules online

[root@master1 ~]# cpan -i Algorithm::Diff Class::Singleton DBI DBD::mysql Log::Dispatch Log::Log4perl Mail::Send Net::Ping Proc::Daemon Time::HiRes Params::Validate Net::ARP

If a module fails, remove it from the list and re-run the command until the output looks like the following.

CPAN: Storable loaded ok (v2.45)
Reading '/root/.cpan/Metadata'
Database was generated on Mon, 08 Mar 2021 19:17:02 GMT
CPAN: Module::CoreList loaded ok (v5.20210220)
Algorithm::Diff is up to date (1.201).
Class::Singleton is up to date (1.6).
DBI is up to date (1.643).
DBD::mysql is up to date (4.050).
Log::Dispatch is up to date (2.70).
Log::Log4perl is up to date (1.54).
Mail::Send is up to date (2.21).
Proc::Daemon is up to date (0.23).
Time::HiRes is up to date (1.9764).
Params::Validate is up to date (1.30).

The following modules failed to install and will be installed manually:

DBD::mysql, Net::Ping, Net::ARP

7. Manually install the failed modules
(1) Download the module tarballs

[root@master1 ~]# cd /usr/local/src/
[root@master1 src]# wget https://cpan.metacpan.org/authors/id/D/DV/DVEEDEN/DBD-mysql-4.050.tar.gz
[root@master1 src]# wget https://cpan.metacpan.org/authors/id/R/RU/RURBAN/Net-Ping-2.74.tar.gz
[root@master1 src]# wget https://cpan.metacpan.org/authors/id/C/CR/CRAZYDJ/Net-ARP-1.0.11.tgz
[root@master1 src]# ls
DBD-mysql-4.050.tar.gz  Net-ARP-1.0.11.tgz  Net-Ping-2.74.tar.gz

(2) Build and install the modules

[root@master1 src]# tar zxf DBD-mysql-4.050.tar.gz
[root@master1 src]# cd DBD-mysql-4.050/
[root@master1 DBD-mysql-4.050]# perl Makefile.PL
[root@master1 DBD-mysql-4.050]# make install
[root@master1 DBD-mysql-4.050]# cd ..
[root@master1 src]# tar zxf Net-Ping-2.74.tar.gz
[root@master1 src]# cd Net-Ping-2.74/
[root@master1 Net-Ping-2.74]# perl Makefile.PL
[root@master1 Net-Ping-2.74]# make install
[root@master1 Net-Ping-2.74]# cd ..
[root@master1 src]# tar zxf Net-ARP-1.0.11.tgz
[root@master1 src]# cd Net-ARP-1.0.11/
[root@master1 Net-ARP-1.0.11]# perl Makefile.PL
[root@master1 Net-ARP-1.0.11]# make install
[root@master1 ~]# cpan -i Algorithm::Diff Class::Singleton DBI DBD::mysql Log::Dispatch Log::Log4perl Mail::Send Net::Ping Proc::Daemon Time::HiRes Params::Validate Net::ARP
CPAN: Storable loaded ok (v2.45)
Reading '/root/.cpan/Metadata'
Database was generated on Tue, 09 Mar 2021 00:41:02 GMT
CPAN: Module::CoreList loaded ok (v5.20210220)
Algorithm::Diff is up to date (1.201).
Class::Singleton is up to date (1.6).
DBI is up to date (1.643).
DBD::mysql is up to date (4.050).
Log::Dispatch is up to date (2.70).
Log::Log4perl is up to date (1.54).
Mail::Send is up to date (2.21).
Net::Ping is up to date (2.74).
Proc::Daemon is up to date (0.23).
Time::HiRes is up to date (1.9764).
Params::Validate is up to date (1.30).
Net::ARP is up to date (1.0.11).

Note: every host must end up with all of these modules installed, so manually install whichever ones are missing on each host.

III. MySQL Host Configuration

1. Modify the MySQL configuration files
Install MySQL on master1, master2, slave1, and slave2 and set up replication so that master1 and master2 are masters of each other and slave1 and slave2 are slaves of master1. Add the following to /etc/my.cnf on every MySQL host; note that server-id must not be duplicated.
If the hosts were cloned from one image, delete auto.cnf and restart MySQL first:

[root@master1 ~]# rm -rf /usr/local/mysql/data/auto.cnf

(1) master1

[root@master1 ~]# vim /etc/my.cnf
Add:
log_bin=mysql-bin
binlog_format=mixed
server_id=1
relay_log=relay-bin
relay_log_index=slave-relay-bin.index
log_slave_updates=1
auto_increment_increment=2
auto_increment_offset=1

(2) master2

[root@master2 ~]# vim /etc/my.cnf
Add:
log_bin=mysql-bin
binlog_format=mixed
server_id=2
relay_log=relay-bin
relay_log_index=slave-relay-bin.index
log_slave_updates=1
auto_increment_increment=2
auto_increment_offset=2

(3) slave1

[root@slave1 ~]# vim /etc/my.cnf
Add:
server_id=3
relay_log=relay-bin
relay_log_index=slave-relay-bin.index
read_only=1

(4) slave2

[root@slave2 ~]# vim /etc/my.cnf
Add:
server_id=4
relay_log=relay-bin
relay_log_index=slave-relay-bin.index
read_only=1

(5) Parameter notes

log_slave_updates=1
master1 and master2 replicate master-master, while slave1 and slave2 replicate from master1.
(1) The master-master replication between master1 and master2 works in both directions (data written on master1 reaches master2, and data written on master2 reaches master1). (2) For the master-slave part, data written on master1 reaches slave1 and slave2. (3) However, data written on master2 would not reach slave1 and slave2, because by default master1 does not write the events it replicates from master2 into its own binary log.
The log_slave_updates parameter solves exactly this problem: with it enabled, master1 logs the updates it receives from master2, so they flow on to slave1 and slave2.
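
After restarting MySQL (next step), the settings can be double-checked on each node with a query like the one below. This is only an optional sanity check; adjust the credentials to your environment.

mysql -uroot -p -e "SHOW VARIABLES WHERE Variable_name IN ('server_id','log_bin','log_slave_updates','auto_increment_increment','auto_increment_offset','read_only');"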

2. Restart MySQL on all hosts

[root@slave1 data]# systemctl restart mysqld

3. Firewall settings
Skip this step if the firewall is disabled.

firewall-cmd --permanent --add-port=3306/tcp
firewall-cmd --reload

4. Configure replication
master1 and master2 are configured as master-master; slave1 and slave2 are configured as slaves of master1.
(1) Create the replication user on master1

mysql> grant replication slave on *.* to rep@'192.168.229.%' identified by 'Test123!';
Query OK, 0 rows affected, 1 warning (0.00 sec)

(2) Create the replication user on master2

mysql> grant replication slave on *.* to rep@'192.168.229.%' identified by 'Test123!';
Query OK, 0 rows affected, 1 warning (0.00 sec)

(3) Configure master2, slave1, and slave2 as slaves of master1
Get the current binlog file and position on master1:

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      453 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

(4) Run the following on master2, slave1, and slave2

mysql> change master to master_host='192.168.229.5',master_port=3306,master_user='rep',master_password='Test123!',master_log_file='mysql-bin.000001',master_log_pos=453;
Query OK, 0 rows affected, 2 warnings (0.02 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

(5) Verify replication

# On master2
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.229.5
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 453
Relay_Log_File: relay-bin.000003
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

# On slave1
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.229.5
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 453
Relay_Log_File: relay-bin.000003
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

# On slave2
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.229.5
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 453
Relay_Log_File: relay-bin.000003
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

If both Slave_IO_Running and Slave_SQL_Running are Yes, replication is working.
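
A quick way to check just these two flags on each replica is a one-liner like the following (a convenience sketch; the password is prompted for interactively):

mysql -uroot -p -e "SHOW SLAVE STATUS\G" | grep -E "Slave_(IO|SQL)_Running:"
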
(6) Configure master1 as a slave of master2
Get the current binlog file and position on master2:

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      453 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

(7) Run the following on master1

mysql> change master to master_host='192.168.229.6',master_port=3306,master_user='rep',master_password='Test123!',master_log_file='mysql-bin.000001',master_log_pos=453;
Query OK, 0 rows affected, 2 warnings (0.00 sec)

mysql> start slave;
Query OK, 0 rows affected (0.01 sec)

(8) Verify replication
On master1:

mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.229.6
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000002
Read_Master_Log_Pos: 154
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000002
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

If both Slave_IO_Running and Slave_SQL_Running are Yes, replication is working.

IV. mysql-mmm Configuration

1. Create the MMM users on the four MySQL nodes
(1) Create the agent account

mysql> grant super,replication client,process on *.* to 'mmm_agent'@'192.168.229.%' identified by 'Test123!';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> select * from mysql.user where user='mmm_agent'\G
*************************** 1. row ***************************
Host: 192.168.229.%
User: mmm_agent
Select_priv: N
Insert_priv: N
Update_priv: N
Delete_priv: N
Create_priv: N
Drop_priv: N
Reload_priv: N
Shutdown_priv: N
Process_priv: Y
File_priv: N
Grant_priv: N
References_priv: N
Index_priv: N
Alter_priv: N
Show_db_priv: N
Super_priv: Y
Create_tmp_table_priv: N
Lock_tables_priv: N
Execute_priv: N
Repl_slave_priv: N
Repl_client_priv: Y
Create_view_priv: N
Show_view_priv: N
Create_routine_priv: N
Alter_routine_priv: N
Create_user_priv: N
Event_priv: N
Trigger_priv: N
Create_tablespace_priv: N
ssl_type:
ssl_cipher:
x509_issuer:
x509_subject:
max_questions: 0
max_updates: 0
max_connections: 0
max_user_connections: 0
plugin: mysql_native_password
authentication_string: *48B1BB7AD34484EF0632D4B9A748CC861DFBE88B
password_expired: N
password_last_changed: 2021-03-09 22:31:47
password_lifetime: NULL
account_locked: N
1 row in set (0.00 sec)

(2) Create the monitoring account

mysql> grant replication client on *.* to 'mmm_monitor'@'192.168.229.%' identified by 'Test123!';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> select * from mysql.user where user='mmm_monitor'\G
*************************** 1. row ***************************
Host: 192.168.229.%
User: mmm_monitor
Select_priv: N
Insert_priv: N
Update_priv: N
Delete_priv: N
Create_priv: N
Drop_priv: N
Reload_priv: N
Shutdown_priv: N
Process_priv: N
File_priv: N
Grant_priv: N
References_priv: N
Index_priv: N
Alter_priv: N
Show_db_priv: N
Super_priv: N
Create_tmp_table_priv: N
Lock_tables_priv: N
Execute_priv: N
Repl_slave_priv: N
Repl_client_priv: Y
Create_view_priv: N
Show_view_priv: N
Create_routine_priv: N
Alter_routine_priv: N
Create_user_priv: N
Event_priv: N
Trigger_priv: N
Create_tablespace_priv: N
ssl_type:
ssl_cipher:
x509_issuer:
x509_subject:
max_questions: 0
max_updates: 0
max_connections: 0
max_user_connections: 0
plugin: mysql_native_password
authentication_string: *48B1BB7AD34484EF0632D4B9A748CC861DFBE88B
password_expired: N
password_last_changed: 2021-03-09 22:34:07
password_lifetime: NULL
account_locked: N
1 row in set (0.00 sec)

Note: since replication is already configured and working, these GRANT statements only need to be executed on master1; they replicate to the other nodes.
2. Verify that the users were created
Check that the monitoring and agent accounts exist on master2, slave1, and slave2.

mysql> select user,host from mysql.user where user in ('mmm_monitor','mmm_agent');
+-------------+---------------+
| user        | host          |
+-------------+---------------+
| mmm_agent   | 192.168.229.% |
| mmm_monitor | 192.168.229.% |
+-------------+---------------+
2 rows in set (0.00 sec)

Or:

mysql> show grants for 'mmm_agent'@'192.168.229.%';
+--------------------------------------------------------------------------------+
| Grants for mmm_agent@192.168.229.%                                             |
+--------------------------------------------------------------------------------+
| GRANT PROCESS, SUPER, REPLICATION CLIENT ON *.* TO 'mmm_agent'@'192.168.229.%' |
+--------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> show grants for 'mmm_monitor'@'192.168.229.%';
+------------------------------------------------------------------+
| Grants for mmm_monitor@192.168.229.%                             |
+------------------------------------------------------------------+
| GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.229.%' |
+------------------------------------------------------------------+
1 row in set (0.00 sec)

Note

mmm_monitor user: used by the MMM monitor to run health checks against the MySQL servers.
mmm_agent user: used by the MMM agent to change read-only mode, change the replication master, and so on.

3. Install mysql-mmm
Install the monitor program on the monitor host (192.168.229.9):

[root@monitor ~]# cd /tmp
[root@monitor tmp]# wget http://pkgs.fedoraproject.org/repo/pkgs/mysql-mmm/mysql-mmm-2.2.1.tar.gz/f5f8b48bdf89251d3183328f0249461e/mysql-mmm-2.2.1.tar.gz
[root@monitor tmp]# tar -zxf mysql-mmm-2.2.1.tar.gz
[root@monitor tmp]# cd mysql-mmm-2.2.1/
[root@monitor mysql-mmm-2.2.1]# ls
bin  COPYING  etc  INSTALL  lib  Makefile  README  sbin  UPGRADE  VERSION
[root@monitor mysql-mmm-2.2.1]# make install

4. Install the agent on every MySQL node

cd /tmp
wget http://pkgs.fedoraproject.org/repo/pkgs/mysql-mmm/mysql-mmm-2.2.1.tar.gz/f5f8b48bdf89251d3183328f0249461e/mysql-mmm-2.2.1.tar.gz
tar -zxf mysql-mmm-2.2.1.tar.gz
cd mysql-mmm-2.2.1
make install

5. Write the MMM configuration files
The configuration must be identical on all five hosts. After installation, all configuration files are located under /etc/mysql-mmm/.
The monitor server and the database servers share a common file, mmm_common.conf, with the following content:

[root@monitor ~]# cd /etc/mysql-mmm/
[root@monitor mysql-mmm]# ls
mmm_agent.conf  mmm_common.conf  mmm_mon.conf  mmm_tools.conf
[root@monitor mysql-mmm]# vim mmm_common.conf
active_master_role      writer

<host default>
    cluster_interface       ens33
    pid_path                /var/run/mmm_agentd.pid
    bin_path                /usr/lib/mysql-mmm/
    replication_user        rep
    replication_password    Test123!
    agent_user              mmm_agent
    agent_password          Test123!
</host>

<host master1>
    ip      192.168.229.5
    mode    master
    peer    master2
</host>

<host master2>
    ip      192.168.229.6
    mode    master
    peer    master1
</host>

<host slave1>
    ip      192.168.229.7
    mode    slave
</host>

<host slave2>
    ip      192.168.229.8
    mode    slave
</host>

<role writer>
    hosts   master1,master2
    ips     192.168.229.50
    mode    exclusive
</role>

<role reader>
    hosts   master2,slave1,slave2
    ips     192.168.229.60,192.168.229.70,192.168.229.80
    mode    balanced
</role>

Parameter explanations

active_master_role  writer
# The active master role. All DB servers should have read_only enabled; the monitor agent will
# automatically turn read_only off on the writer server.
cluster_interface  ens33           # the cluster's network interface
pid_path  /var/run/mmm_agentd.pid  # pid file path
bin_path  /usr/lib/mysql-mmm/      # path to the executables
replication_user  rep              # replication user
replication_password  Test123!     # replication user's password
agent_user  mmm_agent              # agent user
agent_password  Test123!           # agent user's password

<host master1>       # master1's hostname
ip  192.168.229.5    # master1's IP
mode  master         # role attribute; master means this host is a master
peer  master2        # hostname of the server paired with master1, i.e. master2

<host slave1>        # a slave's hostname; repeat the same block for additional slaves
ip  192.168.229.7    # the slave's IP
mode  slave          # role attribute; slave means this host is a slave

<role writer>        # writer role configuration
hosts  master1,master2
# Hosts allowed to take the writer role. If you never want writes to switch, list only the
# master here; that also avoids writer switches caused by network delay, but if the master
# fails MMM is left with no writer and can only serve reads.
ips  192.168.229.50  # the virtual IP that serves writes
mode  exclusive      # exclusive means only one writer (a single write IP) may exist at a time

<role reader>        # reader role configuration
hosts  master2,slave1,slave2
# Hosts that serve reads; the master can also be added here.
ips  192.168.229.60,192.168.229.70,192.168.229.80
# Virtual IPs that serve reads. These IPs are not mapped one-to-one to the hosts, and the
# number of ips does not have to equal the number of hosts; if they differ, one of the hosts
# is assigned two IPs.
mode  balanced       # balanced means reads are load-balanced

Copy this file to the other servers without changing it:

[root@monitor ~]# for i in master1 master2 slave1 slave2;do scp /etc/mysql-mmm/mmm_common.conf $i:/etc/mysql-mmm/;done

6. Configure the agent file
Edit /etc/mysql-mmm/mmm_agent.conf on each of the four MySQL nodes.

[root@master1 mysql-mmm]# vim mmm_agent.conf
include mmm_common.conf
this master1

Note: this file is only configured on the DB servers; the monitor does not need it. Change the hostname after "this" to the current server's own hostname.
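
If the same edit has to be made on all four DB nodes, a small loop like the one below can help. This is only a sketch: it assumes passwordless SSH from master1 to the other nodes and that the default mmm_agent.conf already contains a "this" line to replace.

# Hypothetical helper: point "this" at each node's own hostname (master1 was already edited by hand above)
for i in master2 slave1 slave2; do
    ssh $i "sed -i 's/^this .*/this $i/' /etc/mysql-mmm/mmm_agent.conf"
done
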
7. Start the agent daemon
On the four MySQL hosts, add the following line to the /etc/init.d/mysql-mmm-agent init script.

[root@master1 ~]# vim /etc/init.d/mysql-mmm-agent
...
# pidfile: /var/run/mmm_agentd.pid
source /root/.bash_profile
# Cluster name (it can be empty for default cases)
...

Register it as a system service, enable it at boot, and start it:

chkconfig --add mysql-mmm-agent
chkconfig mysql-mmm-agent on
/etc/init.d/mysql-mmm-agent start
[root@master1 ~]# ss -lnt
State      Recv-Q Send-Q Local Address:Port               Peer Address:Port
LISTEN     0      128       *:22                    *:*
LISTEN     0      10     192.168.229.5:9989                  *:*
LISTEN     0      128       *:111                   *:*
LISTEN     0      128    [::]:22                 [::]:*
LISTEN     0      80     [::]:3306               [::]:*
LISTEN     0      128    [::]:111                [::]:*

Note: the source /root/.bash_profile line is added so that the mysql-mmm-agent service can start automatically at boot.
The only difference between automatic startup and manual startup is that a manual start runs with an interactive console and therefore a full environment. When started as a service, the daemon can fail because environment variables are missing, with an error like the following:

Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Can't locate Proc/Daemon.pm in @INC (@INC contains: /root/perl5/lib/perl5 /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/sbin/mmm_agentd line 7.
BEGIN failed--compilation aborted at /usr/sbin/mmm_agentd line 7.
failed

Fix:

cpan -i Proc::Daemon
cpan -i Log::Log4perl
/etc/init.d/mysql-mmm-agent start

If there are no errors, continue with the steps below.
8. Configure the firewall
Skip this step if the firewall is already disabled.

firewall-cmd --permanent --add-port=9989/tcp
firewall-cmd --reload

9. Edit the configuration file on the monitor host

[root@monitor mysql-mmm]# vim mmm_mon.conf
include mmm_common.conf

<monitor>
    ip              127.0.0.1
    pid_path        /var/run/mmm_mond.pid
    bin_path        /usr/lib/mysql-mmm/
    status_path     /var/lib/misc/mmm_mond.status
    ping_ips        192.168.229.5,192.168.229.6,192.168.229.7,192.168.229.8
    auto_set_online 0
</monitor>

<check default>
    check_period    5
    trap_period     10
    timeout         2
    restart_after   10000
    max_backlog     86400
</check>

<host default>
    monitor_user        mmm_monitor
    monitor_password    Test123!
</host>

debug 0

Parameter explanations

ip  127.0.0.1
# For security, listen only on the local host; mmm_mond listens on port 9988 by default.

ping_ips  192.168.229.5,192.168.229.6,192.168.229.7,192.168.229.8
# List of IP addresses used to test network availability; as long as one of them answers ping,
# the network is considered healthy. Do not list the monitor's own address here, only the
# MySQL nodes.

auto_set_online 0
# (Added to the config.) How long before a recovered host is automatically set online. The
# default is 60 seconds; setting it to 0 brings a host online immediately.

# (Added to the config.)
<check default>
check_period 5
trap_period 10
timeout 2
restart_after 10000
max_backlog 86400
</check>
# check_period   - how often the checks run; default 5s.
# trap_period    - if a node keeps failing a check for trap_period seconds, it is considered
#                  to have failed; default 10s.
# timeout        - timeout for a single check; default 2s.
# restart_after  - restart the checker process after this many checks; default 10000.
# max_backlog    - maximum value recorded for the rep_backlog check; default 60.

monitor_user  mmm_monitor    # user the monitor uses to connect to the DB servers
monitor_password  Test123!   # that user's password
debug 0                      # 0 = normal mode, 1 = debug mode

10. Start the monitor daemon
Add the following line to the /etc/init.d/mysql-mmm-monitor init script:

[root@monitor ~]# vim /etc/init.d/mysql-mmm-monitor
...
# pidfile: /var/run/mmm_mond.pid
source /root/.bash_profile
# Cluster name (it can be empty for default cases)
...

11. Register it as a system service, enable it at boot, and start it

chkconfig --add mysql-mmm-monitor
chkconfig mysql-mmm-monitor on
/etc/init.d/mysql-mmm-monitor start
[root@monitor ~]# ss -lnt
State      Recv-Q Send-Q Local Address:Port               Peer Address:Port
LISTEN     0      10     127.0.0.1:9988                  *:*
LISTEN     0      128       *:111                   *:*
LISTEN     0      128       *:22                    *:*
LISTEN     0      80     [::]:3306               [::]:*
LISTEN     0      128    [::]:111                [::]:*
LISTEN     0      128    [::]:22                 [::]:*

If startup fails with an error like this:

Starting MMM Monitor daemon: Can not locate Proc/Daemon.pm in @INC (@INC contains:
/usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl
/usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at
/usr/sbin/mmm_mond line 11.
BEGIN failed--compilation aborted at /usr/sbin/mmm_mond line 11.
failed

Fix: install the following Perl modules.

cpan -i Proc::Daemon
cpan -i Log::Log4perl
/etc/init.d/mysql-mmm-monitor start

If there are no errors, continue with the operations below.
Note: whenever a configuration file is modified, whether on a DB node or on the monitor, both the agent and the monitor daemons must be restarted.
Note: MMM startup order: start the monitor first, then the agents, then check the cluster state.
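
Putting those two notes together, applying a configuration change looks roughly like this (a sketch using the init scripts installed earlier):

# On the monitor host: restart the monitor first
/etc/init.d/mysql-mmm-monitor restart
# On every DB node (master1, master2, slave1, slave2): restart the agent
/etc/init.d/mysql-mmm-agent restart
# Back on the monitor host: check the cluster state
mmm_control show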

[root@monitor ~]# mmm_control show
  master1(192.168.229.5) master/AWAITING_RECOVERY. Roles:
  master2(192.168.229.6) master/AWAITING_RECOVERY. Roles:
  slave1(192.168.229.7) slave/AWAITING_RECOVERY. Roles:
  slave2(192.168.229.8) slave/AWAITING_RECOVERY. Roles:

If a server's state is not ONLINE, it can be brought online with the following command.

[root@monitor ~]# mmm_control set_online master1
OK: State of 'master1' changed to ONLINE. Now you can wait some time and check its new roles!
[root@monitor ~]# mmm_control show
  master1(192.168.229.5) master/ONLINE. Roles: writer(192.168.229.50)
  master2(192.168.229.6) master/AWAITING_RECOVERY. Roles:
  slave1(192.168.229.7) slave/AWAITING_RECOVERY. Roles:
  slave2(192.168.229.8) slave/AWAITING_RECOVERY. Roles:

Note: if a MySQL host shows HARD_OFFLINE, do the following.

[root@monitor ~]# vim /etc/ld.so.conf
/usr/local/mysql/lib
/usr/lib
[root@monitor ~]# ldconfig
[root@monitor ~]# for i in master1 master2 slave1 slave2;do scp /etc/ld.so.conf $i:/etc/;done

Then restart the MMM services and the state will recover.
With all hosts set to ONLINE:

[root@monitor ~]# mmm_control show
  master1(192.168.229.5) master/ONLINE. Roles: writer(192.168.229.50)
  master2(192.168.229.6) master/ONLINE. Roles: reader(192.168.229.70)
  slave1(192.168.229.7) slave/ONLINE. Roles: reader(192.168.229.60)
  slave2(192.168.229.8) slave/ONLINE. Roles: reader(192.168.229.80)

The output above shows that the writer VIP is on master1 and that all slave nodes treat master1 as their master.
12. Check that the VIPs are active

[root@master1 ~]# ip a show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:05:f2:9f brd ff:ff:ff:ff:ff:ff
    inet 192.168.229.5/24 brd 192.168.229.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.229.50/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::c8a4:4ff4:6676:26ef/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

[root@master2 ~]# ip a show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:47:f8:4a brd ff:ff:ff:ff:ff:ff
    inet 192.168.229.6/24 brd 192.168.229.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.229.70/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::c8a4:4ff4:6676:26ef/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::3f82:e9b9:240a:7e6/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

[root@slave1 ~]# ip a show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:50:b7:9c brd ff:ff:ff:ff:ff:ff
    inet 192.168.229.7/24 brd 192.168.229.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.229.60/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::c8a4:4ff4:6676:26ef/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::3f82:e9b9:240a:7e6/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::cce:d3a4:c796:639a/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

[root@slave2 ~]# ip a show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:f7:17:f1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.229.8/24 brd 192.168.229.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.229.80/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::c8a4:4ff4:6676:26ef/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::3f82:e9b9:240a:7e6/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::cce:d3a4:c796:639a/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever

On master2, slave1, and slave2, check which master MySQL is replicating from.

mysql> show slave status\G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.229.5
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
...

As shown, they all point to master1.

V. MMM High Availability Testing

Clients read and write through the VIP addresses; when a failure occurs, the VIP drifts to another node, which then serves the traffic.
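
In other words, applications connect to the VIPs rather than to the physical node addresses, for example (app_user is a hypothetical application account that would have to be created and granted separately):

# Writes go through the writer VIP
mysql -h 192.168.229.50 -P 3306 -u app_user -p
# Reads go through one of the reader VIPs
mysql -h 192.168.229.60 -P 3306 -u app_user -p
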
1. Check the cluster state
First check the state of the whole cluster; everything is normal.

[root@monitor ~]# mmm_control show
  master1(192.168.229.5) master/ONLINE. Roles: writer(192.168.229.50)
  master2(192.168.229.6) master/ONLINE. Roles: reader(192.168.229.70)
  slave1(192.168.229.7) slave/ONLINE. Roles: reader(192.168.229.60)
  slave2(192.168.229.8) slave/ONLINE. Roles: reader(192.168.229.80)

2. Simulate a master1 failure
Stop the MySQL service by hand and watch the monitor log; the entries concerning master1 are shown below.

[root@master1 ~]# systemctl stop mysqld
[root@monitor1 ~]# tail -f /var/log/mysql-mmm/mmm_mond.log
2021/03/10 01:06:21  WARN Check 'rep_backlog' on 'master1' is in unknown state! Message: UNKNOWN: Connect error (host = 192.168.229.5:3306, user = mmm_monitor)! Can't connect to MySQL server on '192.168.229.5' (111)
2021/03/10 01:06:21  WARN Check 'rep_threads' on 'master1' is in unknown state! Message: UNKNOWN: Connect error (host = 192.168.229.5:3306, user = mmm_monitor)! Can't connect to MySQL server on '192.168.229.5' (111)
2021/03/10 01:06:30 ERROR Check 'mysql' on 'master1' has failed for 10 seconds! Message: ERROR: Connect error (host = 192.168.229.5:3306, user = mmm_monitor)! Can't connect to MySQL server on '192.168.229.5' (111)
2021/03/10 01:06:31 FATAL State of host 'master1' changed from ONLINE to HARD_OFFLINE (ping: OK, mysql: not OK)
2021/03/10 01:06:31  INFO Removing all roles from host 'master1':
2021/03/10 01:06:31  INFO     Removed role 'writer(192.168.229.50)' from host 'master1'
2021/03/10 01:06:31  INFO Orphaned role 'writer(192.168.229.50)' has been assigned to 'master2'

3. Check the latest cluster state

[root@monitor ~]# mmm_control show
  master1(192.168.229.5) master/HARD_OFFLINE. Roles:
  master2(192.168.229.6) master/ONLINE. Roles: reader(192.168.229.70), writer(192.168.229.50)
  slave1(192.168.229.7) slave/ONLINE. Roles: reader(192.168.229.60)
  slave2(192.168.229.8) slave/ONLINE. Roles: reader(192.168.229.80)

master1's state has changed from ONLINE to HARD_OFFLINE, and the writer VIP has moved to master2.
4. Check the status of all DB servers in the cluster

[root@monitor ~]# mmm_control checks all
master1  ping         [last change: 2021/03/09 23:44:53]  OK
master1  mysql        [last change: 2021/03/10 01:06:31]  ERROR: Connect error (host = 192.168.229.5:3306, user = mmm_monitor)! Can't connect to MySQL server on '192.168.229.5' (111)
master1  rep_threads  [last change: 2021/03/09 23:44:53]  OK
master1  rep_backlog  [last change: 2021/03/09 23:44:53]  OK: Backlog is null
slave1   ping         [last change: 2021/03/09 23:44:53]  OK
slave1   mysql        [last change: 2021/03/09 23:44:53]  OK
slave1   rep_threads  [last change: 2021/03/09 23:44:53]  OK
slave1   rep_backlog  [last change: 2021/03/09 23:44:53]  OK: Backlog is null
master2  ping         [last change: 2021/03/09 23:44:53]  OK
master2  mysql        [last change: 2021/03/09 23:44:53]  OK
master2  rep_threads  [last change: 2021/03/09 23:44:53]  OK
master2  rep_backlog  [last change: 2021/03/09 23:44:53]  OK: Backlog is null
slave2   ping         [last change: 2021/03/09 23:44:53]  OK
slave2   mysql        [last change: 2021/03/09 23:44:53]  OK
slave2   rep_threads  [last change: 2021/03/09 23:44:53]  OK
slave2   rep_backlog  [last change: 2021/03/09 23:44:53]  OK: Backlog is null

master1 still answers ping, which shows that only the MySQL service died, not the host.
5. Check master2's IP addresses

[root@master2 ~]# ip a show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:47:f8:4a brd ff:ff:ff:ff:ff:ff
    inet 192.168.229.6/24 brd 192.168.229.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.229.70/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.229.50/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::c8a4:4ff4:6676:26ef/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::3f82:e9b9:240a:7e6/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

6. Check which master slave1 and slave2 now point to
On slave1:

mysql> show slave status\G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.229.6
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
...

On slave2:

mysql> show slave status\G;
*************************** 1. row ***************************
Slave_IO_State: Reconnecting after a failed master event read
Master_Host: 192.168.229.6
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
...

7. Start the MySQL service on master1 again
Watch the monitor log; the entries concerning master1 are shown below.

[root@master1 ~]# systemctl start mysqld
[root@monitor1 ~]# tail -f /var/log/mysql-mmm/mmm_mond.log
2021/03/10 01:14:16  INFO Check 'mysql' on 'master1' is ok!
2021/03/10 01:14:17 FATAL State of host 'master1' changed from HARD_OFFLINE to AWAITING_RECOVERY
2021/03/10 01:14:18  INFO Check 'rep_threads' on 'master1' is ok!
2021/03/10 01:14:18  INFO Check 'rep_backlog' on 'master1' is ok!

master1's state changes from HARD_OFFLINE to AWAITING_RECOVERY.
8. Bring master1 back online

[root@monitor ~]# mmm_control set_online master1
OK: State of 'master1' changed to ONLINE. Now you can wait some time and check its new roles!

9. Check the latest cluster state

[root@monitor ~]# mmm_control show
  master1(192.168.229.5) master/ONLINE. Roles:
  master2(192.168.229.6) master/ONLINE. Roles: reader(192.168.229.70), writer(192.168.229.50)
  slave1(192.168.229.7) slave/ONLINE. Roles: reader(192.168.229.60)
  slave2(192.168.229.8) slave/ONLINE. Roles: reader(192.168.229.80)

Note that the recovered master does not take the writer role back; it will only do so if the current master fails again.
Summary

(1) If the standby master (master2) fails, the cluster state is unaffected except that master2's reader role is removed.
(2) If the primary master (master1) fails, the standby master (master2) takes over the writer role, and slave1 and slave2 automatically change master to master2 and replicate from it.
(3) If master1 fails while master2's replication has fallen behind it, master2 still becomes the writable master, and consistency with the data on the old master cannot be guaranteed. Similarly, if master2, slave1, and slave2 are all lagging behind master1 when it fails, slave1 and slave2 first apply everything they have already received from master1 and then repoint to the new master (master2); consistency cannot be guaranteed in that case either.
(4) When using the MMM architecture, the master and the standby master should have identical hardware; enable semi-synchronous replication for additional safety, and use multi-threaded slave replication (MariaDB / MySQL 5.7) to improve replication performance.

10. Other notes

(1) Log files
Log files are usually the key to diagnosing problems, so make good use of them.
DB nodes: /var/log/mysql-mmm/mmm_agentd.log
Monitor: /var/log/mysql-mmm/mmm_mond.log

(2) Command files
mmm_agentd: the agent daemon that runs on the DB servers
mmm_mond: the monitoring daemon
mmm_backup: backup tool
mmm_restore: restore tool
mmm_control: the monitoring/management command-line tool
Only mmm_agentd is present on the DB servers; everything else lives on the monitor server.

(3) mmm_control usage
The mmm_control program is used to check the cluster status, switch the writer, set hosts online/offline, and so on.
help                              - show this message                 # help
ping                              - ping monitor                      # check whether the monitor is alive
show                              - show status                       # show the cluster's online status
checks [<host>|all [<check>|all]] - show checks status                # show the monitoring check results
set_online <host>                 - set host <host> online            # set a host online
set_offline <host>                - set host <host> offline           # set a host offline
mode                              - print current mode.               # print the current mode
set_active                        - switch into active mode.
set_manual                        - switch into manual mode.
set_passive                       - switch into passive mode.
move_role [--force] <role> <host> - move exclusive role <role> to host <host>
                                    # move the writer role to the given host
                                    # (Only use --force if you know what you are doing!)
set_ip <ip> <host>                - set role with ip <ip> to host <host>

Check the status of all DB servers in the cluster:

[root@monitor1 ~]# mmm_control checks all

The checks cover ping, whether MySQL is running, whether the replication threads are healthy, and so on.
Check the online state of the cluster:

[root@monitor1 ~]# mmm_control show

Set the specified host offline:

[root@monitor1 ~]# mmm_control set_offline slave2

Set the specified host online:

[root@monitor1 ~]# mmm_control set_online slave2

Perform a manual writer switch. First check which master the slave currently points to:

[root@slave2 ~]# mysql -uroot -pasd123 -e 'show slave status\G;'
mysql: [Warning] Using a password on the command line interface can be insecure.
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.229.6

To switch the writer, make sure the target host is listed under the writer role in mmm_common.conf; otherwise the switch cannot happen.

[root@monitor ~]# mmm_control move_role writer master1
OK: Role 'writer' has been moved from 'master2' to 'master1'. Now you can wait some time and check new roles info!
[root@monitor ~]# mmm_control show
  master1(192.168.229.5) master/ONLINE. Roles: writer(192.168.229.50)
  master2(192.168.229.6) master/ONLINE. Roles: reader(192.168.229.70)
  slave1(192.168.229.7) slave/ONLINE. Roles: reader(192.168.229.60)
  slave2(192.168.229.8) slave/ONLINE. Roles: reader(192.168.229.80)

The slaves automatically switch to the new master:

[root@slave2 ~]# mysql -uroot -pasd123 -e 'show slave status\G;'
mysql: [Warning] Using a password on the command line interface can be insecure.
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.229.5

11. Other issues
If you do not want the writer role to ever switch from the master to the backup master (note that replication delay between the two masters can also trigger a writer VIP switch), configure only one host under the writer role in /etc/mysql-mmm/mmm_common.conf and leave the backup master out:

<role writer>              # writer role configuration
    hosts   master1        # only one host is listed here
    ips     192.168.229.50 # the virtual IP that serves writes
    mode    exclusive      # exclusive means only a single writer (one write IP) may exist
</role>

With this configuration, if master1 fails the writer role will not move to master2 and the slaves will not be repointed to a new master; the cluster then no longer provides write service to the outside, only reads.
12. Summary
(1) The virtual IPs used for reads and writes are controlled by the monitor program. If the monitor is not running, the DB servers are never assigned VIPs; but if the VIPs have already been assigned and the monitor is then stopped, they are not withdrawn and external programs can keep connecting (as long as the network is not restarted). The upside is that the reliability requirements on the monitor are relatively low; the downside is that if a DB server fails while the monitor is down, no failover takes place: the VIP assignments stay as they were and the failed server's VIP simply becomes unreachable.
(2) The agent program is driven by the monitor and carries out the writer switch, slave repointing, and so on. If the monitor is down, the agent is of little use on its own; it cannot handle failures by itself.
(3) The monitor program watches the state of the DB servers: whether MySQL and the host are up, whether the replication threads are healthy, replication lag, and so on. It also drives the agents to handle failures.
(4) The monitor checks the DB servers every few seconds. If a server has recovered from a failure, the monitor automatically sets it online after 60 seconds by default (the interval is governed by the auto_set_online parameter in the monitor configuration and can be changed). A cluster member moves through three states:
HARD_OFFLINE → AWAITING_RECOVERY → ONLINE
(5) By default the monitor has mmm_agent set read_only to OFF on the writer DB server and to ON on all the others. To be strict, you can therefore put read_only=1 in every server's my.cnf and let the monitor decide which node is writable; the root user and the replication user are not affected by read_only.
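
A simple way to observe this behaviour is to compare the read_only flag on the current writer with that on a reader (an optional check; the monitor manages this flag automatically):

# Run on any node: the current writer should report read_only = OFF, all other nodes ON
mysql -uroot -p -e "SHOW VARIABLES LIKE 'read_only';"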
