Installing Common Software with Docker

Overall steps

1. Search for the image

2. Pull the image

3. Check the image

4. Start the image

Map the service port

5. Stop the container

6. Remove the container

Installing Tomcat

# 1. Search for the image: look up tomcat on Docker Hub
docker search tomcat
# 2. Pull the image: pull tomcat from Docker Hub to the local host
docker pull tomcat
# 3. Check the image: verify that the tomcat image was pulled
docker images
# 4. Start the image: create a container instance from the tomcat image (also called running the image)
docker run -it -p 8080:8080 tomcat
# Parameters:
#   -p  lowercase, host port:container port
#   -P  uppercase, assign a random host port
#   -i  interactive
#   -t  allocate a terminal
#   -d  run in the background
# Cannot access the Tomcat welcome page?
# - The port may not be mapped, or the firewall may not have been turned off
# - In newer official tomcat images, rename the webapps.dist directory to webapps
# (a quick verification sketch follows the commands below)
# A prebuilt tomcat8 + jdk8 image that needs no such fix:
docker pull billygoo/tomcat8-jdk8
docker run -d -p 8080:8080 --name mytomcat8 billygoo/tomcat8-jdk8
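As a quick sanity check of the port mapping, here is a minimal Python sketch (an illustrative addition, not part of the original notes); it assumes the container above is running and mapped to localhost:8080.

import urllib.request

# Request the mapped host port; an HTTP 200 means the mapping (and, for the
# official image, the webapps.dist -> webapps fix) worked.
# localhost:8080 is an assumption - adjust if your host or port differs.
with urllib.request.urlopen("http://localhost:8080/", timeout=5) as resp:
    print("Tomcat answered with HTTP", resp.status)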

Installing MySQL

# 1. Search for the image
docker search mysql
# 2. Pull the image
docker pull mysql
# 3. Check the image
docker images
# 4. Start the image
docker run -d -p 3306:3306 --privileged=true \
-v /home/mysql/log:/var/log/mysql \
-v /home/mysql/data:/var/lib/mysql \
-v /home/mysql/conf:/etc/mysql/conf.d \
-e MYSQL_ROOT_PASSWORD=123456 \
--name mysql mysql:5.7

# Create my.cnf in /home/mysql/conf; the volume mount syncs it into the mysql container
[client]
default_character_set=utf8
[mysqld]
collation_server = utf8_general_ci
character_set_server = utf8
# Restart the mysql container, enter it again, and check the character encoding
docker restart mysql
docker exec -it mysql /bin/bash
mysql -uroot -p123456
SHOW VARIABLES LIKE 'character%';
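Optionally, the character-set settings can also be verified from the host. The sketch below is an illustrative addition (not in the original notes) using the pymysql package; host, port, and password are taken from the docker run command above, everything else is an assumption.

import pymysql  # pip install pymysql

# Connect through the mapped host port 3306 with the MYSQL_ROOT_PASSWORD set above.
conn = pymysql.connect(host="127.0.0.1", port=3306,
                       user="root", password="123456", charset="utf8")
with conn.cursor() as cur:
    cur.execute("SHOW VARIABLES LIKE 'character%'")
    for name, value in cur.fetchall():
        print(name, "=", value)   # character_set_server should now be utf8
conn.close()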

Installing MySQL master-slave replication

How replication works: the slave reads the master's binlog to synchronize data.
1. The master records changes in its binary log; these records are called binary log events.
2. The slave copies the master's binary log events into its relay log.
3. The slave replays the events from the relay log, applying the changes to its own database. MySQL replication is asynchronous and serialized.
Basic rules of replication: each slave has exactly one master; each slave must have a unique server ID; each master can have multiple slaves.
The biggest problem with replication: latency.

1. Create the master server container instance on port 3307

 docker pull mysql:5.7
docker run -p 3307:3306 --name mysql-master \
-v /home/mysql-master/log:/var/log/mysql \
-v /home/mysql-master/data:/var/lib/mysql \
-v /home/mysql-master/conf:/etc/mysql \
-e MYSQL_ROOT_PASSWORD=root \
-d mysql:5.7

Output

[root@localhost home]#  docker run -p 3307:3306 --name mysql-master \
>     -v /home/mysql-master/log:/var/log/mysql \
>     -v /home/mysql-master/data:/var/lib/mysql \
>     -v /home/mysql-master/conf:/etc/mysql \
>     -e MYSQL_ROOT_PASSWORD=root  \
>     -d mysql:5.7
8b5bc09ae8ebd0a4c3ab6bce6f96684bb03a5da8b5b86d2a3518351cae773d2f
[root@localhost home]# docker ps
CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS          PORTS                               NAMES
8b5bc09ae8eb   mysql:5.7     "docker-entrypoint.s…"   58 seconds ago   Up 56 seconds   33060/tcp, 0.0.0.0:3307->3306/tcp   mysql-master
[root@localhost home]# ll
总用量 8
drwx------. 2 lsp  lsp  4096 4月  11 2018 lsp
drwxr-xr-x. 5 root root 4096 4月   4 15:13 mysql-master
[root@localhost home]#

2. In /home/mysql-master/conf, create my.cnf

vim my.cnf
[mysqld]
## Set server_id; it must be unique within the LAN
server_id=101
## Databases that should not be replicated
binlog-ignore-db=mysql
## Enable the binary log
log-bin=mall-mysql-bin
## Memory used for the binary log cache (per transaction)
binlog_cache_size=1M
## Binary log format (mixed, statement, row)
binlog_format=mixed
## Binary log expiration in days; the default 0 means no automatic purging
expire_logs_days=7
## Skip the listed replication errors (or all of them) so replication on the slave is not interrupted.
## e.g. 1062 means duplicate primary key, 1032 means master and slave data are out of sync
slave_skip_errors=1062

Output

-rw-r--r--. 1 root root 688 4月   4 15:25 my.cnf
[root@localhost conf]# cat my.cnf
[mysqld]
## 设置server_id,同一局域网中需要唯一
server_id=101
## 指定不需要同步的数据库名称
binlog-ignore-db=mysql
## 开启二进制日志功能
log-bin=mall-mysql-bin
## 设置二进制日志使用内存大小(事务)
binlog_cache_size=1M
## 设置使用的二进制日志格式(mixed,statement,row)
binlog_format=mixed
## 二进制日志过期清理时间。默认值为0,表示不自动清理。
expire_logs_days=7
## 跳过主从复制中遇到的所有错误或指定类型的错误,避免slave端复制中断。
## 如:1062错误是指一些主键重复,1032错误是因为主从数据库数据不一致
slave_skip_errors=1062
[root@localhost conf]#

3. After editing the configuration, restart the master instance

docker restart mysql-master

Output

[root@localhost conf]# docker restart mysql-master
mysql-master

4. Enter the mysql-master container

docker exec -it mysql-master /bin/bash
mysql -uroot -proot

Output

[root@localhost conf]# docker exec -it mysql-master /bin/bash
root@8b5bc09ae8eb:/# mysql -uroot -proot
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.36-log MySQL Community Server (GPL)

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

5. Inside the master container, create the user for data synchronization

CREATE USER 'slave'@'%' IDENTIFIED BY '123456';
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'slave'@'%';

Output

mysql> CREATE USER 'slave'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'slave'@'%';
Query OK, 0 rows affected (0.00 sec)

6. Create the slave server container instance on port 3308

docker run -p 3308:3306 --name mysql-slave \
-v /home/mysql-slave/log:/var/log/mysql \
-v /home/mysql-slave/data:/var/lib/mysql \
-v /home/mysql-slave/conf:/etc/mysql \
-e MYSQL_ROOT_PASSWORD=root \
-d mysql:5.7

Output

[root@localhost home]# docker run -p 3308:3306 --name mysql-slave \
>     -v /home/mysql-slave/log:/var/log/mysql \
>     -v /home/mysql-slave/data:/var/lib/mysql \
>     -v /home/mysql-slave/conf:/etc/mysql \
>     -e MYSQL_ROOT_PASSWORD=root  \
>     -d mysql:5.7
ee3ebe78db896662232a3e19559cbe02e74376043314186f2de94ed10ee52f56
[root@localhost home]#

7. In /home/mysql-slave/conf, create my.cnf

vim my.cnf
[mysqld]
## Set server_id; it must be unique within the LAN
server_id=102
## Databases that should not be replicated
binlog-ignore-db=mysql
## Enable the binary log, in case this slave later serves as a master for other instances
log-bin=mall-mysql-slave1-bin
## Memory used for the binary log cache (per transaction)
binlog_cache_size=1M
## Binary log format (mixed, statement, row)
binlog_format=mixed
## Binary log expiration in days; the default 0 means no automatic purging
expire_logs_days=7
## Skip the listed replication errors (or all of them) so replication on the slave is not interrupted.
## e.g. 1062 means duplicate primary key, 1032 means master and slave data are out of sync
slave_skip_errors=1062
## relay_log configures the relay log
relay_log=mall-mysql-relay-bin
## log_slave_updates makes the slave write replicated events to its own binary log
log_slave_updates=1
## Make the slave read-only (except for users with the SUPER privilege)
read_only=1

Output

[root@localhost conf]# vi my.cnf
[root@localhost conf]# cat my.cnf
[mysqld]
## 设置server_id,同一局域网中需要唯一
server_id=102
## 指定不需要同步的数据库名称
binlog-ignore-db=mysql
## 开启二进制日志功能,以备Slave作为其它数据库实例的Master时使用
log-bin=mall-mysql-slave1-bin
## 设置二进制日志使用内存大小(事务)
binlog_cache_size=1M
## 设置使用的二进制日志格式(mixed,statement,row)
binlog_format=mixed
## 二进制日志过期清理时间。默认值为0,表示不自动清理。
expire_logs_days=7
## 跳过主从复制中遇到的所有错误或指定类型的错误,避免slave端复制中断。
## 如:1062错误是指一些主键重复,1032错误是因为主从数据库数据不一致
slave_skip_errors=1062
## relay_log配置中继日志
relay_log=mall-mysql-relay-bin
## log_slave_updates表示slave将复制事件写进自己的二进制日志
log_slave_updates=1
## slave设置为只读(具有super权限的用户除外)
read_only=1
[root@localhost conf]#

8. After editing the configuration, restart the slave instance

docker restart mysql-slave

Output

[root@localhost conf]# docker restart mysql-slave
mysql-slave
[root@localhost conf]# docker ps
CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS          PORTS                               NAMES
ee3ebe78db89   mysql:5.7     "docker-entrypoint.s…"   2 minutes ago    Up 5 seconds    33060/tcp, 0.0.0.0:3308->3306/tcp   mysql-slave
8b5bc09ae8eb   mysql:5.7     "docker-entrypoint.s…"   19 minutes ago   Up 7 minutes    33060/tcp, 0.0.0.0:3307->3306/tcp   mysql-master

9. On the master database, check the master status

 show master status;

Output

mysql> show master status;
+-----------------------+----------+--------------+------------------+-------------------+
| File                  | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-----------------------+----------+--------------+------------------+-------------------+
| mall-mysql-bin.000001 |      617 |              | mysql            |                   |
+-----------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

mysql>

10. Enter the mysql-slave container

docker exec -it mysql-slave /bin/bash
mysql -uroot -proot

Output

[root@localhost conf]# docker exec -it mysql-slave /bin/bash
root@ee3ebe78db89:/# mysql -uroot -proot
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.36-log MySQL Community Server (GPL)

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

11. On the slave database, configure replication

change master to master_host='<host IP>', master_user='slave', master_password='123456', master_port=3307, master_log_file='mall-mysql-bin.000001', master_log_pos=617, master_connect_retry=30;
# mall-mysql-bin.000001 is the File value reported by "show master status" on the master

Parameter meanings

    master_host: IP address of the master database;
    master_port: port the master database listens on;
    master_user: the account created on the master for replication;
    master_password: the password of that replication account;
    master_log_file: the binlog file to replicate from, taken from the File field of the master's status;
    master_log_pos: the position in that file to start replicating from, taken from the Position field of the master's status;
    master_connect_retry: retry interval in seconds when the connection fails.

Output

mysql> change master to master_host='192.168.153.138', master_user='slave', master_password='123456', master_port=3307, master_log_file='mall-mysql-bin.000001', master_log_pos=617, master_connect_retry=30;
Query OK, 0 rows affected, 2 warnings (0.12 sec)

mysql>

12. On the slave database, check the replication status

show slave status \G;   or   show slave status;
Status at this point: Slave_IO_Running: No, Slave_SQL_Running: No (replication has not been started yet)

Output

mysql> show slave status \G;
*************************** 1. row ***************************
  Slave_IO_State:
  Master_Host: 192.168.153.138
  Master_User: slave
  Master_Port: 3307
  Connect_Retry: 30
  Master_Log_File: mall-mysql-bin.000001
  Read_Master_Log_Pos: 617
  Relay_Log_File: mall-mysql-relay-bin.000001
  Relay_Log_Pos: 4
  Relay_Master_Log_File: mall-mysql-bin.000001
  Slave_IO_Running: No    # not running yet
  Slave_SQL_Running: No   # not running yet
  Replicate_Do_DB:
  Replicate_Ignore_DB:
  Replicate_Do_Table:
  Replicate_Ignore_Table:
  Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
  Last_Errno: 0
  Last_Error:
  Skip_Counter: 0
  Exec_Master_Log_Pos: 617
  Relay_Log_Space: 154
  Until_Condition: None
  Until_Log_File:
  Until_Log_Pos: 0
  Master_SSL_Allowed: No
  Master_SSL_CA_File:
  Master_SSL_CA_Path:
  Master_SSL_Cert:
  Master_SSL_Cipher:
  Master_SSL_Key:
  Seconds_Behind_Master: NULL
  Master_SSL_Verify_Server_Cert: No
  Last_IO_Errno: 0
  Last_IO_Error:
  Last_SQL_Errno: 0
  Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
  Master_Server_Id: 0
  Master_UUID:
  Master_Info_File: /var/lib/mysql/master.info
  SQL_Delay: 0
  SQL_Remaining_Delay: NULL
  Slave_SQL_Running_State:
  Master_Retry_Count: 86400
  Master_Bind:
  Last_IO_Error_Timestamp:
  Last_SQL_Error_Timestamp:
  Master_SSL_Crl:
  Master_SSL_Crlpath:
  Retrieved_Gtid_Set:
  Executed_Gtid_Set:
  Auto_Position: 0
  Replicate_Rewrite_DB:
  Channel_Name:
  Master_TLS_Version:
1 row in set (0.01 sec)

ERROR:
No query specified

13. On the slave database, start replication

  start slave;

Output

mysql>   start slave;
Query OK, 0 rows affected (0.01 sec)

14. Check the slave status again: replication is now running

 show slave status \G;   或   show slave status;

Output

mysql> show slave status \G;
*************************** 1. row ***************************
  Slave_IO_State: Waiting for master to send event
  Master_Host: 192.168.153.138
  Master_User: slave
  Master_Port: 3307
  Connect_Retry: 30
  Master_Log_File: mall-mysql-bin.000001
  Read_Master_Log_Pos: 617
  Relay_Log_File: mall-mysql-relay-bin.000002
  Relay_Log_Pos: 325
  Relay_Master_Log_File: mall-mysql-bin.000001
  Slave_IO_Running: Yes
  Slave_SQL_Running: Yes
  Replicate_Do_DB:
  Replicate_Ignore_DB:
  Replicate_Do_Table:
  Replicate_Ignore_Table:
  Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
  Last_Errno: 0
  Last_Error:
  Skip_Counter: 0
  Exec_Master_Log_Pos: 617
  Relay_Log_Space: 537
  Until_Condition: None
  Until_Log_File:
  Until_Log_Pos: 0
  Master_SSL_Allowed: No
  Master_SSL_CA_File:
  Master_SSL_CA_Path:
  Master_SSL_Cert:
  Master_SSL_Cipher:
  Master_SSL_Key:
  Seconds_Behind_Master: 0
  Master_SSL_Verify_Server_Cert: No
  Last_IO_Errno: 0
  Last_IO_Error:
  Last_SQL_Errno: 0
  Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
  Master_Server_Id: 101
  Master_UUID: b5d52751-b3e6-11ec-997f-0242ac110002
  Master_Info_File: /var/lib/mysql/master.info
  SQL_Delay: 0
  SQL_Remaining_Delay: NULL
  Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
  Master_Retry_Count: 86400
  Master_Bind:
  Last_IO_Error_Timestamp:
  Last_SQL_Error_Timestamp:
  Master_SSL_Crl:
  Master_SSL_Crlpath:
  Retrieved_Gtid_Set:
  Executed_Gtid_Set:
  Auto_Position: 0
  Replicate_Rewrite_DB:
  Channel_Name:
  Master_TLS_Version:
1 row in set (0.00 sec)

ERROR:
No query specified

15. Test master-slave replication

# On the master: create a database, use it, create a table, insert data. OK
mysql> create database longpang;
Query OK, 1 row affected (0.03 sec)
mysql> use longpang;
Database changed
mysql>  create table docker_mysql(id int ,name varchar(20),create_time bigInt,update_time bigint);
Query OK, 0 rows affected (0.04 sec)

mysql> insert into docker_mysql values(1,'docker',111,111);
Query OK, 1 row affected (0.89 sec)

mysql> insert into docker_mysql values(2,'docker',111,111);
Query OK, 1 row affected (0.00 sec)

# On the slave: use the database and check the records. OK
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| longpang           |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

mysql> use longpang;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from docker_mysql;
+------+--------+-------------+-------------+
| id   | name   | create_time | update_time |
+------+--------+-------------+-------------+
|    1 | docker |         111 |         111 |
|    2 | docker |         111 |         111 |
+------+--------+-------------+-------------+
2 rows in set (0.00 sec)

mysql>
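The same round trip can be scripted. The following is a minimal sketch (an addition to the original notes) that writes through the master on port 3307 and reads back through the slave on port 3308 using the pymysql package; the host IP is the one used throughout this walkthrough and is otherwise an assumption.

import pymysql  # pip install pymysql

HOST = "192.168.153.138"   # replace with your own host IP

# Write a row through the master (host port 3307)...
master = pymysql.connect(host=HOST, port=3307, user="root", password="root", database="longpang")
with master.cursor() as cur:
    cur.execute("INSERT INTO docker_mysql VALUES (3, 'replication-test', 222, 222)")
master.commit()
master.close()

# ...and read it back through the slave (host port 3308).
slave = pymysql.connect(host=HOST, port=3308, user="root", password="root", database="longpang")
with slave.cursor() as cur:
    cur.execute("SELECT id, name FROM docker_mysql WHERE id = 3")
    print(cur.fetchall())   # the row shows up once the slave has applied the relay log
slave.close()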

Installing Redis

1. Pull the image
docker pull redis

2. Prepare the config and start redis
mkdir -p /mydata/redis/conf
touch /mydata/redis/conf/redis.conf
# In redis.conf:
# 1. Enable authentication (optional): requirepass 123
# 2. Allow remote connections: comment out "bind 127.0.0.1" (required)
# 3. Comment out "daemonize yes" or set "daemonize no"; it conflicts with the -d flag
#    of docker run and keeps the container from starting
# 4. Enable persistence (optional): appendonly yes

docker run -p 6379:6379 \
--privileged=true \
--name redis \
-v /mydata/redis/data:/data \
-v /mydata/redis/conf/redis.conf:/etc/redis/redis.conf \
-d redis redis-server /etc/redis/redis.conf

3. Connect with redis-cli from the redis image
# docker exec -it <ID of the container running the Redis service> redis-cli
docker exec -it redis redis-cli

# To turn on persistence later, edit the config and add the following line
vi /mydata/redis/conf/redis.conf
appendonly yes

# Make the container restart automatically
docker update redis --restart=always
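A minimal client-side check of the container (an illustrative addition, not in the original notes), using the redis-py package against the mapped port 6379; pass password="123" if requirepass was enabled in redis.conf above.

import redis  # pip install redis

# Talk to the container through the mapped host port.
r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)
print(r.ping())        # True if the container is reachable
r.set("k1", "v1")
print(r.get("k1"))     # "v1"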

Installing a Redis cluster (distributed storage case study)

Scenario: 100 to 200 million records need to be cached. How would you design the storage?

A single standalone machine is 100% out of the question; it has to be distributed storage. How do you implement that with Redis?

The industry generally uses one of three solutions.

Solutions

1. Hash modulo partitioning

200 million records means 200 million key-value pairs, which a single machine cannot hold, so multiple machines form a cluster, say 3. Every read and write uses the formula
hash(key) % N (N = number of machines) to compute a hash value that decides which node the data maps to.

Advantages:
Simple and direct. As long as you estimate the data volume and plan the number of nodes (3, 8, 10, ...), it carries the data for a while. The hash pins a fixed share of requests to the same server, so each server handles (and keeps the state for) a fixed portion of the requests, giving load balancing plus divide and conquer.
Disadvantages:
Expanding or shrinking the planned set of nodes is painful. Every change in the node count changes the mapping, which must be recomputed. With a fixed number of servers there is no problem, but with elastic scaling or machine failures the formula changes from Hash(key) % 3 to Hash(key) % ?, the modulo results change drastically, and the target server becomes unpredictable.
If one Redis machine goes down, the changed node count forces a full reshuffle of all data under hash-modulo partitioning (see the sketch below).
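A small sketch of that reshuffle (illustrative only; md5 stands in for whatever hash function is actually used): with plain modulo partitioning, growing the cluster from 3 to 4 nodes remaps roughly three quarters of all keys.

from hashlib import md5

def node_for(key: str, n: int) -> int:
    # Plain hash-modulo partitioning: hash(key) % number of machines.
    return int(md5(key.encode()).hexdigest(), 16) % n

keys = [f"user:{i}" for i in range(10_000)]
before = {k: node_for(k, 3) for k in keys}   # 3-node cluster
after  = {k: node_for(k, 4) for k in keys}   # one node added
moved = sum(before[k] != after[k] for k in keys)
print(f"{moved / len(keys):.0%} of keys change nodes when N goes from 3 to 4")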

2. Consistent hashing partitioning

Background: the consistent hashing algorithm was proposed at MIT in 1997 to solve the remapping problem of distributed caches when data or nodes change: if a machine goes down, the divisor changes and plain modulo no longer works. The goal is to minimize the impact on the client-to-server mapping when the number of servers changes.
  1. Build the consistent hash ring

    The consistent hash ring: a consistent hashing algorithm needs a hash function, and the set of all its possible values forms a hash space [0, 2^32 - 1]. This is a linear space, but by logically joining its two ends (0 = 2^32) it becomes a ring. It still works by taking a modulus, only instead of using the node count as the divisor (as in the previous scheme), consistent hashing takes the modulus 2^32. In short, the whole hash value space is organized into a virtual circle: assuming a hash function H with value space 0 to 2^32 - 1 (a 32-bit unsigned integer), the ring is laid out clockwise with 0 at the top, 1 as the first point clockwise of 0, and so on up to 2^32 - 1, which is the first point counter-clockwise of 0 (0 and 2^32 coincide at the top). This circle of 2^32 points is called the hash ring.
    

  2. Map server IP nodes onto the ring

    Map each node in the cluster to a position on the ring by hashing the server, typically using its IP address or hostname as the key. Each machine thus gets a fixed position on the ring. For example, with four nodes A, B, C and D, hashing their IP addresses (hash(ip)) places them at four positions on the ring.
    

  3. Rule for placing keys on servers

    To store a key-value pair, compute hash(key) with the same hash function to find its position on the ring, then walk clockwise from that position; the first server encountered is the one the pair is stored on.
    For example, with four data objects A, B, C and D, after hashing, object A is assigned to Node A, B to Node B, C to Node C and D to Node D.
    

Advantages

  • Fault tolerance of consistent hashing

    Fault tolerance
    If Node C goes down, objects A, B and D are unaffected; only object C is relocated to Node D. In general, when one server becomes unavailable, the only data affected is the data between that server and the previous server on the ring (the first one reached walking counter-clockwise); nothing else is touched. In short: if C dies, only the data between B and C is affected, and it moves to D for storage.
    

  • Scalability of consistent hashing

    Scalability
    If the data volume grows and a node X is added between A and B, only the data between A and X is affected; just re-ingest that range into X.
    There is no full reshuffle of all data as with hash-modulo partitioning.

Disadvantages:

  • Data skew with consistent hashing

    Data skew on the hash ring
    When there are too few service nodes, uneven node placement easily skews the data (most cached objects end up on one server),
    for example when the system has only two servers:
    

Summary:

To migrate as little data as possible when the node count changes, all storage nodes are placed on a hash ring whose ends are joined; after hashing, each key walks clockwise to the nearest storage node.
When a node joins or leaves, only the next node clockwise from it on the ring is affected.  Advantage:
adding or removing a node only affects its clockwise-adjacent neighbour on the ring; other nodes are untouched. Disadvantage:
data distribution depends on node positions, and because the nodes are not spread evenly around the ring, the stored data is not evenly distributed either. (A minimal ring sketch follows.)
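A minimal consistent-hash-ring sketch (an illustrative addition; md5 again stands in for the ring's hash function): node positions are sorted on the [0, 2^32) ring and each key walks clockwise to the first node.

import bisect
from hashlib import md5

def h(value: str) -> int:
    # Hash into the [0, 2^32) ring.
    return int(md5(value.encode()).hexdigest(), 16) % (2 ** 32)

nodes = ["NodeA", "NodeB", "NodeC", "NodeD"]
ring = sorted((h(n), n) for n in nodes)        # node positions on the ring
points = [p for p, _ in ring]

def node_for(key: str) -> str:
    # Walk clockwise from hash(key) to the first node; wrap around at the top.
    i = bisect.bisect_right(points, h(key)) % len(ring)
    return ring[i][1]

for obj in ("ObjectA", "ObjectB", "ObjectC", "ObjectD"):
    print(obj, "->", node_for(obj))
# Removing one node only remaps the keys between it and its counter-clockwise neighbour.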

3. Hash slot partitioning

(Figure: hash slot partitioning)

1. Why does it exist?
A hash slot is essentially an array; the indexes [0, 2^14 - 1] form the hash slot space.

2. What does it do?
It solves the uneven-distribution problem by adding a layer between the data and the nodes, called hash slots, that manages the relationship between them. Nodes now hold slots, and slots hold data. Slots solve a granularity problem: they make the unit of movement coarser, which makes data easier to move. Hashing solves the mapping problem: the key's hash value determines which slot it belongs to, which makes data placement easy.

3. How many hash slots?
A cluster has exactly 16384 slots, numbered 0-16383 (0 to 2^14 - 1). These slots are distributed among all master nodes in the cluster; the distribution strategy is unconstrained, and you can assign specific slot numbers to specific masters. The cluster records the node-to-slot mapping. Once that mapping is settled, each key is hashed and taken modulo 16384; the remainder is the slot the key falls into: slot = CRC16(key) % 16384. Data is moved in units of slots, and because the number of slots is fixed, handling the movement is straightforward.

Hash slot computation:
The Redis cluster has 16384 built-in hash slots, and Redis maps them roughly evenly across the nodes according to how many nodes there are. When a key-value pair is placed into the cluster, Redis first runs the CRC16 algorithm over the key and takes the result modulo 16384, so every key maps to a hash slot numbered 0-16383 and therefore to some node. In the original example, keys A and B land on Node2 and key C lands on Node3. (A small sketch of the slot calculation follows.)
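A small sketch of the slot calculation (an addition to the original notes): Redis Cluster uses the CRC16/XMODEM variant and takes the result modulo 16384; for simplicity the sketch ignores the hash-tag rule ({...}).

def crc16_xmodem(data: bytes) -> int:
    # CRC16/XMODEM: polynomial 0x1021, initial value 0.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # slot = CRC16(key) % 16384
    return crc16_xmodem(key.encode()) % 16384

for k in ("k1", "k2", "A", "B", "C"):
    print(k, "-> slot", key_slot(k))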

Configuring a 3-master, 3-slave Redis cluster

1. Turn off the firewall and start the Docker daemon

 systemctl start docker

2. Create six Redis container instances

docker run -d --name redis-node-1 --net host --privileged=true -v /home/redis/share/redis-node-1:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6381
docker run -d --name redis-node-2 --net host --privileged=true -v /home/redis/share/redis-node-2:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6382
docker run -d --name redis-node-3 --net host --privileged=true -v /home/redis/share/redis-node-3:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6383
docker run -d --name redis-node-4 --net host --privileged=true -v /home/redis/share/redis-node-4:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6384
docker run -d --name redis-node-5 --net host --privileged=true -v /home/redis/share/redis-node-5:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6385
docker run -d --name redis-node-6 --net host --privileged=true -v /home/redis/share/redis-node-6:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6386

Command breakdown

docker run          create and run a container instance
--name redis-node-6 container name
--net host          use the host machine's IP and ports directly
--privileged=true   run with host root privileges
-v /data/redis/share/redis-node-6:/data  volume mount, host path:container path
redis:6.0.8         redis image and version
--cluster-enabled yes   enable Redis cluster mode
--appendonly yes    enable persistence
--port 6386         redis port

Output

[root@localhost home]# docker run -d --name redis-node-1 --net host --privileged=true -v /home/redis/share/redis-node-1:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6381
bed3ef4d9f9c828a3f1788066b4adfbbc44ef84e9e606695efde81842982fd74
[root@localhost home]#
[root@localhost home]# docker run -d --name redis-node-2 --net host --privileged=true -v /home/redis/share/redis-node-2:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6382
a8c03a8ccd2718df67be1993b0cceb318fb7176bbeab8a3bc40cff006d8a9be0
[root@localhost home]#
[root@localhost home]# docker run -d --name redis-node-3 --net host --privileged=true -v /home/redis/share/redis-node-3:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6383
40dc2219ea746fa32e6193086a2f9fe8119766c6c63168732cd05f2132a5a926
[root@localhost home]#
[root@localhost home]# docker run -d --name redis-node-4 --net host --privileged=true -v /home/redis/share/redis-node-4:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6384
798bffb2a259a1804e8fb80cdd0703b67d95048af0f28c28f2fd18442b5cd415
[root@localhost home]#
[root@localhost home]# docker run -d --name redis-node-5 --net host --privileged=true -v /home/redis/share/redis-node-5:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6385
8a16b360409bedf4830d29dd1d4d8e4f64052678abb7c64326a02ebb33cae0a6
[root@localhost home]#
[root@localhost home]# docker run -d --name redis-node-6 --net host --privileged=true -v /home/redis/share/redis-node-6:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6386
26fd5996d93d81cf37997e17379723b9a9b63073e31ed981af5ab3bdc8d417d8

Check with docker ps

[root@localhost home]# docker ps
CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS             PORTS                               NAMES
26fd5996d93d   redis:6.0.8   "docker-entrypoint.s…"   19 seconds ago   Up 19 seconds                                          redis-node-6
8a16b360409b   redis:6.0.8   "docker-entrypoint.s…"   20 seconds ago   Up 20 seconds                                          redis-node-5
798bffb2a259   redis:6.0.8   "docker-entrypoint.s…"   21 seconds ago   Up 20 seconds                                          redis-node-4
40dc2219ea74   redis:6.0.8   "docker-entrypoint.s…"   21 seconds ago   Up 21 seconds                                          redis-node-3
a8c03a8ccd27   redis:6.0.8   "docker-entrypoint.s…"   21 seconds ago   Up 21 seconds                                          redis-node-2
bed3ef4d9f9c   redis:6.0.8   "docker-entrypoint.s…"   22 seconds ago   Up 22 seconds                                          redis-node-1

3. Enter the redis-node-1 container and build the cluster across the six instances

# 1. Enter the container
docker exec -it redis-node-1 /bin/bash
# 2. Note: run the following command only after entering the container, and use your real IP address
redis-cli --cluster create <IP>:6381 <IP>:6382 <IP>:6383 <IP>:6384 <IP>:6385 <IP>:6386 --cluster-replicas 1
# --cluster-replicas 1 means create one slave for every master
# 3. Done: 3 masters and 3 slaves

Output

[root@localhost home]# docker exec -it redis-node-1 /bin/bash
root@localhost:/data# redis-cli --cluster create 192.168.153.138:6381 192.168.153.138:6382 192.168.153.138:6383 192.168.153.138:6384 192.168.153.138:6385    192.168.153.138:6386 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes... # the hash slots are being allocated across the six nodes
Master[0] -> Slots 0 - 5460 # the 16384 slots are split into 0-5460, 5461-10922 and 10923-16383
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.153.138:6385 to 192.168.153.138:6381
Adding replica 192.168.153.138:6386 to 192.168.153.138:6382
Adding replica 192.168.153.138:6384 to 192.168.153.138:6383
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: d9b335b44e89e3a46f30466e70867de7fdd3d5ad 192.168.153.138:6381   # master
   slots:[0-5460] (5461 slots) master
M: 7a876fd455a6e77923e0ab340335c430f25ed11f 192.168.153.138:6382   # master
   slots:[5461-10922] (5462 slots) master
M: fbacfaed169736db5278e4f685adf44a9bcd55ea 192.168.153.138:6383   # master
   slots:[10923-16383] (5461 slots) master
S: 798440dfeb0aa728aba2f4a9f783eb20edeaa09c 192.168.153.138:6384   # slave
   replicates fbacfaed169736db5278e4f685adf44a9bcd55ea
S: 37299c5efbf061f019bf24780661144a171c24df 192.168.153.138:6385   # slave
   replicates d9b335b44e89e3a46f30466e70867de7fdd3d5ad
S: ece2d11584c4d7a0b1d9699c90aa2016fcf75924 192.168.153.138:6386   # slave
   replicates 7a876fd455a6e77923e0ab340335c430f25ed11f
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 192.168.153.138:6381)
M: d9b335b44e89e3a46f30466e70867de7fdd3d5ad 192.168.153.138:6381slots:[0-5460] (5461 slots) master1 additional replica(s)
S: ece2d11584c4d7a0b1d9699c90aa2016fcf75924 192.168.153.138:6386slots: (0 slots) slavereplicates 7a876fd455a6e77923e0ab340335c430f25ed11f
M: fbacfaed169736db5278e4f685adf44a9bcd55ea 192.168.153.138:6383slots:[10923-16383] (5461 slots) master1 additional replica(s)
M: 7a876fd455a6e77923e0ab340335c430f25ed11f 192.168.153.138:6382slots:[5461-10922] (5462 slots) master1 additional replica(s)
S: 37299c5efbf061f019bf24780661144a171c24df 192.168.153.138:6385slots: (0 slots) slavereplicates d9b335b44e89e3a46f30466e70867de7fdd3d5ad
S: 798440dfeb0aa728aba2f4a9f783eb20edeaa09c 192.168.153.138:6384slots: (0 slots) slavereplicates fbacfaed169736db5278e4f685adf44a9bcd55ea
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Connect to 6381 as the entry point and check the cluster state
# connect to 6381 and inspect the node status
redis-cli -p 6381
cluster info
cluster nodes

Output

root@localhost:/data# redis-cli -p 6381
127.0.0.1:6381> cluster info
cluster_state:ok
cluster_slots_assigned:16384 # total number of slots
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6 # known nodes
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:406
cluster_stats_messages_pong_sent:423
cluster_stats_messages_sent:829
cluster_stats_messages_ping_received:418
cluster_stats_messages_pong_received:406
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:829
127.0.0.1:6381> cluster nodes
ece2d11584c4d7a0b1d9699c90aa2016fcf75924 192.168.153.138:6386@16386 slave 7a876fd455a6e77923e0ab340335c430f25ed11f 0 1649059775000 2 connected
fbacfaed169736db5278e4f685adf44a9bcd55ea 192.168.153.138:6383@16383 master - 0 1649059774000 3 connected 10923-16383
7a876fd455a6e77923e0ab340335c430f25ed11f 192.168.153.138:6382@16382 master - 0 1649059774696 2 connected 5461-10922
37299c5efbf061f019bf24780661144a171c24df 192.168.153.138:6385@16385 slave d9b335b44e89e3a46f30466e70867de7fdd3d5ad 0 1649059773000 1 connected
798440dfeb0aa728aba2f4a9f783eb20edeaa09c 192.168.153.138:6384@16384 slave fbacfaed169736db5278e4f685adf44a9bcd55ea 0 1649059775721 3 connected
d9b335b44e89e3a46f30466e70867de7fdd3d5ad 192.168.153.138:6381@16381 myself,master - 0 1649059773000 1 connected 0-5460
master -> slave
6381 -> 6385
6382 -> 6386
6383 -> 6384

Master-slave failover case

Data reads and writes

I. Data reads and writes
1. Start the 6-node cluster and enter one node with docker exec
2. Add two keys on 6381
3. Add the -c parameter to prevent routing failures, then add the keys again
   redis-cli -p 6381 -c    # -c enables cluster-aware routing
4. Check the cluster info
   redis-cli --cluster check <ip>:<redis port>

II. Failover
1. Swap master 6381 with its slave: stop master 6381 first
   # once master 6381 is down, its real slave takes over
   # which slave belongs to 6381 depends on your actual assignment; use whichever machine it is
2. Check the cluster info again: cluster info
3. Restore the original 3-master, 3-slave layout
4. Check the cluster state: redis-cli --cluster check <your IP>:<redis port>

Output

# 1. Store values
127.0.0.1:6381> set k1 v1
(error) MOVED 12706 192.168.153.138:6383  # problem: k1 hashes to a slot outside this node's range, so it cannot be stored here
127.0.0.1:6381> set k2 v2
OK            # within this node's slot range, stored successfully

# Solution: this is a cluster, so a plain single-instance connection will not do.
# Reconnect with -c to prevent routing failures (cluster mode):
# Command: redis-cli -p <port> -c
root@localhost:/data# redis-cli -p 6381 -c
127.0.0.1:6381> set k1 v1
-> Redirected to slot [12706] located at 192.168.153.138:6383  # this node cannot hold the slot, so the client is redirected to the node that can
OK
192.168.153.138:6383>   # now connected to the node that owns the slot

# 2. Check the cluster info (a Python cluster-client sketch follows this log)
root@localhost:/data# redis-cli --cluster check 192.168.153.138:6383
192.168.153.138:6383 (fbacfaed...) -> 1 keys | 5461 slots | 1 slaves.
192.168.153.138:6382 (7a876fd4...) -> 0 keys | 5462 slots | 1 slaves.
192.168.153.138:6381 (d9b335b4...) -> 1 keys | 5461 slots | 1 slaves.
[OK] 2 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.153.138:6383)
M: fbacfaed169736db5278e4f685adf44a9bcd55ea 192.168.153.138:6383slots:[10923-16383] (5461 slots) master1 additional replica(s)
S: 37299c5efbf061f019bf24780661144a171c24df 192.168.153.138:6385slots: (0 slots) slavereplicates d9b335b44e89e3a46f30466e70867de7fdd3d5ad
S: ece2d11584c4d7a0b1d9699c90aa2016fcf75924 192.168.153.138:6386slots: (0 slots) slavereplicates 7a876fd455a6e77923e0ab340335c430f25ed11f
S: 798440dfeb0aa728aba2f4a9f783eb20edeaa09c 192.168.153.138:6384slots: (0 slots) slavereplicates fbacfaed169736db5278e4f685adf44a9bcd55ea
M: 7a876fd455a6e77923e0ab340335c430f25ed11f 192.168.153.138:6382slots:[5461-10922] (5462 slots) master1 additional replica(s)
M: d9b335b44e89e3a46f30466e70867de7fdd3d5ad 192.168.153.138:6381slots:[0-5460] (5461 slots) master1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
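For comparison with redis-cli -c, here is a minimal Python sketch (not part of the original notes) using the cluster client from the redis-py package, which follows MOVED redirects automatically; the entry-point IP and port come from the cluster built above and are otherwise assumptions.

from redis.cluster import RedisCluster  # pip install "redis>=4.0"

# One entry point is enough: the client discovers the other nodes and routes
# each key to the node that owns its slot, following MOVED redirects for us.
rc = RedisCluster(host="192.168.153.138", port=6381, decode_responses=True)
rc.set("k1", "v1")        # transparently routed, no manual redirection needed
print(rc.get("k1"))
print(rc.cluster_info())  # cluster_state should be 'ok'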

Failover

1. Swap master 6381 with its slave: stop master 6381 and see whether slave 6385 is promoted to master.
Output:
[root@localhost home]# docker stop redis-node-1   # stop master 6381
redis-node-1
[root@localhost home]# docker ps
CONTAINER ID   IMAGE         COMMAND                  CREATED             STATUS             PORTS                               NAMES
26fd5996d93d   redis:6.0.8   "docker-entrypoint.s…"   45 minutes ago      Up 45 minutes                                          redis-node-6
8a16b360409b   redis:6.0.8   "docker-entrypoint.s…"   45 minutes ago      Up 45 minutes                                          redis-node-5
798bffb2a259   redis:6.0.8   "docker-entrypoint.s…"   45 minutes ago      Up 45 minutes                                          redis-node-4
40dc2219ea74   redis:6.0.8   "docker-entrypoint.s…"   45 minutes ago      Up 45 minutes                                          redis-node-3
a8c03a8ccd27   redis:6.0.8   "docker-entrypoint.s…"   45 minutes ago      Up 45 minutes                                          redis-node-2
# log in via machine redis-node-2
[root@localhost home]# docker exec -it redis-node-2 /bin/bash
root@localhost:/data# redis-cli -p 6382 -c
127.0.0.1:6382> cluster nodes
ece2d11584c4d7a0b1d9699c90aa2016fcf75924 192.168.153.138:6386@16386 slave 7a876fd455a6e77923e0ab340335c430f25ed11f 0 1649061966742 2 connected
37299c5efbf061f019bf24780661144a171c24df 192.168.153.138:6385@16385 master - 0 1649061967000 7 connected 0-5460 # 6385, formerly 6381's slave, was promoted successfully
d9b335b44e89e3a46f30466e70867de7fdd3d5ad 192.168.153.138:6381@16381 master,fail - 1649061727491 1649061725000 1 disconnected # 6381 is down
7a876fd455a6e77923e0ab340335c430f25ed11f 192.168.153.138:6382@16382 myself,master - 0 1649061965000 2 connected 5461-10922
798440dfeb0aa728aba2f4a9f783eb20edeaa09c 192.168.153.138:6384@16384 slave fbacfaed169736db5278e4f685adf44a9bcd55ea 0 1649061967764 3 connected
fbacfaed169736db5278e4f685adf44a9bcd55ea 192.168.153.138:6383@16383 master - 0 1649061965713 3 connected 10923-16383
2. Restore the original 3-master, 3-slave layout: restart 6381; will 6385 step down?
[root@localhost home]# docker start redis-node-1
redis-node-1
[root@localhost home]# docker ps
CONTAINER ID   IMAGE         COMMAND                  CREATED             STATUS             PORTS                               NAMES
26fd5996d93d   redis:6.0.8   "docker-entrypoint.s…"   55 minutes ago      Up 55 minutes                                          redis-node-6
8a16b360409b   redis:6.0.8   "docker-entrypoint.s…"   55 minutes ago      Up 55 minutes                                          redis-node-5
798bffb2a259   redis:6.0.8   "docker-entrypoint.s…"   55 minutes ago      Up 55 minutes                                          redis-node-4
40dc2219ea74   redis:6.0.8   "docker-entrypoint.s…"   55 minutes ago      Up 55 minutes                                          redis-node-3
a8c03a8ccd27   redis:6.0.8   "docker-entrypoint.s…"   55 minutes ago      Up 55 minutes                                          redis-node-2
bed3ef4d9f9c   redis:6.0.8   "docker-entrypoint.s…"   55 minutes ago      Up 3 seconds                                           redis-node-1

127.0.0.1:6382> cluster nodes   # check node status
ece2d11584c4d7a0b1d9699c90aa2016fcf75924 192.168.153.138:6386@16386 slave 7a876fd455a6e77923e0ab340335c430f25ed11f 0 1649062414000 2 connected
37299c5efbf061f019bf24780661144a171c24df 192.168.153.138:6385@16385 master - 0 1649062415000 7 connected 0-5460
d9b335b44e89e3a46f30466e70867de7fdd3d5ad 192.168.153.138:6381@16381 slave 37299c5efbf061f019bf24780661144a171c24df 0 1649062414907 7 connected # 6381 came back as a slave; it did not reclaim the master role
7a876fd455a6e77923e0ab340335c430f25ed11f 192.168.153.138:6382@16382 myself,master - 0 1649062414000 2 connected 5461-10922
798440dfeb0aa728aba2f4a9f783eb20edeaa09c 192.168.153.138:6384@16384 slave fbacfaed169736db5278e4f685adf44a9bcd55ea 0 1649062415927 3 connected
fbacfaed169736db5278e4f685adf44a9bcd55ea 192.168.153.138:6383@16383 master - 0 1649062413893 3 connected 10923-16383

# Restore the previous structure
[root@localhost home]# docker stop redis-node-5   # stop 6385 first
redis-node-5
[root@localhost home]# docker start redis-node-5  # start 6385 again
redis-node-5

master -> slave
6381 -> 6385
6382 -> 6386
6383 -> 6384
127.0.0.1:6382> cluster nodes
ece2d11584c4d7a0b1d9699c90aa2016fcf75924 192.168.153.138:6386@16386 slave 7a876fd455a6e77923e0ab340335c430f25ed11f 0 1649062614000 2 connected
37299c5efbf061f019bf24780661144a171c24df 192.168.153.138:6385@16385 slave d9b335b44e89e3a46f30466e70867de7fdd3d5ad 0 1649062612000 8 connected
d9b335b44e89e3a46f30466e70867de7fdd3d5ad 192.168.153.138:6381@16381 master - 0 1649062614988 8 connected 0-5460 # 6381 is master again
7a876fd455a6e77923e0ab340335c430f25ed11f 192.168.153.138:6382@16382 myself,master - 0 1649062612000 2 connected 5461-10922
798440dfeb0aa728aba2f4a9f783eb20edeaa09c 192.168.153.138:6384@16384 slave fbacfaed169736db5278e4f685adf44a9bcd55ea 0 1649062614000 3 connected
fbacfaed169736db5278e4f685adf44a9bcd55ea 192.168.153.138:6383@16383 master - 0 1649062613000 3 connected 10923-16383

Master-slave scale-out (expansion) case

1. Create two new nodes, 6387 and 6388, start them, and check that 8 containers are now running
docker run -d --name redis-node-7 --net host --privileged=true -v /home/redis/share/redis-node-7:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6387
docker run -d --name redis-node-8 --net host --privileged=true -v /home/redis/share/redis-node-8:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6388
docker ps

# Output
[root@localhost home]# docker run -d --name redis-node-7 --net host --privileged=true -v /home/redis/share/redis-node-7:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6387
cef889b627b55b1762c3d3481191efcc09bd85be64eb5480958a6dd4fb184c47
[root@localhost home]#
[root@localhost home]# docker run -d --name redis-node-8 --net host --privileged=true -v /home/redis/share/redis-node-8:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6388
8756b621cd39d377db1900dd057acee0cecd25ae5315b1f7e241942cdc335db6
[root@localhost home]# docker ps
CONTAINER ID   IMAGE         COMMAND                  CREATED             STATUS             PORTS                               NAMES
8756b621cd39   redis:6.0.8   "docker-entrypoint.s…"   2 minutes ago       Up 2 minutes                                           redis-node-8
cef889b627b5   redis:6.0.8   "docker-entrypoint.s…"   2 minutes ago       Up 2 minutes                                           redis-node-7
26fd5996d93d   redis:6.0.8   "docker-entrypoint.s…"   About an hour ago   Up About an hour                                       redis-node-6
8a16b360409b   redis:6.0.8   "docker-entrypoint.s…"   About an hour ago   Up 4 minutes                                           redis-node-5
798bffb2a259   redis:6.0.8   "docker-entrypoint.s…"   About an hour ago   Up About an hour                                       redis-node-4
40dc2219ea74   redis:6.0.8   "docker-entrypoint.s…"   About an hour ago   Up About an hour                                       redis-node-3
a8c03a8ccd27   redis:6.0.8   "docker-entrypoint.s…"   About an hour ago   Up About an hour                                       redis-node-2
bed3ef4d9f9c   redis:6.0.8   "docker-entrypoint.s…"   About an hour ago   Up 6 minutes                                           redis-node-1
2. Enter the 6387 container
docker exec -it redis-node-7 /bin/bash

# Output
[root@localhost home]# docker exec -it redis-node-7 /bin/bash
root@localhost:/data#
3. Add the new node 6387 (which owns no slots yet) to the existing cluster as a master
# Add 6387 to the cluster as a master node
redis-cli --cluster add-node <your IP>:6387 <your IP>:6381
# 6387 is the node being added as a new master
# 6381 is an existing cluster node acting as the introducer: 6387 joins the cluster by way of 6381

# Output
root@localhost:/data# redis-cli --cluster add-node 192.168.153.138:6387 192.168.153.138:6381
>>> Adding node 192.168.153.138:6387 to cluster 192.168.153.138:6381
>>> Performing Cluster Check (using node 192.168.153.138:6381)
M: d9b335b44e89e3a46f30466e70867de7fdd3d5ad 192.168.153.138:6381slots:[0-5460] (5461 slots) master1 additional replica(s)
S: 37299c5efbf061f019bf24780661144a171c24df 192.168.153.138:6385slots: (0 slots) slavereplicates d9b335b44e89e3a46f30466e70867de7fdd3d5ad
S: ece2d11584c4d7a0b1d9699c90aa2016fcf75924 192.168.153.138:6386slots: (0 slots) slavereplicates 7a876fd455a6e77923e0ab340335c430f25ed11f
M: 7a876fd455a6e77923e0ab340335c430f25ed11f 192.168.153.138:6382slots:[5461-10922] (5462 slots) master1 additional replica(s)
M: fbacfaed169736db5278e4f685adf44a9bcd55ea 192.168.153.138:6383slots:[10923-16383] (5461 slots) master1 additional replica(s)
S: 798440dfeb0aa728aba2f4a9f783eb20edeaa09c 192.168.153.138:6384slots: (0 slots) slavereplicates fbacfaed169736db5278e4f685adf44a9bcd55ea
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.153.138:6387 to make it join the cluster.
[OK] New node added correctly.

4. Check the cluster, first time
# redis-cli --cluster check <real IP>:<redis port>
# Output
root@localhost:/data# redis-cli --cluster check 192.168.153.138:6381
192.168.153.138:6381 (d9b335b4...) -> 1 keys | 5461 slots | 1 slaves.
192.168.153.138:6382 (7a876fd4...) -> 0 keys | 5462 slots | 1 slaves.
192.168.153.138:6383 (fbacfaed...) -> 1 keys | 5461 slots | 1 slaves.
192.168.153.138:6387 (b796cbe5...) -> 0 keys | 0 slots | 0 slaves. # no slots assigned yet
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.153.138:6381)
M: d9b335b44e89e3a46f30466e70867de7fdd3d5ad 192.168.153.138:6381slots:[0-5460] (5461 slots) master1 additional replica(s)
S: 37299c5efbf061f019bf24780661144a171c24df 192.168.153.138:6385slots: (0 slots) slavereplicates d9b335b44e89e3a46f30466e70867de7fdd3d5ad
S: ece2d11584c4d7a0b1d9699c90aa2016fcf75924 192.168.153.138:6386slots: (0 slots) slavereplicates 7a876fd455a6e77923e0ab340335c430f25ed11f
M: 7a876fd455a6e77923e0ab340335c430f25ed11f 192.168.153.138:6382slots:[5461-10922] (5462 slots) master1 additional replica(s)
M: fbacfaed169736db5278e4f685adf44a9bcd55ea 192.168.153.138:6383slots:[10923-16383] (5461 slots) master1 additional replica(s)
M: b796cbe572b52febe482f5eab2816ba31f9ff4d1 192.168.153.138:6387   # 6387 joined successfully
   slots: (0 slots) master
S: 798440dfeb0aa728aba2f4a9f783eb20edeaa09c 192.168.153.138:6384slots: (0 slots) slavereplicates fbacfaed169736db5278e4f685adf44a9bcd55ea
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
5. Reassign slot numbers
# Command: redis-cli --cluster reshard <IP>:<port>
redis-cli --cluster reshard 192.168.153.138:6381
# Output (each of the 4 masters ends up with 16384 / 4 = 4096 slots)
root@localhost:/data# redis-cli --cluster reshard 192.168.153.138:6381
>>> Performing Cluster Check (using node 192.168.153.138:6381)
M: d9b335b44e89e3a46f30466e70867de7fdd3d5ad 192.168.153.138:6381slots:[0-5460] (5461 slots) master1 additional replica(s)
S: 37299c5efbf061f019bf24780661144a171c24df 192.168.153.138:6385slots: (0 slots) slavereplicates d9b335b44e89e3a46f30466e70867de7fdd3d5ad
S: ece2d11584c4d7a0b1d9699c90aa2016fcf75924 192.168.153.138:6386slots: (0 slots) slavereplicates 7a876fd455a6e77923e0ab340335c430f25ed11f
M: 7a876fd455a6e77923e0ab340335c430f25ed11f 192.168.153.138:6382slots:[5461-10922] (5462 slots) master1 additional replica(s)
M: fbacfaed169736db5278e4f685adf44a9bcd55ea 192.168.153.138:6383slots:[10923-16383] (5461 slots) master1 additional replica(s)
M: b796cbe572b52febe482f5eab2816ba31f9ff4d1 192.168.153.138:6387slots: (0 slots) master
S: 798440dfeb0aa728aba2f4a9f783eb20edeaa09c 192.168.153.138:6384slots: (0 slots) slavereplicates fbacfaed169736db5278e4f685adf44a9bcd55ea
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? b796cbe572b52febe482f5eab2816ba31f9ff4d1
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all
...
Moving slot 12286 from fbacfaed169736db5278e4f685adf44a9bcd55ea
Moving slot 12287 from fbacfaed169736db5278e4f685adf44a9bcd55ea
Do you want to proceed with the proposed reshard plan (yes/no)? yes
6. Check the cluster, second time
# redis-cli --cluster check <real IP>:6381
# About the slot assignment: why does 6387 hold 3 new ranges while the old nodes keep contiguous ones?
# A full reallocation would cost too much, so each of the three old nodes 6381/6382/6383 hands roughly 1365 slots over to the new node 6387.
# Output
root@localhost:/data# redis-cli --cluster check 192.168.153.138:6381
192.168.153.138:6381 (d9b335b4...) -> 0 keys | 4096 slots | 1 slaves.
192.168.153.138:6382 (7a876fd4...) -> 0 keys | 4096 slots | 1 slaves.
192.168.153.138:6383 (fbacfaed...) -> 1 keys | 4096 slots | 1 slaves.
192.168.153.138:6387 (b796cbe5...) -> 1 keys | 4096 slots | 0 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.153.138:6381)
M: d9b335b44e89e3a46f30466e70867de7fdd3d5ad 192.168.153.138:6381slots:[1365-5460] (4096 slots) master1 additional replica(s)
S: 37299c5efbf061f019bf24780661144a171c24df 192.168.153.138:6385slots: (0 slots) slavereplicates d9b335b44e89e3a46f30466e70867de7fdd3d5ad
S: ece2d11584c4d7a0b1d9699c90aa2016fcf75924 192.168.153.138:6386slots: (0 slots) slavereplicates 7a876fd455a6e77923e0ab340335c430f25ed11f
M: 7a876fd455a6e77923e0ab340335c430f25ed11f 192.168.153.138:6382slots:[6827-10922] (4096 slots) master1 additional replica(s)
M: fbacfaed169736db5278e4f685adf44a9bcd55ea 192.168.153.138:6383slots:[12288-16383] (4096 slots) master1 additional replica(s)
M: b796cbe572b52febe482f5eab2816ba31f9ff4d1 192.168.153.138:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master   # slots contributed by the earlier masters
S: 798440dfeb0aa728aba2f4a9f783eb20edeaa09c 192.168.153.138:6384slots: (0 slots) slavereplicates fbacfaed169736db5278e4f685adf44a9bcd55ea
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# Why does 6387 get 3 new ranges while the old nodes stay contiguous?
# A full reallocation would cost too much, so the three old nodes 6381/6382/6383 each give up a share of their slots to the new node 6387.

7. Assign slave 6388 to master 6387
# Command: redis-cli --cluster add-node <ip>:<new slave port> <ip>:<new master port> --cluster-slave --cluster-master-id <new master node ID>
redis-cli --cluster add-node 192.168.153.138:6388 192.168.153.138:6387 --cluster-slave --cluster-master-id b796cbe572b52febe482f5eab2816ba31f9ff4d1
# the ID above is 6387's node ID; use your own

# Output
root@localhost:/data# redis-cli --cluster add-node 192.168.153.138:6388 192.168.153.138:6387 --cluster-slave --cluster-master-id b796cbe572b52febe482f5eab2816ba31f9ff4d1
>>> Adding node 192.168.153.138:6388 to cluster 192.168.153.138:6387
>>> Performing Cluster Check (using node 192.168.153.138:6387)
M: b796cbe572b52febe482f5eab2816ba31f9ff4d1 192.168.153.138:6387slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
M: d9b335b44e89e3a46f30466e70867de7fdd3d5ad 192.168.153.138:6381slots:[1365-5460] (4096 slots) master1 additional replica(s)
M: 7a876fd455a6e77923e0ab340335c430f25ed11f 192.168.153.138:6382slots:[6827-10922] (4096 slots) master1 additional replica(s)
S: 798440dfeb0aa728aba2f4a9f783eb20edeaa09c 192.168.153.138:6384slots: (0 slots) slavereplicates fbacfaed169736db5278e4f685adf44a9bcd55ea
S: 37299c5efbf061f019bf24780661144a171c24df 192.168.153.138:6385slots: (0 slots) slavereplicates d9b335b44e89e3a46f30466e70867de7fdd3d5ad
S: ece2d11584c4d7a0b1d9699c90aa2016fcf75924 192.168.153.138:6386slots: (0 slots) slavereplicates 7a876fd455a6e77923e0ab340335c430f25ed11f
M: fbacfaed169736db5278e4f685adf44a9bcd55ea 192.168.153.138:6383slots:[12288-16383] (4096 slots) master1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.153.138:6388 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 192.168.153.138:6387.
[OK] New node added correctly.
8. Check the cluster, third time
redis-cli --cluster check 192.168.153.138:6382
# Output: 4 masters and 4 slaves
root@localhost:/data# redis-cli --cluster check 192.168.153.138:6382
192.168.153.138:6382 (7a876fd4...) -> 0 keys | 4096 slots | 1 slaves.
192.168.153.138:6381 (d9b335b4...) -> 0 keys | 4096 slots | 1 slaves.
192.168.153.138:6387 (b796cbe5...) -> 1 keys | 4096 slots | 1 slaves.
192.168.153.138:6383 (fbacfaed...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.153.138:6382)
M: 7a876fd455a6e77923e0ab340335c430f25ed11f 192.168.153.138:6382slots:[6827-10922] (4096 slots) master1 additional replica(s)
S: ece2d11584c4d7a0b1d9699c90aa2016fcf75924 192.168.153.138:6386slots: (0 slots) slavereplicates 7a876fd455a6e77923e0ab340335c430f25ed11f
S: 37299c5efbf061f019bf24780661144a171c24df 192.168.153.138:6385slots: (0 slots) slavereplicates d9b335b44e89e3a46f30466e70867de7fdd3d5ad
M: d9b335b44e89e3a46f30466e70867de7fdd3d5ad 192.168.153.138:6381slots:[1365-5460] (4096 slots) master1 additional replica(s)
S: 798440dfeb0aa728aba2f4a9f783eb20edeaa09c 192.168.153.138:6384slots: (0 slots) slavereplicates fbacfaed169736db5278e4f685adf44a9bcd55ea
S: 90bae6b2538ae9f8bd69a02897f99f290b7c7608 192.168.153.138:6388slots: (0 slots) slavereplicates b796cbe572b52febe482f5eab2816ba31f9ff4d1
M: b796cbe572b52febe482f5eab2816ba31f9ff4d1 192.168.153.138:6387slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master1 additional replica(s)
M: fbacfaed169736db5278e4f685adf44a9bcd55ea 192.168.153.138:6383slots:[12288-16383] (4096 slots) master1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Master-slave scale-in (shrink) case

# Goal: take 6387 and 6388 offline
1. Check the cluster and obtain 6388's node ID
redis-cli --cluster check 192.168.153.138:6382
# Output
root@localhost:/data# redis-cli --cluster check 192.168.153.138:6382
192.168.153.138:6382 (7a876fd4...) -> 0 keys | 4096 slots | 1 slaves.
192.168.153.138:6381 (d9b335b4...) -> 0 keys | 4096 slots | 1 slaves.
192.168.153.138:6387 (b796cbe5...) -> 1 keys | 4096 slots | 1 slaves.
192.168.153.138:6383 (fbacfaed...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.153.138:6382)
M: 7a876fd455a6e77923e0ab340335c430f25ed11f 192.168.153.138:6382slots:[6827-10922] (4096 slots) master1 additional replica(s)
S: ece2d11584c4d7a0b1d9699c90aa2016fcf75924 192.168.153.138:6386slots: (0 slots) slavereplicates 7a876fd455a6e77923e0ab340335c430f25ed11f
S: 37299c5efbf061f019bf24780661144a171c24df 192.168.153.138:6385slots: (0 slots) slavereplicates d9b335b44e89e3a46f30466e70867de7fdd3d5ad
M: d9b335b44e89e3a46f30466e70867de7fdd3d5ad 192.168.153.138:6381slots:[1365-5460] (4096 slots) master1 additional replica(s)
S: 798440dfeb0aa728aba2f4a9f783eb20edeaa09c 192.168.153.138:6384slots: (0 slots) slavereplicates fbacfaed169736db5278e4f685adf44a9bcd55ea
S: 90bae6b2538ae9f8bd69a02897f99f290b7c7608 192.168.153.138:6388slots: (0 slots) slavereplicates b796cbe572b52febe482f5eab2816ba31f9ff4d1
M: b796cbe572b52febe482f5eab2816ba31f9ff4d1 192.168.153.138:6387slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master1 additional replica(s)
M: fbacfaed169736db5278e4f685adf44a9bcd55ea 192.168.153.138:6383slots:[12288-16383] (4096 slots) master1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

2. Delete 6388: remove slave node 6388 from the cluster
# Command: redis-cli --cluster del-node <ip>:<slave port> <6388 node ID>
redis-cli --cluster del-node 192.168.153.138:6388 90bae6b2538ae9f8bd69a02897f99f290b7c7608
# Check again: 6388 has been removed and only 7 instances remain
redis-cli --cluster check 192.168.153.138:6382

# Output
root@localhost:/data# redis-cli --cluster del-node 192.168.153.138:6388 90bae6b2538ae9f8bd69a02897f99f290b7c7608
>>> Removing node 90bae6b2538ae9f8bd69a02897f99f290b7c7608 from cluster 192.168.153.138:6388
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
root@localhost:/data# redis-cli --cluster check 192.168.153.138:6382
192.168.153.138:6382 (7a876fd4...) -> 0 keys | 4096 slots | 1 slaves.
192.168.153.138:6381 (d9b335b4...) -> 0 keys | 4096 slots | 1 slaves.
192.168.153.138:6387 (b796cbe5...) -> 1 keys | 4096 slots | 0 slaves.
192.168.153.138:6383 (fbacfaed...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.153.138:6382)
M: 7a876fd455a6e77923e0ab340335c430f25ed11f 192.168.153.138:6382slots:[6827-10922] (4096 slots) master1 additional replica(s)
S: ece2d11584c4d7a0b1d9699c90aa2016fcf75924 192.168.153.138:6386slots: (0 slots) slavereplicates 7a876fd455a6e77923e0ab340335c430f25ed11f
S: 37299c5efbf061f019bf24780661144a171c24df 192.168.153.138:6385slots: (0 slots) slavereplicates d9b335b44e89e3a46f30466e70867de7fdd3d5ad
M: d9b335b44e89e3a46f30466e70867de7fdd3d5ad 192.168.153.138:6381slots:[1365-5460] (4096 slots) master1 additional replica(s)
S: 798440dfeb0aa728aba2f4a9f783eb20edeaa09c 192.168.153.138:6384slots: (0 slots) slavereplicates fbacfaed169736db5278e4f685adf44a9bcd55ea
M: b796cbe572b52febe482f5eab2816ba31f9ff4d1 192.168.153.138:6387slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
M: fbacfaed169736db5278e4f685adf44a9bcd55ea 192.168.153.138:6383slots:[12288-16383] (4096 slots) master1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@localhost:/data#   # node 6388 no longer appears in the listing above

3. Empty 6387's slots and reassign them; in this example all of the freed slots go to 6381
redis-cli --cluster reshard 192.168.153.138:6381
# Output
root@localhost:/data# redis-cli --cluster reshard 192.168.153.138:6381
>>> Performing Cluster Check (using node 192.168.153.138:6381)
M: d9b335b44e89e3a46f30466e70867de7fdd3d5ad 192.168.153.138:6381slots:[1365-5460] (4096 slots) master1 additional replica(s)
S: 37299c5efbf061f019bf24780661144a171c24df 192.168.153.138:6385slots: (0 slots) slavereplicates d9b335b44e89e3a46f30466e70867de7fdd3d5ad
S: ece2d11584c4d7a0b1d9699c90aa2016fcf75924 192.168.153.138:6386slots: (0 slots) slavereplicates 7a876fd455a6e77923e0ab340335c430f25ed11f
M: 7a876fd455a6e77923e0ab340335c430f25ed11f 192.168.153.138:6382slots:[6827-10922] (4096 slots) master1 additional replica(s)
M: fbacfaed169736db5278e4f685adf44a9bcd55ea 192.168.153.138:6383slots:[12288-16383] (4096 slots) master1 additional replica(s)
M: b796cbe572b52febe482f5eab2816ba31f9ff4d1 192.168.153.138:6387slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: 798440dfeb0aa728aba2f4a9f783eb20edeaa09c 192.168.153.138:6384slots: (0 slots) slavereplicates fbacfaed169736db5278e4f685adf44a9bcd55ea
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096   # how many slots to move
What is the receiving node ID? d9b335b44e89e3a46f30466e70867de7fdd3d5ad   # who receives them; here 6381
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: b796cbe572b52febe482f5eab2816ba31f9ff4d1   # which node gives the slots up
Source node #2: done

Ready to move 4096 slots.
  Source nodes:
    M: b796cbe572b52febe482f5eab2816ba31f9ff4d1 192.168.153.138:6387
       slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
  Destination node:
    M: d9b335b44e89e3a46f30466e70867de7fdd3d5ad 192.168.153.138:6381
       slots:[1365-5460] (4096 slots) master
       1 additional replica(s)
  Resharding plan:
...
Moving slot 12286 from b796cbe572b52febe482f5eab2816ba31f9ff4d1
Moving slot 12287 from b796cbe572b52febe482f5eab2816ba31f9ff4d1
Do you want to proceed with the proposed reshard plan (yes/no)? yes
...
4. Check the cluster, second time
redis-cli --cluster check 192.168.153.138:6381
# All 4096 slots went to 6381, which now holds 8192. Handing everything to 6381 in one go is simpler;
# otherwise the reshard dialogue would have to be run three times to spread the slots around.
# Output
root@localhost:/data# redis-cli --cluster check 192.168.153.138:6381
192.168.153.138:6381 (d9b335b4...) -> 1 keys | 8192 slots | 1 slaves.
192.168.153.138:6382 (7a876fd4...) -> 0 keys | 4096 slots | 1 slaves.
192.168.153.138:6383 (fbacfaed...) -> 1 keys | 4096 slots | 1 slaves.
192.168.153.138:6387 (b796cbe5...) -> 0 keys | 0 slots | 0 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.153.138:6381)
M: d9b335b44e89e3a46f30466e70867de7fdd3d5ad 192.168.153.138:6381slots:[0-6826],[10923-12287] (8192 slots) master1 additional replica(s)
S: 37299c5efbf061f019bf24780661144a171c24df 192.168.153.138:6385slots: (0 slots) slavereplicates d9b335b44e89e3a46f30466e70867de7fdd3d5ad
S: ece2d11584c4d7a0b1d9699c90aa2016fcf75924 192.168.153.138:6386slots: (0 slots) slavereplicates 7a876fd455a6e77923e0ab340335c430f25ed11f
M: 7a876fd455a6e77923e0ab340335c430f25ed11f 192.168.153.138:6382slots:[6827-10922] (4096 slots) master1 additional replica(s)
M: fbacfaed169736db5278e4f685adf44a9bcd55ea 192.168.153.138:6383slots:[12288-16383] (4096 slots) master1 additional replica(s)
M: b796cbe572b52febe482f5eab2816ba31f9ff4d1 192.168.153.138:6387slots: (0 slots) master
S: 798440dfeb0aa728aba2f4a9f783eb20edeaa09c 192.168.153.138:6384slots: (0 slots) slavereplicates fbacfaed169736db5278e4f685adf44a9bcd55ea
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
5. Delete 6387
# Command: redis-cli --cluster del-node <ip>:<port> <6387 node ID>
redis-cli --cluster del-node 192.168.153.138:6387 b796cbe572b52febe482f5eab2816ba31f9ff4d1
# Output
root@localhost:/data# redis-cli --cluster del-node 192.168.153.138:6387 b796cbe572b52febe482f5eab2816ba31f9ff4d1
>>> Removing node b796cbe572b52febe482f5eab2816ba31f9ff4d1 from cluster 192.168.153.138:6387
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
6. Check the cluster, third time
redis-cli --cluster check 192.168.153.138:6381
# Output
root@localhost:/data# redis-cli --cluster check 192.168.153.138:6381
192.168.153.138:6381 (d9b335b4...) -> 1 keys | 8192 slots | 1 slaves.
192.168.153.138:6382 (7a876fd4...) -> 0 keys | 4096 slots | 1 slaves.
192.168.153.138:6383 (fbacfaed...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 2 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.153.138:6381)
M: d9b335b44e89e3a46f30466e70867de7fdd3d5ad 192.168.153.138:6381slots:[0-6826],[10923-12287] (8192 slots) master1 additional replica(s)
S: 37299c5efbf061f019bf24780661144a171c24df 192.168.153.138:6385slots: (0 slots) slavereplicates d9b335b44e89e3a46f30466e70867de7fdd3d5ad
S: ece2d11584c4d7a0b1d9699c90aa2016fcf75924 192.168.153.138:6386slots: (0 slots) slavereplicates 7a876fd455a6e77923e0ab340335c430f25ed11f
M: 7a876fd455a6e77923e0ab340335c430f25ed11f 192.168.153.138:6382slots:[6827-10922] (4096 slots) master1 additional replica(s)
M: fbacfaed169736db5278e4f685adf44a9bcd55ea 192.168.153.138:6383slots:[12288-16383] (4096 slots) master1 additional replica(s)
S: 798440dfeb0aa728aba2f4a9f783eb20edeaa09c 192.168.153.138:6384slots: (0 slots) slavereplicates fbacfaed169736db5278e4f685adf44a9bcd55ea
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Installing smb (Samba)

1. Pull the samba image
docker pull dperson/samba
2. Create the host directory for the data volume
mkdir -p /home/docker/samba/data
3. Set permissions
chmod 777 /home/docker/samba/data -R
4. Create the samba container
docker run --privileged  \
--name samba  \
-p 139:139 -p 445:445  \
-v /home/docker/samba/data:/mount  \
--restart=always  \
-d dperson/samba:latest  \
-u "root;123456"  \
-s "shared;/mount/;yes;no;no;all;none"

Installing MinIO

docker pull minio/minio
docker run --privileged  \
--name minio \
-p 9000:9000 \
-p 9001:9001 \
-v /etc/localtime:/etc/localtime \
-v /home/docker/minio/data:/data \
-v /home/docker/minio/config:/root/.minio \
-e "MINIO_ACCESS_KEY=root" -e "MINIO_SECRET_KEY=root" \
--restart=always  \
-d minio/minio server /data --console-address ":9001"

Installing FTP

mkdir -p /home/docker/ftp/data
docker pull fauria/vsftpd
docker run -d -v /home/docker/ftp/data:/home/vsftpd \
-p 20:20 -p 21:21 -p  21100-21110:21100-21110 \
-e FTP_USER=test -e FTP_PASS=test123 \
-e PASV_ADDRESS=192.168.xxx.xxx \
-e PASV_MIN_PORT=21100 -e PASV_MAX_PORT=21110 \
--name vsftpd --restart=always fauria/vsftpd
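A quick way to exercise the FTP container (an illustrative addition, not in the original notes) is the standard-library ftplib client; the credentials come from the docker run command above, while the host address is left as a placeholder to fill in.

from ftplib import FTP

HOST = "192.168.xxx.xxx"   # the PASV_ADDRESS you used above; fill in your own IP

ftp = FTP()
ftp.connect(HOST, 21, timeout=10)
ftp.login("test", "test123")   # FTP_USER / FTP_PASS from the docker run command
ftp.set_pasv(True)             # passive mode uses the 21100-21110 range mapped above
print(ftp.nlst())              # list the user's home directory
ftp.quit()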
