Building a MongoRocks Replicated Sharded Cluster with Docker

  • Preparation
    • Dependencies
    • Installation
    • Pull the image
  • Basic single instance
  • Single instance with configuration
    • Permissions
    • Docker parameters explained
    • Startup command
    • RocksDB options explained
    • Check the startup log
    • Connection test
  • Sharded cluster with containers on an overlay network
    • Prepare the base environment
    • Create the swarm overlay network
    • Test overlay network connectivity
    • Create data directories
    • Start the config servers
    • Initialize the config server replica set
    • Start the shard servers
    • Initialize the shard replica sets
    • Start mongos
    • Configure sharding
    • Test
  • Deploying the cluster as services with docker stack
    • Prepare the base environment
    • Create data directories
    • Write the docker stack deploy file
    • Start the cluster
    • Check startup
    • Initialize the replica sets
    • Add shards
    • Test

Preparation

Dependencies

  • OS: CentOS 7.6

Installation

See the Docker installation guide (link).

Pull the image

MongoDB changed parts of its API after 3.4, so Percona was unable to keep integrating the RocksDB engine into later releases. The newest MongoDB version with RocksDB support is therefore 3.4, for which Percona provides an official Docker image.

docker pull percona/percona-server-mongodb:3.4

Basic single instance

The basic command is simple:

docker run -p 27017:27017 -d percona/percona-server-mongodb:3.4 --storageEngine=rocksdb

We will not actually start this container; instead we go straight to a fully configured instance.

Single instance with configuration

Permissions

Create the data directory and grant the user inside the container permission on it:

mkdir -p /root/volumns/mongorocks/db
chmod 777 -R /root/volumns/mongorocks/db

Docker parameters explained

  • Docker directory mapping: -v /root/volumns/mongorocks/db:/data/db
  • Docker container name, for access over the Docker network: --name=mongorocks
  • Docker hostname, for access via container links: --hostname=mongorocks
  • Docker automatic restart: --restart=always
  • MongoDB data directory: --dbpath=/data/db

Startup command

The full startup command:

docker run -d \
  --name=mongorocks \
  --hostname=mongorocks \
  --restart=always \
  -p 27017:27017 \
  -v /root/volumns/mongorocks/db:/data/db \
  -v /etc/localtime:/etc/localtime \
  percona/percona-server-mongodb:3.4 \
  --storageEngine=rocksdb \
  --dbpath=/data/db \
  --rocksdbCacheSizeGB=1 \
  --rocksdbCompression=snappy \
  --rocksdbMaxWriteMBPerSec=1024 \
  --rocksdbCrashSafeCounters=false \
  --rocksdbCounters=true \
  --rocksdbSingleDeleteIndex=false

RocksDB options explained

For the full option list, see the official description (link).
A brief note on the options worth changing:

  • rocksdbCacheSizeGB sets the block cache size. The default is 30% of available physical memory. RocksDB caching has two parts: uncompressed data is held in the block cache, while compressed data is cached in the kernel page cache.
  • rocksdbMaxWriteMBPerSec sets the maximum write rate, 1 GiB/s by default. At present only NVMe SSDs can exceed that rate, and only local SSDs actually reach it; a cloud ESSD manages about 480 MiB/s. Network-attached NVMe is bounded by the NIC: on Alibaba Cloud the maximum bandwidth of 4 Gib/s works out to roughly 500 MiB/s. Set this value to match your deployment, because it governs how CPU time is split between reads and writes. Below the disk's physical limit, a smaller value means faster reads and slower writes, and a larger value the reverse; above the physical limit, writes cannot get any faster, so the excess budget only slows reads without speeding up writes. You can benchmark the disk with standard Linux tools (a rough sketch follows this list).
  • rocksdbCompression sets the compression format: snappy by default, with none, snappy, zlib, lz4, and lz4hc available.
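The original links out to a disk benchmark guide; as a stand-in, here is a rough sketch using standard Linux tools, where the test path, size, and device name are placeholder choices:

# Sequential write: 1 GiB with the page cache bypassed (oflag=direct),
# so the figure reflects the disk rather than RAM
dd if=/dev/zero of=/root/ddtest bs=1M count=1024 oflag=direct
rm -f /root/ddtest
# Rough read benchmark on the underlying block device (adjust the device name)
hdparm -t /dev/vda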

Check the startup log

docker logs mongorocks

The log shows a successful start:

2019-08-04T16:18:17.917+0000 I STORAGE  [main] [RocksDB] Block Cache Size GB: 1
2019-08-04T16:18:17.917+0000 I STORAGE  [main] [RocksDB] Compression: snappy
2019-08-04T16:18:17.917+0000 I STORAGE  [main] [RocksDB] MaxWriteMBPerSec: 1024
2019-08-04T16:18:17.917+0000 I STORAGE  [main] [RocksDB] Engine custom option:
2019-08-04T16:18:17.917+0000 I STORAGE  [main] [RocksDB] Crash safe counters: 0
2019-08-04T16:18:17.917+0000 I STORAGE  [main] [RocksDB] Counters: 1
2019-08-04T16:18:17.917+0000 I STORAGE  [main] [RocksDB] Use SingleDelete in index: 0
2019-08-04T16:18:17.920+0000 I ACCESS   [main] Initialized External Auth Session
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=mongorocks
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] db version v3.4.21-2.19
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] git version: 2e0631f5e0d868dd51b71e1e55eb8a57300d00df
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1t  3 May 2016
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] modules: none
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] build environment:
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten]     distarch: x86_64
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] options: { storage: { dbPath: "/data/db", engine: "rocksdb", rocksdb: { cacheSizeGB: 1, compression: "snappy", counters: true, crashSafeCounters: false, maxWriteMBPerSec: 1024, singleDeleteIndex: false } } }
2019-08-04T16:18:17.935+0000 I STORAGE  [initandlisten] 0 dropped prefixes need compaction
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten]
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] **          You can use percona-server-mongodb-enable-auth.sh to fix it.
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten]
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten]
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten]
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-08-04T16:18:17.937+0000 I CONTROL  [initandlisten]
2019-08-04T16:18:17.937+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2019-08-04T16:18:17.938+0000 I INDEX    [initandlisten] build index on: admin.system.version properties: { v: 2, key: { version: 1 }, name: "incompatible_with_version_32", ns: "admin.system.version" }
2019-08-04T16:18:17.938+0000 I INDEX    [initandlisten]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2019-08-04T16:18:17.938+0000 I INDEX    [initandlisten] build index done.  scanned 0 total records. 0 secs
2019-08-04T16:18:17.938+0000 I COMMAND  [initandlisten] setting featureCompatibilityVersion to 3.4
2019-08-04T16:18:17.939+0000 I NETWORK  [thread1] waiting for connections on port 27017

Connection test

docker run -it --link mongorocks --rm percona/percona-server-mongodb:3.4 \
  mongo --host mongorocks

The connection succeeds.
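As an extra check, not part of the original flow, you can confirm that the RocksDB engine is really in use by querying serverStatus from the test shell:

docker run -it --link mongorocks --rm percona/percona-server-mongodb:3.4 \
  mongo --host mongorocks --eval "db.serverStatus().storageEngine"

The output should report "name" : "rocksdb".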

Sharded cluster with containers on an overlay network

Prepare the base environment

Deploy three machines: vm1, vm2, and vm3. The base environment is the same as for the single instance.

Create the swarm overlay network

Create the swarm overlay network (link).
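The linked walkthrough is not reproduced here; as a rough sketch, setting up the swarm and an attachable overlay network looks like this, where the advertise address and join token are placeholders:

# On vm1 (manager)
docker swarm init --advertise-addr <vm1-ip>
# On vm2 and vm3, join with the token printed by the init command
docker swarm join --token <worker-token> <vm1-ip>:2377
# Back on vm1: --attachable is required so that plain `docker run`
# containers (not only swarm services) can join the network
docker network create --driver overlay --attachable mongo-overlay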

Test overlay network connectivity

On vm1 (the swarm manager), run a mongorocks container:

docker run -d \
  --name=mongorocks-1 \
  --hostname=mongorocks-1 \
  --network=mongo-overlay \
  --restart=always \
  -p 27017:27017 \
  -v /root/volumns/mongorocks/db:/data/db \
  -v /etc/localtime:/etc/localtime \
  percona/percona-server-mongodb:3.4 \
  --storageEngine=rocksdb \
  --dbpath=/data/db \
  --rocksdbCacheSizeGB=1 \
  --rocksdbCompression=snappy \
  --rocksdbMaxWriteMBPerSec=1024 \
  --rocksdbCrashSafeCounters=false \
  --rocksdbCounters=true \
  --rocksdbSingleDeleteIndex=false

Parameter notes:

  • Join the container to the overlay network: --network=mongo-overlay

On vm3, test connectivity over the overlay network just created:

docker pull debian:latest
docker run --network=mongo-overlay -it debian:latest ping mongorocks-1

The result:

64 bytes from mongorocks-1.mongo-overlay (10.0.1.2): icmp_seq=1 ttl=64 time=0.494 ms
64 bytes from mongorocks-1.mongo-overlay (10.0.1.2): icmp_seq=2 ttl=64 time=1.05 ms
64 bytes from mongorocks-1.mongo-overlay (10.0.1.2): icmp_seq=3 ttl=64 time=1.19 ms

Create data directories

We will set up three config servers, three mongos routers, and three shards, each shard being a three-member replica set (master, slave, arbiter). Every machine hosts one configsvr, one mongos, and one member of each shard.
Create the data directories on vm1, vm2, and vm3:

#vm1
mkdir -p /root/volumns/mongo-config-1/db /root/volumns/mongo-mongos-1/db /root/volumns/mongo-shard-1-master/db /root/volumns/mongo-shard-3-slave/db /root/volumns/mongo-shard-2-arbiter/db
chmod -R 777 /root/volumns/mongo-config-1/db /root/volumns/mongo-mongos-1/db /root/volumns/mongo-shard-1-master/db /root/volumns/mongo-shard-3-slave/db /root/volumns/mongo-shard-2-arbiter/db
#vm2
mkdir -p /root/volumns/mongo-config-2/db /root/volumns/mongo-mongos-2/db /root/volumns/mongo-shard-2-master/db /root/volumns/mongo-shard-1-slave/db /root/volumns/mongo-shard-3-arbiter/db
chmod -R 777 /root/volumns/mongo-config-2/db /root/volumns/mongo-mongos-2/db /root/volumns/mongo-shard-2-master/db /root/volumns/mongo-shard-1-slave/db /root/volumns/mongo-shard-3-arbiter/db
#vm3
mkdir -p /root/volumns/mongo-config-3/db /root/volumns/mongo-mongos-3/db /root/volumns/mongo-shard-3-master/db /root/volumns/mongo-shard-2-slave/db /root/volumns/mongo-shard-1-arbiter/db
chmod -R 777 /root/volumns/mongo-config-3/db /root/volumns/mongo-mongos-3/db /root/volumns/mongo-shard-3-master/db /root/volumns/mongo-shard-2-slave/db /root/volumns/mongo-shard-1-arbiter/db

The directory names use master and slave, but apart from the arbiter, which only takes part in elections, the members of a replica set are actually peers: if one node fails, failover is automatic.

Start the config servers

#vm1
docker run -d --name=mongo-config-1 --hostname=mongo-config-1 --network=mongo-overlay --restart=always -p 27019:27019 -v /root/volumns/mongo-config-1/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --configsvr --replSet=config
#vm2
docker run -d --name=mongo-config-2 --hostname=mongo-config-2 --network=mongo-overlay --restart=always -p 27019:27019 -v /root/volumns/mongo-config-2/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --configsvr --replSet=config
#vm3
docker run -d --name=mongo-config-3 --hostname=mongo-config-3 --network=mongo-overlay --restart=always -p 27019:27019 -v /root/volumns/mongo-config-3/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --configsvr --replSet=config

Check every config container's startup log with docker logs to confirm it started correctly.

Initialize the config server replica set

From version 3.4 onward the config servers must form a replica set, and an explicit rs.initiate on a config server must set configsvr: true:

docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 \
  mongo mongo-config-1:27019 --eval \
  "rs.initiate({_id:'config',configsvr:true,members:[{_id:0,host:'mongo-config-1:27019'},{_id:1,host:'mongo-config-2:27019'},{_id:2,host:'mongo-config-3:27019'}]})"
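Before moving on, it is worth verifying, as an addition to the original steps, that the replica set reached a healthy state:

docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 \
  mongo mongo-config-1:27019 --eval "rs.status().members.forEach(m => print(m.name, m.stateStr))"

One member should report PRIMARY and the other two SECONDARY.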

Start the shard servers

#vm1
docker run -d --name=mongo-shard-1-master --hostname=mongo-shard-1-master --network=mongo-overlay --restart=always -p 27018:27018 -v /root/volumns/mongo-shard-1-master/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-1
docker run -d --name=mongo-shard-3-slave --hostname=mongo-shard-3-slave --network=mongo-overlay --restart=always -p 27020:27018 -v /root/volumns/mongo-shard-3-slave/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-3
docker run -d --name=mongo-shard-2-arbiter --hostname=mongo-shard-2-arbiter --network=mongo-overlay --restart=always -p 27021:27018 -v /root/volumns/mongo-shard-2-arbiter/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-2
#vm2
docker run -d --name=mongo-shard-2-master --hostname=mongo-shard-2-master --network=mongo-overlay --restart=always -p 27018:27018 -v /root/volumns/mongo-shard-2-master/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-2
docker run -d --name=mongo-shard-1-slave --hostname=mongo-shard-1-slave --network=mongo-overlay --restart=always -p 27020:27018 -v /root/volumns/mongo-shard-1-slave/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-1
docker run -d --name=mongo-shard-3-arbiter --hostname=mongo-shard-3-arbiter --network=mongo-overlay --restart=always -p 27021:27018 -v /root/volumns/mongo-shard-3-arbiter/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-3
#vm3
docker run -d --name=mongo-shard-3-master --hostname=mongo-shard-3-master --network=mongo-overlay --restart=always -p 27018:27018 -v /root/volumns/mongo-shard-3-master/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-3
docker run -d --name=mongo-shard-2-slave --hostname=mongo-shard-2-slave --network=mongo-overlay --restart=always -p 27020:27018 -v /root/volumns/mongo-shard-2-slave/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-2
docker run -d --name=mongo-shard-1-arbiter --hostname=mongo-shard-1-arbiter --network=mongo-overlay --restart=always -p 27021:27018 -v /root/volumns/mongo-shard-1-arbiter/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-1

Check every shard container's startup log with docker logs to confirm it started correctly.

Initialize the shard replica sets

arbiterOnly marks the arbiter member. Because the containers are all on the mongo-overlay network, the following commands can be run from any machine.

docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-shard-1-master:27018 --eval "rs.initiate({_id:'shard-1',members:[{_id:0,host:'mongo-shard-1-master:27018'},{_id:1,host:'mongo-shard-1-slave:27018'},{_id:2,host:'mongo-shard-1-arbiter:27018', arbiterOnly: true}]})"
docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-shard-2-master:27018 --eval "rs.initiate({_id:'shard-2',members:[{_id:0,host:'mongo-shard-2-master:27018'},{_id:1,host:'mongo-shard-2-slave:27018'},{_id:2,host:'mongo-shard-2-arbiter:27018', arbiterOnly: true}]})"
docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-shard-3-master:27018 --eval "rs.initiate({_id:'shard-3',members:[{_id:0,host:'mongo-shard-3-master:27018'},{_id:1,host:'mongo-shard-3-slave:27018'},{_id:2,host:'mongo-shard-3-arbiter:27018', arbiterOnly: true}]})"
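As with the config servers, a quick status check (not in the original) should show one PRIMARY, one SECONDARY, and one ARBITER per shard:

for rs in 1 2 3; do
  docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 \
    mongo mongo-shard-$rs-master:27018 --eval "rs.status().members.forEach(m => print(m.name, m.stateStr))"
done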

Start mongos

mongos is not a replica set and needs no replica set configuration; a reverse proxy such as haproxy can be put in front of the mongos instances (a minimal sketch follows the startup check below).

#vm1
docker run -d --name=mongo-mongos-1 --hostname=mongo-mongos-1 --network=mongo-overlay --restart=always -p 27017:27017  -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongos --configdb=config/mongo-config-1:27019,mongo-config-2:27019,mongo-config-3:27019
#vm2
docker run -d --name=mongo-mongos-2 --hostname=mongo-mongos-2 --network=mongo-overlay --restart=always -p 27017:27017  -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongos --configdb=config/mongo-config-1:27019,mongo-config-2:27019,mongo-config-3:27019
#vm3
docker run -d --name=mongo-mongos-3 --hostname=mongo-mongos-3 --network=mongo-overlay --restart=always -p 27017:27017 -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongos --configdb=config/mongo-config-1:27019,mongo-config-2:27019,mongo-config-3:27019

Check every mongos container's startup log with docker logs to confirm it started correctly.
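The haproxy idea mentioned above is left undeveloped in the original; a minimal sketch of TCP load balancing across the three mongos routers could look like this, where the listener port and timeouts are arbitrary choices:

# haproxy.cfg (sketch): round-robin TCP proxying to the three mongos routers
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend mongo_front
    bind *:27000
    default_backend mongos_pool

backend mongos_pool
    balance roundrobin
    server mongos1 vm1:27017 check
    server mongos2 vm2:27017 check
    server mongos3 vm3:27017 check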

Configure sharding

Shard metadata is stored in the config servers, so it is enough to add all shards through a single mongos. Run the commands one at a time so that, if one fails, you know which shard it was.

docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-mongos-1:27017 --eval "sh.addShard('shard-1/mongo-shard-1-master:27018,mongo-shard-1-slave:27018,mongo-shard-1-arbiter:27018');"
docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-mongos-1:27017 --eval "sh.addShard('shard-2/mongo-shard-2-master:27018,mongo-shard-2-slave:27018,mongo-shard-2-arbiter:27018');"
docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-mongos-1:27017 --eval "sh.addShard('shard-3/mongo-shard-3-master:27018,mongo-shard-3-slave:27018,mongo-shard-3-arbiter:27018');"

Check the shard status:

docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-mongos-1:27017 --eval "sh.status()"

The result:

Percona Server for MongoDB shell version v3.4.21-2.19
connecting to: mongodb://mongo-mongos-1:27017/test
Percona Server for MongoDB server version: v3.4.21-2.19
--- Sharding Status ---
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("5d498c6d23c7391c25ba330a")
  }
  shards:
      {  "_id" : "shard-1",  "host" : "shard-1/mongo-shard-1-master:27018,mongo-shard-1-slave:27018",  "state" : 1 }
      {  "_id" : "shard-2",  "host" : "shard-2/mongo-shard-2-master:27018,mongo-shard-2-slave:27018",  "state" : 1 }
      {  "_id" : "shard-3",  "host" : "shard-3/mongo-shard-3-master:27018,mongo-shard-3-slave:27018",  "state" : 1 }
  active mongoses:
      "3.4.21-2.19" : 3
  autosplit:
      Currently enabled: yes
  balancer:
      Currently enabled:  yes
      Currently running:  no
      Failed balancer rounds in last 5 attempts:  0
      Migration Results for the last 24 hours:
          No recent migrations
  databases:

Test

Generate a test script with NoSQLBooster for MongoDB.
NoSQLBooster fills every document with random data, so no two documents are identical.

faker.locale = "en"
const STEPCOUNT = 1000; // total 10 * 1000 = 10000

function isRandomBlank(blankWeight) {
    return Math.random() * 100 <= blankWeight;
};

for (let i = 0; i < 10; i++) {
    db.getCollection("testCollection").insertMany(
        _.times(STEPCOUNT, () => {
            return {
                "name": faker.name.findName(),
                "username": faker.internet.userName(),
                "email": faker.internet.email(),
                "address": {
                    "street": faker.address.streetName(),
                    "suite": faker.address.secondaryAddress(),
                    "city": faker.address.city(),
                    "zipcode": faker.address.zipCode()
                },
                "phone": faker.phone.phoneNumber(),
                "website": faker.internet.domainName(),
                "company": faker.company.companyName()
            }
        })
    )
    console.log("test:testCollection", `${(i + 1) * STEPCOUNT} docs inserted`);
}

Set the shard key according to the test script. A value of 1 shards by range; 'hashed' shards by the hash of the key. To make the distribution easier to observe, choose hashed:

sh.enableSharding('test')
sh.shardCollection(`test.testCollection`, { _id: 'hashed'})

Run the test script in NoSQLBooster for MongoDB; when it finishes, check the shard state:

use test
db.getCollection('testCollection').stats()

The result:

{"sharded": true,"capped": false,"ns": "test.testCollection","count": 10000,"size": 2998835,"storageSize": 2998528,"totalIndexSize": 346888,"indexSizes": {"_id_": 180000,"_id_hashed": 166888},"avgObjSize": 299,"nindexes": 2,"nchunks": 6,"shards": {"shard-1": {"ns": "test.testCollection","size": 994917,"count": 3317,"avgObjSize": 299,"storageSize": 994816,"capped": false,"nindexes": 2,"totalIndexSize": 115072,"indexSizes": {"_id_": 59706,"_id_hashed": 55366},"ok": 1},"shard-2": {"ns": "test.testCollection","size": 1005509,"count": 3354,"avgObjSize": 299,"storageSize": 1005312,"capped": false,"nindexes": 2,"totalIndexSize": 116324,"indexSizes": {"_id_": 60372,"_id_hashed": 55952},"ok": 1},"shard-3": {"ns": "test.testCollection","size": 998409,"count": 3329,"avgObjSize": 299,"storageSize": 998400,"capped": false,"nindexes": 2,"totalIndexSize": 115492,"indexSizes": {"_id_": 59922,"_id_hashed": 55570},"ok": 1}},"ok": 1
}
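A more compact view of the same distribution is available through a standard shell helper, not shown in the original:

use test
db.getCollection('testCollection').getShardDistribution()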

Deploying the cluster as services with docker stack

Prepare the base environment

The setup is the same as before: join vm1, vm2, and vm3 into a swarm, but there is no need to create the overlay network by hand, because the stack file declares its own.

Create data directories

Create the directories on vm1, vm2, and vm3 and grant permissions:

mkdir -p /data/mongo/config/db
chmod -R 777 /data/mongo/config/db
mkdir -p /data/mongo/shard-1/db
chmod -R 777 /data/mongo/shard-1/db
mkdir -p /data/mongo/shard-2/db
chmod -R 777 /data/mongo/shard-2/db
mkdir -p /data/mongo/shard-3/db
chmod -R 777 /data/mongo/shard-3/db

Write the docker stack deploy file

version: '3.4'
services:
  shard-1-server-1:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-1
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-1/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm1
  shard-1-server-2:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-1
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-1/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm3
  shard-1-server-3:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-1
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-1/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm2
  shard-2-server-1:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-2
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-2/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm2
  shard-2-server-2:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-2
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-2/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm1
  shard-2-server-3:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-2
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-2/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm3
  shard-3-server-1:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-3
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-3/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm3
  shard-3-server-2:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-3
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-3/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm2
  shard-3-server-3:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-3
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-3/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm1
  config-1:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --configsvr --replSet config
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/config/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm1
  config-2:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --configsvr --replSet config
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/config/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm2
  config-3:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --configsvr --replSet config
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/config/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm3
  mongos-1:
    image: percona/percona-server-mongodb:3.4
    command: mongos --configdb=config/config-1:27019,config-2:27019,config-3:27019
    networks:
      - overlay
    ports:
      - 27017:27017
    volumes:
      - /etc/localtime:/etc/localtime
    depends_on:
      - config-1
      - config-2
      - config-3
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm1
  mongos-2:
    image: percona/percona-server-mongodb:3.4
    command: mongos --configdb=config/config-1:27019,config-2:27019,config-3:27019
    networks:
      - overlay
    ports:
      - 27018:27017
    volumes:
      - /etc/localtime:/etc/localtime
    depends_on:
      - config-1
      - config-2
      - config-3
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm2
  mongos-3:
    image: percona/percona-server-mongodb:3.4
    command: mongos --configdb=config/config-1:27019,config-2:27019,config-3:27019
    networks:
      - overlay
    ports:
      - 27019:27017
    volumes:
      - /etc/localtime:/etc/localtime
    depends_on:
      - config-1
      - config-2
      - config-3
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm3
networks:
  overlay:
    driver: overlay

Start the cluster

docker stack deploy -c mongo.yaml mongo

The output:

Creating network mongo_overlay
Creating service mongo_shard-3-server-3
Creating service mongo_shard-1-server-2
Creating service mongo_mongos-3
Creating service mongo_shard-3-server-1
Creating service mongo_mongos-1
Creating service mongo_config-2
Creating service mongo_shard-1-server-1
Creating service mongo_mongos-2
Creating service mongo_shard-2-server-2
Creating service mongo_shard-1-server-3
Creating service mongo_shard-2-server-1
Creating service mongo_config-3
Creating service mongo_shard-2-server-3
Creating service mongo_config-1
Creating service mongo_shard-3-server-2

Check startup

docker service ls

If REPLICAS shows 1/1 for every service, startup succeeded:

ID                  NAME                     MODE                REPLICAS            IMAGE                                PORTS
9urzybegrbmz        mongo_config-1           replicated          1/1                 percona/percona-server-mongodb:3.4
jldecj0s6238        mongo_config-2           replicated          1/1                 percona/percona-server-mongodb:3.4
n9r4ld6komnq        mongo_config-3           replicated          1/1                 percona/percona-server-mongodb:3.4
ni94pd5odl89        mongo_mongos-1           replicated          1/1                 percona/percona-server-mongodb:3.4   *:27017->27017/tcp
sh4ykadpmoka        mongo_mongos-2           replicated          1/1                 percona/percona-server-mongodb:3.4   *:27018->27017/tcp
12m4nbyn77va        mongo_mongos-3           replicated          1/1                 percona/percona-server-mongodb:3.4   *:27019->27017/tcp
psolde1gltn9        mongo_shard-1-server-1   replicated          1/1                 percona/percona-server-mongodb:3.4
4t2xwavpgg26        mongo_shard-1-server-2   replicated          1/1                 percona/percona-server-mongodb:3.4
qwjfpg93qkho        mongo_shard-1-server-3   replicated          1/1                 percona/percona-server-mongodb:3.4
ztbxk12npvwo        mongo_shard-2-server-1   replicated          1/1                 percona/percona-server-mongodb:3.4
tz3n5oj55osx        mongo_shard-2-server-2   replicated          1/1                 percona/percona-server-mongodb:3.4
pcprsbo9xxin        mongo_shard-2-server-3   replicated          1/1                 percona/percona-server-mongodb:3.4
nn7mrm0iy26v        mongo_shard-3-server-1   replicated          1/1                 percona/percona-server-mongodb:3.4
ps4zqmiqzw1k        mongo_shard-3-server-2   replicated          1/1                 percona/percona-server-mongodb:3.4
iv1gvzzm3ai0        mongo_shard-3-server-3   replicated          1/1                 percona/percona-server-mongodb:3.4

Initialize the replica sets

docker exec -it $(docker ps | grep "config" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id: \"config\",configsvr: true, members: [{ _id : 0, host : \"config-1:27019\" },{ _id : 1, host : \"config-2:27019\" }, { _id : 2, host : \"config-3:27019\" }]})' | mongo --port 27019";
docker exec -it $(docker ps | grep "shard-1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard-1\", members: [{ _id : 0, host : \"shard-1-server-1:27018\" },{ _id : 1, host : \"shard-1-server-2:27018\" },{ _id : 2, host : \"shard-1-server-3:27018\", arbiterOnly: true }]})' | mongo --port 27018";
docker exec -it $(docker ps | grep "shard-2" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard-2\", members: [{ _id : 0, host : \"shard-2-server-1:27018\" },{ _id : 1, host : \"shard-2-server-2:27018\" },{ _id : 2, host : \"shard-2-server-3:27018\", arbiterOnly: true }]})' | mongo --port 27018";
docker exec -it $(docker ps | grep "shard-3" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard-3\", members: [{ _id : 0, host : \"shard-3-server-1:27018\" },{ _id : 1, host : \"shard-3-server-2:27018\" },{ _id : 2, host : \"shard-3-server-3:27018\", arbiterOnly: true }]})' | mongo --port 27018";

Note: a shard's rs.initiate cannot be run on the machine that hosts that shard's arbiter (the grep picks up whichever member runs locally); switch to another machine.
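As a sanity check, an addition to the original flow, the replica set states can be confirmed the same way, for example for the config replica set:

docker exec -it $(docker ps | grep "config" | awk '{ print $1 }') \
  bash -c "echo 'rs.status().members.forEach(m => print(m.name, m.stateStr))' | mongo --port 27019"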

Add shards

docker exec -it $(docker ps | grep "mongos-1" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard-1/shard-1-server-1:27018,shard-1-server-2:27018,shard-1-server-3:27018\")' | mongo ";
docker exec -it $(docker ps | grep "mongos-1" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard-2/shard-2-server-1:27018,shard-2-server-2:27018,shard-2-server-3:27018\")' | mongo ";
docker exec -it $(docker ps | grep "mongos-1" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard-3/shard-3-server-1:27018,shard-3-server-2:27018,shard-3-server-3:27018\")' | mongo ";
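Then verify that all three shards registered, mirroring the check used in the overlay section:

docker exec -it $(docker ps | grep "mongos-1" | awk '{ print $1 }') bash -c "echo 'sh.status()' | mongo"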

Test

Same as the test section for the overlay cluster.
