1. Architecture diagram:

2. The four components: mongos, config server, shard, replica set

mongos is the entry point for all requests to the cluster. Every request is coordinated through mongos, so the application does not need its own routing layer: mongos itself is the request dispatcher, forwarding each data request to the appropriate shard server. In production there are usually multiple mongos instances serving as entry points, so that the failure of one does not leave the whole MongoDB cluster unreachable.
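The routing idea can be pictured as a lookup from shard-key ranges ("chunks") to shards. The sketch below is illustrative only, with invented chunk bounds and shard names; it is not MongoDB's actual implementation:

```python
# Illustrative sketch of range-based request routing, the idea behind mongos.
# Chunk bounds and shard names are hypothetical.
import bisect

class Router:
    def __init__(self, chunks):
        # chunks: sorted list of (lower_bound, shard_name);
        # each chunk covers [lower_bound, next_lower_bound)
        self.bounds = [b for b, _ in chunks]
        self.shards = [s for _, s in chunks]

    def route(self, shard_key_value):
        # Find the chunk whose range contains the shard-key value
        i = bisect.bisect_right(self.bounds, shard_key_value) - 1
        return self.shards[max(i, 0)]

router = Router([(0, "shard1"), (1000, "shard2"), (4000, "shard3")])
print(router.route(5))     # shard1
print(router.route(2500))  # shard2
```

The application only ever talks to the router; which shard actually holds the data is invisible to it.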

config server: as the name suggests, the configuration server; it stores all of the cluster's metadata (routing and sharding configuration).

Shard: a server holding one partition of the data, following the same partitioned-storage idea as HDFS in the big-data world.

Replica set: a replicated group of servers, providing redundancy for each shard.
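A replica set can elect a primary only while a majority of its voting members is reachable; the arbiter used in the setup below contributes a vote without storing data, which is why two data nodes plus an arbiter survive the loss of any one member. A toy majority check (a simplification, not the real election protocol):

```python
# Toy majority check: why a 2-data-node + 1-arbiter replica set
# survives one failure. Simplified; not the real election protocol.
def can_elect_primary(voting_members, reachable):
    return len(reachable) > len(voting_members) // 2

members = {"192.168.1.26:22001", "192.168.1.27:22001", "192.168.1.109:22001"}

# One member down: 2 of 3 votes is still a majority, a primary can be elected.
print(can_elect_primary(members, {"192.168.1.26:22001", "192.168.1.109:22001"}))  # True

# Two members down: no majority, the remaining node cannot become primary.
print(can_elect_primary(members, {"192.168.1.26:22001"}))  # False
```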

3. Preparing the environment:

ip:192.168.1.26
ip:192.168.1.27
ip:192.168.1.109

On each machine, create five directories: mongos, config, shard1, shard2, shard3.

tar xvf mongodb-linux-x86_64-3.2.4.gz
mv mongodb-linux-x86_64-3.2.4 /usr/local/mongodb
mkdir -p /data/mongodbtest/mongos
mkdir -p /data/mongodbtest/config/data
mkdir -p /data/mongodbtest/config/log
mkdir -p /data/mongodbtest/mongos/log
mkdir -p /data/mongodbtest/shard1/data
mkdir -p /data/mongodbtest/shard1/log
mkdir -p /data/mongodbtest/shard2/data
mkdir -p /data/mongodbtest/shard2/log
mkdir -p /data/mongodbtest/shard3/data
mkdir -p /data/mongodbtest/shard3/log
groupadd mongo
useradd -g mongo -d /data/mongodbtest mongo

cd /usr/local/
chown -R mongo:mongo mongodb/
cd /data/
chown -R mongo:mongo mongodbtest/

su - mongo
# Start the config server (run on every machine)
/usr/local/mongodb/bin/mongod --configsvr --dbpath /data/mongodbtest/config/data/ --port 21000 \
--logpath /data/mongodbtest/config/log/config.log --fork

# Start mongos (run on every machine), pointing at all three config servers
/usr/local/mongodb/bin/mongos --configdb 192.168.1.109:21000,192.168.1.26:21000,192.168.1.27:21000 \
--port 20000 --logpath /data/mongodbtest/mongos/log/mongos.log --fork

# Start the shard1 mongod (run on every machine); --nojournal and the tiny oplog are for testing only
/usr/local/mongodb/bin/mongod --shardsvr --replSet shard1 --port 22001 \
--dbpath /data/mongodbtest/shard1/data --logpath /data/mongodbtest/shard1/log/shard1.log --fork --nojournal --oplogSize 10

/usr/local/mongodb/bin/mongod --shardsvr --replSet shard2 --port 22002 \
--dbpath /data/mongodbtest/shard2/data --logpath /data/mongodbtest/shard2/log/shard2.log --fork --nojournal --oplogSize 10

/usr/local/mongodb/bin/mongod --shardsvr --replSet shard3 --port 22003 \
--dbpath /data/mongodbtest/shard3/data --logpath /data/mongodbtest/shard3/log/shard3.log --fork --nojournal --oplogSize 10

# Initialize the shard1 replica set (connect to any member)
/usr/local/mongodb/bin/mongo 127.0.0.1:22001
>use admin;
config={_id:"shard1",members:[
{_id:0,host:"192.168.1.26:22001"},
{_id:1,host:"192.168.1.27:22001"},
{_id:2,host:"192.168.1.109:22001",arbiterOnly:true}]
}
rs.initiate(config);

# Initialize the shard2 replica set
/usr/local/mongodb/bin/mongo 127.0.0.1:22002
>use admin;
config={_id:"shard2",members:[
{_id:0,host:"192.168.1.27:22002"},
{_id:1,host:"192.168.1.109:22002"},
{_id:2,host:"192.168.1.26:22002",arbiterOnly:true}]
}
rs.initiate(config);

# Initialize the shard3 replica set
/usr/local/mongodb/bin/mongo 127.0.0.1:22003
>use admin;
config={_id:"shard3",members:[
{_id:0,host:"192.168.1.109:22003"},
{_id:1,host:"192.168.1.26:22003"},
{_id:2,host:"192.168.1.27:22003",arbiterOnly:true}]
}
rs.initiate(config);
Open a second window on the same server, connect to mongos, and register each shard:
/usr/local/mongodb/bin/mongo 127.0.0.1:20000
mongos> use admin;
sh.addShard("shard1/192.168.1.26:22001,192.168.1.27:22001,192.168.1.109:22001");
{ "shardAdded" : "shard1", "ok" : 1 }
sh.addShard("shard2/192.168.1.26:22002,192.168.1.27:22002,192.168.1.109:22002");
{ "shardAdded" : "shard2", "ok" : 1 }
sh.addShard("shard3/192.168.1.26:22003,192.168.1.27:22003,192.168.1.109:22003");
{ "shardAdded" : "shard3", "ok" : 1 }

#Check the shard server configuration
mongos> use admin
switched to db admin
mongos> db.runCommand({listshards:1});
{
"shards" : [
{
"_id" : "shard1",
"host" : "shard1/192.168.1.109:22001,192.168.1.26:22001"
},
{
"_id" : "shard2",
"host" : "shard2/192.168.1.26:22002,192.168.1.27:22002"
},
{
"_id" : "shard3",
"host" : "shard3/192.168.1.109:22003,192.168.1.26:22003"
}
],
"ok" : 1
}

#Enable sharding for the testdb database
/usr/local/mongodb/bin/mongo 127.0.0.1:20000
mongos> use admin
switched to db admin
mongos> db.runCommand({enablesharding:"testdb"});
#Specify the collection to shard and its shard key; here the key is id, declared unique
mongos> db.runCommand({shardcollection:"testdb.teble1",key:{id:1},unique:true});

mongos> use testdb;
#Insert test data
mongos> for(var i=1;i<=5000;i++) db.teble1.save({id:i,"test1":"testval1"});

#Check how the data is distributed across the shards
mongos> db.teble1.stats()
{
"sharded" : true,
"capped" : false,
"ns" : "testdb.teble1",
"count" : 5000,
"size" : 270000,
"storageSize" : 131072,
"totalIndexSize" : 180224,
"indexSizes" : {
"_id_" : 86016,
"id_1" : 94208
},
"avgObjSize" : 54,
"nindexes" : 2,
"nchunks" : 3,
"shards" : {
"shard1" : {
"ns" : "testdb.teble1",
"count" : 1,
"size" : 54,
"avgObjSize" : 54,
"storageSize" : 16384,
"capped" : false,
"wiredTiger" : {
"metadata" : {
"formatVersion" : 1
},
"creationString" : "allocation_size=4KB,app_metadata=(formatVersion=1),block_allocation=best,block_compressor=snappy,cache_resident=0,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=0,extractor=,format=btree,huffman_key=,huffman_value=,immutable=0,internal_item_max=0,internal_key_max=0,internal_key_truncate=,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=),lsm=(auto_throttle=,bloom=,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=0,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=0,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
"type" : "file",
"uri" : "statistics:table:collection-5-7077211040315156523",
"LSM" : {
"bloom filters in the LSM tree" : 0,
"bloom filter false positives" : 0,
"bloom filter hits" : 0,
"bloom filter misses" : 0,
"bloom filter pages evicted from cache" : 0,
"bloom filter pages read into cache" : 0,
"total size of bloom filters" : 0,
"sleep for LSM checkpoint throttle" : 0,
"chunks in the LSM tree" : 0,
"highest merge generation in the LSM tree" : 0,
"queries that could have benefited from a Bloom filter that did not exist" : 0,
"sleep for LSM merge throttle" : 0
},
"block-manager" : {
"file allocation unit size" : 4096,
"blocks allocated" : 3,
"checkpoint size" : 8192,
"allocations requiring file extension" : 3,
"blocks freed" : 0,
"file magic number" : 120897,
"file major version number" : 1,
"minor version number" : 0,
"file bytes available for reuse" : 0,
"file size in bytes" : 16384
},
"btree" : {
"btree checkpoint generation" : 241,
"column-store variable-size deleted values" : 0,
"column-store fixed-size leaf pages" : 0,
"column-store internal pages" : 0,
"column-store variable-size RLE encoded values" : 0,
"column-store variable-size leaf pages" : 0,
"pages rewritten by compaction" : 0,
"number of key/value pairs" : 0,
"fixed-record size" : 0,
"maximum tree depth" : 3,
"maximum internal page key size" : 368,
"maximum internal page size" : 4096,
"maximum leaf page key size" : 2867,
"maximum leaf page size" : 32768,
"maximum leaf page value size" : 67108864,
"overflow pages" : 0,
"row-store internal pages" : 0,
"row-store leaf pages" : 0
},
"cache" : {
"bytes read into cache" : 0,
"bytes written from cache" : 148,
"checkpoint blocked page eviction" : 0,
"unmodified pages evicted" : 0,
"page split during eviction deepened the tree" : 0,
"modified pages evicted" : 0,
"data source pages selected for eviction unable to be evicted" : 0,
"hazard pointer blocked page eviction" : 0,
"internal pages evicted" : 0,
"internal pages split during eviction" : 0,
"leaf pages split during eviction" : 0,
"in-memory page splits" : 0,
"in-memory page passed criteria to be split" : 0,
"overflow values cached in memory" : 0,
"pages read into cache" : 0,
"pages read into cache requiring lookaside entries" : 0,
"overflow pages read into cache" : 0,
"pages written from cache" : 2,
"page written requiring lookaside records" : 0,
"pages written requiring in-memory restoration" : 0
},
"compression" : {
"raw compression call failed, no additional data available" : 0,
"raw compression call failed, additional data available" : 0,
"raw compression call succeeded" : 0,
"compressed pages read" : 0,
"compressed pages written" : 0,
"page written failed to compress" : 0,
"page written was too small to compress" : 2
},
"cursor" : {
"create calls" : 1,
"insert calls" : 1,
"bulk-loaded cursor-insert calls" : 0,
"cursor-insert key and value bytes inserted" : 55,
"next calls" : 1,
"prev calls" : 1,
"remove calls" : 0,
"cursor-remove key bytes removed" : 0,
"reset calls" : 3,
"restarted searches" : 0,
"search calls" : 0,
"search near calls" : 0,
"truncate calls" : 0,
"update calls" : 0,
"cursor-update value bytes updated" : 0
},
"reconciliation" : {
"dictionary matches" : 0,
"internal page multi-block writes" : 0,
"leaf page multi-block writes" : 0,
"maximum blocks required for a page" : 0,
"internal-page overflow keys" : 0,
"leaf-page overflow keys" : 0,
"overflow values written" : 0,
"pages deleted" : 0,
"fast-path pages deleted" : 0,
"page checksum matches" : 0,
"page reconciliation calls" : 2,
"page reconciliation calls for eviction" : 0,
"leaf page key bytes discarded using prefix compression" : 0,
"internal page key bytes discarded using suffix compression" : 0
},
"session" : {
"object compaction" : 0,
"open cursor count" : 1
},
"transaction" : {
"update conflicts" : 0
}
},
"nindexes" : 2,
"totalIndexSize" : 32768,
"indexSizes" : {
"_id_" : 16384,
"id_1" : 16384
},
"ok" : 1
},
"shard2" : {
"ns" : "testdb.teble1",
"count" : 4981,
"size" : 268974,
"avgObjSize" : 54,
"storageSize" : 98304,
"capped" : false,
"wiredTiger" : {
"metadata" : {
"formatVersion" : 1
},
"creationString" : "allocation_size=4KB,app_metadata=(formatVersion=1),block_allocation=best,block_compressor=snappy,cache_resident=0,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=0,extractor=,format=btree,huffman_key=,huffman_value=,immutable=0,internal_item_max=0,internal_key_max=0,internal_key_truncate=,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=),lsm=(auto_throttle=,bloom=,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=0,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=0,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
"type" : "file",
"uri" : "statistics:table:collection-9-334754013080382892",
"LSM" : {
"bloom filters in the LSM tree" : 0,
"bloom filter false positives" : 0,
"bloom filter hits" : 0,
"bloom filter misses" : 0,
"bloom filter pages evicted from cache" : 0,
"bloom filter pages read into cache" : 0,
"total size of bloom filters" : 0,
"sleep for LSM checkpoint throttle" : 0,
"chunks in the LSM tree" : 0,
"highest merge generation in the LSM tree" : 0,
"queries that could have benefited from a Bloom filter that did not exist" : 0,
"sleep for LSM merge throttle" : 0
},
"block-manager" : {
"file allocation unit size" : 4096,
"blocks allocated" : 13,
"checkpoint size" : 90112,
"allocations requiring file extension" : 13,
"blocks freed" : 0,
"file magic number" : 120897,
"file major version number" : 1,
"minor version number" : 0,
"file bytes available for reuse" : 0,
"file size in bytes" : 98304
},
"btree" : {
"btree checkpoint generation" : 237,
"column-store variable-size deleted values" : 0,
"column-store fixed-size leaf pages" : 0,
"column-store internal pages" : 0,
"column-store variable-size RLE encoded values" : 0,
"column-store variable-size leaf pages" : 0,
"pages rewritten by compaction" : 0,
"number of key/value pairs" : 0,
"fixed-record size" : 0,
"maximum tree depth" : 3,
"maximum internal page key size" : 368,
"maximum internal page size" : 4096,
"maximum leaf page key size" : 2867,
"maximum leaf page size" : 32768,
"maximum leaf page value size" : 67108864,
"overflow pages" : 0,
"row-store internal pages" : 0,
"row-store leaf pages" : 0
},
"cache" : {
"bytes read into cache" : 0,
"bytes written from cache" : 289465,
"checkpoint blocked page eviction" : 0,
"unmodified pages evicted" : 0,
"page split during eviction deepened the tree" : 0,
"modified pages evicted" : 0,
"data source pages selected for eviction unable to be evicted" : 0,
"hazard pointer blocked page eviction" : 0,
"internal pages evicted" : 0,
"internal pages split during eviction" : 0,
"leaf pages split during eviction" : 0,
"in-memory page splits" : 0,
"in-memory page passed criteria to be split" : 0,
"overflow values cached in memory" : 0,
"pages read into cache" : 0,
"pages read into cache requiring lookaside entries" : 0,
"overflow pages read into cache" : 0,
"pages written from cache" : 12,
"page written requiring lookaside records" : 0,
"pages written requiring in-memory restoration" : 0
},
"compression" : {
"raw compression call failed, no additional data available" : 0,
"raw compression call failed, additional data available" : 0,
"raw compression call succeeded" : 0,
"compressed pages read" : 0,
"compressed pages written" : 10,
"page written failed to compress" : 0,
"page written was too small to compress" : 2
},
"cursor" : {
"create calls" : 6,
"insert calls" : 5000,
"bulk-loaded cursor-insert calls" : 0,
"cursor-insert key and value bytes inserted" : 279937,
"next calls" : 1,
"prev calls" : 1,
"remove calls" : 19,
"cursor-remove key bytes removed" : 19,
"reset calls" : 5079,
"restarted searches" : 0,
"search calls" : 78,
"search near calls" : 0,
"truncate calls" : 0,
"update calls" : 0,
"cursor-update value bytes updated" : 0
},
"reconciliation" : {
"dictionary matches" : 0,
"internal page multi-block writes" : 0,
"leaf page multi-block writes" : 1,
"maximum blocks required for a page" : 11,
"internal-page overflow keys" : 0,
"leaf-page overflow keys" : 0,
"overflow values written" : 0,
"pages deleted" : 0,
"fast-path pages deleted" : 0,
"page checksum matches" : 0,
"page reconciliation calls" : 2,
"page reconciliation calls for eviction" : 0,
"leaf page key bytes discarded using prefix compression" : 0,
"internal page key bytes discarded using suffix compression" : 10
},
"session" : {
"object compaction" : 0,
"open cursor count" : 6
},
"transaction" : {
"update conflicts" : 0
}
},
"nindexes" : 2,
"totalIndexSize" : 114688,
"indexSizes" : {
"_id_" : 53248,
"id_1" : 61440
},
"ok" : 1
},
"shard3" : {
"ns" : "testdb.teble1",
"count" : 18,
"size" : 972,
"avgObjSize" : 54,
"storageSize" : 16384,
"capped" : false,
"wiredTiger" : {
"metadata" : {
"formatVersion" : 1
},
"creationString" : "allocation_size=4KB,app_metadata=(formatVersion=1),block_allocation=best,block_compressor=snappy,cache_resident=0,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=0,extractor=,format=btree,huffman_key=,huffman_value=,immutable=0,internal_item_max=0,internal_key_max=0,internal_key_truncate=,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=),lsm=(auto_throttle=,bloom=,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=0,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=0,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
"type" : "file",
"uri" : "statistics:table:collection-9--450246059770980883",
"LSM" : {
"bloom filters in the LSM tree" : 0,
"bloom filter false positives" : 0,
"bloom filter hits" : 0,
"bloom filter misses" : 0,
"bloom filter pages evicted from cache" : 0,
"bloom filter pages read into cache" : 0,
"total size of bloom filters" : 0,
"sleep for LSM checkpoint throttle" : 0,
"chunks in the LSM tree" : 0,
"highest merge generation in the LSM tree" : 0,
"queries that could have benefited from a Bloom filter that did not exist" : 0,
"sleep for LSM merge throttle" : 0
},
"block-manager" : {
"file allocation unit size" : 4096,
"blocks allocated" : 3,
"checkpoint size" : 8192,
"allocations requiring file extension" : 3,
"blocks freed" : 0,
"file magic number" : 120897,
"file major version number" : 1,
"minor version number" : 0,
"file bytes available for reuse" : 0,
"file size in bytes" : 16384
},
"btree" : {
"btree checkpoint generation" : 236,
"column-store variable-size deleted values" : 0,
"column-store fixed-size leaf pages" : 0,
"column-store internal pages" : 0,
"column-store variable-size RLE encoded values" : 0,
"column-store variable-size leaf pages" : 0,
"pages rewritten by compaction" : 0,
"number of key/value pairs" : 0,
"fixed-record size" : 0,
"maximum tree depth" : 3,
"maximum internal page key size" : 368,
"maximum internal page size" : 4096,
"maximum leaf page key size" : 2867,
"maximum leaf page size" : 32768,
"maximum leaf page value size" : 67108864,
"overflow pages" : 0,
"row-store internal pages" : 0,
"row-store leaf pages" : 0
},
"cache" : {
"bytes read into cache" : 0,
"bytes written from cache" : 1117,
"checkpoint blocked page eviction" : 0,
"unmodified pages evicted" : 0,
"page split during eviction deepened the tree" : 0,
"modified pages evicted" : 0,
"data source pages selected for eviction unable to be evicted" : 0,
"hazard pointer blocked page eviction" : 0,
"internal pages evicted" : 0,
"internal pages split during eviction" : 0,
"leaf pages split during eviction" : 0,
"in-memory page splits" : 0,
"in-memory page passed criteria to be split" : 0,
"overflow values cached in memory" : 0,
"pages read into cache" : 0,
"pages read into cache requiring lookaside entries" : 0,
"overflow pages read into cache" : 0,
"pages written from cache" : 2,
"page written requiring lookaside records" : 0,
"pages written requiring in-memory restoration" : 0
},
"compression" : {
"raw compression call failed, no additional data available" : 0,
"raw compression call failed, additional data available" : 0,
"raw compression call succeeded" : 0,
"compressed pages read" : 0,
"compressed pages written" : 0,
"page written failed to compress" : 0,
"page written was too small to compress" : 2
},
"cursor" : {
"create calls" : 1,
"insert calls" : 18,
"bulk-loaded cursor-insert calls" : 0,
"cursor-insert key and value bytes inserted" : 990,
"next calls" : 1,
"prev calls" : 1,
"remove calls" : 0,
"cursor-remove key bytes removed" : 0,
"reset calls" : 20,
"restarted searches" : 0,
"search calls" : 0,
"search near calls" : 0,
"truncate calls" : 0,
"update calls" : 0,
"cursor-update value bytes updated" : 0
},
"reconciliation" : {
"dictionary matches" : 0,
"internal page multi-block writes" : 0,
"leaf page multi-block writes" : 0,
"maximum blocks required for a page" : 0,
"internal-page overflow keys" : 0,
"leaf-page overflow keys" : 0,
"overflow values written" : 0,
"pages deleted" : 0,
"fast-path pages deleted" : 0,
"page checksum matches" : 0,
"page reconciliation calls" : 2,
"page reconciliation calls for eviction" : 0,
"leaf page key bytes discarded using prefix compression" : 0,
"internal page key bytes discarded using suffix compression" : 0
},
"session" : {
"object compaction" : 0,
"open cursor count" : 1
},
"transaction" : {
"update conflicts" : 0
}
},
"nindexes" : 2,
"totalIndexSize" : 32768,
"indexSizes" : {
"_id_" : 16384,
"id_1" : 16384
},
"ok" : 1
}
},
"ok" : 1
}
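The counts above are heavily skewed: shard2 holds 4981 of the 5000 documents, shard1 just 1, and shard3 only 18. This is the classic effect of a monotonically increasing shard key with range partitioning: every new id falls into the chunk with the open-ended upper bound, so one shard absorbs nearly all writes until the balancer migrates chunks. A small sketch reproduces the effect; the chunk boundaries and placements below are invented to match the observed counts, not read from the cluster:

```python
# Sketch: monotonically increasing keys pile up on the last chunk.
# Chunk lower bounds and placements are hypothetical, chosen for illustration.
from collections import Counter
import bisect

bounds = [1, 2, 20]                      # chunk lower bounds after two early splits
shards = ["shard1", "shard3", "shard2"]  # hypothetical chunk-to-shard placement

counts = Counter()
for i in range(1, 5001):                 # insert ids 1..5000, like the test loop
    idx = bisect.bisect_right(bounds, i) - 1
    counts[shards[idx]] += 1

print(counts)  # Counter({'shard2': 4981, 'shard3': 18, 'shard1': 1})
```

A hashed shard key, or pre-splitting the chunks, would spread such inserts much more evenly.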

Calling the sharded cluster from a Java program (legacy 2.x driver API):

import java.util.ArrayList;
import java.util.List;

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;
import com.mongodb.ServerAddress;

public class TestMongoDBShards {
    public static void main(String[] args) {
        try {
            // List every mongos router so the driver can fail over between them
            List<ServerAddress> addresses = new ArrayList<ServerAddress>();
            addresses.add(new ServerAddress("192.168.1.26", 20000));
            addresses.add(new ServerAddress("192.168.1.27", 20000));
            addresses.add(new ServerAddress("192.168.1.109", 20000));

            MongoClient client = new MongoClient(addresses);
            DB db = client.getDB("testdb");
            DBCollection coll = db.getCollection("teble1");

            // Query by the shard key so mongos can route to a single shard
            BasicDBObject query = new BasicDBObject();
            query.append("id", 1);
            DBObject result = coll.findOne(query);
            System.out.println(result);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Distribution of the primary, secondary, and arbiter nodes across the hosts:

                 shard1       shard2       shard3
192.168.1.26     PRIMARY      PRIMARY      PRIMARY
192.168.1.27     ARBITER      SECONDARY    ARBITER
192.168.1.109    SECONDARY    ARBITER      SECONDARY

Sharding here is based on the shard key. In this deployment all of the primaries ended up on host 192.168.1.26, and read requests can be directed to the secondary nodes.
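Reads only go to secondaries if the client asks for it via a read preference; by default all reads hit the primary. A toy sketch of secondary-preferred member selection, much simplified from what real drivers do (they also consider latency windows, tags, and staleness):

```python
# Toy read-preference selection: prefer a secondary, fall back to the primary.
# Simplified illustration; arbiters hold no data and never serve reads.
def pick_member(members, mode="secondaryPreferred"):
    secondaries = [m for m, state in members.items() if state == "SECONDARY"]
    primaries = [m for m, state in members.items() if state == "PRIMARY"]
    if mode == "secondaryPreferred" and secondaries:
        return secondaries[0]
    return primaries[0]

# Hypothetical member states for shard1, matching the table above
shard1 = {
    "192.168.1.26:22001": "PRIMARY",
    "192.168.1.109:22001": "SECONDARY",
    "192.168.1.27:22001": "ARBITER",
}
print(pick_member(shard1))                  # 192.168.1.109:22001
print(pick_member(shard1, mode="primary"))  # 192.168.1.26:22001
```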

Reposted from: https://www.cnblogs.com/hmysql/p/8044425.html
