
Introduction

Tailable cursors, and in particular tailing MongoDB's oplog, are a popular feature with many use cases, such as real-time notifications of all changes to your database. A tailable cursor is conceptually the same as the Unix "tail -f" command: once you reach the end of the result set, the cursor is not closed; instead it keeps waiting and returns new results as they arrive.

Tailing the oplog is simple for a replica set, but for a sharded cluster things get a bit more complicated. In this post we explain how to tail MongoDB's oplog on a sharded cluster.

Why tail the oplog?

Tailable cursors can be used on capped collections and are often used for publish-subscribe style data flows. In particular, the oplog that MongoDB uses internally for replication is a capped collection, and secondaries use a tailable cursor to fetch the operations they need to replicate.

Third-party ETL tools and heterogeneous replication solutions can also read events from the MongoDB oplog. For example, Mongo Connector and the MongoDB ElasticSearch River plugin both work this way.

But with such a powerful interface, we can do much more than replication! Reactive programming has become a major trend, especially in HTML5 and JavaScript applications. Some modern JavaScript frameworks will automatically and immediately update the user interface the moment you change a value in the data model.

Tailing a MongoDB collection, or a whole database, via the oplog is a perfect match for this programming model: any change anywhere in the database triggers a real-time notification to the application server.

In fact, one great JavaScript framework already does exactly this: Meteor. There is a cool demo video on their site that is worth watching. This makes Meteor a fully reactive full-stack platform: changes propagate automatically all the way from the database to the UI.

Reading the oplog with a tailable cursor

Here is an example of using a tailable cursor from the mongo shell:

shard01:PRIMARY> c = db.oplog.rs.find( { fromMigrate : { $exists : false } } ).addOption( DBQuery.Option.tailable ).addOption(DBQuery.Option.awaitData)
{ "ts" : Timestamp(1422998530, 1), "h" : NumberLong(0), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "initiating set" } }
{ "ts" : Timestamp(1422998574, 1), "h" : NumberLong("-6781014703318499311"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 1, "data" : "hello" } }
{ "ts" : Timestamp(1422998579, 1), "h" : NumberLong("-217362260421471244"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 3, "data" : "hello" } }
{ "ts" : Timestamp(1422998584, 1), "h" : NumberLong("7215322058367374253"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 5, "data" : "hello" } }
shard01:PRIMARY> c.hasNext()
true
shard01:PRIMARY> c.next()
{ "ts" : Timestamp(1423049506, 1), "h" : NumberLong("5775895302295493166"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 12, "data" : "hello" } }
shard01:PRIMARY> c.hasNext()
false

As you can see, when used from the shell the cursor does not wait forever; it times out after a few seconds. After that, you can use the hasNext() and next() methods to check whether new data has arrived. And indeed it had!
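The same kind of tailable cursor can be opened from a driver. Below is a minimal sketch using pymongo; the host/port and the helper names (`oplog_query`, `tail_oplog`) are assumptions for illustration, not a fixed API. The query dict mirrors the shell example above.

```python
# Sketch: tailing the oplog from a Python driver. Connection details
# are assumptions; requires pymongo and a running replica-set member.
from typing import Any, Dict, Iterator


def oplog_query() -> Dict[str, Any]:
    # Skip entries created by chunk migrations, as in the shell example.
    return {"fromMigrate": {"$exists": False}}


def tail_oplog(host: str = "localhost", port: int = 27018) -> Iterator[dict]:
    # The import is kept local so the rest of the module works without
    # pymongo installed. TAILABLE_AWAIT makes the server block waiting
    # for new data instead of closing the cursor at the end of the
    # result set -- the driver equivalent of the two shell addOption()
    # calls above.
    from pymongo import CursorType, MongoClient

    oplog = MongoClient(host, port).local["oplog.rs"]
    yield from oplog.find(oplog_query(), cursor_type=CursorType.TAILABLE_AWAIT)
```

Iterating over `tail_oplog()` then yields each new oplog entry as it is written, for as long as the cursor stays open.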

Of course, you can add filters to the find() call to receive only the events you are interested in. For example, this is what the tailing cursor issued by Meteor looks like:

meteor:PRIMARY> db.currentOp()
{"inprog" : [{"opid" : 345,"active" : true,"secs_running" : 4,"op" : "getmore","ns" : "local.oplog.rs","query" : {"ns" : {"$regex" : "^meteor\\."},"$or" : [{"op" : {"$in" : ["i","u","d"]}},{"op" : "c","o.drop" : {"$exists" : true}}],"ts" : {"$gt" : Timestamp(1422200128, 7)}},
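The filter in that currentOp output can be read as three ANDed conditions: the namespace is under the "meteor" database, the entry is a data-changing op or a collection drop, and it is newer than the last seen timestamp. A rough pure-Python equivalent, applied to plain oplog documents (dicts), might look like this; the function name and the use of a simple comparable value for `ts` are illustrative stand-ins:

```python
# Sketch: the Meteor-style oplog filter as a Python predicate over
# oplog documents represented as dicts. `last_ts` is any comparable
# timestamp stand-in (an int here; a real driver uses BSON Timestamp).
import re


def meteor_style_match(entry, last_ts):
    # Namespace must live under the "meteor" database
    # (commands arrive on "meteor.$cmd", which also matches).
    if not re.match(r"^meteor\.", entry.get("ns", "")):
        return False
    # Either a data-changing op (insert/update/delete)...
    is_crud = entry.get("op") in ("i", "u", "d")
    # ...or a command that drops a collection...
    is_drop = entry.get("op") == "c" and "drop" in entry.get("o", {})
    # ...and in both cases newer than the last processed timestamp.
    return (is_crud or is_drop) and entry["ts"] > last_ts
```

For example, an insert into `meteor.users` with a fresh timestamp passes the filter, while the same insert into another database does not.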

Tailing the oplog on a sharded cluster

But what happens when you use sharding? First of all, you will have to tail the oplog on each shard separately.

That is still feasible, but there are more complications. On a sharded cluster, the MongoDB balancer will occasionally migrate data from one shard to another. This means that on one shard you will see a bunch of deletes, and at the same time on the other shard a corresponding bunch of inserts. But these are purely internal to MongoDB. If you are tailing the oplog to capture changes to your database, most likely you do not want to see them, and might even be badly confused by these internal events. For example, a Meteor app tailing the oplogs of a sharded cluster might mysteriously delete some data all of a sudden!

Let me illustrate with an example. First, we set up a sharded cluster using the mlaunch utility:

$ mlaunch --sharded 2 --replicaset
launching: mongod on port 27018
launching: mongod on port 27019
launching: mongod on port 27020
launching: mongod on port 27021
launching: mongod on port 27022
launching: mongod on port 27023
launching: config server on port 27024
replica set 'shard01' initialized.
replica set 'shard02' initialized.
launching: mongos on port 27017
adding shards. can take up to 30 seconds...

Then, connect to mongos, shard a collection and insert some data into it:

$ mongo
MongoDB shell version: 2.6.7
connecting to: test
mongos> sh.enableSharding( "test" )
{ "ok" : 1 }
mongos> sh.shardCollection( "test.mycollection", { _id : 1 }, true )
{ "collectionsharded" : "test.mycollection", "ok" : 1 }
mongos> db.mycollection.insert( { _id : 1, data : "hello" } )
WriteResult({ "nInserted" : 1 })
mongos> db.mycollection.insert( { _id : 3, data : "hello" } )
WriteResult({ "nInserted" : 1 })
mongos> db.mycollection.insert( { _id : 5, data : "hello" } )
WriteResult({ "nInserted" : 1 })
mongos> db.mycollection.insert( { _id : 7, data : "hello" } )
WriteResult({ "nInserted" : 1 })
mongos> db.mycollection.insert( { _id : 9, data : "hello" } )
WriteResult({ "nInserted" : 1 })
mongos> db.mycollection.insert( { _id : 11, data : "hello" } )
WriteResult({ "nInserted" : 1 })

Now, if I connect to the mongod on shard01, we can see that all the data is there, and we can also see the inserts in the oplog:

$ mongo --port 27018
MongoDB shell version: 2.6.7
connecting to: 127.0.0.1:27018/test
shard01:PRIMARY> show collections
mycollection
system.indexes
shard01:PRIMARY> db.mycollection.find()
{ "_id" : 1, "data" : "hello" }
{ "_id" : 3, "data" : "hello" }
{ "_id" : 5, "data" : "hello" }
{ "_id" : 7, "data" : "hello" }
{ "_id" : 9, "data" : "hello" }
{ "_id" : 11, "data" : "hello" }
shard01:PRIMARY> use local
switched to db local
shard01:PRIMARY> show collections
me
oplog.rs
slaves
startup_log
system.indexes
system.replset
shard01:PRIMARY> db.oplog.rs.find().pretty()
{ "ts" : Timestamp(1422998530, 1), "h" : NumberLong(0), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "initiating set" } }
{ "ts" : Timestamp(1422998574, 1), "h" : NumberLong("-6781014703318499311"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 1, "data" : "hello" } }
{ "ts" : Timestamp(1422998579, 1), "h" : NumberLong("-217362260421471244"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 3, "data" : "hello" } }
{ "ts" : Timestamp(1422998584, 1), "h" : NumberLong("7215322058367374253"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 5, "data" : "hello" } }
{ "ts" : Timestamp(1422998588, 1), "h" : NumberLong("-5372877897993278968"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 7, "data" : "hello" } }
{ "ts" : Timestamp(1422998591, 1), "h" : NumberLong("-243188455606213719"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 9, "data" : "hello" } }
{ "ts" : Timestamp(1422998597, 1), "h" : NumberLong("5040618552262309692"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 11, "data" : "hello" } }

On shard02 there is no data yet, because with this little data the balancer will not run. Next, we split the data into 2 chunks, which will trigger a balancer round:

mongos> sh.status()
--- Sharding Status ---
  sharding version: {
	"_id" : 1,
	"version" : 4,
	"minCompatibleVersion" : 4,
	"currentVersion" : 5,
	"clusterId" : ObjectId("54d13c0555c0347d23e33cdd")
}
  shards:
	{  "_id" : "shard01",  "host" : "shard01/hingo-sputnik:27018,hingo-sputnik:27019,hingo-sputnik:27020" }
	{  "_id" : "shard02",  "host" : "shard02/hingo-sputnik:27021,hingo-sputnik:27022,hingo-sputnik:27023" }
  databases:
	{  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
	{  "_id" : "test",  "partitioned" : true,  "primary" : "shard01" }
		test.mycollection
			shard key: { "_id" : 1 }
			chunks:
				shard01	1
			{ "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard01 Timestamp(1, 0)

mongos> sh.splitAt( "test.mycollection", { _id : 6 } )
{ "ok" : 1 }

mongos> sh.status()
--- Sharding Status ---
  sharding version: {
	"_id" : 1,
	"version" : 4,
	"minCompatibleVersion" : 4,
	"currentVersion" : 5,
	"clusterId" : ObjectId("54d13c0555c0347d23e33cdd")
}
  shards:
	{  "_id" : "shard01",  "host" : "shard01/hingo-sputnik:27018,hingo-sputnik:27019,hingo-sputnik:27020" }
	{  "_id" : "shard02",  "host" : "shard02/hingo-sputnik:27021,hingo-sputnik:27022,hingo-sputnik:27023" }
  databases:
	{  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
	{  "_id" : "test",  "partitioned" : true,  "primary" : "shard01" }
		test.mycollection
			shard key: { "_id" : 1 }
			chunks:
				shard02	1
				shard01	1
			{ "_id" : { "$minKey" : 1 } } -->> { "_id" : 6 } on : shard02 Timestamp(2, 0)
			{ "_id" : 6 } -->> { "_id" : { "$maxKey" : 1 } } on : shard01 Timestamp(2, 1)

mongos>

As you can see, the collection was split into 2 chunks, and the balancer did its job: it migrated the data so that it is evenly distributed across the shards. If we go back to shard01, we can see how half of the records disappeared ({"op" : "d"} indicates a delete operation):

shard01:PRIMARY> use test
switched to db test
shard01:PRIMARY> db.mycollection.find()
{ "_id" : 7, "data" : "hello" }
{ "_id" : 9, "data" : "hello" }
{ "_id" : 11, "data" : "hello" }
shard01:PRIMARY> use local
switched to db local
shard01:PRIMARY> db.oplog.rs.find().pretty()
{ "ts" : Timestamp(1422998530, 1), "h" : NumberLong(0), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "initiating set" } }
{ "ts" : Timestamp(1422998574, 1), "h" : NumberLong("-6781014703318499311"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 1, "data" : "hello" } }
{ "ts" : Timestamp(1422998579, 1), "h" : NumberLong("-217362260421471244"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 3, "data" : "hello" } }
{ "ts" : Timestamp(1422998584, 1), "h" : NumberLong("7215322058367374253"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 5, "data" : "hello" } }
{ "ts" : Timestamp(1422998588, 1), "h" : NumberLong("-5372877897993278968"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 7, "data" : "hello" } }
{ "ts" : Timestamp(1422998591, 1), "h" : NumberLong("-243188455606213719"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 9, "data" : "hello" } }
{ "ts" : Timestamp(1422998597, 1), "h" : NumberLong("5040618552262309692"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 11, "data" : "hello" } }
{ "ts" : Timestamp(1422998892, 1), "h" : NumberLong("3056127588031004421"), "v" : 2, "op" : "d", "ns" : "test.mycollection", "fromMigrate" : true, "o" : { "_id" : 1 } }
{ "ts" : Timestamp(1422998892, 2), "h" : NumberLong("-7633416138502997855"), "v" : 2, "op" : "d", "ns" : "test.mycollection", "fromMigrate" : true, "o" : { "_id" : 3 } }
{ "ts" : Timestamp(1422998892, 3), "h" : NumberLong("1499304029305069766"), "v" : 2, "op" : "d", "ns" : "test.mycollection", "fromMigrate" : true, "o" : { "_id" : 5 } }
shard01:PRIMARY>
shard01:PRIMARY> 

And on shard02 we can see the same records appearing:

$ mongo --port 27021
MongoDB shell version: 2.6.7
connecting to: 127.0.0.1:27021/test
shard02:PRIMARY> db.mycollection.find()
{ "_id" : 1, "data" : "hello" }
{ "_id" : 3, "data" : "hello" }
{ "_id" : 5, "data" : "hello" }
shard02:PRIMARY> use local
switched to db local
shard02:PRIMARY> db.oplog.rs.find().pretty()
{ "ts" : Timestamp(1422998531, 1), "h" : NumberLong(0), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "initiating set" } }
{ "ts" : Timestamp(1422998890, 1), "h" : NumberLong("-6780991630754185199"), "v" : 2, "op" : "i", "ns" : "test.system.indexes", "fromMigrate" : true, "o" : { "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "test.mycollection" } }
{ "ts" : Timestamp(1422998890, 2), "h" : NumberLong("-165956952201849851"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "fromMigrate" : true, "o" : { "_id" : 1, "data" : "hello" } }
{ "ts" : Timestamp(1422998890, 3), "h" : NumberLong("-7432242710082771022"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "fromMigrate" : true, "o" : { "_id" : 3, "data" : "hello" } }
{ "ts" : Timestamp(1422998890, 4), "h" : NumberLong("6790671206092100026"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "fromMigrate" : true, "o" : { "_id" : 5, "data" : "hello" } }

If we then insert some more data:

mongos> db.mycollection.insert( { _id : 2, data : "hello" } )
WriteResult({ "nInserted" : 1 })
mongos> db.mycollection.insert( { _id : 4, data : "hello" } )
WriteResult({ "nInserted" : 1 })
mongos> db.mycollection.insert( { _id : 6, data : "hello" } )
WriteResult({ "nInserted" : 1 })
mongos> db.mycollection.insert( { _id : 8, data : "hello" } )
WriteResult({ "nInserted" : 1 })
mongos> db.mycollection.insert( { _id : 10, data : "hello" } )
WriteResult({ "nInserted" : 1 })
mongos> db.mycollection.find()
{ "_id" : 1, "data" : "hello" }
{ "_id" : 7, "data" : "hello" }
{ "_id" : 3, "data" : "hello" }
{ "_id" : 9, "data" : "hello" }
{ "_id" : 5, "data" : "hello" }
{ "_id" : 11, "data" : "hello" }
{ "_id" : 2, "data" : "hello" }
{ "_id" : 6, "data" : "hello" }
{ "_id" : 4, "data" : "hello" }
{ "_id" : 8, "data" : "hello" }
{ "_id" : 10, "data" : "hello" }

...then, as expected, these inserts show up on shard01:

shard01:PRIMARY> use local
switched to db local
shard01:PRIMARY> db.oplog.rs.find().pretty()
... beginning is the same as above, omitted for brevity ...
{ "ts" : Timestamp(1422998892, 3), "h" : NumberLong("1499304029305069766"), "v" : 2, "op" : "d", "ns" : "test.mycollection", "fromMigrate" : true, "o" : { "_id" : 5 } }
{ "ts" : Timestamp(1422999422, 1), "h" : NumberLong("-6691556866108433789"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 6, "data" : "hello" } }
{ "ts" : Timestamp(1422999426, 1), "h" : NumberLong("-3908881761176526422"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 8, "data" : "hello" } }
{ "ts" : Timestamp(1422999429, 1), "h" : NumberLong("-4997431625184830993"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 10, "data" : "hello" } }
shard01:PRIMARY> 

And on shard02:

shard02:PRIMARY> use local
switched to db local
shard02:PRIMARY> db.oplog.rs.find().pretty()
{ "ts" : Timestamp(1422998531, 1), "h" : NumberLong(0), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "initiating set" } }
{ "ts" : Timestamp(1422998890, 1), "h" : NumberLong("-6780991630754185199"), "v" : 2, "op" : "i", "ns" : "test.system.indexes", "fromMigrate" : true, "o" : { "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "test.mycollection" } }
{ "ts" : Timestamp(1422998890, 2), "h" : NumberLong("-165956952201849851"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "fromMigrate" : true, "o" : { "_id" : 1, "data" : "hello" } }
{ "ts" : Timestamp(1422998890, 3), "h" : NumberLong("-7432242710082771022"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "fromMigrate" : true, "o" : { "_id" : 3, "data" : "hello" } }
{ "ts" : Timestamp(1422998890, 4), "h" : NumberLong("6790671206092100026"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "fromMigrate" : true, "o" : { "_id" : 5, "data" : "hello" } }
{ "ts" : Timestamp(1422999414, 1), "h" : NumberLong("8160426227798471967"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 2, "data" : "hello" } }
{ "ts" : Timestamp(1422999419, 1), "h" : NumberLong("-3554656302824563522"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 4, "data" : "hello" } }
shard02:PRIMARY>

Separating internal operations from application operations

So, if an application like Meteor were reading the above, it would certainly be challenged to figure out the end state of the data model. If we simply combined the oplog events from all shards, we would see the following sequence of inserts and deletes:

OPERATION _ID
insert 1
insert 3
insert 5
insert 7
insert 9
insert 11
insert 1
insert 3
insert 5
delete 1
delete 3
delete 5
insert 2
insert 4
insert 6
insert 8
insert 10

So, given the above sequence, do the documents with _id 1, 3 and 5 still exist or not?
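The ambiguity can be made concrete with a small pure-Python replay of the merged sequence above. The `replay` helper and the bare integer ids are illustrative stand-ins, not part of any driver:

```python
# Sketch: naively applying a merged stream of (op, _id) pairs, the way
# a consumer that cannot tell migrations from real writes would.
def replay(ops):
    """Apply inserts and deletes in order; return the surviving ids."""
    state = set()
    for op, _id in ops:
        if op == "insert":
            state.add(_id)
        else:  # "delete"
            state.discard(_id)
    return state


# The combined sequence from the table above: application inserts on
# shard01, then the migration's inserts on shard02 and deletes on
# shard01, then more application inserts.
merged = (
    [("insert", i) for i in (1, 3, 5, 7, 9, 11)]  # app writes (shard01)
    + [("insert", i) for i in (1, 3, 5)]          # migration (shard02)
    + [("delete", i) for i in (1, 3, 5)]          # migration (shard01)
    + [("insert", i) for i in (2, 4, 6, 8, 10)]   # app writes (both shards)
)
final = replay(merged)  # 1, 3 and 5 are wrongly missing from the result
```

The replay ends without _id 1, 3 and 5, even though those documents still exist in the cluster; the migration's deletes clobbered its own re-inserts as soon as the two shards' events were interleaved this way.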

Fortunately, it is possible to distinguish cluster-internal operations from application operations. You may have noticed that the operations caused by migrations carry a fromMigrate flag:

{ "ts" : Timestamp(1422998890, 2), "h" : NumberLong("-165956952201849851"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "fromMigrate" : true, "o" : { "_id" : 1, "data" : "hello" } }

Since we are only interested in operations that actually change the database state when considering the cluster as a whole, we can filter out everything where this flag is set. Note that the correct way to do this is with $exists, not fromMigrate : false, because the field is simply absent from normal operations:

shard01:PRIMARY> db.oplog.rs.find( { fromMigrate : false } ).pretty()
shard01:PRIMARY> db.oplog.rs.find( { fromMigrate : { $exists : false } } ).pretty()
{ "ts" : Timestamp(1422998530, 1), "h" : NumberLong(0), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "initiating set" } }
{ "ts" : Timestamp(1422998574, 1), "h" : NumberLong("-6781014703318499311"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 1, "data" : "hello" } }
{ "ts" : Timestamp(1422998579, 1), "h" : NumberLong("-217362260421471244"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 3, "data" : "hello" } }
{ "ts" : Timestamp(1422998584, 1), "h" : NumberLong("7215322058367374253"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 5, "data" : "hello" } }
{ "ts" : Timestamp(1422998588, 1), "h" : NumberLong("-5372877897993278968"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 7, "data" : "hello" } }
{ "ts" : Timestamp(1422998591, 1), "h" : NumberLong("-243188455606213719"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 9, "data" : "hello" } }
{ "ts" : Timestamp(1422998597, 1), "h" : NumberLong("5040618552262309692"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 11, "data" : "hello" } }
{ "ts" : Timestamp(1422999422, 1), "h" : NumberLong("-6691556866108433789"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 6, "data" : "hello" } }
{ "ts" : Timestamp(1422999426, 1), "h" : NumberLong("-3908881761176526422"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 8, "data" : "hello" } }
{ "ts" : Timestamp(1422999429, 1), "h" : NumberLong("-4997431625184830993"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 10, "data" : "hello" } }
shard01:PRIMARY> 

And on shard02:

shard02:PRIMARY> db.oplog.rs.find( { fromMigrate : { $exists : false } } ).pretty()
{ "ts" : Timestamp(1422998531, 1), "h" : NumberLong(0), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "initiating set" } }
{ "ts" : Timestamp(1422999414, 1), "h" : NumberLong("8160426227798471967"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 2, "data" : "hello" } }
{ "ts" : Timestamp(1422999419, 1), "h" : NumberLong("-3554656302824563522"), "v" : 2, "op" : "i", "ns" : "test.mycollection", "o" : { "_id" : 4, "data" : "hello" } }
shard02:PRIMARY> 

Exactly what we wanted!
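Once the migration noise is filtered out on each shard, the remaining per-shard streams can be combined into a single change feed ordered by the ts field. Because each shard's oplog is already in ts order, a standard k-way merge suffices. A sketch, where `merge_oplogs` is a hypothetical helper and timestamps are modeled as (seconds, ordinal) tuples standing in for BSON Timestamp:

```python
# Sketch: merging per-shard, already-ts-ordered oplog streams into one
# globally ts-ordered stream. heapq.merge only requires each input to
# be sorted, which the oplog guarantees per shard.
import heapq


def merge_oplogs(*shard_streams):
    """Merge ts-ordered oplog entry iterators into one ts-ordered stream."""
    return heapq.merge(*shard_streams, key=lambda entry: entry["ts"])
```

Since heapq.merge is lazy, this also works on live tailing iterators; note, however, that with real cursors an entry can only be emitted once every shard has produced something newer, so a quiet shard delays the merged feed.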

If you are interested in learning more about operational best practices with MongoDB, download our guide:

Download the Operations Best Practices guide

Translated from: http://www.mongodb.com/blog/post/tailing-mongodb-oplog-sharded-clusters.

Reposted from: https://my.oschina.net/yagami1983/blog/1823900
