One of our logging databases had grown so large that on MongoDB 3.2 we could not initialize a new node even with oplogSize=50G.
After upgrading to 3.4 and moving the disks to efficient cloud disks (高效云盘), we ran the initial sync for a new node again.
The sync time dropped from more than 3 days to roughly 15 hours.
Most of the savings come from index building: while the collection data is being copied, the indexes are being built and the oplog is being pulled at the same time.
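
Incidentally, 3.4 also lets you watch initial-sync progress from the shell instead of only from the log. A minimal sketch, assuming the initialSync option that replSetGetStatus gained in 3.4:

// Run on the syncing node; hedged: assumes 3.4's initialSync option of replSetGetStatus
db.adminCommand({ replSetGetStatus: 1, initialSync: 1 }).initialSyncStatus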

The observations below are based only on the logs, so they may be incomplete; reading the source code would give a deeper understanding.

----------------------------------------------------------------------------------------------

#While the new node copies collection data, a separate thread (InitialSyncInserters-*) appears to buffer the data in its own memory area of up to 500 MB and uses it to build the indexes (see the sketch after this log excerpt).

2017-02-22T10:08:29.665+0800 I INDEX    [InitialSyncInserters-MongoApiLogs.WebAppInterfaceCall0]      building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2017-02-22T10:09:18.997+0800 I -        [repl writer worker 7]   MongoApiLogs.WebAppInterfaceCall collection clone progress: 1125277/54284989 2% (documents copied)
2017-02-22T11:05:48.188+0800 I -        [conn143] end connection 127.0.0.1:51202 (34 connections now open)
2017-02-22T11:05:48.235+0800 I -        [repl writer worker 15]   MongoApiLogs.WebAppInterfaceCall collection clone progress: 52120797/54284989 96% (documents copied)
2017-02-22T11:07:26.721+0800 I -        [repl writer worker 15]   MongoApiLogs.WebAppInterfaceCall collection clone progress: 53709792/54284989 98% (documents copied)

2017-02-22T11:08:26.000+0800 I -        [InitialSyncInserters-MongoApiLogs.WebAppInterfaceCall0]   Index: (2/3) BTree Bottom Up Progress: 14989700/54293678 27%
2017-02-22T11:08:36.000+0800 I -        [InitialSyncInserters-MongoApiLogs.WebAppInterfaceCall0]   Index: (2/3) BTree Bottom Up Progress: 27407200/54293678 50%
2017-02-22T11:08:46.000+0800 I -        [InitialSyncInserters-MongoApiLogs.WebAppInterfaceCall0]   Index: (2/3) BTree Bottom Up Progress: 43126600/54293678 79%
2017-02-22T11:08:52.753+0800 I INDEX    [InitialSyncInserters-MongoApiLogs.WebAppInterfaceCall0]      done building bottom layer, going to commit
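
The 500 MB ceiling in the log above appears to correspond to the maxIndexBuildMemoryUsageMegabytes server parameter introduced in 3.4, whose default is 500. A minimal sketch, assuming that parameter is present, for checking or raising it on the syncing node:

// Hedged: assumes the 3.4 server parameter maxIndexBuildMemoryUsageMegabytes (default 500 MB)
db.adminCommand({ getParameter: 1, maxIndexBuildMemoryUsageMegabytes: 1 })
db.adminCommand({ setParameter: 1, maxIndexBuildMemoryUsageMegabytes: 1000 })   // e.g. allow up to 1 GB per build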

#While the data is being copied, the oplog is also being fetched.
For example, the ts: { $gte: Timestamp 1487734680000|184 } in the query below
converts to today's date (I started the new node's sync at 10:00), as the shell output that follows confirms.
With this, we no longer hit the problem seen in 3.2 and earlier, where the sync fell out of the oplog window (by the time the data copy finished, the oldest entry left in the source's oplog was already newer than the point at which the sync had started). A sketch for checking the source's oplog window follows the log lines below.

Log:PRIMARY> new Date(1487734680000)
ISODate("2017-02-22T03:38:00Z")

2017-02-22T11:38:04.051+0800 I REPL     [replication-2] Scheduled new oplog query Fetcher source: 10.160.39.59:28002 database: local query: { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1487734680000|184 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000 } query metadata: { $ssm: { $secondaryOk: 1 } } active: 1 timeout: 10000ms inShutdown: 0 first: 1 firstCommandScheduler: RemoteCommandRetryScheduler request: RemoteCommand 215025 -- target:10.160.39.59:28002 db:local cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1487734680000|184 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000 } active: 1 callbackHandle.valid: 1 callbackHandle.cancelled: 0 attempt: 1 retryPolicy: RetryPolicyImpl maxAttempts: 1 maxTimeMillis: -1ms
2017-02-22T11:39:36.883+0800 I -        [repl writer worker 13]   MongoApiLogs.MHttpLogInterfaceCall collection clone progress: 33081015/76709766 43% (documents copied)
2017-02-22T11:40:15.099+0800 I REPL     [replication-3] Restarting oplog query due to error: ExceededTimeLimit: operation exceeded time limit. Last fetched optime (with hash): { ts: Timestamp 1487734810000|65, t: -1 }[-7242886806593697378]. Restarts remaining: 3
2017-02-22T11:40:15.100+0800 I REPL     [replication-3] Scheduled new oplog query Fetcher source: 10.160.39.59:28002 database: local query: { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1487734810000|65 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000 } query metadata: { $ssm: { $secondaryOk: 1 } } active: 1 timeout: 10000ms inShutdown: 0 first: 1 firstCommandScheduler: RemoteCommandRetryScheduler request: RemoteCommand 220910 -- target:10.160.39.59:28002 db:local cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1487734810000|65 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000 } active: 1 callbackHandle.valid: 1 callbackHandle.cancelled: 0 attempt: 1 retryPolicy: RetryPolicyImpl maxAttempts: 1 maxTimeMillis: -1ms
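
To see how large the oplog window on the sync source actually is (the window that 3.2-style syncs could fall out of), the standard shell helper on the source is enough; a minimal sketch:

// On the sync source (10.160.39.59:28002): prints the oplog size and the time span it covers
rs.printReplicationInfo()

// Roughly the same window, computed by hand from the first and last oplog entries
var first = db.getSiblingDB("local").oplog.rs.find().sort({ $natural: 1 }).limit(1).next().ts;
var last  = db.getSiblingDB("local").oplog.rs.find().sort({ $natural: -1 }).limit(1).next().ts;
print("oplog window (hours): " + (last.getTime() - first.getTime()) / 3600);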

2017-02-22T14:09:22.000+0800 I -        [InitialSyncInserters-MongoApiLogs.AppInterfaceCall0]   Index: (2/3) BTree Bottom Up Progress: 46086700/50045224 92%
2017-02-22T14:09:24.443+0800 I INDEX    [InitialSyncInserters-MongoApiLogs.AppInterfaceCall0]      done building bottom layer, going to commit
2017-02-22T14:09:27.942+0800 I REPL     [repl writer worker 14] CollectionCloner::start called, on ns:MongoApiLogs.CMSInterfaceCall
2017-02-22T14:09:27.998+0800 I INDEX    [InitialSyncInserters-MongoApiLogs.CMSInterfaceCall0] build index on: MongoApiLogs.CMSInterfaceCall properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "MongoApiLogs.CMSInterfaceCall" }
2017-02-22T14:09:27.998+0800 I INDEX    [InitialSyncInserters-MongoApiLogs.CMSInterfaceCall0]      building index using bulk method; build may temporarily use up to 500 megabytes of RAM

#The log below shows that for the collection MongoApiLogs.OpenApiRequestLog, the indexes are created when the collection clone starts, and the data is copied afterwards. (Previously the data was copied first, and only after it finished were the indexes built one by one.) A sketch of the corresponding index definitions follows this log excerpt.
2017-02-22T14:09:28.297+0800 I REPL     [repl writer worker 1] CollectionCloner::start called, on ns:MongoApiLogs.OpenApiRequestLog
2017-02-22T14:09:28.345+0800 I INDEX    [InitialSyncInserters-MongoApiLogs.OpenApiRequestLog0] build index on: MongoApiLogs.OpenApiRequestLog properties: { v: 1, key: { InterfaceType: 1.0 }, name: "InterfaceType_1", ns: "MongoApiLogs.OpenApiRequestLog" }
2017-02-22T14:09:28.345+0800 I INDEX    [InitialSyncInserters-MongoApiLogs.OpenApiRequestLog0]      building index using bulk method; build may temporarily use up to 250 megabytes of RAM
2017-02-22T14:09:28.356+0800 I INDEX    [InitialSyncInserters-MongoApiLogs.OpenApiRequestLog0] build index on: MongoApiLogs.OpenApiRequestLog properties: { v: 1, key: { CreateTime: -1.0 }, name: "CreateTime_-1", ns: "MongoApiLogs.OpenApiRequestLog", expireAfterSeconds: 2592000.0 }
2017-02-22T14:09:28.356+0800 I INDEX    [InitialSyncInserters-MongoApiLogs.OpenApiRequestLog0]      building index using bulk method; build may temporarily use up to 250 megabytes of RAM
2017-02-22T14:09:28.363+0800 I INDEX    [InitialSyncInserters-MongoApiLogs.OpenApiRequestLog0] build index on: MongoApiLogs.OpenApiRequestLog properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "MongoApiLogs.OpenApiRequestLog" }
2017-02-22T14:09:28.363+0800 I INDEX    [InitialSyncInserters-MongoApiLogs.OpenApiRequestLog0]      building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2017-02-22T14:10:06.424+0800 I REPL     [replication-4] Restarting oplog query due to error: ExceededTimeLimit: operation exceeded time limit. Last fetched optime (with hash): { ts: Timestamp 1487743800000|3, t: -1 }[684200904940466821]. Restarts remaining: 3
2017-02-22T14:10:06.424+0800 I REPL     [replication-4] Scheduled new oplog query Fetcher source: 10.160.39.59:28002 database: local query: { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1487743800000|3 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000 } query metadata: { $ssm: { $secondaryOk: 1 } } active: 1 timeout: 10000ms inShutdown: 0 first: 1 firstCommandScheduler: RemoteCommandRetryScheduler request: RemoteCommand 575215 -- target:10.160.39.59:28002 db:local cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1487743800000|3 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000 } active: 1 callbackHandle.valid: 1 callbackHandle.cancelled: 0 attempt: 1 retryPolicy: RetryPolicyImpl maxAttempts: 1 maxTimeMillis: -1ms
2017-02-22T14:10:23.859+0800 I NETWORK  [thread1] connection accepted from 127.0.0.1:55862 #423 (35 connections now open)
2017-02-22T14:10:23.859+0800 I NETWORK  [conn423] received client metadata from 127.0.0.1:55862 conn423: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.4.1" }, os: { type: "Linux", name: "CentOS release 6.8 (Final)", architecture: "x86_64", version: "Kernel 2.6.32-642.6.2.el6.x86_64" } }
2017-02-22T14:10:23.866+0800 I -        [conn423] end connection 127.0.0.1:55862 (35 connections now open)
2017-02-22T14:10:35.932+0800 I NETWORK  [thread1] connection accepted from 127.0.0.1:55868 #424 (35 connections now open)
2017-02-22T14:10:35.932+0800 I NETWORK  [conn424] received client metadata from 127.0.0.1:55868 conn424: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.4.1" }, os: { type: "Linux", name: "CentOS release 6.8 (Final)", architecture: "x86_64", version: "Kernel 2.6.32-642.6.2.el6.x86_64" } }
2017-02-22T14:10:35.940+0800 I -        [conn424] end connection 127.0.0.1:55868 (35 connections now open)
2017-02-22T14:10:42.011+0800 I NETWORK  [thread1] connection accepted from 127.0.0.1:55876 #425 (35 connections now open)
2017-02-22T14:10:42.011+0800 I NETWORK  [conn425] received client metadata from 127.0.0.1:55876 conn425: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.4.1" }, os: { type: "Linux", name: "CentOS release 6.8 (Final)", architecture: "x86_64", version: "Kernel 2.6.32-642.6.2.el6.x86_64" } }
2017-02-22T14:10:42.018+0800 I -        [conn425] end connection 127.0.0.1:55876 (35 connections now open)
2017-02-22T14:10:47.136+0800 I NETWORK  [thread1] connection accepted from 127.0.0.1:55884 #426 (35 connections now open)
2017-02-22T14:10:47.136+0800 I NETWORK  [conn426] received client metadata from 127.0.0.1:55884 conn426: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.4.1" }, os: { type: "Linux", name: "CentOS release 6.8 (Final)", architecture: "x86_64", version: "Kernel 2.6.32-642.6.2.el6.x86_64" } }
2017-02-22T14:10:47.144+0800 I -        [conn426] end connection 127.0.0.1:55884 (35 connections now open)
2017-02-22T14:10:50.205+0800 I NETWORK  [thread1] connection accepted from 127.0.0.1:55886 #427 (35 connections now open)
2017-02-22T14:10:50.205+0800 I NETWORK  [conn427] received client metadata from 127.0.0.1:55886 conn427: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.4.1" }, os: { type: "Linux", name: "CentOS release 6.8 (Final)", architecture: "x86_64", version: "Kernel 2.6.32-642.6.2.el6.x86_64" } }
2017-02-22T14:10:50.212+0800 I -        [conn427] end connection 127.0.0.1:55886 (35 connections now open)
2017-02-22T14:10:50.273+0800 I NETWORK  [thread1] connection accepted from 127.0.0.1:55888 #428 (35 connections now open)
2017-02-22T14:10:50.273+0800 I NETWORK  [conn428] received client metadata from 127.0.0.1:55888 conn428: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.4.1" }, os: { type: "Linux", name: "CentOS release 6.8 (Final)", architecture: "x86_64", version: "Kernel 2.6.32-642.6.2.el6.x86_64" } }
2017-02-22T14:10:50.280+0800 I -        [conn428] end connection 127.0.0.1:55888 (35 connections now open)
2017-02-22T14:11:51.354+0800 I -        [repl writer worker 15]   MongoApiLogs.OpenApiRequestLog collection clone progress: 1837886/5923483 31% (documents copied)
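
For reference, the index definitions seen in the OpenApiRequestLog lines above (including the TTL index with expireAfterSeconds: 2592000, i.e. 30 days) would have been created on the primary with something like the following; this is reconstructed from the log, not the original commands:

// Hedged reconstruction from the build-index log lines above; the _id index is created automatically
db.getSiblingDB("MongoApiLogs").OpenApiRequestLog.createIndex({ InterfaceType: 1 })
db.getSiblingDB("MongoApiLogs").OpenApiRequestLog.createIndex({ CreateTime: -1 }, { expireAfterSeconds: 2592000 })   // TTL: expire after 30 days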
