Notes from troubleshooting high MongoDB CPU usage
One of our company servers runs Countly, an open-source Node.js project for collecting mobile app usage statistics, with MongoDB as the backend datastore. We kept noticing MongoDB pegging the CPU, peaking at 210%:
top - 13:42:39 up 308 days, 23:01,  2 users,  load average: 2.84, 2.96, 2.93
Tasks: 209 total,   1 running, 208 sleeping,   0 stopped,   0 zombie
%Cpu(s): 59.4 us,  2.7 sy,  0.0 ni, 36.9 id,  0.2 wa,  0.7 hi,  0.0 si,  0.0 st
KiB Mem:   8173524 total,  6095460 used,  2078064 free,   133752 buffers
KiB Swap:  1048572 total,   750536 used,   298036 free.  3480688 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
16870 root      20   0 48.452g 2.643g 2.490g S 210.5 33.9  38607:41 mongod
28508 root      20   0  891316 110272   5976 S   5.0  1.3  10:07.08 nodejs
28589 root      20   0  893736 114044   5972 S   5.0  1.4  13:19.13 nodejs
28566 root      20   0  959004 113128   5976 S   4.0  1.4  11:16.49 nodejs
28670 root      20   0 1189932 383820   5980 S   4.0  4.7  26:00.97 nodejs
16367 www-data  20   0   86960   2952    996 S   1.0  0.0 237:47.74 nginx
16368 www-data  20   0   87028   3212    996 S   1.0  0.0 235:54.43 nginx
16370 www-data  20   0   86964   2952    996 S   1.0  0.0 245:46.74 nginx
16371 www-data  20   0   86960   2948    996 S   1.0  0.0 246:40.89 nginx
 1154 root      20   0   23792   1764   1144 R   0.7  0.0   0:00.23 top
    7 root      20   0       0      0      0 S   0.3  0.0   1053:48 rcu_sched
    8 root      20   0       0      0      0 S   0.3  0.0 229:02.41 rcuos/0
 6559 root      20   0 3957096 217820   5668 S   0.3  2.7 230:37.95 jsvc
    1 root      20   0   37348   3704    976 S   0.0  0.0   0:19.37 init
    2 root      20   0       0      0      0 S   0.0  0.0   0:02.51 kthreadd
    3 root      20   0       0      0      0 S   0.0  0.0  68:42.78 ksoftirqd/0
    5 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/0:0H
    9 root      20   0       0      0      0 S   0.0  0.0 225:56.62 rcuos/1
   10 root      20   0       0      0      0 S   0.0  0.0 217:26.86 rcuos/2
   11 root      20   0       0      0      0 S   0.0  0.0 228:57.90 rcuos/3
   12 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/4
   13 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/5
   14 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/6
   15 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/7
   16 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/8
   17 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/9
   18 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/10
   19 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/11
   20 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/12
   21 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/13
   22 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/14
   23 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/15
   24 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/16
   25 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/17
   26 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/18
   27 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/19
   28 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/20
   29 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/21
   30 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/22
   31 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/23
   32 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/24
   33 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/25
   34 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/26
   35 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/27
   36 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/28
   37 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/29
   38 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/30
   39 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuos/31
   40 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcu_bh
   41 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuob/0
   42 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuob/1
   43 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuob/2
   44 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuob/3
   45 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuob/4
   46 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuob/5
   47 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuob/6
   48 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuob/7
   49 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuob/8
   50 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuob/9
   51 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuob/10
   52 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcuob/11
A closer look shows heavy interrupt and context-switch activity. Per-process context switches can be checked with pidstat -w 1:
root@localhost:~# pidstat -w 1
Linux 3.13.0-24-generic (localhost)     07/14/2017      _x86_64_        (4 CPU)

01:44:41 PM       UID       PID   cswch/s nvcswch/s  Command
01:44:42 PM         0         7    260.40      0.00  rcu_sched
01:44:42 PM         0         8     78.22      0.00  rcuos/0
01:44:42 PM         0         9     49.50      0.00  rcuos/1
01:44:42 PM         0        10     88.12      0.00  rcuos/2
01:44:42 PM         0        11     96.04      0.00  rcuos/3
01:44:42 PM         0        74      0.99      0.00  watchdog/0
01:44:42 PM         0        75      0.99      0.00  watchdog/1
01:44:42 PM         0        77      7.92      0.00  ksoftirqd/1
01:44:42 PM         0        80      0.99      0.00  watchdog/2
01:44:42 PM         0        82     39.60      0.00  ksoftirqd/2
01:44:42 PM         0        85      0.99      0.00  watchdog/3
01:44:42 PM         0        87      9.90      0.00  ksoftirqd/3
01:44:42 PM         0      1103      7.92      0.00  kworker/u64:2
01:44:42 PM         0      1213      0.99      4.95  pidstat
01:44:42 PM         0      1265      0.99      0.00  supervisord
01:44:42 PM         0      1279      9.90      0.00  vmtoolsd
01:44:42 PM         0      1373      0.99      0.00  fail2ban-server
01:44:42 PM         0      1954     18.81      0.00  xfsaild/sdb1
01:44:42 PM         0      2132      9.90      0.00  kworker/1:1H
01:44:42 PM         0      6800      0.99      0.00  kworker/0:1
01:44:42 PM         0      6807      0.99      0.00  kworker/2:2
01:44:42 PM      1000     14269      0.99      0.00  zabbix_agentd
01:44:42 PM      1000     14273      0.99      0.00  zabbix_agentd
01:44:42 PM        33     16367    154.46      0.00  nginx
01:44:42 PM        33     16368    178.22      0.00  nginx
01:44:42 PM        33     16370    166.34      0.00  nginx
01:44:42 PM        33     16371    168.32      0.00  nginx
01:44:42 PM         0     16870    100.00      0.00  mongod
01:44:42 PM         0     28498      0.99      0.00  supervisord
01:44:42 PM         0     28508    126.73     39.60  nodejs
01:44:42 PM         0     28566    141.58     46.53  nodejs
01:44:42 PM         0     28589    137.62    110.89  nodejs
01:44:42 PM         0     28670    123.76     96.04  nodejs
01:44:42 PM         0     28716      6.93      0.00  kworker/3:1
01:44:42 PM         0     28732      0.99      0.00  kworker/1:0
vmstat confirms the high interrupt (in) and context-switch (cs) rates:
root@localhost:~# vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd    free    buff   cache   si   so    bi    bo    in    cs us sy id wa st
 3  1 750536 2089980  133864 3482948    0    0     8   738     0     0 14  4 81  1  0
 3  0 750536 2089932  133864 3482948    0    0     0  2062  6793  9336 67  5 28  0  0
 3  0 750536 2086820  133864 3482952    0    0     0  1767  6125  8760 72  5 23  0  0
 4  0 750536 2084232  133864 3482952    0    0     0  1611  6227  8686 60  4 37  0  0
 3  0 750536 2088264  133864 3482952    0    0     0   140  5974  8556 65  4 31  0  0
 4  0 750536 2083864  133864 3482956    0    0     0  2635  9100 10589 78  5 17  0  0
 2  0 750536 2085624  133864 3482916    0    0     0   168  8152 11602 63  5 33  0  0
The interrupts and context switches are high because the Node.js workers keep being woken from sleep: Countly is continuously receiving data from mobile clients, so the processes are woken by I/O over and over, and every wakeup costs a context switch. The symptom is frequent context switching; the underlying cause is simply that Countly is under heavy load.
That doesn't directly implicate MongoDB, though. It only shows that Countly is busy and writing to MongoDB constantly, which in turn loads MongoDB. The I/O itself is not really high and CPU iowait is low, so disk writes are not the bottleneck; the problem is probably inside MongoDB itself.
Check mongostat:
root@localhost:~# mongostat
connected to: 127.0.0.1
insert query update delete getmore command flushes mapped  vsize    res faults     locked db idx miss %  qr|qw  ar|aw  netIn netOut conn     time
    13   171    424      2       0    15|0       0    24g  48.5g  2.64g      0 countly:11.9%        0    0|0    3|0   102k    67k   12 13:51:04
     9   225   1358    105       0    13|0       0    24g  48.5g  2.65g      0  countly:6.5%        0    0|0    0|1   255k   177k   12 13:51:05
   218   967   3056     11       0    82|0       0    24g  48.5g  2.63g      0 countly:20.7%        0    0|0    2|0   807k   965k   12 13:51:06
    19   158    315      2       0    12|0       0    24g  48.5g  2.65g      0  countly:1.7%        0    0|0    1|0    88k    92k   12 13:51:07
    17   132    270      2       0     8|0       0    24g  48.5g  2.58g      0  countly:2.1%        0    0|0    2|0    77k   115k   12 13:51:08
    14   151    433      2       0    11|0       0    24g  48.5g  2.64g      0  countly:2.2%        0    0|0    3|0   106k    89k   12 13:51:09
    22   181    335      3       0    17|0       0    24g  48.5g  2.66g      0  countly:7.9%        0    0|0    3|0    96k   105k   12 13:51:10
    36   230    765      3       0     8|0       0    24g  48.5g  2.66g      0  countly:3.6%        0    1|0    0|1   184k   224k   12 13:51:11
   122   567   1580      1       0    56|0       0    24g  48.5g  2.67g      0 countly:16.4%        0    0|0    2|0   440k   559k   12 13:51:12
     8   124    286      4       0     8|0       0    24g  48.5g  2.65g      0  countly:2.0%        0    0|0    2|0    67k    61k   12 13:51:13
insert query update delete getmore command flushes mapped  vsize    res faults     locked db idx miss %  qr|qw  ar|aw  netIn netOut conn     time
    21   193    450      1       0    18|0       0    24g  48.5g  2.66g      0  countly:5.0%        0    0|0    3|0   119k   126k   12 13:51:14
   156   774    950      1       0    67|0       0    24g  48.5g  2.65g      0 countly:18.5%        0    0|0    3|0   414k   623k   12 13:51:15
     7    69    194      2       0    12|0       0    24g  48.5g  2.65g      0  countly:1.4%        0    0|0    3|0    49k    40k   12 13:51:16
    33   280    525      2       0    12|0       0    24g  48.5g  2.64g      0  countly:4.8%        0    0|0    3|0   145k   187k   12 13:51:17
    18   201    563      5       0    17|0       0    24g  48.5g  2.66g      0  countly:5.5%        0    0|0    3|0   141k   126k   12 13:51:18
    10   144    334      2       0    15|0       0    24g  48.5g  2.65g      0  countly:4.4%        0    0|0    4|0    84k   103k   12 13:51:19
    14   210    525      4       0    19|0       0    24g  48.5g  2.63g      0  countly:5.8%        0    0|0    1|0   125k    96k   12 13:51:20
   563  1959   2259      2       0   222|0       0    24g  48.5g  2.61g      0 countly:49.2%        0    0|0    1|0     1m     2m   12 13:51:21
   227   692   1326      2       0    60|0       0    24g  48.5g  2.68g      3 countly:14.3%        0    0|0    2|0   647k   652k   12 13:51:22
    17   146    315      1       0     8|0       0    24g  48.5g  2.65g      0  countly:3.5%        0    2|0    0|1    87k    95k   12 13:51:23
insert query update delete getmore command flushes mapped  vsize    res faults     locked db idx miss %  qr|qw  ar|aw  netIn netOut conn     time
    23   166    376      1       0    10|0       0    24g  48.5g  2.65g      0  countly:7.4%        0    0|0    2|0   104k   104k   12 13:51:24
    18   113    212     *0       0     7|0       0    24g  48.5g  2.64g      0 countly:12.6%        0    0|0    2|0    65k    75k   12 13:51:25
    13   177    375     *0       0    19|0       0    24g  48.5g  2.64g      0  countly:2.4%        0    3|0    0|1    93k   110k   12 13:51:26
    20   147    299      5       0     5|0       0    24g  48.5g  2.63g      0  countly:1.9%        0    0|0    2|0    85k   103k   12 13:51:27
    40   280    476      2       0    15|0       1    24g  48.5g  2.63g      0  countly:2.9%        0    0|0    1|0   157k   247k   12 13:51:28
     7   108    263      2       0     5|0       0    24g  48.5g  2.64g      0  countly:1.6%        0    0|0    2|0    62k    78k   12 13:51:29
    17   181    394     *0       0     8|0       0    24g  48.5g  2.59g      0  countly:5.4%        0    0|0    2|0   101k   147k   12 13:51:30
    17   152    350      1       0    12|0       0    24g  48.5g  2.63g      0  countly:3.7%        0    0|0    1|0    90k    80k   12 13:51:31
    14   123    261      4       0    11|0       0    24g  48.5g  2.58g      0  countly:4.0%        0    2|0    0|1    72k   108k   12 13:51:32
   184   741   1220     21       0    84|0       0    24g  48.5g  2.67g      0 countly:25.3%        0    0|0    4|0   492k   654k   12 13:51:33
MongoDB is spending a lot of time locked (the locked db column spikes as high as countly:49.2%). My suspicion was that writes were slow and operations kept queuing on the lock, wasting CPU time.
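Before pulling the full server status, db.currentOp() can show which operations are currently holding or waiting on locks. A minimal mongo-shell sketch against the same mongod (the one-second threshold is my own arbitrary cutoff, not anything Countly-specific):

// List in-progress operations that have been running for over a second
db.currentOp().inprog.forEach(function (op) {
    if (op.secs_running && op.secs_running > 1) {
        printjson({ opid: op.opid, op: op.op, ns: op.ns, secs: op.secs_running });
    }
});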
To dig further, connect to MongoDB and dump the full server status:
root@localhost:~# mongo
MongoDB shell version: 2.4.14
connecting to: test
> db.serverStatus()
{
    "host" : "localhost",
    "version" : "2.4.14",
    "process" : "mongod",
    "pid" : 16870,
    "uptime" : 1551490,
    "uptimeMillis" : NumberLong(1551489627),
    "uptimeEstimate" : 1537152,
    "localTime" : ISODate("2017-07-14T06:12:34.348Z"),
    "asserts" : { "regular" : 0, "warning" : 0, "msg" : 0, "user" : 1, "rollovers" : 0 },
    "backgroundFlushing" : { "flushes" : 25858, "total_ms" : 6413133, "average_ms" : 248.01349679016164, "last_ms" : 111, "last_finished" : ISODate("2017-07-14T06:12:27.580Z") },
    "connections" : { "current" : 12, "available" : 19988, "totalCreated" : NumberLong(660) },
    "cursors" : { "totalOpen" : 2, "clientCursors_size" : 2, "timedOut" : 125, "totalNoTimeout" : 2 },
    "dur" : { "commits" : 29, "journaledMB" : 0.28672, "writeToDataFilesMB" : 0.272729, "compression" : 0.9344831856907262, "commitsInWriteLock" : 0, "earlyCommits" : 0, "timeMs" : { "dt" : 3071, "prepLogBuffer" : 0, "writeToJournal" : 49, "writeToDataFiles" : 4, "remapPrivateView" : 5 } },
    "extra_info" : { "note" : "fields vary by platform", "heap_usage_bytes" : 174045352, "page_faults" : 37855 },
    "globalLock" : {
        "totalTime" : NumberLong("1551489627000"),
        "lockTime" : NumberLong("28450005021"),
        "currentQueue" : { "total" : 0, "readers" : 0, "writers" : 0 },
        "activeClients" : { "total" : 2, "readers" : 2, "writers" : 0 }
    },
    "indexCounters" : { "accesses" : 2559023213, "hits" : 2559023356, "misses" : 0, "resets" : 0, "missRatio" : 0 },
    "locks" : {
        "." : { "timeLockedMicros" : { "R" : NumberLong(1631854144), "W" : NumberLong("28450005021") }, "timeAcquiringMicros" : { "R" : NumberLong(1839692044), "W" : NumberLong(269051643) } },
        "admin" : { "timeLockedMicros" : {}, "timeAcquiringMicros" : {} },
        "local" : { "timeLockedMicros" : { "r" : NumberLong(2235967), "w" : NumberLong(0) }, "timeAcquiringMicros" : { "r" : NumberLong(856059), "w" : NumberLong(0) } },
        "countly" : { "timeLockedMicros" : { "r" : NumberLong("4052844846798"), "w" : NumberLong("77669514748") }, "timeAcquiringMicros" : { "r" : NumberLong("2069545770119"), "w" : NumberLong("82155316804") } },
        "countly_drill" : { "timeLockedMicros" : { "r" : NumberLong(874145401), "w" : NumberLong("19474885608") }, "timeAcquiringMicros" : { "r" : NumberLong(271347249), "w" : NumberLong(1475646788) } },
        "test" : { "timeLockedMicros" : { "r" : NumberLong(68640), "w" : NumberLong(79192) }, "timeAcquiringMicros" : { "r" : NumberLong(146), "w" : NumberLong(26) } }
    },
    "network" : { "bytesIn" : 233060269131, "bytesOut" : 258427384819, "numRequests" : 1384182586 },
    "opcounters" : { "insert" : 43661297, "query" : 434433635, "update" : 877502004, "delete" : 4880964, "getmore" : 1023, "command" : 26378757 },
    "opcountersRepl" : { "insert" : 0, "query" : 0, "update" : 0, "delete" : 0, "getmore" : 0, "command" : 0 },
    "recordStats" : {
        "accessesNotInMemory" : 8466,
        "pageFaultExceptionsThrown" : 3696,
        "countly" : { "accessesNotInMemory" : 1231, "pageFaultExceptionsThrown" : 221 },
        "countly_drill" : { "accessesNotInMemory" : 7235, "pageFaultExceptionsThrown" : 3475 },
        "local" : { "accessesNotInMemory" : 0, "pageFaultExceptionsThrown" : 0 },
        "test" : { "accessesNotInMemory" : 0, "pageFaultExceptionsThrown" : 0 }
    },
    "writeBacksQueued" : false,
    "mem" : { "bits" : 64, "resident" : 2692, "virtual" : 49774, "supported" : true, "mapped" : 24630, "mappedWithJournal" : 49260 },
    "metrics" : {
        "document" : { "deleted" : NumberLong(4398459), "inserted" : NumberLong(43661297), "returned" : NumberLong(249187499), "updated" : NumberLong(868225290) },
        "getLastError" : { "wtime" : { "num" : 1332687, "totalMillis" : 154 }, "wtimeouts" : NumberLong(0) },
        "operation" : { "fastmod" : NumberLong(846861873), "idhack" : NumberLong(1085704909), "scanAndOrder" : NumberLong(0) },
        "queryExecutor" : { "scanned" : NumberLong(1095979673) },
        "record" : { "moves" : NumberLong(1249911) },
        "repl" : { "apply" : { "batches" : { "num" : 0, "totalMillis" : 0 }, "ops" : NumberLong(0) }, "buffer" : { "count" : NumberLong(0), "maxSizeBytes" : 268435456, "sizeBytes" : NumberLong(0) }, "network" : { "bytes" : NumberLong(0), "getmores" : { "num" : 0, "totalMillis" : 0 }, "ops" : NumberLong(0), "readersCreated" : NumberLong(0) }, "oplog" : { "insert" : { "num" : 0, "totalMillis" : 0 }, "insertBytes" : NumberLong(0) }, "preload" : { "docs" : { "num" : 0, "totalMillis" : 0 }, "indexes" : { "num" : 0, "totalMillis" : 0 } } },
        "ttl" : { "deletedDocuments" : NumberLong(10), "passes" : NumberLong(25857) }
    },
    "ok" : 1
}
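Two figures in this dump are worth pulling out. The global write lock was held only about 28450005021 / 1551489627000 ≈ 1.8% of uptime, so the real contention is the per-database countly lock: its cumulative read-lock time (about 4.05 × 10^12 µs) is several times the server's entire wall-clock uptime, which is possible because many readers hold the read lock concurrently. A small sketch to extract these numbers in the shell (the toNumber() guard handles NumberLong values):

// Share of uptime the global write lock was held (both values in microseconds)
var s = db.serverStatus();
var held  = s.globalLock.lockTime.toNumber  ? s.globalLock.lockTime.toNumber()  : s.globalLock.lockTime;
var total = s.globalLock.totalTime.toNumber ? s.globalLock.totalTime.toNumber() : s.globalLock.totalTime;
print("global write lock held " + (100 * held / total).toFixed(2) + "% of uptime");

// Cumulative lock time for the busy database
printjson(s.locks.countly.timeLockedMicros);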
The lock time is indeed substantial. Check mongotop to see where it goes:
root@localhost:/mnt/mongodb/log# mongotop
connected to: 127.0.0.1

                                                                  ns  total   read  write    2017-07-14T06:46:41
                        countly.online_users55d2e563292d6b2c7d1d132d 1669ms 1665ms    4ms
                        countly.online_users5770b587f4a7f02a6a2b9cce   53ms   52ms    1ms
                                                    countly.carriers   42ms    0ms   42ms
                        countly.online_users5506ab1979a3a67e5214cda6   40ms   38ms    2ms
                                                     countly.devices   16ms    0ms   16ms
                                                        countly.apps    9ms    9ms    0ms
                                                       countly.users    8ms    0ms    8ms
                                              countly.device_details    7ms    0ms    7ms
                           countly.app_users55d2e563292d6b2c7d1d132d    4ms    0ms    4ms

                                                                  ns  total   read  write    2017-07-14T06:46:42
                        countly.online_users55d2e563292d6b2c7d1d132d 3671ms 3669ms    2ms
                        countly.online_users5770b587f4a7f02a6a2b9cce   64ms   63ms    1ms
                        countly.online_users5506ab1979a3a67e5214cda6   26ms   25ms    1ms
                                                     countly.devices   19ms    0ms   19ms
                                                    countly.carriers   16ms    0ms   16ms
                                                       countly.users    6ms    0ms    6ms
                                                        countly.apps    6ms    6ms    0ms
                           countly.app_users55d2e563292d6b2c7d1d132d    4ms    1ms    3ms
countly_drill.drill_eventse02cd05a335f76036692df25f94be2b3d5c20bd2    3ms    0ms    3ms

                                                                  ns  total   read  write    2017-07-14T06:46:43
                        countly.online_users55d2e563292d6b2c7d1d132d 1709ms 1705ms    4ms
                        countly.online_users5506ab1979a3a67e5214cda6   33ms   31ms    2ms
                                                    countly.carriers   19ms    0ms   19ms
                                                     countly.devices   10ms    0ms   10ms
                                                       countly.users   10ms    0ms   10ms
                        countly.online_users54feb14879a3a67e52f6f7cd    7ms    7ms    0ms
                                                        countly.apps    5ms    5ms    0ms
                           countly.app_users55d2e563292d6b2c7d1d132d    4ms    0ms    4ms
countly_drill.drill_eventsbb446ae67767dec699789c496e6c067b32dee999    3ms    0ms    3ms

                                                                  ns  total   read  write    2017-07-14T06:46:44
                        countly.online_users55d2e563292d6b2c7d1d132d 2301ms 2298ms    3ms
                        countly.online_users5770b587f4a7f02a6a2b9cce  262ms  261ms    1ms
                        countly.online_users5506ab1979a3a67e5214cda6   35ms   34ms    1ms
countly_drill.drill_eventse02cd05a335f76036692df25f94be2b3d5c20bd2   22ms    0ms   22ms
                        countly.online_users54feb14879a3a67e52f6f7cd    8ms    7ms    1ms
                                                       countly.users    7ms    0ms    7ms
countly_drill.drill_eventscc860280b73e64c3413b38f2e9959acb7babd3f9    6ms    0ms    6ms
                           countly.app_users55d2e563292d6b2c7d1d132d    5ms    2ms    3ms
                                                        countly.apps    4ms    4ms    0ms
It's the reads that take the time, concentrated on the online_users collections, which suggests a missing index. Separately, MongoDB has a known performance problem on NUMA hardware; see the production notes:
https://docs.mongodb.com/manual/administration/production-notes/
The documented workaround is to start mongod with interleaved memory allocation:
# numactl --interleave=all <mongod command>
The underlying mechanism ("swap insanity" on NUMA systems) is explained here:
http://www.cnblogs.com/Lifehacker/p/database_swap_insanity_on_Linux.html
That didn't seem to make any difference. Then I remembered that this database had once been emptied because the disk filled up, and I'd heard that MongoDB can lose its indexes after that kind of migration, requiring a rebuild. So I checked the indexes:
> db.online_users55d2e563292d6b2c7d1d132d.find()
{ "_id" : "4e5d3922-6553-430a-849f-8b1007373d7c", "la" : ISODate("2017-06-30T00:55:08.933Z"), "n" : 0 }
{ "_id" : "55d7d514-f75c-432a-b557-17c036437a32", "la" : ISODate("2017-06-29T16:38:12.863Z"), "n" : 0 }
{ "_id" : "0118022e-54d2-4fb9-998d-a544d9a76d42", "la" : ISODate("2017-07-05T17:17:44.307Z"), "n" : 0 }
{ "_id" : "2a160945-fcc1-42f5-8fd5-693f97c4c332", "la" : ISODate("2017-07-02T07:01:35.619Z"), "n" : 0 }
{ "_id" : "14cd09d4-fd09-43b8-9259-8a007fc08b45", "la" : ISODate("2017-06-28T01:23:29.400Z"), "n" : 0 }
{ "_id" : "b9fa7cc9-cd8c-4a1b-83fb-fc4c4f929715", "la" : ISODate("2017-06-26T13:38:32.887Z"), "n" : 1 }
{ "_id" : "1d968b85-2571-4cf7-9520-bfeb44d893bc", "la" : ISODate("2017-07-13T13:08:18.807Z"), "n" : 0 }
{ "_id" : "0b1a3d07-8e5d-4e78-a93f-d13811a81b86", "la" : ISODate("2017-07-13T03:10:39.927Z"), "n" : 0 }
{ "_id" : "18d8f9fb-dff3-46e6-a3c7-bb19db748c4a", "la" : ISODate("2017-07-08T17:59:37.518Z"), "n" : 0 }
{ "_id" : "a81349ac-9d83-4ef7-be0b-1b9bf64f867f", "la" : ISODate("2017-07-13T23:27:35.174Z"), "n" : 0 }
{ "_id" : "dd841258-36a6-45de-bc45-450441d9bf33", "la" : ISODate("2017-07-14T06:34:23.838Z"), "n" : 0 }
{ "_id" : "75720c9f-69af-4833-ae14-78a381cc31b6", "la" : ISODate("2017-07-13T15:57:28.700Z"), "n" : 0 }
{ "_id" : "ef37ff3d-979d-4275-9991-398ccb2d5576", "la" : ISODate("2017-07-14T04:45:38.872Z"), "n" : 0 }
{ "_id" : "465f923e-b0f1-4aa7-8f68-2d7db95973b2", "la" : ISODate("2017-07-07T11:47:53.029Z"), "n" : 0 }
{ "_id" : "e6b5fd38-bb56-4634-9232-7eeceb8cf46a", "la" : ISODate("2017-07-09T03:27:04.545Z"), "n" : 0 }
{ "_id" : "d4957c5d-154f-4c4f-8843-9792beab4817", "la" : ISODate("2017-07-11T21:49:35.417Z"), "n" : 0 }
{ "_id" : "08d0d626-b37b-4761-81b2-98b5ee426df8", "la" : ISODate("2017-06-29T13:56:14.974Z"), "n" : 0 }
{ "_id" : "6d2eef45-cb25-4ff6-b8cd-497ab3b938ae", "la" : ISODate("2017-07-07T02:19:16.154Z"), "n" : 0 }
{ "_id" : "6f444a2b-e2fc-4ab0-8b1d-d4334bf095e3", "la" : ISODate("2017-07-09T12:17:05.711Z"), "n" : 0 }
{ "_id" : "85e06ac1-7158-42fb-95ed-2cd39a7f0687", "la" : ISODate("2017-06-26T02:27:25.389Z"), "n" : 1 }
Type "it" for more
> db.online_users55d2e563292d6b2c7d1d132d.find({"_id" : "4e5d3922-6553-430a-849f-8b1007373d7c"}).explain()
{
    "cursor" : "BtreeCursor _id_",
    "isMultiKey" : false,
    "n" : 1,
    "nscannedObjects" : 1,
    "nscanned" : 1,
    "nscannedObjectsAllPlans" : 1,
    "nscannedAllPlans" : 1,
    "scanAndOrder" : false,
    "indexOnly" : false,
    "nYields" : 0,
    "nChunkSkips" : 0,
    "millis" : 0,
    "indexBounds" : {
        "start" : { "_id" : "4e5d3922-6553-430a-849f-8b1007373d7c" },
        "end" : { "_id" : "4e5d3922-6553-430a-849f-8b1007373d7c" }
    },
    "server" : "localhost:27017"
}
So the _id lookup clearly does use an index (BtreeCursor _id_). The more likely problem is that the la field has no index. (I didn't check that at the time, but in hindsight this line of reasoning appears correct.)
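A quick check would have confirmed this, sketched here against the same collection (in MongoDB 2.4, explain() reports cursor "BasicCursor" when a query cannot use any index, i.e. a full collection scan):

// List the indexes that actually exist on the collection
db.online_users55d2e563292d6b2c7d1d132d.getIndexes()

// Check whether a range query on `la` can use an index;
// "cursor" : "BasicCursor" in the output means a full scan
db.online_users55d2e563292d6b2c7d1d132d.find({ la: { $lt: new Date() } }).explain()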
Add the index on one of the per-app online_users collections (ensureIndex is the MongoDB 2.4 API; with expireAfterSeconds it doubles as a TTL index, and the background TTL monitor runs roughly once a minute):
// TTL index: documents expire ~180s after their `la` (last-access) timestamp
db.online_users547c2a634dcc1eef3d000001.ensureIndex({la: 1}, {expireAfterSeconds: 180});
Since this collection is cache-like, with constant reads and writes, it should also be converted to a capped collection so old records age out automatically, bounding its size. (One caveat: the TTL monitor cannot remove documents from a capped collection, so once converted, the size cap is what actually limits growth.)
db.runCommand({"convertToCapped":"online_users547c2a634dcc1eef3d000001",size:60})
db.runCommand({"convertToCapped":"live_data547c2a634dcc1eef3d000001",size:60})
After these changes things look much better. I'll keep observing for a while.
Reposted from: https://www.cnblogs.com/wuxie1989/p/7204682.html