Table of Contents

  • Monitoring a Ceph cluster
    • Check the OSD version
    • Check cluster health
    • Watch cluster events
    • Check cluster space utilization
    • Check cluster status
    • Watch cluster status in real time
    • List authentication keys
    • Check Ceph features
    • Check the OSD object store backend
    • Get the mapping of all PGs to OSDs
  • Checking Ceph monitors
    • Ways to check monitor status
    • Monitor quorum status
    • Get a pool's pg_num
    • Get a pool's pgp_num
    • Change a pool's PG count
    • Change a pool's replica count
  • Monitoring Ceph OSDs
    • Check the status of osd.xx (mainly the osdmap epoch)
    • View the OSD tree
    • Get detailed cluster and OSD information
    • Check pool replica counts
    • Assign a CRUSH rule to a pool
    • List blacklisted clients
  • Checking the CRUSH map
    • Check the cluster's CRUSH map
    • List the CRUSH rules in use
    • View the details of one CRUSH rule
    • Locate an OSD and its position in the CRUSH map
  • Monitoring PGs
    • View PG map information
    • Get PG status
    • View the detailed map of one PG
    • View stuck PGs
    • Show statistics for all PGs in the cluster
    • Recover a lost PG
  • Monitoring MDS
    • Check MDS status
    • View metadata server details
    • List RADOS pools
    • Remove a pool's application tag
    • Create a pool
    • Delete a pool
    • List the objects in a pool
    • Check cluster space usage
  • Viewing the OSD map and status
    • List OSD IDs
    • View the OSD tree
    • Check the cluster's monitor map
    • Get the cluster's OSD map
    • Export the osdmap
  • Common commands for troubleshooting a Ceph cluster
    • Disable PG rebalancing while investigating a problem
    • Disable backfill when you temporarily don't want data repair to run; usually set together with norecover
    • Disable recovery when you temporarily don't want data repair to run; usually set together with nobackfill
    • Prevent OSDs from being marked down when they flap between up and down
    • Prevent OSDs from being marked up when they flap between up and down
    • Prevent OSDs that stay down for a long time from being marked out automatically
    • Disable deep scrubbing; undo any flag with the corresponding unset, e.g. ceph osd unset noout
    • Mark a single OSD out
    • Mark a single OSD in
    • Mark a single OSD down
    • Change the log level of osd.xx at runtime, without restarting the OSD
    • Change the log level of a mon at runtime, without restarting the mon
    • osd_recovery_sleep is in seconds; start at 1 to limit server load, then set it back to 0 after observing
    • Adjust the number of concurrent backfills; tune to your actual situation
    • Adjust the priority of recovery operations
    • Decode the osdmap to JSON
  • Official Ceph documentation

Monitoring a Ceph cluster

Check the OSD version

ceph tell osd.* version

[root@controller01 ~]# ceph osd tree | tail -n 5
35   5.45999         osd.35     down        0          1.00000
36   5.45999         osd.36       up  1.00000          1.00000
37   5.45999         osd.37       up  1.00000          1.00000
38   5.45999         osd.38       up  1.00000          1.00000
39   5.45999         osd.39       up  1.00000          1.00000
[root@controller01 ~]#
[root@controller01 ~]# ceph tell osd.39 version
{"version": "ceph version 0.94.6 (e832001feaf8c176593e0325c8298e3f16dfb403)"
}
[root@controller01 ~]#

Check cluster health

ceph health
ceph health detail (get detailed cluster health information)

[root@controller01 ~]# ceph health
HEALTH_WARN clock skew detected on mon.stor02; 10 pgs degraded; 10 pgs stuck degraded; 12 pgs stuck unclean; 10 pgs stuck undersized; 10 pgs undersized; recovery 21571/30526620 objects degraded (0.071%); recovery 9448/30526620 objects misplaced (0.031%); Monitor clock skew detected
[root@controller01 ~]#
[root@controller01 ~]# ceph health detail
HEALTH_WARN clock skew detected on mon.stor02; 10 pgs degraded; 10 pgs stuck degraded; 12 pgs stuck unclean; 10 pgs stuck undersized; 10 pgs undersized; recovery 21571/30526620 objects degraded (0.071%); recovery 9448/30526620 objects misplaced (0.031%); Monitor clock skew detected
pg 2.7c is stuck unclean for 6156704.649646, current state active+undersized+degraded, last acting [7,24]
pg 1.c2 is stuck unclean for 21617572.481252, current state active+undersized+degraded, last acting [2,15]
pg 4.8a is stuck unclean for 12774354.992788, current state active+remapped, last acting [6,18,34]
pg 3.e7 is stuck unclean for 42646060.464713, current state active+undersized+degraded, last acting [4,30]
pg 3.28 is stuck unclean for 21590801.902113, current state active+undersized+degraded, last acting [25,5]
pg 4.74 is stuck unclean for 21956444.260313, current state active+undersized+degraded, last acting [5,27]
pg 4.83 is stuck unclean for 6267936.553246, current state active+undersized+degraded, last acting [30,3]
pg 1.198 is stuck unclean for 18188616.005111, current state active+undersized+degraded, last acting [1,37]
pg 1.26 is stuck unclean for 12774354.994511, current state active+undersized+degraded, last acting [6,10]
pg 3.151 is stuck unclean for 18027285.274227, current state active+remapped, last acting [5,36,38]
pg 4.126 is stuck unclean for 21885480.884108, current state active+undersized+degraded, last acting [6,26]
pg 4.9 is stuck unclean for 6740290.081517, current state active+undersized+degraded, last acting [36,5]
pg 2.7c is stuck undersized for 6156605.485650, current state active+undersized+degraded, last acting [7,24]
pg 1.c2 is stuck undersized for 16519865.915701, current state active+undersized+degraded, last acting [2,15]
pg 3.e7 is stuck undersized for 7507687.105664, current state active+undersized+degraded, last acting [4,30]
pg 3.28 is stuck undersized for 16519863.497942, current state active+undersized+degraded, last acting [25,5]
pg 4.74 is stuck undersized for 16519863.959714, current state active+undersized+degraded, last acting [5,27]
pg 4.83 is stuck undersized for 6156606.072119, current state active+undersized+degraded, last acting [30,3]
pg 1.198 is stuck undersized for 12729841.966923, current state active+undersized+degraded, last acting [1,37]
pg 1.26 is stuck undersized for 12729840.466606, current state active+undersized+degraded, last acting [6,10]
pg 4.126 is stuck undersized for 16519865.062451, current state active+undersized+degraded, last acting [6,26]
pg 4.9 is stuck undersized for 6156606.071368, current state active+undersized+degraded, last acting [36,5]
pg 2.7c is stuck degraded for 6156605.485725, current state active+undersized+degraded, last acting [7,24]
pg 1.c2 is stuck degraded for 16519865.915776, current state active+undersized+degraded, last acting [2,15]
pg 3.e7 is stuck degraded for 7507687.105739, current state active+undersized+degraded, last acting [4,30]
pg 3.28 is stuck degraded for 16519863.498017, current state active+undersized+degraded, last acting [25,5]
pg 4.74 is stuck degraded for 16519863.959788, current state active+undersized+degraded, last acting [5,27]
pg 4.83 is stuck degraded for 6156606.072194, current state active+undersized+degraded, last acting [30,3]
pg 1.198 is stuck degraded for 12729841.966997, current state active+undersized+degraded, last acting [1,37]
pg 1.26 is stuck degraded for 12729840.466681, current state active+undersized+degraded, last acting [6,10]
pg 4.126 is stuck degraded for 16519865.062525, current state active+undersized+degraded, last acting [6,26]
pg 4.9 is stuck degraded for 6156606.071442, current state active+undersized+degraded, last acting [36,5]
pg 1.198 is active+undersized+degraded, acting [1,37]
pg 4.126 is active+undersized+degraded, acting [6,26]
pg 3.e7 is active+undersized+degraded, acting [4,30]
pg 1.c2 is active+undersized+degraded, acting [2,15]
pg 4.83 is active+undersized+degraded, acting [30,3]
pg 2.7c is active+undersized+degraded, acting [7,24]
pg 4.74 is active+undersized+degraded, acting [5,27]
pg 3.28 is active+undersized+degraded, acting [25,5]
pg 1.26 is active+undersized+degraded, acting [6,10]
pg 4.9 is active+undersized+degraded, acting [36,5]
recovery 21571/30526620 objects degraded (0.071%)
recovery 9448/30526620 objects misplaced (0.031%)
mon.stor02 addr 1.2.3.34:6789/0 clock skew 18.1235s > max 0.05s (latency 0.000456307s)
[root@controller01 ~]#

Watch cluster events

This command displays all cluster events in real time:
ceph -w

[root@controller01 ~]# ceph -w
    cluster 1da1773d-0e8d-4c37-a480-92c2b4ea4275
     health HEALTH_WARN
            clock skew detected on mon.stor02
            10 pgs degraded
            10 pgs stuck degraded
            12 pgs stuck unclean
            10 pgs stuck undersized
            10 pgs undersized
            recovery 21571/30526620 objects degraded (0.071%)
            recovery 9448/30526620 objects misplaced (0.031%)
            Monitor clock skew detected
     monmap e1: 3 mons at {stor01=1.2.3.33:6789/0,stor02=1.2.3.34:6789/0,stor03=1.2.3.35:6789/0}
            election epoch 82, quorum 0,1,2 stor01,stor02,stor03
     osdmap e11243: 40 osds: 34 up, 34 in; 3 remapped pgs
      pgmap v171785641: 2624 pgs, 5 pools, 41083 GB data, 9937 kobjects
            120 TB used, 67008 GB / 185 TB avail
            21571/30526620 objects degraded (0.071%)
            9448/30526620 objects misplaced (0.031%)
                2611 active+clean
                  10 active+undersized+degraded
                   2 active+remapped
                   1 active+clean+scrubbing+deep
  client io 45529 kB/s rd, 8797 kB/s wr, 1101 op/s

2023-07-22 06:35:04.194917 mon.0 [INF] pgmap v171785640: 2624 pgs: 10 active+undersized+degraded, 1 active+clean+scrubbing+deep, 2 active+remapped, 2611 active+clean; 41083 GB data, 120 TB used, 67008 GB / 185 TB avail; 69941 kB/s rd, 10191 kB/s wr, 977 op/s; 21571/30526620 objects degraded (0.071%); 9448/30526620 objects misplaced (0.031%)
2023-07-22 06:35:05.202325 mon.0 [INF] pgmap v171785641: 2624 pgs: 10 active+undersized+degraded, 1 active+clean+scrubbing+deep, 2 active+remapped, 2611 active+clean; 41083 GB data, 120 TB used, 67008 GB / 185 TB avail; 45529 kB/s rd, 8797 kB/s wr, 1101 op/s; 21571/30526620 objects degraded (0.071%); 9448/30526620 objects misplaced (0.031%)
2023-07-22 06:35:06.209785 mon.0 [INF] pgmap v171785642: 2624 pgs: 10 active+undersized+degraded, 1 active+clean+scrubbing+deep, 2 active+remapped, 2611 active+clean; 41083 GB data, 120 TB used, 67008 GB / 185 TB avail; 30537 kB/s rd, 6157 kB/s wr, 1190 op/s; 21571/30526620 objects degraded (0.071%); 9448/30526620 objects misplaced (0.031%)
2023-07-22 06:35:07.218476 mon.0 [INF] pgmap v171785643: 2624 pgs: 10 active+undersized+degraded, 1 active+clean+scrubbing+deep, 2 active+remapped, 2611 active+clean; 41083 GB data, 120 TB used, 67008 GB / 185 TB avail; 53885 kB/s rd, 3927 kB/s wr, 1097 op/s; 21571/30526620 objects degraded (0.071%); 9448/30526620 objects misplaced (0.031%)
2023-07-22 06:35:08.225996 mon.0 [INF] pgmap v171785644: 2624 pgs: 10 active+undersized+degraded, 1 active+clean+scrubbing+deep, 2 active+remapped, 2611 active+clean; 41083 GB data, 120 TB used, 67008 GB / 185 TB avail; 50391 kB/s rd, 6269 kB/s wr, 735 op/s; 21571/30526620 objects degraded (0.071%); 9448/30526620 objects misplaced (0.031%)
^C[root@controller01 ~]#

Check cluster space utilization

ceph df

[root@controller01 ~]# ceph df
GLOBAL:
    SIZE     AVAIL      RAW USED     %RAW USED
    185T     67008G         120T         64.72
POOLS:
    NAME        ID     USED       %USED     MAX AVAIL     OBJECTS
    rbd         0           0         0        19592G           0
    images      1        397G      0.21        19592G       50957
    vms         2       3410G      1.80        19592G      476735
    volumes     3      37275G     19.62        19592G     9647848
    backups     4           0         0        19592G           0
[root@controller01 ~]#

Check cluster status

ceph status
ceph -s

[root@controller01 ~]# ceph status
    cluster 1da1773d-0e8d-4c37-a480-92c2b4ea4275
     health HEALTH_WARN
            clock skew detected on mon.stor02
            10 pgs degraded
            10 pgs stuck degraded
            12 pgs stuck unclean
            10 pgs stuck undersized
            10 pgs undersized
            recovery 21571/30526620 objects degraded (0.071%)
            recovery 9448/30526620 objects misplaced (0.031%)
            Monitor clock skew detected
     monmap e1: 3 mons at {stor01=1.2.3.33:6789/0,stor02=1.2.3.34:6789/0,stor03=1.2.3.35:6789/0}
            election epoch 82, quorum 0,1,2 stor01,stor02,stor03
     osdmap e11243: 40 osds: 34 up, 34 in; 3 remapped pgs
      pgmap v171785681: 2624 pgs, 5 pools, 41083 GB data, 9937 kobjects
            120 TB used, 67008 GB / 185 TB avail
            21571/30526620 objects degraded (0.071%)
            9448/30526620 objects misplaced (0.031%)
                2611 active+clean
                  10 active+undersized+degraded
                   2 active+remapped
                   1 active+clean+scrubbing+deep
  client io 47781 kB/s rd, 8574 kB/s wr, 830 op/s
[root@controller01 ~]#
[root@controller01 ~]# ceph -s
    cluster 1da1773d-0e8d-4c37-a480-92c2b4ea4275
     health HEALTH_WARN
            clock skew detected on mon.stor02
            10 pgs degraded
            10 pgs stuck degraded
            12 pgs stuck unclean
            10 pgs stuck undersized
            10 pgs undersized
            recovery 21571/30526620 objects degraded (0.071%)
            recovery 9448/30526620 objects misplaced (0.031%)
            Monitor clock skew detected
     monmap e1: 3 mons at {stor01=1.2.3.33:6789/0,stor02=1.2.3.34:6789/0,stor03=1.2.3.35:6789/0}
            election epoch 82, quorum 0,1,2 stor01,stor02,stor03
     osdmap e11243: 40 osds: 34 up, 34 in; 3 remapped pgs
      pgmap v171785685: 2624 pgs, 5 pools, 41083 GB data, 9937 kobjects
            120 TB used, 67008 GB / 185 TB avail
            21571/30526620 objects degraded (0.071%)
            9448/30526620 objects misplaced (0.031%)
                2611 active+clean
                  10 active+undersized+degraded
                   2 active+remapped
                   1 active+clean+scrubbing+deep
  client io 22770 kB/s rd, 10227 kB/s wr, 377 op/s
[root@controller01 ~]#

Watch cluster status in real time

watch ceph -s

[root@controller01 ~]# watch ceph -s
Every 2.0s: ceph -s                                                                                                                                Sat Jul 22 06:36:16 2023

    cluster 1da1773d-0e8d-4c37-a480-92c2b4ea4275
     health HEALTH_WARN
            clock skew detected on mon.stor02
            10 pgs degraded
            10 pgs stuck degraded
            12 pgs stuck unclean
            10 pgs stuck undersized
            10 pgs undersized
            recovery 21571/30526620 objects degraded (0.071%)
            recovery 9448/30526620 objects misplaced (0.031%)
            Monitor clock skew detected
     monmap e1: 3 mons at {stor01=1.2.3.33:6789/0,stor02=1.2.3.34:6789/0,stor03=1.2.3.35:6789/0}
            election epoch 82, quorum 0,1,2 stor01,stor02,stor03
     osdmap e11243: 40 osds: 34 up, 34 in; 3 remapped pgs
      pgmap v171785712: 2624 pgs, 5 pools, 41083 GB data, 9937 kobjects
            120 TB used, 67008 GB / 185 TB avail
            21571/30526620 objects degraded (0.071%)
            9448/30526620 objects misplaced (0.031%)
                2611 active+clean
                  10 active+undersized+degraded
                   2 active+remapped
                   1 active+clean+scrubbing+deep
  client io 63180 kB/s rd, 6774 kB/s wr, 1189 op/s

List authentication keys

ceph auth list

[root@controller01 ~]# ceph auth list | tail -n9
installed auth entries:
        caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images
client.cinder-backup
        key: AQCPJkxa/+fWOhAAhtmB30WHP19oM50EWdHWhA==
        caps: [mon] allow r
        caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=backups
client.glance
        key: AQCOJkxaYw+bFxAAirbHxpe58S6ErauXUUaXNA==
        caps: [mon] allow r
        caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=images
[root@controller01 ~]#

Check Ceph features

ceph features (not supported on some versions, as shown below)

[root@controller01 ~]# ceph features
no valid command found; 10 closest matches:
osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>
osd lost <int[0-]> {--yes-i-really-mean-it}
osd pg-temp <pgid> {<id> [<id>...]}
osd primary-temp <pgid> <id>
osd rm <ids> [<ids>...]
osd reweight <int[0-]> <float[0.0-1.0]>
osd out <ids> [<ids>...]
osd in <ids> [<ids>...]
osd down <ids> [<ids>...]
fs new <fs_name> <metadata> <data>
Error EINVAL: invalid command
[root@controller01 ~]#

Check the OSD object store backend

ceph daemon osd.* config show | grep osd_objectstore (ceph daemon talks to the local admin socket, so it must be run on the node that hosts the OSD; running it elsewhere fails as shown below)

[root@controller01 ~]# ceph osd tree | tail -n3
37   5.45999         osd.37       up  1.00000          1.00000
38   5.45999         osd.38       up  1.00000          1.00000
39   5.45999         osd.39       up  1.00000          1.00000
[root@controller01 ~]#
[root@controller01 ~]# ceph daemon osd.39 config show|grep osd_objectstore
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[root@controller01 ~]#

Get the mapping of all PGs to OSDs

ceph pg dump pgs|awk '{print $1,$15}'|grep -v pg

[root@controller01 ~]# ceph pg dump pgs|awk '{print $1,$15}'|grep -v pg  | tail -n9
dumped pgs in format plain
3.1b3 [6,18,9]
4.1ab [37,15,24]
2.1ad [10,38,2]
1.1ae [39,3,18]
3.1ac [36,17,27]
4.1aa [1,30,38]
2.1ac [5,36,14]
1.1af [37,28,10]
3.1ad [28,23,32]
[root@controller01 ~]#

Checking Ceph monitors

Ways to check monitor status

ceph mon stat
ceph mon_status
ceph mon dump

[root@controller01 ~]# ceph mon stat
e1: 3 mons at {stor01=1.2.3.33:6789/0,stor02=1.2.3.34:6789/0,stor03=1.2.3.35:6789/0}, election epoch 82, quorum 0,1,2 stor01,stor02,stor03
[root@controller01 ~]#
[root@controller01 ~]# ceph mon dump
dumped monmap epoch 1
epoch 1
fsid 1da1773d-0e8d-4c37-a480-92c2b4ea4275
last_changed 0.000000
created 0.000000
0: 1.2.3.33:6789/0 mon.stor01
1: 1.2.3.34:6789/0 mon.stor02
2: 1.2.3.35:6789/0 mon.stor03
[root@controller01 ~]#

Monitor quorum status

ceph quorum_status

[root@controller01 ~]# ceph quorum_status
{"election_epoch":82,"quorum":[0,1,2],"quorum_names":["stor01","stor02","stor03"],"quorum_leader_name":"stor01","monmap":{"epoch":1,"fsid":"1da1773d-0e8d-4c37-a480-92c2b4ea4275","modified":"0.000000","created":"0.000000","mons":[{"rank":0,"name":"stor01","addr":"1.2.3.33:6789\/0"},{"rank":1,"name":"stor02","addr":"1.2.3.34:6789\/0"},{"rank":2,"name":"stor03","addr":"1.2.3.35:6789\/0"}]}}
[root@controller01 ~]#

Get a pool's pg_num

ceph osd pool get volumes pg_num

[root@controller01 ~]# ceph osd pool get volumes pg_num
pg_num: 1024
[root@controller01 ~]#

Get a pool's pgp_num

ceph osd pool get volumes pgp_num

[root@controller01 ~]# ceph osd pool get volumes pgp_num
pgp_num: 1024
[root@controller01 ~]#

Change a pool's PG count

ceph osd pool set volumes pg_num 1024 (use with caution; see the sketch below)
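
Raising pg_num alone does not move any data until pgp_num is raised to match, so the two are normally changed together. A minimal sketch, assuming the volumes pool from the examples above and a hypothetical target of 2048 (on these releases pg_num can be increased but never decreased):

ceph osd pool get volumes pg_num        # check the current value first
ceph osd pool set volumes pg_num 2048   # raise in small steps on a busy cluster
ceph osd pool set volumes pgp_num 2048  # raising pgp_num to match triggers the actual data movement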

Change a pool's replica count

ceph osd pool set volumes size 2 (use with caution)

Monitoring Ceph OSDs

Check the status of osd.xx (mainly the osdmap epoch)

ceph daemon osd.xx status (some versions may not have this command; it must also be run on the node hosting the OSD, which is why the example below fails)

[root@controller01 ~]# ceph osd tree | tail -n3
37   5.45999         osd.37       up  1.00000          1.00000
38   5.45999         osd.38       up  1.00000          1.00000
39   5.45999         osd.39       up  1.00000          1.00000
[root@controller01 ~]#
[root@controller01 ~]# ceph daemon osd.39 status
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[root@controller01 ~]#

View the OSD tree

ceph osd tree

[root@controller01 ~]# ceph osd tree | tail -n3
37   5.45999         osd.37       up  1.00000          1.00000
38   5.45999         osd.38       up  1.00000          1.00000
39   5.45999         osd.39       up  1.00000          1.00000
[root@controller01 ~]#

Get detailed cluster and OSD information

ceph osd dump

[root@controller01 ~]# ceph osd dump | tail -n9
osd.34 up   in  weight 0.869995 up_from 1938 up_thru 11132 down_at 1936 last_clean_interval [340,1937) 1.2.3.37:6808/61984 192.168.100.37:6805/3061984 192.168.100.37:6810/3061984 1.2.3.37:6805/3061984 exists,up eee52fc6-13e9-4d9d-914d-791ebf80ea5b
osd.35 down out weight 0 up_from 1970 up_thru 7081 down_at 7114 last_clean_interval [345,1969) 1.2.3.37:6812/62981 192.168.100.37:6803/3062981 192.168.100.37:6806/3062981 1.2.3.37:6803/3062981 autoout,exists bd0d95dd-2d90-49ce-b6f7-428162c2f0fd
osd.36 up   in  weight 1 up_from 7789 up_thru 10954 down_at 7784 last_clean_interval [350,7784) 1.2.3.37:6816/64001 192.168.100.37:6812/2064001 192.168.100.37:6821/2064001 1.2.3.37:6810/2064001 exists,up 3027ad75-10c9-4c2c-8fa8-76df18e00893
osd.37 up   in  weight 1 up_from 2288 up_thru 11097 down_at 2284 last_clean_interval [354,2286) 1.2.3.37:6820/65046 192.168.100.37:6819/1065046 192.168.100.37:6827/1065046 1.2.3.37:6807/1065046 exists,up 955c0711-edba-4324-ab6c-5910f0e77bd1
osd.38 up   in  weight 1 up_from 2287 up_thru 11178 down_at 2284 last_clean_interval [358,2285) 1.2.3.37:6824/66113 192.168.100.37:6814/1066113 192.168.100.37:6815/1066113 1.2.3.37:6804/1066113 exists,up a9bce0df-6ed9-4a6f-ab7f-64bc01b84b32
osd.39 up   in  weight 1 up_from 2287 up_thru 11218 down_at 2284 last_clean_interval [362,2285) 1.2.3.37:6828/67281 192.168.100.37:6817/1067281 192.168.100.37:6818/1067281 1.2.3.37:6806/1067281 exists,up fb279c38-1eb1-4863-80cc-f56dfd745401
pg_temp 1.198 [1,37,33]
pg_temp 3.151 [5,36,38]
pg_temp 4.8a [6,18,34]
[root@controller01 ~]#

Check pool replica counts

ceph osd dump |grep -i size

[root@controller01 ~]# ceph osd dump |grep -i size
pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 1 'images' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 512 pgp_num 512 last_change 11243 flags hashpspool stripe_width 0
pool 2 'vms' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 512 pgp_num 512 last_change 11241 flags hashpspool stripe_width 0
pool 3 'volumes' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 3284 flags hashpspool stripe_width 0
pool 4 'backups' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 512 pgp_num 512 last_change 133 flags hashpspool stripe_width 0
[root@controller01 ~]#

Assign a CRUSH rule to a pool

ceph osd pool set $pool-name crush_rule $rulename (use with caution; on pre-Luminous releases the parameter is crush_ruleset — see the sketch below)
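
A minimal sketch of the whole flow, assuming the replicated_ruleset rule shown later in this article. On pre-Luminous releases such as the 0.94.x cluster in these transcripts, the pool parameter is crush_ruleset and takes a numeric ruleset id; crush_rule with a rule name is the newer form:

ceph osd crush rule list                                  # find the available rules
ceph osd pool set volumes crush_ruleset 0                 # pre-Luminous: assign by ruleset id
ceph osd pool set volumes crush_rule replicated_ruleset   # Luminous and later: assign by name
ceph osd dump | grep volumes                              # verify which rule the pool now uses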

List blacklisted clients

ceph osd blacklist ls

[root@controller01 ~]# ceph osd blacklist ls
listed 0 entries
[root@controller01 ~]#

Checking the CRUSH map

Check the cluster's CRUSH map

ceph osd crush dump

[root@controller01 ~]# ceph osd crush dump | tail -n9
        "require_feature_tunables": 1,
        "require_feature_tunables2": 1,
        "require_feature_tunables3": 0,
        "has_v2_rules": 0,
        "has_v3_rules": 0,
        "has_v4_buckets": 0
    }
}
[root@controller01 ~]#

List the CRUSH rules in use

ceph osd crush rule list

[root@controller01 ~]# ceph osd crush rule list
["replicated_ruleset"
][root@controller01 ~]#

View the details of one CRUSH rule

ceph osd crush rule dump <crush_rule_name> (a name from the output of ceph osd crush rule list)

[root@controller01 ~]# ceph osd crush rule dump   replicated_ruleset
{"rule_id": 0,"rule_name": "replicated_ruleset","ruleset": 0,"type": 1,"min_size": 1,"max_size": 10,"steps": [{"op": "take","item": -1,"item_name": "default"},{"op": "chooseleaf_firstn","num": 0,"type": "host"},{"op": "emit"}]
}[root@controller01 ~]#

Locate an OSD and its position in the CRUSH map

ceph osd find <numeric_osd_id> (locates the host that carries this OSD)

[root@controller01 ~]# ceph osd tree | tail -n3
37   5.45999         osd.37       up  1.00000          1.00000
38   5.45999         osd.38       up  1.00000          1.00000
39   5.45999         osd.39       up  1.00000          1.00000
[root@controller01 ~]#
[root@controller01 ~]# ceph osd find 39
{"osd": 39,"ip": "1.2.3.37:6828\/67281","crush_location": {"host": "stor05","root": "default"}
}
[root@controller01 ~]#

Monitoring PGs

PG stands for Placement Group. A PG is a logical concept: each PG maps onto a set of OSDs. The PG layer exists to distribute data more evenly and to make data easier to locate, as the sketch below shows for a single object.
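
To see the object → PG → OSD chain for one concrete object, ceph osd map computes the placement without reading any data. A minimal sketch, using a hypothetical object name in the volumes pool:

ceph osd map volumes my-test-object
# prints a single line of the form:
# osdmap e11243 pool 'volumes' (3) object 'my-test-object' -> pg 3.xxxxxxxx (3.xx) -> up ([a,b,c], pa) acting ([a,b,c], pa)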

View PG map information

ceph pg dump (large output)

[root@controller01 ~]# ceph pg dump | head -n9
dumped all in format plain
version 171789689
stamp 2023-07-22 07:43:02.180211
last_osdmap_epoch 11243
last_pg_scan 409
full_ratio 0.95
nearfull_ratio 0.85
pg_stat objects mip     degr    misp    unf     bytes   log     disklog state   state_stamp     v       reported        up      up_primary      acting  acting_primary  last_scrub scrub_stamp     last_deep_scrub deep_scrub_stamp
4.1a9   0       0       0       0       0       0       0       0       active+clean    2023-07-21 16:32:13.394875      0'0     11243:2131      [4,14,27]       4       [4,14,27]  4       0'0     2023-07-21 16:32:13.394774      0'0     2023-07-21 16:32:13.394774
2.1af   885     0       0       0       0       6815938086      3015    3015    active+clean    2023-07-18 06:50:32.661846      11243'24356258  11243:21207828  [17,0,24] 17       [17,0,24]       17      11243'24283327  2023-07-18 06:50:32.661597      11243'24143270  2023-07-13 11:49:43.455008
Traceback (most recent call last):
  File "/usr/bin/ceph", line 914, in <module>
    retval = main()
  File "/usr/bin/ceph", line 901, in main
    sys.stdout.write(prefix + outbuf + suffix)
IOError: [Errno 32] Broken pipe
[root@controller01 ~]#

Get PG status

ceph pg stat

[root@controller01 ~]# ceph pg stat
v171789708: 2624 pgs: 10 active+undersized+degraded, 2 active+remapped, 2612 active+clean; 41083 GB data, 120 TB used, 67008 GB / 185 TB avail; 58666 kB/s rd, 13462 kB/s wr, 559 op/s; 21571/30526647 objects degraded (0.071%); 9448/30526647 objects misplaced (0.031%)
[root@controller01 ~]#

View the detailed map of one PG

  • ceph pg map 2.7d (a PG ID taken from the first column of ceph pg dump)
[root@controller01 ~]# ceph pg dump | head -n9
...
4.1a9   0       0       0       0       0       0       0       0       active+clean    2023-07-21 16:32:13.394875      0'0     11243:2131      [4,14,27]       4       [4,14,27]  4       0'0     2023-07-21 16:32:13.394774      0'0     2023-07-21 16:32:13.394774
...
[root@controller01 ~]#
[root@controller01 ~]# ceph pg 4.1a9 query
{"state": "active+clean","snap_trimq": "[]","epoch": 11243,"up": [4,14,27],"acting": [4,14,27],"actingbackfill": ["4","14","27"],"info": {"pgid": "4.1a9","last_update": "0'0","last_complete": "0'0","log_tail": "0'0","last_user_version": 0,"last_backfill": "MAX","purged_snaps": "[]","history": {...
  • The values obtained above may not be very meaningful; the PG IDs in the first column of the following output are the ones worth querying:
[root@controller01 ~]# ceph pg dump_stuck unclean
ok
pg_stat state   up      up_primary      acting  acting_primary
2.7c    active+undersized+degraded      [7,24]  7       [7,24]  7
1.c2    active+undersized+degraded      [2,15]  2       [2,15]  2
4.8a    active+remapped [6,18]  6       [6,18,34]       6
3.e7    active+undersized+degraded      [4,30]  4       [4,30]  4
3.28    active+undersized+degraded      [25,5]  25      [25,5]  25
4.74    active+undersized+degraded      [5,27]  5       [5,27]  5
4.83    active+undersized+degraded      [30,3]  30      [30,3]  30
1.198   active+undersized+degraded      [1,37]  1       [1,37]  1
1.26    active+undersized+degraded      [6,10]  6       [6,10]  6
3.151   active+remapped [5,36]  5       [5,36,38]       5
4.126   active+undersized+degraded      [6,26]  6       [6,26]  6
4.9     active+undersized+degraded      [36,5]  36      [36,5]  36
[root@controller01 ~]#

View stuck PGs

ceph pg dump_stuck unclean
ceph pg dump_stuck inactive
ceph pg dump_stuck stale

[root@controller01 ~]# ceph pg dump_stuck unclean
ok
pg_stat state   up      up_primary      acting  acting_primary
2.7c    active+undersized+degraded      [7,24]  7       [7,24]  7
1.c2    active+undersized+degraded      [2,15]  2       [2,15]  2
4.8a    active+remapped [6,18]  6       [6,18,34]       6
3.e7    active+undersized+degraded      [4,30]  4       [4,30]  4
3.28    active+undersized+degraded      [25,5]  25      [25,5]  25
4.74    active+undersized+degraded      [5,27]  5       [5,27]  5
4.83    active+undersized+degraded      [30,3]  30      [30,3]  30
1.198   active+undersized+degraded      [1,37]  1       [1,37]  1
1.26    active+undersized+degraded      [6,10]  6       [6,10]  6
3.151   active+remapped [5,36]  5       [5,36,38]       5
4.126   active+undersized+degraded      [6,26]  6       [6,26]  6
4.9     active+undersized+degraded      [36,5]  36      [36,5]  36
[root@controller01 ~]#
[root@controller01 ~]# ceph pg dump_stuck inactive
ok
[root@controller01 ~]# ceph pg dump_stuck stale
ok
[root@controller01 ~]#

Show statistics for all PGs in the cluster

ceph pg dump --format plain (large output)

[root@controller01 ~]# ceph pg dump --format plain | head -n9
dumped all in format plain
version 171790029
stamp 2023-07-22 07:48:44.489170
last_osdmap_epoch 11243
last_pg_scan 409
full_ratio 0.95
nearfull_ratio 0.85
pg_stat objects mip     degr    misp    unf     bytes   log     disklog state   state_stamp     v       reported        up      up_primary      acting  acting_primary  last_scrub scrub_stamp     last_deep_scrub deep_scrub_stamp
4.1a9   0       0       0       0       0       0       0       0       active+clean    2023-07-21 16:32:13.394875      0'0     11243:2131      [4,14,27]       4       [4,14,27]  4       0'0     2023-07-21 16:32:13.394774      0'0     2023-07-21 16:32:13.394774
2.1af   885     0       0       0       0       6815938086      3063    3063    active+clean    2023-07-18 06:50:32.661846      11243'24356306  11243:21207876  [17,0,24] 17       [17,0,24]       17      11243'24283327  2023-07-18 06:50:32.661597      11243'24143270  2023-07-13 11:49:43.455008
Traceback (most recent call last):
  File "/usr/bin/ceph", line 914, in <module>
    retval = main()
  File "/usr/bin/ceph", line 901, in main
    sys.stdout.write(prefix + outbuf + suffix)
IOError: [Errno 32] Broken pipe
[root@controller01 ~]#

Recover a lost PG

ceph pg <pg-id> mark_unfound_lost revert (use with caution; see the sketch below)
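
A minimal sketch of the surrounding workflow, assuming a hypothetical PG 2.5 that reports unfound objects. revert rolls the unfound objects back to an earlier version (or forgets them entirely), so treat it as a last resort:

ceph health detail | grep unfound     # find which PGs report unfound objects
ceph pg 2.5 query                     # inspect the PG before giving up on the data
ceph pg 2.5 mark_unfound_lost revert  # last resort, once recovery is truly impossible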

Monitoring MDS

Check MDS status

ceph mds stat

[root@controller01 ~]# ceph mds stat
e1: 0/0/0 up
[root@controller01 ~]#

View metadata server details

ceph mds dump

[root@controller01 ~]# ceph mds dump
dumped mdsmap epoch 1
epoch   1
flags   0
created 0.000000
modified        2018-01-03 04:16:36.106453
tableserver     0
root    0
session_timeout 0
session_autoclose       0
max_file_size   0
last_failure    0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={}
max_mds 0
in
up      {}
failed
stopped
data_pools
metadata_pool   0
inline_data     disabled
[root@controller01 ~]#

List RADOS pools

rados lspools

[root@controller01 ~]# rados lspools
rbd
images
vms
volumes
backups
[root@controller01 ~]#

Remove a pool's application tag

ceph osd pool application disable volumes-app rb (use with caution; see the sketch below)
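
Application tags only exist on Luminous and later; the syntax is ceph osd pool application {enable|disable|get} <pool> <app>. A sketch, assuming the volumes pool tagged for rbd:

ceph osd pool application get volumes                                  # show the pool's current tags
ceph osd pool application enable volumes rbd                           # tag the pool for rbd
ceph osd pool application disable volumes rbd --yes-i-really-mean-it   # remove the tag again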

Create a pool

ceph osd pool create test 96 96 (use with caution; see the sketch below)
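
The two numbers are pg_num and pgp_num. A commonly cited rule of thumb is roughly (number of OSDs × 100) / replica count total PGs shared across all pools, rounded per pool to a power of two. A sketch, assuming the hypothetical test pool:

# 40 OSDs × 100 / 3 replicas ≈ 1333 PGs in total across all pools on this cluster
ceph osd pool create test 96 96   # pg_num and pgp_num for the new pool
ceph osd pool get test size       # check the replica count the pool inherited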

Delete a pool

ceph osd pool delete images images --yes-i-really-really-mean-it (use with caution; see the note below)
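
The pool name must be given twice plus the --yes-i-really-really-mean-it guard. On Luminous and later the monitors additionally refuse deletion unless mon_allow_pool_delete is enabled; a sketch, assuming a throwaway test pool:

ceph tell mon.* injectargs '--mon_allow_pool_delete true'      # Luminous and later only; turn it back off afterwards
ceph osd pool delete test test --yes-i-really-really-mean-it   # never run this against a pool holding live data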

List the objects in a pool

  • rados -p metadata ls (this pool may not exist)
  • List the objects in a pool: rados -p volumes ls (large output)
  • rados -p volumes rm <object> (delete an object; use with caution)
[root@controller01 ~]# rados -p volumes ls | head -n9
rbd_data.20d62ac336b8d7b.0000000000042622
rbd_data.3bfd8321d31b6.00000000001a1cfa
rbd_data.20d56fdb391d80.000000000005ad01
rbd_data.3bfd8321d31b6.0000000000273107
rbd_data.3bfd8321d31b6.00000000001d0e9e
rbd_data.3bfd8321d31b6.0000000000044821
rbd_data.3bfd8321d31b6.00000000000a02d4
rbd_data.20d56fdb391d80.0000000000078be9
rbd_data.3bfd8321d31b6.000000000048bca4
^C
[root@controller01 ~]#

Check cluster space usage

rados df

[root@controller01 ~]# rados df
pool name                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
backups                    0            0            0            0           0            0            0            0            0
images             416776472        50957            0          288           0     17857658   2165587332       723962   1983018508
rbd                        0            0            0            0           0            0            0            0            0
vms               3576148682       476736            0         1036           0  27705085828 7444802170165  42416744640 1682347766253
volumes          39086109068      9647861           22        20247           0    929313077 101805383246  65227643836 651966853229
  total used    128913605856     10175554
  total avail    70263032608
  total space   199176638464
[root@controller01 ~]#

View the OSD map and status

ceph osd stat

[root@controller01 ~]# ceph osd stat
     osdmap e11243: 40 osds: 34 up, 34 in; 3 remapped pgs
[root@controller01 ~]#

List OSD IDs

ceph osd ls

[root@controller01 ~]# ceph osd ls
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
[root@controller01 ~]#

View the OSD tree

ceph osd tree

[root@controller01 ~]# ceph osd tree
ID WEIGHT    TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 218.39966 root default
-2  43.67993     host stor01
 0   5.45999         osd.0        up  1.00000          1.00000
 1   5.45999         osd.1        up  0.84999          1.00000
 2   5.45999         osd.2        up  1.00000          1.00000
 3   5.45999         osd.3        up  1.00000          1.00000
 4   5.45999         osd.4        up  1.00000          1.00000
 5   5.45999         osd.5        up  1.00000          1.00000
 6   5.45999         osd.6        up  1.00000          1.00000
 7   5.45999         osd.7        up  1.00000          1.00000
-3  43.67993     host stor02
 8   5.45999         osd.8        up  1.00000          1.00000
10   5.45999         osd.10       up  1.00000          1.00000
11   5.45999         osd.11       up  1.00000          1.00000
12   5.45999         osd.12     down        0          1.00000
13   5.45999         osd.13       up  0.84999          1.00000
14   5.45999         osd.14       up  1.00000          1.00000
15   5.45999         osd.15       up  0.96999          1.00000
 9   5.45999         osd.9        up  1.00000          1.00000
-4  43.67993     host stor03
16   5.45999         osd.16       up  1.00000          1.00000
17   5.45999         osd.17       up  1.00000          1.00000
18   5.45999         osd.18       up  1.00000          1.00000
19   5.45999         osd.19     down        0          1.00000
20   5.45999         osd.20     down        0          1.00000
21   5.45999         osd.21       up  1.00000          1.00000
22   5.45999         osd.22       up  1.00000          1.00000
23   5.45999         osd.23       up  1.00000          1.00000
-5  43.67993     host stor04
24   5.45999         osd.24       up  0.87000          1.00000
25   5.45999         osd.25       up  1.00000          1.00000
26   5.45999         osd.26       up  0.95000          1.00000
27   5.45999         osd.27       up  1.00000          1.00000
28   5.45999         osd.28       up  1.00000          1.00000
29   5.45999         osd.29       up  0.95000          1.00000
30   5.45999         osd.30       up  1.00000          1.00000
31   5.45999         osd.31     down        0          1.00000
-6  43.67993     host stor05
32   5.45999         osd.32       up  0.89999          1.00000
33   5.45999         osd.33     down        0          1.00000
34   5.45999         osd.34       up  0.87000          1.00000
35   5.45999         osd.35     down        0          1.00000
36   5.45999         osd.36       up  1.00000          1.00000
37   5.45999         osd.37       up  1.00000          1.00000
38   5.45999         osd.38       up  1.00000          1.00000
39   5.45999         osd.39       up  1.00000          1.00000
[root@controller01 ~]#

Check the cluster's monitor map

ceph mon dump

[root@controller01 ~]# ceph mon dump
dumped monmap epoch 1
epoch 1
fsid 1da1773d-0e8d-4c37-a480-92c2b4ea4275
last_changed 0.000000
created 0.000000
0: 1.2.3.33:6789/0 mon.stor01
1: 1.2.3.34:6789/0 mon.stor02
2: 1.2.3.35:6789/0 mon.stor03
[root@controller01 ~]#

Get the cluster's OSD map

ceph osd dump

[root@controller01 ~]# ceph osd dump | tail -n 9
osd.34 up   in  weight 0.869995 up_from 1938 up_thru 11132 down_at 1936 last_clean_interval [340,1937) 1.2.3.37:6808/61984 192.168.100.37:6805/3061984 192.168.100.37:6810/3061984 1.2.3.37:6805/3061984 exists,up eee52fc6-13e9-4d9d-914d-791ebf80ea5b
osd.35 down out weight 0 up_from 1970 up_thru 7081 down_at 7114 last_clean_interval [345,1969) 1.2.3.37:6812/62981 192.168.100.37:6803/3062981 192.168.100.37:6806/3062981 1.2.3.37:6803/3062981 autoout,exists bd0d95dd-2d90-49ce-b6f7-428162c2f0fd
osd.36 up   in  weight 1 up_from 7789 up_thru 10954 down_at 7784 last_clean_interval [350,7784) 1.2.3.37:6816/64001 192.168.100.37:6812/2064001 192.168.100.37:6821/2064001 1.2.3.37:6810/2064001 exists,up 3027ad75-10c9-4c2c-8fa8-76df18e00893
osd.37 up   in  weight 1 up_from 2288 up_thru 11097 down_at 2284 last_clean_interval [354,2286) 1.2.3.37:6820/65046 192.168.100.37:6819/1065046 192.168.100.37:6827/1065046 1.2.3.37:6807/1065046 exists,up 955c0711-edba-4324-ab6c-5910f0e77bd1
osd.38 up   in  weight 1 up_from 2287 up_thru 11178 down_at 2284 last_clean_interval [358,2285) 1.2.3.37:6824/66113 192.168.100.37:6814/1066113 192.168.100.37:6815/1066113 1.2.3.37:6804/1066113 exists,up a9bce0df-6ed9-4a6f-ab7f-64bc01b84b32
osd.39 up   in  weight 1 up_from 2287 up_thru 11218 down_at 2284 last_clean_interval [362,2285) 1.2.3.37:6828/67281 192.168.100.37:6817/1067281 192.168.100.37:6818/1067281 1.2.3.37:6806/1067281 exists,up fb279c38-1eb1-4863-80cc-f56dfd745401
pg_temp 1.198 [1,37,33]
pg_temp 3.151 [5,36,38]
pg_temp 4.8a [6,18,34]
[root@controller01 ~]#

Export the osdmap

ceph osd getmap -o osdmap.bin

[root@controller01 ~]# ceph osd getmap -o osdmap.bin
got osdmap epoch 11243
[root@controller01 ~]#
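
The exported file is binary; osdmaptool can inspect it offline. A minimal sketch, assuming the osdmap.bin exported above and a hypothetical object name:

osdmaptool osdmap.bin --print                                # dump the epoch, pools, and OSD states as text
osdmaptool osdmap.bin --test-map-object my-object --pool 3   # show where a given object would be placed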

Common commands for troubleshooting a Ceph cluster

Use the commands below with caution; if you are not sure what you are doing, treat them as reference material only.

Disable PG rebalancing while investigating a problem

ceph osd set norebalance

Disable backfill when you temporarily don't want data repair to run; usually set together with norecover

ceph osd set nobackfill

Disable recovery when you temporarily don't want data repair to run; usually set together with nobackfill

ceph osd set norecover

Prevent OSDs from being marked down when they flap between up and down

ceph osd set nodown

Prevent OSDs from being marked up when they flap between up and down

ceph osd set noup

Prevent OSDs that stay down for a long time from being marked out automatically

ceph osd set noout

Disable deep scrubbing; undo any flag with the corresponding unset, e.g. ceph osd unset noout

ceph osd set nodeep-scrub
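
A minimal maintenance sketch tying these flags together, assuming a planned reboot of one storage node; every set has a matching unset:

ceph osd set noout         # don't mark OSDs out while the node is down
ceph osd set norebalance   # don't shuffle data around in the meantime
# ... reboot the node and wait for its OSDs to come back up ...
ceph osd unset norebalance
ceph osd unset noout
ceph -s                    # confirm the flags are gone and PGs return to active+clean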

Mark a single OSD out

ceph osd out osd.xx

Mark a single OSD in

ceph osd in osd.xx

Mark a single OSD down

ceph osd down osd.xx

Change the log level of osd.xx at runtime, without restarting the OSD

ceph tell osd.xx injectargs '--debug-osd 20'

Change the log level of a mon at runtime, without restarting the mon

ceph tell mon.xx injectargs '--debug-mon 20'
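
Remember to turn the verbosity back down afterwards. A sketch using osd.39 from the earlier examples; the log path is the packaging default and may differ on your systems:

ceph tell osd.39 injectargs '--debug-osd 20'   # raise verbosity while reproducing the problem
# ... inspect /var/log/ceph/ceph-osd.39.log on the OSD's host ...
ceph tell osd.39 injectargs '--debug-osd 0'    # drop back to a quiet level when done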

osd_recovery_sleep is in seconds; start at 1 to limit server load, then set it back to 0 after observing

ceph tell osd.* injectargs '--osd_recovery_sleep 1'

Adjust the number of concurrent backfills; tune to your actual situation

ceph tell osd.* injectargs '--osd_max_backfills 1'

Adjust the priority of recovery operations

ceph tell osd.* injectargs '--osd_recovery_op_priority 60'

Decode the osdmap to JSON

ceph-dencoder type OSDMap import osdmap_197 decode dump_json
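
A sketch of the whole flow, assuming a hypothetical historical epoch 197; ceph osd getmap can fetch a specific epoch, and ceph-dencoder then decodes the binary blob:

ceph osd getmap 197 -o osdmap_197                                 # export epoch 197 of the osdmap
ceph-dencoder type OSDMap import osdmap_197 decode dump_json > osdmap_197.json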

Official Ceph documentation

Ceph Administration Guide
