I. Cluster
1. Start a ceph daemon
Start the mon daemon:
[root@ceph-adm ~]# service ceph start mon.ceph-mon1

Start the mds daemon:
[root@ceph-adm ~]# service ceph start mds.ceph-mds1

Start the osd daemon:
[root@ceph-adm ~]# service ceph start osd.0

2. Check the cluster health
[root@ceph-adm ~]# ceph health
HEALTH_OK

3. Watch the cluster status in real time
[root@ceph-adm ~]# ceph -w

cluster fdbb34ad-765a-420b-89e8-443aba4254dd
     health HEALTH_WARN
            clock skew detected on mon.ceph-mon3
            84 pgs stale
            84 pgs stuck stale
            5 requests are blocked > 32 sec
            mds cluster is degraded
            Monitor clock skew detected 
     monmap e1: 3 mons at {ceph-mon1=192.168.203.72:6789/0,ceph-mon2=192.168.203.73:6789/0,ceph-mon3=192.168.203.74:6789/0}
            election epoch 22, quorum 0,1,2 ceph-mon1,ceph-mon2,ceph-mon3
     mdsmap e16: 1/1/1 up {0=ceph-mon2=up:replay}, 2 up:standby
     osdmap e56: 2 osds: 2 up, 2 in
      pgmap v137: 596 pgs, 7 pools, 10442 bytes data, 63 objects
            77032 kB used, 30622 MB / 30697 MB avail
                 512 active+clean
                  84 stale+active+clean

2016-08-28 20:42:51.043538 osd.0 [WRN] slow request 120.842103 seconds old, received at 2016-08-28 20:40:50.201254: osd_op(mds.0.3:4 400.00000000 [read 0~0] 2.64e96f8f ack+read+known_if_redirected e49) currently no flag points reached
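
The HEALTH_WARN above includes "clock skew detected on mon.ceph-mon3". A minimal sketch of one common fix, assuming the mon nodes can reach an NTP server (pool.ntp.org below is only a placeholder) and that no other time daemon is managing the clock:
[root@ceph-mon3 ~]# service ntpd stop
[root@ceph-mon3 ~]# ntpdate pool.ntp.org      # one-shot clock sync
[root@ceph-mon3 ~]# service ntpd start        # keep the clock in sync afterwards
[root@ceph-mon3 ~]# ceph health               # the skew warning should clear once the mons re-check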

4. Check the cluster status summary
[root@ceph-adm ~]# ceph -s
    cluster fdbb34ad-765a-420b-89e8-443aba4254dd
     health HEALTH_WARN
            clock skew detected on mon.ceph-mon3
            84 pgs stale
            84 pgs stuck stale
            5 requests are blocked > 32 sec
            mds cluster is degraded
            Monitor clock skew detected 
     monmap e1: 3 mons at {ceph-mon1=192.168.203.72:6789/0,ceph-mon2=192.168.203.73:6789/0,ceph-mon3=192.168.203.74:6789/0}
            election epoch 22, quorum 0,1,2 ceph-mon1,ceph-mon2,ceph-mon3
     mdsmap e16: 1/1/1 up {0=ceph-mon2=up:replay}, 2 up:standby
     osdmap e56: 2 osds: 2 up, 2 in
      pgmap v139: 596 pgs, 7 pools, 10442 bytes data, 63 objects
            77636 kB used, 30622 MB / 30697 MB avail
                 512 active+clean
                  84 stale+active+clean

5. Check ceph storage usage
[root@ceph-adm ~]# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED 
    30697M     30622M       77636k          0.25 
POOLS:
    NAME                ID     USED     %USED     MAX AVAIL     OBJECTS 
    rbd                 0         0         0        15310M           0 
    cephfs_data         1         0         0        15310M           0 
    cephfs_metadata     2      9594         0        15310M          20

6. Remove all ceph packages and data from a node
[root@ceph-adm ~]# ceph-deploy purge ceph-mon1
[root@ceph-adm ~]# ceph-deploy purgedata ceph-mon1

7. Create an admin user and key, and save the keyring under /etc/ceph (either of the two forms below works):
[root@ceph-adm ~]# ceph auth get-or-create client.admin mds 'allow' osd 'allow *' mon 'allow *' > /etc/ceph/ceph.client.admin.keyring

[root@ceph-adm ~]# ceph auth get-or-create client.admin mds 'allow' osd 'allow *' mon 'allow *' -o /etc/ceph/ceph.client.admin.keyring
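
To confirm what was created, the key and its capabilities can be printed back (client.admin is the user created above):
[root@ceph-adm ~]# ceph auth get client.admin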

8. Create a user and key for osd.0
[root@ceph-adm ~]# ceph auth get-or-create osd.0 mon 'allow rwx' osd 'allow *' -o /var/lib/ceph/osd/ceph-0/keyring

9. Create a user and key for an mds daemon (mds.node1 here)
[root@ceph-adm ~]# ceph auth get-or-create mds.node1 mon 'allow rwx' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mds/ceph-node1/keyring

10. List the authentication users and keys in the cluster
[root@ceph-adm ~]# ceph auth list

11. Delete an authentication user from the cluster
[root@ceph-adm ~]# ceph auth del osd.0

12. Show detailed cluster health
[root@ceph-adm ~]# ceph health detail
HEALTH_WARN clock skew detected on mon.ceph-mon3; 84 pgs stale; 84 pgs stuck stale; 5 requests are blocked > 32 sec; 2 osds have slow requests; mds cluster is degraded; Monitor clock skew detected 
pg 0.22 is stuck stale for 13269.231220, current state stale+active+clean, last acting [1,0]
pg 0.21 is stuck stale for 13269.231219, current state stale+active+clean, last acting [1,0]
pg 0.20 is stuck stale for 13269.231222, current state stale+active+clean, last acting [1,0]
pg 0.1f is stuck stale for 13269.231224, current state stale+active+clean, last acting [0,1]

13. Find the directory where the ceph log file lives
[root@ceph-adm ~]# ceph-conf --name mon.ceph-adm --show-config-value log_file
/var/log/ceph/ceph-mon.ceph-adm.log

II. mon
1. Check mon status
[root@ceph-mon1 ~]# ceph mon stat
e1: 3 mons at {ceph-mon1=192.168.203.72:6789/0,ceph-mon2=192.168.203.73:6789/0,ceph-mon3=192.168.203.74:6789/0}, election epoch 22, quorum 0,1,2 ceph-mon1,ceph-mon2,ceph-mon3

2. Check the mon quorum (election) status
[root@ceph-mon1 ~]# ceph quorum_status
{"election_epoch":22,"quorum":[0,1,2],"quorum_names":["ceph-mon1","ceph-mon2","ceph-mon3"],"quorum_leader_name":"ceph-mon1","monmap":{"epoch":1,"fsid":"fdbb34ad-765a-420b-89e8-443aba4254dd","modified":"0.000000","created":"0.000000","mons":[{"rank":0,"name":"ceph-mon1","addr":"192.168.203.72:6789\/0"},{"rank":1,"name":"ceph-mon2","addr":"192.168.203.73:6789\/0"},{"rank":2,"name":"ceph-mon3","addr":"192.168.203.74:6789\/0"}]}}
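
The same output is easier to read pretty-printed; the -f/--format option works on most ceph commands:
[root@ceph-mon1 ~]# ceph quorum_status -f json-pretty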

3. Dump the mon map
[root@ceph-mon1 ~]# ceph mon dump
dumped monmap epoch 1
epoch 1
fsid fdbb34ad-765a-420b-89e8-443aba4254dd
last_changed 0.000000
created 0.000000
0: 192.168.203.72:6789/0 mon.ceph-mon1
1: 192.168.203.73:6789/0 mon.ceph-mon2
2: 192.168.203.74:6789/0 mon.ceph-mon3

4. Remove a mon node
[root@node1 ~]# ceph mon remove node1
removed mon.node1 at 10.39.101.1:6789/0, there are now 3 monitors
2014-07-07 18:11:04.974188 7f4d16bfd700  0 monclient: hunting for new mon

5. Get the current mon map and save it to the file 1.txt
[root@ceph-mon1 ~]# ceph mon getmap -o 1.txt
got monmap epoch 1

6. Inspect the map obtained above
[root@ceph-mon1 ~]# monmaptool --print 1.txt
monmaptool: monmap file 1.txt
epoch 1
fsid fdbb34ad-765a-420b-89e8-443aba4254dd
last_changed 0.000000
created 0.000000
0: 192.168.203.72:6789/0 mon.ceph-mon1
1: 192.168.203.73:6789/0 mon.ceph-mon2
2: 192.168.203.74:6789/0 mon.ceph-mon3

7. Inject the mon map above into a newly added mon node
[root@ceph-mon1 ~]# ceph-mon -i ceph-mon3 --inject-monmap 1.txt
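
Injecting a monmap is only safe while the target mon daemon is stopped, so the full sequence looks roughly like this (service names follow the sysvinit style used elsewhere in this document):
[root@ceph-mon3 ~]# service ceph stop mon.ceph-mon3
[root@ceph-mon3 ~]# ceph-mon -i ceph-mon3 --inject-monmap 1.txt
[root@ceph-mon3 ~]# service ceph start mon.ceph-mon3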

8. Find the mon admin socket
[root@ceph-mon1 ~]# ceph-conf --name mon.ceph-mon1 --show-config-value admin_socket
/var/run/ceph/ceph-mon.ceph-mon1.asok
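
The socket can be queried directly even when the cluster is not reachable over the network; ceph --admin-daemon is the long form of the "ceph daemon" shortcut used in the next item:
[root@ceph-mon1 ~]# ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon1.asok config show | head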

9. Show detailed mon status
[root@ceph-mon1 ~]# ceph daemon mon.ceph-mon1 mon_status
{
    "name": "ceph-mon1",
    "rank": 0,
    "state": "leader",
    "election_epoch": 22,
    "quorum": [
        0,
        1,
        2
    ],
    "outside_quorum": [],
    "extra_probe_peers": [],
    "sync_provider": [],
    "monmap": {
        "epoch": 1,
        "fsid": "fdbb34ad-765a-420b-89e8-443aba4254dd",
        "modified": "0.000000",
        "created": "0.000000",
        "mons": [
            {
                "rank": 0,
                "name": "ceph-mon1",
                "addr": "192.168.203.72:6789\/0"
            },
            {
                "rank": 1,
                "name": "ceph-mon2",
                "addr": "192.168.203.73:6789\/0"
            },
            {
                "rank": 2,
                "name": "ceph-mon3",
                "addr": "192.168.203.74:6789\/0"
            }
        ]
    }
}

10. Remove a mon node
[root@os-node1 ~]# ceph mon remove ceph-mon1
removed mon.ceph-mon1 at 192.168.203.72:6789/0, there are now 3 monitors

III. mds
1. Check mds status
[root@ceph-mon1 ~]# ceph mds dump
dumped mdsmap epoch 16
epoch 16
flags 0
created 2016-08-28 13:35:38.203824
modified 2016-08-28 20:40:50.048995
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 0
last_failure_osd_epoch 49
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table}
max_mds 1
in 0
up {0=44104}
failed
stopped
data_pools 1
metadata_pool 2
inline_data disabled
44098: 192.168.203.72:6800/1163 'ceph-mon1' mds.-1.0 up:standby seq 1
44099: 192.168.203.74:6800/1140 'ceph-mon3' mds.-1.0 up:standby seq 1
44104: 192.168.203.73:6800/1140 'ceph-mon2' mds.0.3 up:replay seq 1

2. Remove an mds node
[root@ceph-mon1 ~]# ceph mds rm 0 mds.ceph-mon1
 mds gid 0 dne

IV. osd
1. Check osd status
[root@ceph-osd1 ~]# ceph osd stat
     osdmap e56: 2 osds: 2 up, 2 in

2. Dump the osd map
[root@ceph-osd1 ~]# ceph osd dump
epoch 56
fsid fdbb34ad-765a-420b-89e8-443aba4254dd
created 2016-08-28 12:27:23.497369
modified 2016-08-28 20:41:10.565810
flags 
pool 0 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 1 'cephfs_data' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 10 pgp_num 10 last_change 12 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 2 'cephfs_metadata' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 10 pgp_num 10 last_change 11 flags hashpspool stripe_width 0
pool 4 '.rgw.root' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 42 flags hashpspool stripe_width 0
pool 5 '.rgw.control' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 50 flags hashpspool stripe_width 0
pool 6 '.rgw' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 52 flags hashpspool stripe_width 0
pool 7 '.rgw.gc' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 53 flags hashpspool stripe_width 0
max_osd 10
osd.0 up   in  weight 1 up_from 47 up_thru 54 down_at 19 last_clean_interval [0,0) 192.168.203.75:6800/1954 192.168.203.75:6801/1954 192.168.203.75:6802/1954 192.168.203.75:6803/1954 exists,up 56d8c516-f3b4-4b84-a4b5-6d6d051f125f
osd.1 up   in  weight 1 up_from 50 up_thru 54 down_at 19 last_clean_interval [0,0) 192.168.203.76:6800/1954 192.168.203.76:6801/1954 192.168.203.76:6802/1954 192.168.203.76:6803/1954 exists,up c4572946-2d7e-4144-8416-c62bbadb6d2b
blacklist 192.168.203.73:6800/1173 expires 2016-08-28 21:04:49.243225

3. View the osd tree
[root@ceph-osd1 ~]# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.01999 root default                                         
-2 0.00999     host ceph-osd1                                   
 0 0.00999         osd.0           up  1.00000          1.00000 
-3 0.00999     host ceph-osd2                                   
 1 0.00999         osd.1           up  1.00000          1.00000

4. Mark an osd down
[root@ceph-osd1 ~]# ceph osd down 0   # mark osd.0 down

5. Remove an osd from the cluster
[root@ceph-osd1 ~]# ceph osd rm 0
removed osd.0

6. Remove an osd from the CRUSH map
[root@ceph-osd1 ~]# ceph osd crush rm osd.0

7. Remove an osd host bucket from the CRUSH map
[root@ceph-osd1 ~]# ceph osd crush rm ceph-osd1
removed item id -2 name 'ceph-osd1' from crush map
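
Taken together, items 4-7 form the usual sequence for permanently retiring an osd. A sketch of the whole procedure for osd.0 (the service command assumes the sysvinit style used above; let the cluster finish rebalancing between steps):
[root@ceph-osd1 ~]# ceph osd out 0              # stop mapping new data to it; the cluster rebalances
[root@ceph-osd1 ~]# service ceph stop osd.0     # stop the daemon on its host
[root@ceph-osd1 ~]# ceph osd crush rm osd.0     # remove it from the CRUSH map
[root@ceph-osd1 ~]# ceph auth del osd.0         # remove its cephx key
[root@ceph-osd1 ~]# ceph osd rm 0               # remove it from the osd map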

8. Check the maximum number of osds
[root@ceph-osd1 ~]# ceph osd getmaxosd
max_osd = 4 in epoch 514           # in this example the maximum is 4 osds

9. Set the maximum number of osds (this value must be raised when adding osd nodes)
[root@ceph-osd1 ~]# ceph osd setmaxosd 10

10. Set an osd's CRUSH weight
ceph osd crush set {id} {weight} [{loc1} [{loc2} ...]]
For example:
[root@ceph-osd1 ~]# ceph osd crush set 1 3.0 host=ceph-osd1
set item id 1 name 'osd.1' weight 3 at location {host=ceph-osd2} to crush map

[root@ceph-osd1 ~]# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 3.00999 root default                                         
-2 0.00999     host ceph-osd1                                   
 0 0.00999         osd.0           up  1.00000          1.00000 
-3 3.00000     host ceph-osd2                                   
 1 3.00000         osd.1           up  1.00000          1.00000

Or use the following method:
[root@ceph-osd1 ~]# ceph osd crush reweight osd.0 1.0
reweighted item id 0 name 'osd.0' to 1 in crush map

[root@ceph-osd1 ~]# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 2.00000 root default                                         
-2 1.00000     host ceph-osd1                                   
 0 1.00000         osd.0           up  1.00000          1.00000 
-3 1.00000     host ceph-osd2                                   
 1 1.00000         osd.1           up  1.00000          1.00000

11. Set an osd's reweight (override weight)
[root@ceph-osd1 ~]# ceph osd reweight 1 0.5
reweighted osd.1 to 0.5 (8000)
[root@ceph-osd1 ~]# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 2.00000 root default                                         
-2 1.00000     host ceph-osd1                                   
 0 1.00000         osd.0           up  1.00000          1.00000 
-3 1.00000     host ceph-osd2                                   
 1 1.00000         osd.1           up  0.50000          1.00000

12. Mark an osd out of the cluster
[root@ceph-osd1 ~]# ceph osd out osd.1
marked out osd.1. 
[root@ceph-osd1 ~]# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 2.00000 root default                                         
-2 1.00000     host ceph-osd1                                   
 0 1.00000         osd.0           up  1.00000          1.00000 
-3 1.00000     host ceph-osd2                                   
 1 1.00000         osd.1           up        0          1.00000  # osd.1's reweight is now 0, so no new data is mapped to it, but the daemon is still alive
 
13. Bring the osd back into the cluster
[root@ceph-osd1 ~]# ceph osd in osd.1
marked in osd.1. 
[root@ceph-osd1 ~]# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 2.00000 root default                                         
-2 1.00000     host ceph-osd1                                   
 0 1.00000         osd.0           up  1.00000          1.00000 
-3 1.00000     host ceph-osd2                                   
 1 1.00000         osd.1           up  1.00000          1.00000

14. Pause the osds (the whole cluster stops accepting I/O)
[root@ceph-osd1 ~]# ceph osd pause
set pauserd,pausewr
      
15. Unpause the osds (the cluster accepts I/O again)
[root@ceph-osd1 ~]# ceph osd unpause
unset pauserd,pausewr
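
For short maintenance on one host, a gentler alternative to pausing all I/O is the noout flag, which keeps stopped osds from being marked out and triggering rebalancing (a sketch; remember to unset the flag afterwards):
[root@ceph-osd1 ~]# ceph osd set noout
[root@ceph-osd1 ~]# service ceph stop osd.0     # do the maintenance
[root@ceph-osd1 ~]# service ceph start osd.0
[root@ceph-osd1 ~]# ceph osd unset noout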

V. Placement groups (PG)
1. Dump PG mapping information
[root@ceph-osd1 ~]# ceph pg dump | more
dumped all in format plain
version 204
stamp 2016-08-28 21:11:57.715242
last_osdmap_epoch 90
last_pg_scan 53
full_ratio 0.95
nearfull_ratio 0.85
pg_stat objects mip degr misp unf bytes log disklog state state_stamp v reported up up_primary acting acting_primary last_scrub scrub_stamp last_deep_scrub deep_scrub_stamp
7.78 0 0 0 0 0 0 0 0 active+clean 2016-08-28 21:11:03.832762 0'0 88:53 [1,0] 1 [1,0] 1 0'0 2016-08-28 20:41:06.761508 0'0 2016-08-28 20:41:06.761508
6.79 0 0 0 0 0 0 0 0 active+clean 2016-08-28 21:11:03.111729 0'0 87:53 [0,1] 0 [0,1] 0 0'0 2016-08-28 20:41:04.937732 0'0 2016-08-28 20:41:04.937732
4.7b 0 0 0 0 0 0 0 0 active+clean 2016-08-28 21:11:03.834298 0'0 88:50 [1,0] 1 [1,0] 1 0'0 2016-08-28 20:38:50.323737 0'0 2016-08-28 20:38:50.323737
5.7a 0 0 0 0 0 0 0 0 active+clean 2016-08-28 21:11:03.816663 0'0 88:38 [1,0] 1 [1,0] 1 0'0 2016-08-28 20:40:51.277112 0'0 2016-08-28 20:40:51.277112
(remaining output omitted)

2. View the map of a single PG
[root@ceph-osd1 ~]# ceph pg map 0.3f
osdmap e90 pg 0.3f (0.3f) -> up [0,1] acting [0,1]

3. Check PG status
[root@ceph-osd1 ~]# ceph pg stat
v206: 596 pgs: 84 stale+active+clean, 512 active+clean; 10442 bytes data, 89652 kB used, 30610 MB / 30697 MB avail

4. Query detailed information for a single PG
[root@ceph-osd1 ~]# ceph pg 7.78 query
{
    "state": "active+clean",
    "snap_trimq": "[]",
    "epoch": 90,
    "up": [
        1,
        0
(remaining output omitted)

5. List stuck PGs
[root@ceph-osd1 ~]# ceph pg dump_stuck unclean
ok
[root@ceph-osd1 ~]# ceph pg dump_stuck inactive
ok
[root@ceph-osd1 ~]# ceph pg dump_stuck stale
ok
pg_stat state up up_primary acting acting_primary
0.22 stale+active+clean [1,0] 1 [1,0] 1
0.21 stale+active+clean [1,0] 1 [1,0] 1
0.20 stale+active+clean [1,0] 1 [1,0] 1
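
When individual PGs report problems such as inconsistency, they can be scrubbed or repaired by id (a hedged sketch; 0.22 is simply the first PG listed above, and repair should only be run once the underlying cause is understood):
[root@ceph-osd1 ~]# ceph pg scrub 0.22        # light consistency check
[root@ceph-osd1 ~]# ceph pg deep-scrub 0.22   # full data scrub
[root@ceph-osd1 ~]# ceph pg repair 0.22       # ask the primary osd to repair inconsistencies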

6. Dump statistics for every PG in the cluster
[root@ceph-osd1 ~]# ceph pg dump --format plain | more
dumped all in format plain
version 208
stamp 2016-08-28 21:15:57.802463
last_osdmap_epoch 90
last_pg_scan 53
full_ratio 0.95
nearfull_ratio 0.85
pg_stat objects mip degr misp unf bytes log disklog state state_stamp v reported up up_primary acting acting_primary last_scrub scrub_stamp last_deep_scrub deep_scrub_stamp
7.78 0 0 0 0 0 0 0 0 active+clean 2016-08-28 21:11:03.832762 0'0 88:53 [1,0] 1 [1,0] 1 0'0 2016-08-28 20:41:06.761508 0'0 2016-08-28 20:41:06.761508
(remaining output omitted)

7. Recover a PG that has unfound (lost) objects
ceph pg {pg-id} mark_unfound_lost revert

8. Show PGs in abnormal states
ceph pg dump_stuck inactive|unclean|stale

VI. pool
1. List the pools in the cluster
[root@ceph-adm ~]# ceph osd lspools
0 rbd,1 cephfs_data,2 cephfs_metadata,4 .rgw.root,5 .rgw.control,6 .rgw,7 .rgw.gc,

2. Create a pool in the cluster
[root@ceph-adm ~]# ceph osd pool create xiao 100  # the 100 here is the number of placement groups (pg_num)
pool 'xiao' created
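
A common rule of thumb for choosing pg_num (an approximation, not a hard rule) is roughly 100 PGs per osd divided by the pool's replica count, rounded up to a power of two and split across the pools:
[root@ceph-adm ~]# echo $(( 2 * 100 / 2 ))    # 2 osds, size 2 -> about 100, so 128 is a reasonable total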

3. Set a quota on a pool
[root@ceph-adm ~]# ceph osd pool set-quota xiao max_objects 10000
set-quota max_objects = 10000 for pool xiao

4. List all pools
[root@ceph-adm ~]# ceph osd pool ls
rbd
cephfs_data
cephfs_metadata
xiao

5. Delete a pool from the cluster
ceph osd pool delete xiao xiao --yes-i-really-really-mean-it  # the pool name must be given twice
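
On newer Ceph releases (Luminous and later) the monitors also refuse pool deletion unless it is explicitly allowed; a hedged sketch of enabling it just for the operation:
[root@ceph-adm ~]# ceph tell mon.* injectargs '--mon-allow-pool-delete=true'
[root@ceph-adm ~]# ceph osd pool delete xiao xiao --yes-i-really-really-mean-it
[root@ceph-adm ~]# ceph tell mon.* injectargs '--mon-allow-pool-delete=false'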

6. Show detailed usage for the pools in the cluster
[root@ceph-adm ~]# rados df
pool name                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
cephfs_data                0            0            0            0           0            0            0            0            0
cephfs_metadata           10           20            0            0           0            0            0           75           52
rbd                        0            0            0            0           0            0            0            0            0
xiao                       0            0            0            0           0            0            0            0            0
  total used           91444           20
  total avail       31343268
  total space       31434712

7. Create a snapshot of a pool
[root@ceph-adm ~]# ceph osd pool mksnap xiao xiao-snap
created pool xiao snap xiao-snap

8. Remove a pool snapshot
[root@ceph-adm ~]# ceph osd pool rmsnap xiao xiao-snap
removed pool xiao snap xiao-snap

9. Get the number of PGs in the xiao pool
[root@ceph-adm ~]# ceph osd pool get xiao pg_num
pg_num: 100

10. Set the xiao pool's maximum storage (target_max_bytes) to 100 TB
[root@ceph-adm ~]# ceph osd pool set xiao target_max_bytes 100000000000000
set pool 0 target_max_bytes to 100000000000000

11. Set the xiao pool's replica count (size) to 3
[root@ceph-adm ~]# ceph osd pool set xiao size 3
set pool 0 size to 3

12. Set the minimum number of replicas the xiao pool needs to accept writes (min_size) to 2
[root@ceph-adm ~]# ceph osd pool set xiao min_size 2
set pool 0 min_size to 2

13. View the replica size of every pool in the cluster
[root@ceph-adm ~]# ceph osd dump | grep 'replicated size'
pool 0 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 1 'cephfs_data' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 10 pgp_num 10 last_change 12 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 2 'cephfs_metadata' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 10 pgp_num 10 last_change 11 flags hashpspool stripe_width 0
pool 8 'xiao' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 104 flags hashpspool max_objects 10000 stripe_width 0

14. Set a pool's pg_num
[root@ceph-adm ~]# ceph osd pool set xiao pg_num 100
set pool 0 pg_num to 100

15. Set a pool's pgp_num
[root@admin ~]# ceph osd pool set xiao pgp_num 100
set pool 0 pgp_num to 100

VII. rados and rbd commands
1. Using the rados command
(1) List the pools in the cluster (names only)
[root@ceph-adm ~]# rados lspools
rbd
cephfs_data
cephfs_metadata
xiao

(2) List the pools along with each pool's capacity and utilization
[root@ceph-adm ~]# rados df 
pool name                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
cephfs_data                0            0            0            0           0            0            0            0            0
cephfs_metadata           10           20            0            0           0            0            0           75           52
rbd                        0            0            0            0           0            0            0            0            0
xiao                       0            0            0            0           0            0            0            0            0
  total used           73960           20
  total avail       31360752
  total space       31434712

(3) Create a pool
[root@ceph-adm ~]# rados mkpool test
successfully created pool test

(4) List the objects in a pool (here the objects are the chunks that back rbd block storage)
[root@node-44 ~]# rados ls -p volumes | more
rbd_data.348f21ba7021.0000000000000866
rbd_data.32562ae8944a.0000000000000c79
rbd_data.589c2ae8944a.00000000000031ba
rbd_data.58c9151ff76b.00000000000029af
rbd_data.58c9151ff76b.0000000000002c19
rbd_data.58c9151ff76b.0000000000000a5a
rbd_data.58c9151ff76b.0000000000001c69
rbd_data.58c9151ff76b.000000000000281d
rbd_data.58c9151ff76b.0000000000002de1
rbd_data.58c9151ff76b.0000000000002dae

(5) Create an object
[root@ceph-adm ~]# rados create test-object -p test

[root@ceph-adm ~]# rados -p test ls
test-object

(6) Delete an object
[root@ceph-adm ~]# rados rm test-object -p test
[root@ceph-adm ~]# rados -p test ls
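
Whole files can also be stored and fetched as objects (a sketch; /etc/hosts and the object name hosts-copy are arbitrary examples):
[root@ceph-adm ~]# rados put hosts-copy /etc/hosts -p test       # upload a file as an object
[root@ceph-adm ~]# rados get hosts-copy /tmp/hosts.out -p test   # read it back into a local file
[root@ceph-adm ~]# rados rm hosts-copy -p test                   # clean up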

2. Using the rbd command
(1) List all images in a pool
[root@ceph-osd2 ~]# rbd ls images
2014-05-24 17:17:37.043659 7f14caa6e700  0 -- :/1025604 >> 10.49.101.9:6789/0 pipe(0x6c5400 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x6c5660).fault
2182d9ac-52f4-4f5d-99a1-ab3ceacbf0b9
34e1a475-5b11-410c-b4c4-69b5f780f03c

[root@ceph-osd2 ~]# rbd ls volumes
2014-05-24 17:22:18.649929 7f9e98733700  0 -- :/1010725 >> 10.49.101.9:6789/0 pipe(0x96a400 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x96a660).fault
volume-0788fc6c-0dd4-4339-bad4-e9d78bd5365c
volume-0898c5b4-4072-4cae-affc-ec59c2375c51

(2) Show information about an image in a pool
[root@ceph-osd2 ~]# rbd info -p images --image 74cb427c-cee9-47d0-b467-af217a67e60a
rbd image '74cb427c-cee9-47d0-b467-af217a67e60a':
        size 1048 MB in 131 objects
        order 23 (8192 KB objects)
        block_name_prefix: rbd_data.95c7783fc0d0
        format: 2
        features: layering
        
(3) Create a 10000 MB image named zhanguo in the test pool
[root@ceph-osd2 ~]# rbd create -p test --size 10000 zhanguo
[root@ceph-osd2 ~]# rbd -p test info zhanguo    # show the newly created image
rbd image 'zhanguo':
        size 10000 MB in 2500 objects
        order 22 (4096 KB objects)
        block_name_prefix: rb.0.127d2.2ae8944a
        format: 1
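
To use the image as a block device on a client, it can be mapped through the kernel rbd module and formatted (a sketch; /dev/rbd0 is only the typical device name, and the client needs the rbd kernel module plus a readable keyring):
[root@ceph-osd2 ~]# rbd map test/zhanguo        # prints the mapped device, e.g. /dev/rbd0
[root@ceph-osd2 ~]# rbd showmapped              # list current mappings
[root@ceph-osd2 ~]# mkfs.ext4 /dev/rbd0
[root@ceph-osd2 ~]# mount /dev/rbd0 /mnt
[root@ceph-osd2 ~]# umount /mnt && rbd unmap /dev/rbd0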

(4) Delete an image
[root@ceph-osd2 ~]# rbd rm -p test lizhanguo
Removing image: 100% complete...done.

(5) Resize an image
[root@ceph-osd2 ~]# rbd resize -p test --size 20000 zhanguo
Resizing image: 100% complete...done.
[root@ceph-osd2 ~]# rbd -p test info zhanguo   # image size after resizing
rbd image 'zhanguo':
        size 20000 MB in 5000 objects
        order 22 (4096 KB objects)
        block_name_prefix: rb.0.127d2.2ae8944a
        format: 1
 
(6) Create a snapshot of an image
[root@ceph-osd2 ~]# rbd snap create test/zhanguo@zhanguo123  # pool/image@snapshot
[root@ceph-osd2 ~]# rbd snap ls -p test zhanguo
SNAPID NAME           SIZE 
     2 zhanguo123 20000 MB

[root@ceph-osd2 ~]# rbd info test/zhanguo@zhanguo123
rbd image 'zhanguo':
        size 20000 MB in 5000 objects
        order 22 (4096 KB objects)
        block_name_prefix: rb.0.127d2.2ae8944a
        format: 1
        protected: False
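
A snapshot can also be rolled back to, which reverts the image to the snapshot's contents (a sketch using the snapshot created above; rollback overwrites the image's current data):
[root@ceph-osd2 ~]# rbd snap rollback test/zhanguo@zhanguo123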

(7) List the snapshots of an image
[root@ceph-osd2 ~]# rbd snap ls -p volumes volume-7687988d-16ef-4814-8a2c-3fbd85e928e4
SNAPID NAME                                               SIZE 
     5 snapshot-ee7862aa-825e-4004-9587-879d60430a12 102400 MB 
     
(8) Delete a single snapshot of an image
The argument format is <pool containing the snapshot>/<image>@<snapshot>:
[root@ceph-osd2 ~]# rbd snap rm volumes/volume-7687988d-16ef-4814-8a2c-3fbd85e928e4@snapshot-ee7862aa-825e-4004-9587-879d60430a12
rbd: snapshot 'snapshot-60586eba-b0be-4885-81ab-010757e50efb' is protected from removal.
2014-08-18 19:23:42.099301 7fd0245ef760 -1 librbd: removing snapshot from header failed: (16) Device or resource busy
The error above means the snapshot is write-protected; remove the protection first and then delete the snapshot:
[root@ceph-osd2 ~]# rbd snap unprotect volumes/volume-7687988d-16ef-4814-8a2c-3fbd85e928e4@snapshot-ee7862aa-825e-4004-9587-879d60430a12
[root@ceph-osd2 ~]# rbd snap rm volumes/volume-7687988d-16ef-4814-8a2c-3fbd85e928e4@snapshot-ee7862aa-825e-4004-9587-879d60430a12

(9) Delete all snapshots of an image
[root@ceph-osd2 ~]# rbd snap purge -p volumes volume-7687988d-16ef-4814-8a2c-3fbd85e928e4
Removing all snapshots: 100% complete...done.

(10) Export an image from a ceph pool
Export an image:
[root@ceph-osd2 ~]# rbd export -p images --image 74cb427c-cee9-47d0-b467-af217a67e60a /root/aaa.img
2014-05-24 17:16:15.197695 7ffb47a9a700  0 -- :/1020493 >> 10.49.101.9:6789/0 pipe(0x1368400 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x1368660).fault
Exporting image: 100% complete...done.

Export a cloud volume:
[root@ceph-osd2 ~]# rbd export -p volumes --image volume-470fee37-b950-4eef-a595-d7def334a5d6 /var/lib/glance/ceph-pool/volumes/Message-JiaoBenJi-10.40.212.24
2014-05-24 17:28:18.940402 7f14ad39f700  0 -- :/1032237 >> 10.49.101.9:6789/0 pipe(0x260a400 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x260a660).fault
Exporting image: 100% complete...done.

(11) Import an image into ceph (an image imported directly like this is not usable by OpenStack, because it was not created through OpenStack and OpenStack cannot see it)
[root@ceph-osd2 ~]# rbd import /root/aaa.img -p images --image 74cb427c-cee9-47d0-b467-af217a67e60a
Importing image: 100% complete...done.
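
For incremental backups, rbd can export only the changes made since a snapshot and replay them onto another image (a sketch; images/myimage, backup/myimage and the snapshot names are arbitrary examples, and the target image must already carry the starting snapshot):
[root@ceph-osd2 ~]# rbd export-diff --from-snap snap1 images/myimage@snap2 /root/delta.diff
[root@ceph-osd2 ~]# rbd import-diff /root/delta.diff backup/myimage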

Reposted from: https://www.cnblogs.com/ityunv/p/5909425.html
