As a distributed storage system, Ceph is made up of several kinds of daemons, such as OSDs, MONs, and MDSs. Each type plays its own role, and each can be deployed across multiple nodes for high availability. Monitoring a Ceph cluster therefore means watching both the overall cluster state and the state of each component.
Using the ceph command, you can retrieve cluster information from any mon, osd, or mds node.
For example:
If the configuration file and keyring are not in their default locations, or the cluster name is not ceph, remember to supply the paths explicitly:
ceph -c /path/to/conf -k /path/to/keyring
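
For instance, for a cluster named mycluster whose files live outside the defaults, an invocation could look like the following (the cluster name and paths here are hypothetical placeholders):

# Hypothetical cluster name and paths -- adjust to your environment
ceph --cluster mycluster \
     -c /etc/ceph/mycluster.conf \
     -k /etc/ceph/mycluster.client.admin.keyring \
     health
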
An example of the ceph command's interactive mode:

ceph> health
HEALTH_OK
ceph> status
    cluster f649b128-963c-4802-ae17-5a76f36c4c76
     health HEALTH_OK
     monmap e8: 5 mons at {mon1=172.17.0.2:6789/0,mon2=172.17.0.3:6789/0,mon3=172.17.0.4:6789/0,mon4=172.17.0.9:6789/0,mon5=172.17.0.10:6789/0}, election epoch 38, quorum 0,1,2,3,4 mon1,mon2,mon3,mon4,mon5
     osdmap e29: 4 osds: 4 up, 4 in
      pgmap v17442: 128 pgs, 1 pools, 0 bytes data, 0 objects
            42197 MB used, 1595 GB / 1636 GB avail
                 128 active+clean
ceph> quorum_status
{"election_epoch":38,"quorum":[0,1,2,3,4],"quorum_names":["mon1","mon2","mon3","mon4","mon5"],"quorum_leader_name":"mon1","monmap":{"epoch":8,"fsid":"f649b128-963c-4802-ae17-5a76f36c4c76","modified":"2014-12-10 11:52:43.961159","created":"2014-12-09 15:24:33.339570","mons":[{"rank":0,"name":"mon1","addr":"172.17.0.2:6789\/0"},{"rank":1,"name":"mon2","addr":"172.17.0.3:6789\/0"},{"rank":2,"name":"mon3","addr":"172.17.0.4:6789\/0"},{"rank":3,"name":"mon4","addr":"172.17.0.9:6789\/0"},{"rank":4,"name":"mon5","addr":"172.17.0.10:6789\/0"}]}}ceph> mon_status
{"name":"mon2","rank":1,"state":"peon","election_epoch":38,"quorum":[0,1,2,3,4],"outside_quorum":[],"extra_probe_peers":[],"sync_provider":[],"monmap":{"epoch":8,"fsid":"f649b128-963c-4802-ae17-5a76f36c4c76","modified":"2014-12-10 11:52:43.961159","created":"2014-12-09 15:24:33.339570","mons":[{"rank":0,"name":"mon1","addr":"172.17.0.2:6789\/0"},{"rank":1,"name":"mon2","addr":"172.17.0.3:6789\/0"},{"rank":2,"name":"mon3","addr":"172.17.0.4:6789\/0"},{"rank":3,"name":"mon4","addr":"172.17.0.9:6789\/0"},{"rank":4,"name":"mon5","addr":"172.17.0.10:6789\/0"}]}}

Monitoring cluster status:

ceph> quit
[root@mon1 ~]# ceph -w
    cluster f649b128-963c-4802-ae17-5a76f36c4c76
     health HEALTH_OK
     monmap e8: 5 mons at {mon1=172.17.0.2:6789/0,mon2=172.17.0.3:6789/0,mon3=172.17.0.4:6789/0,mon4=172.17.0.9:6789/0,mon5=172.17.0.10:6789/0}, election epoch 38, quorum 0,1,2,3,4 mon1,mon2,mon3,mon4,mon5
     osdmap e29: 4 osds: 4 up, 4 in
      pgmap v17444: 128 pgs, 1 pools, 0 bytes data, 0 objects
            42199 MB used, 1595 GB / 1636 GB avail
                 128 active+clean

2014-12-15 14:35:07.694309 mon.0 [INF] pgmap v17444: 128 pgs: 128 active+clean; 0 bytes data, 42199 MB used, 1595 GB / 1636 GB avail
2014-12-15 14:36:08.924492 mon.0 [INF] pgmap v17445: 128 pgs: 128 active+clean; 0 bytes data, 42201 MB used, 1595 GB / 1636 GB avail
2014-12-15 14:36:10.969529 mon.0 [INF] pgmap v17446: 128 pgs: 128 active+clean; 0 bytes data, 42203 MB used, 1595 GB / 1636 GB avail

The output includes:

Cluster ID
Cluster health status
The monitor map epoch and the status of the monitor quorum
The OSD map epoch and the status of OSDs
The placement group map version
The number of placement groups and pools
The notional amount of data stored and the number of objects stored; and,
The total amount of data stored.
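
When tailing the cluster log with ceph -w, the watch can also be restricted to higher-severity entries; the flags below appear in the ceph --help output quoted at the end of this post:

# Only warning-level and above cluster log entries
ceph --watch-warn
# Only error-level cluster log entries
ceph --watch-error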

Checking the cluster's capacity usage:

[root@mon1 ~]# ceph df
GLOBAL:
    SIZE      AVAIL     RAW USED     %RAW USED
    1636G     1595G     42206M       2.52
POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS
    rbd      0      0        0         797G          0
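
As a quick sanity check, %RAW USED is simply RAW USED divided by SIZE; using the figures above (and treating 1 GB as 1024 MB):

# RAW USED / SIZE from the `ceph df` output above
echo "scale=6; 42206 / (1636 * 1024) * 100" | bc
# prints roughly 2.52, matching the %RAW USED column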

Checking the cluster status:

ceph status
Or:
ceph -s
In interactive mode, type status and press Enter.
ceph> status

[root@mon1 ~]# ceph status
    cluster f649b128-963c-4802-ae17-5a76f36c4c76
     health HEALTH_OK
     monmap e8: 5 mons at {mon1=172.17.0.2:6789/0,mon2=172.17.0.3:6789/0,mon3=172.17.0.4:6789/0,mon4=172.17.0.9:6789/0,mon5=172.17.0.10:6789/0}, election epoch 38, quorum 0,1,2,3,4 mon1,mon2,mon3,mon4,mon5
     osdmap e29: 4 osds: 4 up, 4 in
      pgmap v17454: 128 pgs, 1 pools, 0 bytes data, 0 objects
            42210 MB used, 1595 GB / 1636 GB avail
                 128 active+clean
[root@mon1 ~]# ceph -s
    cluster f649b128-963c-4802-ae17-5a76f36c4c76
     health HEALTH_OK
     monmap e8: 5 mons at {mon1=172.17.0.2:6789/0,mon2=172.17.0.3:6789/0,mon3=172.17.0.4:6789/0,mon4=172.17.0.9:6789/0,mon5=172.17.0.10:6789/0}, election epoch 38, quorum 0,1,2,3,4 mon1,mon2,mon3,mon4,mon5
     osdmap e29: 4 osds: 4 up, 4 in
      pgmap v17454: 128 pgs, 1 pools, 0 bytes data, 0 objects
            42210 MB used, 1595 GB / 1636 GB avail
                 128 active+clean
[root@mon1 ~]# ceph
ceph> status
    cluster f649b128-963c-4802-ae17-5a76f36c4c76
     health HEALTH_OK
     monmap e8: 5 mons at {mon1=172.17.0.2:6789/0,mon2=172.17.0.3:6789/0,mon3=172.17.0.4:6789/0,mon4=172.17.0.9:6789/0,mon5=172.17.0.10:6789/0}, election epoch 38, quorum 0,1,2,3,4 mon1,mon2,mon3,mon4,mon5
     osdmap e29: 4 osds: 4 up, 4 in
      pgmap v17454: 128 pgs, 1 pools, 0 bytes data, 0 objects
            42210 MB used, 1595 GB / 1636 GB avail
                 128 active+clean
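
Since ceph health condenses all of this into a single keyword, a minimal external check (for cron or a monitoring agent) only needs to look at that keyword. A rough sketch, assuming the admin keyring is readable by the user running it:

#!/bin/bash
# Minimal health-check sketch; exit codes follow the usual OK/WARN/CRIT convention.
STATUS=$(ceph health 2>/dev/null)
case "$STATUS" in
    HEALTH_OK*)   exit 0 ;;                        # everything fine
    HEALTH_WARN*) echo "WARN: $STATUS"; exit 1 ;;  # degraded but still serving I/O
    *)            echo "CRIT: $STATUS"; exit 2 ;;  # HEALTH_ERR or no answer at all
esac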

Checking OSD status:

ceph osd stat
Or:
ceph osd dump
You can also view OSDs according to their position in the CRUSH map.
ceph osd tree

Sample output:

[root@mon1 ~]# ceph osd dump
epoch 29
fsid f649b128-963c-4802-ae17-5a76f36c4c76
created 2014-12-09 15:47:07.241109
modified 2014-12-09 16:08:25.501726
flags
pool 0 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 28 flags hashpspool stripe_width 0
max_osd 4
osd.0 up   in  weight 1 up_from 24 up_thru 28 down_at 0 last_clean_interval [0,0) 172.17.0.5:6800/435 172.18.0.5:6800/435 172.18.0.5:6801/435 172.17.0.5:6801/435 exists,up 854777b2-c188-4509-9df4-02f57bd17e12
osd.1 up   in  weight 1 up_from 22 up_thru 28 down_at 0 last_clean_interval [0,0) 172.17.0.6:6800/474 172.18.0.6:6800/474 172.18.0.6:6801/474 172.17.0.6:6801/474 exists,up d13959a4-a3fc-4589-91cd-7104fcd8dbe9
osd.2 up   in  weight 1 up_from 20 up_thru 28 down_at 0 last_clean_interval [0,0) 172.17.0.7:6800/447 172.18.0.7:6800/447 172.18.0.7:6801/447 172.17.0.7:6801/447 exists,up d28bdb68-e8ee-4ca8-be5d-7e86438e7663
osd.3 up   in  weight 1 up_from 18 up_thru 28 down_at 0 last_clean_interval [0,0) 172.17.0.8:6800/653 172.18.0.8:6800/653 172.18.0.8:6801/653 172.17.0.8:6801/653 exists,up c2e86146-fedc-4486-8d2f-ead6f473e841
[root@mon1 ~]# ceph osd tree
# id    weight  type name       up/down reweight
-1      4       root default
-2      1               host osd1
0       1                       osd.0   up      1
-3      1               host osd2
1       1                       osd.1   up      1
-4      1               host osd3
2       1                       osd.2   up      1
-5      1               host osd4
3       1                       osd.3   up      1
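
A simple way to script this is to compare the three counters printed by ceph osd stat (total, up, in) and fall back to ceph osd tree to see which OSD is affected. A rough sketch, assuming the "X osds: Y up, Z in" wording shown above:

#!/bin/bash
# Alert when any OSD is down or out, based on the `ceph osd stat` summary line.
read TOTAL UP IN <<< "$(ceph osd stat | sed 's/.*: \([0-9]*\) osds: \([0-9]*\) up, \([0-9]*\) in.*/\1 \2 \3/')"
if [ "$UP" != "$TOTAL" ] || [ "$IN" != "$TOTAL" ]; then
    echo "OSD problem: $UP/$TOTAL up, $IN/$TOTAL in"
    ceph osd tree | grep -w down    # show which OSDs are down, if any
    exit 1
fi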

Monitoring MON status:

ceph mon stat
Or:
ceph mon dump
To check the quorum status for the monitor cluster, execute the following:
ceph quorum_status

Sample output:

[root@mon1 ~]# ceph mon stat
e8: 5 mons at {mon1=172.17.0.2:6789/0,mon2=172.17.0.3:6789/0,mon3=172.17.0.4:6789/0,mon4=172.17.0.9:6789/0,mon5=172.17.0.10:6789/0}, election epoch 38, quorum 0,1,2,3,4 mon1,mon2,mon3,mon4,mon5
[root@mon1 ~]# ceph mon dump
dumped monmap epoch 8
epoch 8
fsid f649b128-963c-4802-ae17-5a76f36c4c76
last_changed 2014-12-10 11:52:43.961159
created 2014-12-09 15:24:33.339570
0: 172.17.0.2:6789/0 mon.mon1
1: 172.17.0.3:6789/0 mon.mon2
2: 172.17.0.4:6789/0 mon.mon3
3: 172.17.0.9:6789/0 mon.mon4
4: 172.17.0.10:6789/0 mon.mon5
[root@mon1 ~]# ceph quorum_status
{"election_epoch":38,"quorum":[0,1,2,3,4],"quorum_names":["mon1","mon2","mon3","mon4","mon5"],"quorum_leader_name":"mon1","monmap":{"epoch":8,"fsid":"f649b128-963c-4802-ae17-5a76f36c4c76","modified":"2014-12-10 11:52:43.961159","created":"2014-12-09 15:24:33.339570","mons":[{"rank":0,"name":"mon1","addr":"172.17.0.2:6789\/0"},{"rank":1,"name":"mon2","addr":"172.17.0.3:6789\/0"},{"rank":2,"name":"mon3","addr":"172.17.0.4:6789\/0"},{"rank":3,"name":"mon4","addr":"172.17.0.9:6789\/0"},{"rank":4,"name":"mon5","addr":"172.17.0.10:6789\/0"}]}}

Monitoring MDS status:

[root@mon1 ~]# ceph mds stat
e1: 0/0/0 up
[root@mon1 ~]# ceph mds dump
dumped mdsmap epoch 1
epoch   1
flags   0
created 0.000000
modified        2014-12-09 15:47:07.240898
tableserver     0
root    0
session_timeout 0
session_autoclose       0
max_file_size   0
last_failure    0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={}
max_mds 0
in
up      {}
failed
stopped
data_pools
metadata_pool   0
inline_data     disabled
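
This demo cluster runs no MDS, which is why the map is empty. On a cluster that actually serves CephFS, a minimal check could count the active MDS daemons reported by ceph mds stat (its summary contains up:active markers there) and compare against what you expect; the expected count below is an assumption for illustration:

# Rough sketch: expect at least one active MDS on a CephFS cluster
EXPECTED=1                                        # assumption for illustration
UP=$(ceph mds stat | grep -o 'up:active' | wc -l) # count active MDS daemons
if [ "$UP" -lt "$EXPECTED" ]; then
    echo "CephFS: only $UP active MDS daemon(s), expected $EXPECTED"
fi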

Checking placement group status:

[root@mon1 ~]# ceph pg dump
dumped all in format plain
version 17466
stamp 2014-12-15 14:46:11.079706
last_osdmap_epoch 29
last_pg_scan 26
full_ratio 0.95
nearfull_ratio 0.85
pg_stat objects mip     degr    misp    unf     bytes   log     disklog state   state_stamp     v       reported        up      up_primary  acting  acting_primary  last_scrub      scrub_stamp     last_deep_scrub deep_scrub_stamp
0.2d    0       0       0       0       0       0       0       0       active+clean    2014-12-14 16:09:51.515740      0'0     29:33       [1,3]   1       [1,3]   1       0'0     2014-12-14 16:09:51.515668      0'0     2014-12-09 16:05:52.202755
................

ceph pg map {pool-num}.{pg-id}

[root@mon1 ~]# ceph pg map 0.2d
osdmap e29 pg 0.2d (0.2d) -> up [1,3] acting [1,3]

ceph pg {poolnum}.{pg-id} query

ceph pg dump -o {filename} --format=json

IDENTIFYING TROUBLED PGS
ceph pg dump_stuck [unclean|inactive|stale]

FINDING AN OBJECT LOCATION
ceph osd map {poolname} {object-name}
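
For example, to find where a (hypothetical) object named test-object in the rbd pool shown earlier would be placed:

# Hypothetical object name; the output reports the PG and the acting OSD set,
# in the same style as the `ceph pg map` output above
ceph osd map rbd test-object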

You can also connect to a Ceph daemon through its admin socket and query that specific daemon directly, for example:

ceph --admin-daemon {/path/to/admin/socket} config show

[root@mon1 ~]# ceph --admin-daemon /var/run/ceph/ceph-mon.mon1.asok config show
{ "name": "mon.mon1","cluster": "ceph","debug_none": "0\/5","debug_lockdep": "0\/1","debug_context": "0\/1","debug_crush": "1\/1","debug_mds": "1\/5","debug_mds_balancer": "1\/5","debug_mds_locker": "1\/5",
............
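
config show is only one of the admin-socket commands; help lists everything the connected daemon supports, and the exact set varies by daemon type and Ceph version, so verify with help before scripting against them:

# List the commands this daemon's admin socket understands
ceph --admin-daemon /var/run/ceph/ceph-mon.mon1.asok help

# Dump the daemon's internal performance counters
ceph --admin-daemon /var/run/ceph/ceph-mon.mon1.asok perf dump
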
[References]
1. http://ceph.com/docs/master/rados/operations/monitoring/
2. http://docs.ceph.com/docs/master/rados/operations/monitoring-osd-pg/
3. ceph -h

[root@mon1 ~]# ceph --help
General usage:
==============
usage: ceph [-h] [-c CEPHCONF] [-i INPUT_FILE] [-o OUTPUT_FILE]
            [--id CLIENT_ID] [--name CLIENT_NAME] [--cluster CLUSTER]
            [--admin-daemon ADMIN_SOCKET] [--admin-socket ADMIN_SOCKET_NOPE]
            [-s] [-w] [--watch-debug] [--watch-info] [--watch-sec]
            [--watch-warn] [--watch-error] [--version] [--verbose] [--concise]
            [-f {json,json-pretty,xml,xml-pretty,plain}]
            [--connect-timeout CLUSTER_TIMEOUT]

Ceph administration tool

optional arguments:
  -h, --help            request mon help
  -c CEPHCONF, --conf CEPHCONF
                        ceph configuration file
  -i INPUT_FILE, --in-file INPUT_FILE
                        input file
  -o OUTPUT_FILE, --out-file OUTPUT_FILE
                        output file
  --id CLIENT_ID, --user CLIENT_ID
                        client id for authentication
  --name CLIENT_NAME, -n CLIENT_NAME
                        client name for authentication
  --cluster CLUSTER     cluster name
  --admin-daemon ADMIN_SOCKET
                        submit admin-socket commands ("help" for help)
  --admin-socket ADMIN_SOCKET_NOPE
                        you probably mean --admin-daemon
  -s, --status          show cluster status
  -w, --watch           watch live cluster changes
  --watch-debug         watch debug events
  --watch-info          watch info events
  --watch-sec           watch security events
  --watch-warn          watch warn events
  --watch-error         watch error events
  --version, -v         display version
  --verbose             make verbose
  --concise             make less verbose
  -f {json,json-pretty,xml,xml-pretty,plain}, --format {json,json-pretty,xml,xml-pretty,plain}
  --connect-timeout CLUSTER_TIMEOUT
                        set a timeout for connecting to the cluster

Monitor commands:
=================
[Contacting monitor, timeout after 5 seconds]
auth add <entity> {<caps> [<caps>...]}   add auth info for <entity> from input file, or random key if no input is given, and/or any caps specified in the command
auth caps <entity> <caps> [<caps>...]    update caps for <name> from caps specified in the command
auth del <entity>                        delete all caps for <name>
auth export {<entity>}                   write keyring for requested entity, or master keyring if none given
auth get <entity>                        write keyring file with requested key
auth get-key <entity>                    display requested key
auth get-or-create <entity> {<caps> [<caps>...]}    add auth info for <entity> from input file, or random key if no input given, and/or any caps specified in the command
auth get-or-create-key <entity> {<caps> [<caps>...]}    get, or add, key for <name> from system/caps pairs specified in the command. If key already exists, any given caps must match the existing caps for that key.
auth import                              auth import: read keyring file from -i <file>
auth list                                list authentication state
auth print-key <entity>                  display requested key
auth print_key <entity>                  display requested key
compact                                  cause compaction of monitor's leveldb storage
config-key del <key>                     delete <key>
config-key exists <key>                  check for <key>'s existence
config-key get <key>                     get <key>
config-key list                          list keys
config-key put <key> {<val>}             put <key>, value <val>
df {detail}                              show cluster free space stats
fs ls                                    list filesystems
fs new <fs_name> <metadata> <data>       make new filesystem using named pools <metadata> and <data>
fs rm <fs_name> {--yes-i-really-mean-it} disable the named filesystem
fsid                                     show cluster FSID/UUID
health {detail}                          show cluster health
heap dump|start_profiler|stop_profiler|release|stats    show heap usage info (available only if compiled with tcmalloc)
injectargs <injected_args> [<injected_args>...]    inject config arguments into monitor
log <logtext> [<logtext>...]             log supplied text to the monitor log
mds add_data_pool <pool>                 add data pool <pool>
mds cluster_down                         take MDS cluster down
mds cluster_up                           bring MDS cluster up
mds compat rm_compat <int[0-]>           remove compatible feature
mds compat rm_incompat <int[0-]>         remove incompatible feature
mds compat show                          show mds compatibility settings
mds deactivate <who>                     stop mds
mds dump {<int[0-]>}                     dump info, optionally from epoch
mds fail <who>                           force mds to status failed
mds getmap {<int[0-]>}                   get MDS map, optionally from epoch
mds newfs <int[0-]> <int[0-]> {--yes-i-really-mean-it}    make new filesystem using pools <metadata> and <data>
mds remove_data_pool <pool>              remove data pool <pool>
mds rm <int[0-]> <name (type.id)>        remove nonactive mds
mds rmfailed <int[0-]>                   remove failed mds
mds set max_mds|max_file_size|allow_new_snaps|inline_data <val> {<confirm>}    set mds parameter <var> to <val>
mds set_max_mds <int[0-]>                set max MDS index
mds set_state <int[0-]> <int[0-20]>      set mds state of <gid> to <numeric-state>
mds setmap <int[0-]>                     set mds map; must supply correct epoch number
mds stat                                 show MDS status
mds stop <who>                           stop mds
mds tell <who> <args> [<args>...]        send command to particular mds
mon add <name> <IPaddr[:port]>           add new monitor named <name> at <addr>
mon dump {<int[0-]>}                     dump formatted monmap (optionally from epoch)
mon getmap {<int[0-]>}                   get monmap
mon remove <name>                        remove monitor named <name>
mon stat                                 summarize monitor status
mon_status                               report status of monitors
osd blacklist add|rm <EntityAddr> {<float[0.0-]>}    add (optionally until <expire> seconds from now) or remove <addr> from blacklist
osd blacklist ls                         show blacklisted clients
osd blocked-by                           print histogram of which OSDs are blocking their peers
osd create {<uuid>}                      create new osd (with optional UUID)
osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]    add or update crushmap position and weight for <name> with <weight> and location <args>
osd crush add-bucket <name> <type>       add no-parent (probably root) crush bucket <name> of type <type>
osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]    create entry or move existing entry for <name> <weight> at/to location <args>
osd crush dump                           dump crush map
osd crush link <name> <args> [<args>...] link existing entry for <name> under location <args>
osd crush move <name> <args> [<args>...] move existing entry for <name> to location <args>
osd crush remove <name> {<ancestor>}     remove <name> from crush map (everywhere, or just at <ancestor>)
osd crush reweight <name> <float[0.0-]>  change <name>'s weight to <weight> in crush map
osd crush reweight-subtree <name> <float[0.0-]>    change all leaf items beneath <name> to <weight> in crush map
osd crush rm <name> {<ancestor>}         remove <name> from crush map (everywhere, or just at <ancestor>)
osd crush rule create-erasure <name> {<profile>}    create crush rule <name> for erasure coded pool created with <profile> (default default)
osd crush rule create-simple <name> <root> <type> {firstn|indep}    create crush rule <name> to start from <root>, replicate across buckets of type <type>, using a choose mode of <firstn|indep> (default firstn; indep best for erasure pools)
osd crush rule dump {<name>}             dump crush rule <name> (default all)
osd crush rule list                      list crush rules
osd crush rule ls                        list crush rules
osd crush rule rm <name>                 remove crush rule <name>
osd crush set                            set crush map from input file
osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]    update crushmap position and weight for <name> to <weight> with location <args>
osd crush show-tunables                  show current crush tunables
osd crush tunables legacy|argonaut|bobtail|firefly|optimal|default    set crush tunables values to <profile>
osd crush unlink <name> {<ancestor>}     unlink <name> from crush map (everywhere, or just at <ancestor>)
osd deep-scrub <who>                     initiate deep scrub on osd <who>
osd down <ids> [<ids>...]                set osd(s) <id> [<id>...] down
osd dump {<int[0-]>}                     print summary of OSD map
osd erasure-code-profile get <name>      get erasure code profile <name>
osd erasure-code-profile ls              list all erasure code profiles
osd erasure-code-profile rm <name>       remove erasure code profile <name>
osd erasure-code-profile set <name> {<profile> [<profile>...]}    create erasure code profile <name> with [<key[=value]> ...] pairs. Add a --force at the end to override an existing profile (VERY DANGEROUS)
osd find <int[0-]>                       find osd <id> in the CRUSH map and show its location
osd getcrushmap {<int[0-]>}              get CRUSH map
osd getmap {<int[0-]>}                   get OSD map
osd getmaxosd                            show largest OSD id
osd in <ids> [<ids>...]                  set osd(s) <id> [<id>...] in
osd lost <int[0-]> {--yes-i-really-mean-it}    mark osd as permanently lost. THIS DESTROYS DATA IF NO MORE REPLICAS EXIST, BE CAREFUL
osd ls {<int[0-]>}                       show all OSD ids
osd lspools {<int>}                      list pools
osd map <poolname> <objectname>          find pg for <object> in <pool>
osd metadata <int[0-]>                   fetch metadata for osd <id>
osd out <ids> [<ids>...]                 set osd(s) <id> [<id>...] out
osd pause                                pause osd
osd perf                                 print dump of OSD perf summary stats
osd pg-temp <pgid> {<id> [<id>...]}      set pg_temp mapping pgid:[<id> [<id>...]] (developers only)
osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure} {<erasure_code_profile>} {<ruleset>} {<int>}    create pool
osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}    delete pool
osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|auid|target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|erasure_code_profile|min_read_recency_for_promote    get pool parameter <var>
osd pool get-quota <poolname>            obtain object or byte limits for pool
osd pool mksnap <poolname> <snap>        make snapshot <snap> in <pool>
osd pool rename <poolname> <poolname>    rename <srcpool> to <destpool>
osd pool rmsnap <poolname> <snap>        remove snapshot <snap> from <pool>
osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hashpspool|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|min_read_recency_for_promote <val> {--yes-i-really-mean-it}    set pool parameter <var> to <val>
osd pool set-quota <poolname> max_objects|max_bytes <val>    set object or byte limit on pool
osd pool stats {<name>}                  obtain stats from all pools, or from specified pool
osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>    adjust osd primary-affinity from 0.0 <= <weight> <= 1.0
osd primary-temp <pgid> <id>             set primary_temp mapping pgid:<id>|-1 (developers only)
osd repair <who>                         initiate repair on osd <who>
osd reweight <int[0-]> <float[0.0-1.0]>  reweight osd to 0.0 < <weight> < 1.0
osd reweight-by-pg <int[100-]> {<poolname> [<poolname>...]}    reweight OSDs by PG distribution [overload-percentage-for-consideration, default 120]
osd reweight-by-utilization {<int[100-]>}    reweight OSDs by utilization [overload-percentage-for-consideration, default 120]
osd rm <ids> [<ids>...]                  remove osd(s) <id> [<id>...] in
osd scrub <who>                          initiate scrub on osd <who>
osd set pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|nodeep-scrub|notieragent    set <key>
osd setcrushmap                          set crush map from input file
osd setmaxosd <int[0-]>                  set new maximum osd value
osd stat                                 print summary of OSD map
osd thrash <int[0-]>                     thrash OSDs for <num_epochs>
osd tier add <poolname> <poolname> {--force-nonempty}    add the tier <tierpool> (the second one) to base pool <pool> (the first one)
osd tier add-cache <poolname> <poolname> <int[0-]>    add a cache <tierpool> (the second one) of size <size> to existing pool <pool> (the first one)
osd tier cache-mode <poolname> none|writeback|forward|readonly|readforward    specify the caching mode for cache tier <pool>
osd tier remove <poolname> <poolname>    remove the tier <tierpool> (the second one) from base pool <pool> (the first one)
osd tier remove-overlay <poolname>       remove the overlay pool for base pool <pool>
osd tier set-overlay <poolname> <poolname>    set the overlay pool for base pool <pool> to be <overlaypool>
osd tree {<int[0-]>}                     print OSD tree
osd unpause                              unpause osd
osd unset pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|nodeep-scrub|notieragent    unset <key>
pg debug unfound_objects_exist|degraded_pgs_exist    show debug info about pgs
pg deep-scrub <pgid>                     start deep-scrub on <pgid>
pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief [all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}    show human-readable versions of pg map (only 'all' valid with plain)
pg dump_json {all|summary|sum|pools|osds|pgs [all|summary|sum|pools|osds|pgs...]}    show human-readable version of pg map in json only
pg dump_pools_json                       show pg pools info in json only
pg dump_stuck {inactive|unclean|stale [inactive|unclean|stale...]} {<int>}    show information about stuck pgs
pg force_create_pg <pgid>                force creation of pg <pgid>
pg getmap                                get binary pg map to -o/stdout
pg map <pgid>                            show mapping of pg to osds
pg repair <pgid>                         start repair on <pgid>
pg scrub <pgid>                          start scrub on <pgid>
pg send_pg_creates                       trigger pg creates to be issued
pg set_full_ratio <float[0.0-1.0]>       set ratio at which pgs are considered full
pg set_nearfull_ratio <float[0.0-1.0]>   set ratio at which pgs are considered nearly full
pg stat                                  show placement group status.
quorum enter|exit                        enter or exit quorum
quorum_status                            report status of monitor quorum
report {<tags> [<tags>...]}              report full status of cluster, optional title tag strings
scrub                                    scrub the monitor stores
status                                   show cluster status
sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}    force sync of and clear monitor store
tell <name (type.id)> <args> [<args>...] send a command to a specific daemon
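
Most of the monitor commands above also honor the -f/--format option from the usage block, which is convenient when feeding their output to scripts, for example:

# Machine-readable variants of commands used throughout this post
ceph -f json-pretty osd dump
ceph -f json status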
