1. Install the Ceph cluster

1.1 Configure the Ceph yum repository

Ceph version: 12.2.13 (Luminous); ceph-deploy version: 2.0.1 (as shown in the output below)
Note: the mon and mgr daemons are deployed on the control nodes; the OSDs are deployed on the compute nodes.

[root@m&c:/root]# vim /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/$basearch
enabled=1
priority=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/noarch
enabled=1
priority=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/SRPMS
enabled=0
priority=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[root@m&c:/root]# yum clean all && yum repolist

1.2 Install ceph-deploy (on the admin servers)

# Install the ceph-deploy tool on all of the planned control/management nodes
[root@cont01:/root]# yum -y install ceph-deploy
[root@cont02:/root]# yum -y install ceph-deploy
[root@cont03:/root]# yum -y install ceph-deploy
Note: if this step fails, install python-setuptools first (yum install -y python-setuptools)
[root@cont01:/root]# ceph-deploy --version
2.0.1

1.3 Install the Ceph packages (from the admin server)

Note: install deltarpm on every server (yum install -y deltarpm)
[root@m&c:/root]#  yum install -y deltarpm
[root@cont03:/root]# ceph-deploy install --release=luminous cont01 cont02 cont03 comp01 comp02 comp03
[root@cont03:/root]# ceph -v
ceph version 12.2.13 (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)
[root@cont03:/root]# rpm -qa | grep ceph
ceph-common-12.2.13-0.el7.x86_64
ceph-radosgw-12.2.13-0.el7.x86_64
ceph-deploy-2.0.1-0.noarch
ceph-mgr-12.2.13-0.el7.x86_64
libcephfs2-12.2.13-0.el7.x86_64
ceph-selinux-12.2.13-0.el7.x86_64
ceph-osd-12.2.13-0.el7.x86_64
centos-release-ceph-luminous-1.1-2.el7.centos.noarch
ceph-mon-12.2.13-0.el7.x86_64
ceph-12.2.13-0.el7.x86_64
ceph-mds-12.2.13-0.el7.x86_64
ceph-release-1-1.el7.noarch
python-cephfs-12.2.13-0.el7.x86_64
ceph-base-12.2.13-0.el7.x86_64

1.4 Create the Ceph cluster

1.4.1 Create the mon & mgr

## First create the cluster with cont03 as the initial monitor (cont01 and cont02 will be added later)
[root@cont03:/root]# mkdir -p /etc/ceph && cd /etc/ceph
[root@cont03:/etc/ceph]# ceph-deploy new cont03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new cont03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f82f6448ed8>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f82f5bc42d8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['cont03']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[cont03][DEBUG ] connected to host: cont03
[cont03][DEBUG ] detect platform information from remote host
[cont03][DEBUG ] detect machine type
[cont03][DEBUG ] find the location of an executable
[cont03][INFO  ] Running command: /usr/sbin/ip link show
[cont03][INFO  ] Running command: /usr/sbin/ip addr show
[cont03][DEBUG ] IP addresses found: [u'192.168.10.23', u'192.168.7.123']
[ceph_deploy.new][DEBUG ] Resolving host cont03
[ceph_deploy.new][DEBUG ] Monitor cont03 at 192.168.10.23
[ceph_deploy.new][DEBUG ] Monitor initial members are ['cont03']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.10.23']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[root@cont03:/etc/ceph]# ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring  rbdmap

1.4.2 Edit the cluster configuration file (optional)

[root@cont03:/etc/ceph]# vim /etc/ceph/ceph.conf
[global]
fsid = bc616791-7d5a-4b1a-ab1d-30414312fcfd
mon_initial_members = cont03, cont02, cont01
mon_host = 192.168.10.23,192.168.10.22,192.168.10.21
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
# Default replica count is 3
osd_pool_default_size = 3
public network = 192.168.10.0/24
cluster network = 192.168.7.0/24
# public network: the front-end mon/client network; make sure public network and mon_host are in the same subnet, otherwise initialization may fail.
## The Ceph cluster uses two networks: the public network and the cluster network. The former serves clients; the latter carries internal cluster traffic, such as data migration between OSDs. Heartbeats run on both networks.
## When configuring hostname resolution, resolve the hostnames to the public network addresses. ceph-deploy acts as a client of the cluster, the cluster serves clients over the public network, and the monitors run on the public network. This is easy to understand: every Ceph client must reach the monitors, and if the monitors ran on the cluster network, clients could not reach them.
# cluster network: the back-end network for OSD heartbeats, data replication and recovery, etc.
# By default the protection mechanism does not allow deleting pools; set the following as needed
[mon]
mon_allow_pool_delete = true

[root@cont03:/etc/ceph]# cat /etc/ceph/ceph.mon.keyring
[mon.]
key = AQAZOmpeAAAAABAA9k58FrBYzKXjC2F414eKkA==
caps mon = allow *
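If ceph.conf is edited after the cluster has been bootstrapped, the change also has to reach the other nodes. A minimal sketch, assuming passwordless SSH to the listed hosts; ceph-deploy can push the updated file and overwrite the existing copies, and the daemons only pick up most changes after a restart:
[root@cont03:/etc/ceph]# ceph-deploy --overwrite-conf config push cont01 cont02 cont03 comp01 comp02 comp03
[root@cont03:/etc/ceph]# systemctl restart ceph-mon.target   # on each mon node, as needed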

1.4.3 Deploy the initial monitor

[root@cont03:/etc/ceph]# ceph-deploy mon create cont03
[cont03][DEBUG ] ********************************************************************************
[cont03][INFO  ] monitor: mon.cont03 is running
[cont03][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cont03.asok mon_status
[root@cont03:/etc/ceph]# ps -ef | grep ceph
ceph       26332       1  0 17:21 ?        00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id cont03 --setuser ceph --setgroup ceph
root       26412   16878  0 17:22 pts/0    00:00:00 grep --color=auto ceph
[root@cont03:/etc/ceph]# netstat -anpl | grep 6789 | grep LISTEN
tcp        0      0 192.168.10.23:6789      0.0.0.0:*               LISTEN      26332/ceph-mon 

1.4.4 Create the Ceph keyrings

[root@cont03:/etc/ceph]# ceph-deploy gatherkeys cont03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy gatherkeys cont03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f1498d4be60>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['cont03']
[ceph_deploy.cli][INFO  ]  func                          : <function gatherkeys at 0x7f14995c4aa0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpHbM7Ns
[cont03][DEBUG ] connected to host: cont03
[cont03][DEBUG ] detect platform information from remote host
[cont03][DEBUG ] detect machine type
[cont03][DEBUG ] get remote short hostname
[cont03][DEBUG ] fetch remote file
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.cont03.asok mon_status
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-cont03/keyring auth get client.admin
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-cont03/keyring auth get-or-create client.admin osd allow * mds allow * mon allow * mgr allow *
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-cont03/keyring auth get client.bootstrap-mds
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-cont03/keyring auth get-or-create client.bootstrap-mds mon allow profile bootstrap-mds
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-cont03/keyring auth get client.bootstrap-mgr
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-cont03/keyring auth get-or-create client.bootstrap-mgr mon allow profile bootstrap-mgr
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-cont03/keyring auth get client.bootstrap-osd
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-cont03/keyring auth get-or-create client.bootstrap-osd mon allow profile bootstrap-osd
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-cont03/keyring auth get client.bootstrap-rgw
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-cont03/keyring auth get-or-create client.bootstrap-rgw mon allow profile bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpHbM7Ns
[root@cont03:/etc/ceph]# ll
total 88
-rw-------. 1 root root    71 Mar 12 22:23 ceph.bootstrap-mds.keyring
-rw-------. 1 root root    71 Mar 12 22:23 ceph.bootstrap-mgr.keyring
-rw-------. 1 root root    71 Mar 12 22:23 ceph.bootstrap-osd.keyring
-rw-------. 1 root root    71 Mar 12 22:23 ceph.bootstrap-rgw.keyring
-rw-------. 1 root root    63 Mar 12 22:23 ceph.client.admin.keyring
-rw-r--r--. 1 root root   423 Mar 12 22:11 ceph.conf
-rw-r--r--. 1 root root 54271 Mar 12 22:23 ceph-deploy-ceph.log
-rw-------. 1 root root    73 Mar 12 21:33 ceph.mon.keyring
-rw-r--r--. 1 root root    92 Jan 31 05:37 rbdmap
[root@cont03:/etc/ceph]# cat ceph.client.admin.keyring
[client.admin]
	key = AQDJRWpePC/0MRAAP1+o23HgRFOnUIvU+9F6Rw==
[root@cont03:/etc/ceph]# cat ceph.bootstrap-osd.keyring
[client.bootstrap-osd]
	key = AQDMRWpenbmIGRAA4tCcF2ZtAgmBUQWqeAgIUQ==
// The admin key is stored in the ceph.client.admin.keyring file and is supplied via --keyring
[root@cont03:/etc/ceph]# ceph --keyring ceph.client.admin.keyring -c ceph.conf -s
  cluster:
    id:     bc616791-7d5a-4b1a-ab1d-30414312fcfd
    health: HEALTH_OK
  services:
    mon: 1 daemons, quorum cont03
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:
[root@cont03:/etc/ceph]# ceph --keyring ceph.client.admin.keyring -c ceph.conf auth get client.admin
exported keyring for client.admin
[client.admin]
	key = AQDJRWpePC/0MRAAP1+o23HgRFOnUIvU+9F6Rw==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"

1.4.5 Distribute the Ceph keyring

Running an admin command requires the admin key (--keyring ceph.client.admin.keyring) and the configuration file (-c ceph.conf). In day-to-day operations we often need to run admin commands on a particular server, and supplying these arguments every time is tedious. Ceph looks in /etc/ceph/ for the keyring and ceph.conf by default, so we can simply place ceph.client.admin.keyring and ceph.conf in /etc/ceph/ on every server. ceph-deploy can do this for us.

[root@cont03:/etc/ceph]# ceph-deploy admin cont03 cont01 cont02 comp01 comp02 comp03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy admin cont03 cont01 cont02 comp01 comp02 comp03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f709694dc20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['cont03', 'cont01', 'cont02', 'comp01', 'comp02', 'comp03']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7f70973fc320>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to cont03
[cont03][DEBUG ] connected to host: cont03
[cont03][DEBUG ] detect platform information from remote host
[cont03][DEBUG ] detect machine type
[cont03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to cont01
[cont01][DEBUG ] connected to host: cont01
[cont01][DEBUG ] detect platform information from remote host
[cont01][DEBUG ] detect machine type
[cont01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to cont02
[cont02][DEBUG ] connected to host: cont02
[cont02][DEBUG ] detect platform information from remote host
[cont02][DEBUG ] detect machine type
[cont02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to comp01
[comp01][DEBUG ] connected to host: comp01
[comp01][DEBUG ] detect platform information from remote host
[comp01][DEBUG ] detect machine type
[comp01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to comp02
[comp02][DEBUG ] connected to host: comp02
[comp02][DEBUG ] detect platform information from remote host
[comp02][DEBUG ] detect machine type
[comp02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to comp03
[comp03][DEBUG ] connected to host: comp03
[comp03][DEBUG ] detect platform information from remote host
[comp03][DEBUG ] detect machine type
[comp03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
// Checking each server, /etc/ceph/ now contains the ceph.client.admin.keyring and ceph.conf files, so those arguments are no longer needed:
[root@cont01:/etc/ceph]# ls
ceph.client.admin.keyring  ceph.conf  rbdmap  tmpH2C5VD
[root@cont01:/etc/ceph]# ceph -s
  cluster:
    id:     bc616791-7d5a-4b1a-ab1d-30414312fcfd
    health: HEALTH_OK
  services:
    mon: 1 daemons, quorum cont03
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:
[root@cont01:/etc/ceph]# ceph auth get client.admin
exported keyring for client.admin
[client.admin]
	key = AQDJRWpePC/0MRAAP1+o23HgRFOnUIvU+9F6Rw==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"
[root@comp01:/root]# cd /etc/ceph
[root@comp01:/etc/ceph]# ls
ceph.client.admin.keyring  ceph.conf  rbdmap  tmpt3YKNe
[root@comp01:/etc/ceph]# ceph -s
  cluster:
    id:     bc616791-7d5a-4b1a-ab1d-30414312fcfd
    health: HEALTH_OK
  services:
    mon: 1 daemons, quorum cont03
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:
[root@comp01:/etc/ceph]# ceph auth get client.admin
exported keyring for client.admin
[client.admin]
	key = AQA81Vxe/zKVOxAA0Y7VQWCoY2Wb9opdeIbk8Q==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"

1.4.6 Create the Ceph mgr

Starting with Ceph 12 (Luminous), a mgr daemon needs to be created for each monitor.

[root@cont03:/etc/ceph]# ceph-deploy mgr create cont03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create cont03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('cont03', 'cont03')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f70a79945f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0x7f70a820b230>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts cont03:cont03
[cont03][DEBUG ] connected to host: cont03
[cont03][DEBUG ] detect platform information from remote host
[cont03][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.7.1908 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to cont03
[cont03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[cont03][WARNIN] mgr keyring does not exist yet, creating one
[cont03][DEBUG ] create a keyring file
[cont03][DEBUG ] create path recursively if it doesn't exist
[cont03][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.cont03 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-cont03/keyring
[cont03][INFO  ] Running command: systemctl enable ceph-mgr@cont03
[cont03][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@cont03.service to /usr/lib/systemd/system/ceph-mgr@.service.
[cont03][INFO  ] Running command: systemctl start ceph-mgr@cont03
[cont03][INFO  ] Running command: systemctl enable ceph.target
[root@cont03:/etc/ceph]# ceph -s
  cluster:
    id:     bc616791-7d5a-4b1a-ab1d-30414312fcfd
    health: HEALTH_OK
  services:
    mon: 1 daemons, quorum cont03
    mgr: cont03(active)
    osd: 0 osds: 0 up, 0 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:

1.5 Add OSDs

ceph-deploy osd create creates OSDs by calling ceph-volume. With BlueStore (the default), up to three devices can be specified (a hedged example with separate block.db/block.wal devices follows the notes below):

device      option         description
block       --data         Main storage, required. Can be a disk, a partition, or an LV.
block.db    --block-db     Optional. If omitted, its content lives in block. Can be a partition or an LV.
block.wal   --block-wal    Optional. If omitted, its content lives in block. Can be a partition or an LV.

Notes:

  1. A whole disk cannot be used as block.db or block.wal, otherwise the error "blkid could not detect a PARTUUID for device" is reported;
  2. If a disk or partition is used as block, ceph-volume creates an LV on it. If a partition is used as block.db or block.wal, the partition is used directly and no LV is created.
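A minimal sketch of creating an OSD with separate block.db and block.wal devices; the device names /dev/sdb, /dev/sdc1 and /dev/sdc2 are hypothetical, and per the notes above the db/wal devices should be partitions or LVs rather than whole disks:
[root@cont03:/etc/ceph]# ceph-deploy osd create comp02 --data /dev/sdb --block-db /dev/sdc1 --block-wal /dev/sdc2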

1.5.1 Add osd.0 (whole disk as block, no block.db, no block.wal)

1. Add the OSD on comp02
---------------------------------------------------------------------------------
[root@cont03:/etc/ceph]# ceph-deploy  osd create comp02 --data /dev/sda
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create comp02 --data /dev/sda
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7facdca60680>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : comp02
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7facdd2dd9b0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sda
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sda
[comp02][DEBUG ] connected to host: comp02
[comp02][DEBUG ] detect platform information from remote host
[comp02][DEBUG ] detect machine type
[comp02][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.7.1908 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to comp02
[comp02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[comp02][DEBUG ] find the location of an executable
[comp02][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sda
[comp02][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[comp02][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d8618bea-3d42-46ca-be75-c94b048ba538
[comp02][WARNIN] Running command: vgcreate --force --yes ceph-c81cb5d1-9428-4b94-8c80-43573d59cc73 /dev/sda
[comp02][WARNIN]  stdout: Physical volume "/dev/sda" successfully created.
[comp02][WARNIN]  stdout: Volume group "ceph-c81cb5d1-9428-4b94-8c80-43573d59cc73" successfully created
[comp02][WARNIN] Running command: lvcreate --yes -l 100%FREE -n osd-block-d8618bea-3d42-46ca-be75-c94b048ba538 ceph-c81cb5d1-9428-4b94-8c80-43573d59cc73
[comp02][WARNIN]  stdout: Wiping vfat signature on /dev/ceph-c81cb5d1-9428-4b94-8c80-43573d59cc73/osd-block-d8618bea-3d42-46ca-be75-c94b048ba538.
[comp02][WARNIN]  stdout: Wiping vfat signature on /dev/ceph-c81cb5d1-9428-4b94-8c80-43573d59cc73/osd-block-d8618bea-3d42-46ca-be75-c94b048ba538.
[comp02][WARNIN]  stdout: Wiping vfat signature on /dev/ceph-c81cb5d1-9428-4b94-8c80-43573d59cc73/osd-block-d8618bea-3d42-46ca-be75-c94b048ba538.
[comp02][WARNIN]  stdout: Logical volume "osd-block-d8618bea-3d42-46ca-be75-c94b048ba538" created.
[comp02][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[comp02][WARNIN] Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[comp02][WARNIN] Running command: restorecon /var/lib/ceph/osd/ceph-0
[comp02][WARNIN] Running command: chown -h ceph:ceph /dev/ceph-c81cb5d1-9428-4b94-8c80-43573d59cc73/osd-block-d8618bea-3d42-46ca-be75-c94b048ba538
[comp02][WARNIN] Running command: chown -R ceph:ceph /dev/dm-2
[comp02][WARNIN] Running command: ln -s /dev/ceph-c81cb5d1-9428-4b94-8c80-43573d59cc73/osd-block-d8618bea-3d42-46ca-be75-c94b048ba538 /var/lib/ceph/osd/ceph-0/block
[comp02][WARNIN] Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[comp02][WARNIN]  stderr: got monmap epoch 1
[comp02][WARNIN] Running command: ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQDA3lxerVOmOhAAQc79JB8KOB25Fz9+hXc1Xg==
[comp02][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[comp02][WARNIN] added entity osd.0 auth auth(auid = 18446744073709551615 key=AQDA3lxerVOmOhAAQc79JB8KOB25Fz9+hXc1Xg== with 0 caps)
[comp02][WARNIN] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[comp02][WARNIN] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[comp02][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid d8618bea-3d42-46ca-be75-c94b048ba538 --setuser ceph --setgroup ceph
[comp02][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sda
[comp02][WARNIN] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[comp02][WARNIN] Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-c81cb5d1-9428-4b94-8c80-43573d59cc73/osd-block-d8618bea-3d42-46ca-be75-c94b048ba538 --path /var/lib/ceph/osd/ceph-0
[comp02][WARNIN] Running command: ln -snf /dev/ceph-c81cb5d1-9428-4b94-8c80-43573d59cc73/osd-block-d8618bea-3d42-46ca-be75-c94b048ba538 /var/lib/ceph/osd/ceph-0/block
[comp02][WARNIN] Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[comp02][WARNIN] Running command: chown -R ceph:ceph /dev/dm-2
[comp02][WARNIN] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[comp02][WARNIN] Running command: systemctl enable ceph-volume@lvm-0-d8618bea-3d42-46ca-be75-c94b048ba538
[comp02][WARNIN]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-d8618bea-3d42-46ca-be75-c94b048ba538.service to /usr/lib/systemd/system/ceph-volume@.service.
[comp02][WARNIN] Running command: systemctl enable --runtime ceph-osd@0
[comp02][WARNIN]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service to /usr/lib/systemd/system/ceph-osd@.service.
[comp02][WARNIN] Running command: systemctl start ceph-osd@0
[comp02][WARNIN] --> ceph-volume lvm activate successful for osd ID: 0
[comp02][WARNIN] --> ceph-volume lvm create successful for: /dev/sda
[comp02][INFO  ] checking OSD status...
[comp02][DEBUG ] find the location of an executable
[comp02][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host comp02 is now ready for osd use.
[root@cont03:/etc/ceph]# ceph -s
  cluster:
    id:     14dd185e-463b-438a-bfd5-dbdf13e925a7
    health: HEALTH_WARN
            OSD count 1 < osd_pool_default_size 3
  services:
    mon: 1 daemons, quorum cont03
    mgr: cont03(active)
    osd: 1 osds: 1 up, 1 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   1.00GiB used, 931GiB / 932GiB avail
    pgs:
[root@comp02:/root]# ceph -s
  cluster:
    id:     14dd185e-463b-438a-bfd5-dbdf13e925a7
    health: HEALTH_WARN
            OSD count 1 < osd_pool_default_size 3
  services:
    mon: 1 daemons, quorum cont03
    mgr: cont03(active)
    osd: 1 osds: 1 up, 1 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   1.00GiB used, 931GiB / 932GiB avail
    pgs:
[root@comp02:/root]# mount | grep ceph
tmpfs on /var/lib/ceph/osd/ceph-0 type tmpfs (rw,relatime,seclabel)
[root@comp02:/root]# ll /var/lib/ceph/osd/ceph-0/
total 52
-rw-r--r--. 1 ceph ceph 186 Mar  2 18:24 activate.monmap
lrwxrwxrwx. 1 ceph ceph  93 Mar  2 18:24 block -> /dev/ceph-c81cb5d1-9428-4b94-8c80-43573d59cc73/osd-block-d8618bea-3d42-46ca-be75-c94b048ba538
-rw-r--r--. 1 ceph ceph   2 Mar  2 18:24 bluefs
-rw-r--r--. 1 ceph ceph  37 Mar  2 18:24 ceph_fsid
-rw-r--r--. 1 ceph ceph  37 Mar  2 18:24 fsid
-rw-------. 1 ceph ceph  55 Mar  2 18:24 keyring
-rw-r--r--. 1 ceph ceph   8 Mar  2 18:24 kv_backend
-rw-r--r--. 1 ceph ceph  21 Mar  2 18:24 magic
-rw-r--r--. 1 ceph ceph   4 Mar  2 18:24 mkfs_done
-rw-r--r--. 1 ceph ceph  41 Mar  2 18:24 osd_key
-rw-r--r--. 1 ceph ceph   6 Mar  2 18:24 ready
-rw-r--r--. 1 ceph ceph   2 Mar  2 18:24 require_osd_release
-rw-r--r--. 1 ceph ceph  10 Mar  2 18:24 type
-rw-r--r--. 1 ceph ceph   2 Mar  2 18:24 whoami
Note:
As can be seen:
1. An LV is created on the data disk (/dev/sda) and used as block;
2. The OSD directory is mounted on tmpfs (bluefs, ceph_fsid, fsid, keyring, etc. are stored in the cluster itself) — the layout can be verified as shown below.
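To inspect the LVM layout that ceph-volume created for an OSD, something like the following can be run on the storage node (a hedged sketch; the exact output varies):
[root@comp02:/root]# ceph-volume lvm list
[root@comp02:/root]# lvs -o lv_name,vg_name,lv_size,lv_tags | grep ceph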
2. Add the OSD on comp01
----------------------------------------------------------------------------------
[root@cont03:/etc/ceph]# ceph-deploy  osd create comp01 --data /dev/sda
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create comp01 --data /dev/sda
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fcc64bde680>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : comp01
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7fcc6545b9b0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sda
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sda
[comp01][DEBUG ] connected to host: comp01
[comp01][DEBUG ] detect platform information from remote host
[comp01][DEBUG ] detect machine type
[comp01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.7.1908 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to comp01
[comp01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[comp01][DEBUG ] find the location of an executable
[comp01][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sda
[comp01][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[comp01][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 8bc04df9-02d9-463b-b687-c1355e32c86c
[comp01][WARNIN] Running command: vgcreate --force --yes ceph-7e07e091-3e20-4283-835c-5dbbe618b34c /dev/sda
[comp01][WARNIN]  stdout: Wiping dos signature on /dev/sda.
[comp01][WARNIN]  stdout: Physical volume "/dev/sda" successfully created.
[comp01][WARNIN]  stdout: Volume group "ceph-7e07e091-3e20-4283-835c-5dbbe618b34c" successfully created
[comp01][WARNIN] Running command: lvcreate --yes -l 100%FREE -n osd-block-8bc04df9-02d9-463b-b687-c1355e32c86c ceph-7e07e091-3e20-4283-835c-5dbbe618b34c
[comp01][WARNIN]  stdout: Logical volume "osd-block-8bc04df9-02d9-463b-b687-c1355e32c86c" created.
[comp01][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[comp01][WARNIN] Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
[comp01][WARNIN] Running command: restorecon /var/lib/ceph/osd/ceph-1
[comp01][WARNIN] Running command: chown -h ceph:ceph /dev/ceph-7e07e091-3e20-4283-835c-5dbbe618b34c/osd-block-8bc04df9-02d9-463b-b687-c1355e32c86c
[comp01][WARNIN] Running command: chown -R ceph:ceph /dev/dm-2
[comp01][WARNIN] Running command: ln -s /dev/ceph-7e07e091-3e20-4283-835c-5dbbe618b34c/osd-block-8bc04df9-02d9-463b-b687-c1355e32c86c /var/lib/ceph/osd/ceph-1/block
[comp01][WARNIN] Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
[comp01][WARNIN]  stderr: got monmap epoch 1
[comp01][WARNIN] Running command: ceph-authtool /var/lib/ceph/osd/ceph-1/keyring --create-keyring --name osd.1 --add-key AQDe4lxeRO8JOhAAfg9dSlkmY9DFY3iIQTjFXw==
[comp01][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-1/keyring
[comp01][WARNIN] added entity osd.1 auth auth(auid = 18446744073709551615 key=AQDe4lxeRO8JOhAAfg9dSlkmY9DFY3iIQTjFXw== with 0 caps)
[comp01][WARNIN] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
[comp01][WARNIN] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
[comp01][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 8bc04df9-02d9-463b-b687-c1355e32c86c --setuser ceph --setgroup ceph
[comp01][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sda
[comp01][WARNIN] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
[comp01][WARNIN] Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-7e07e091-3e20-4283-835c-5dbbe618b34c/osd-block-8bc04df9-02d9-463b-b687-c1355e32c86c --path /var/lib/ceph/osd/ceph-1
[comp01][WARNIN] Running command: ln -snf /dev/ceph-7e07e091-3e20-4283-835c-5dbbe618b34c/osd-block-8bc04df9-02d9-463b-b687-c1355e32c86c /var/lib/ceph/osd/ceph-1/block
[comp01][WARNIN] Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
[comp01][WARNIN] Running command: chown -R ceph:ceph /dev/dm-2
[comp01][WARNIN] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
[comp01][WARNIN] Running command: systemctl enable ceph-volume@lvm-1-8bc04df9-02d9-463b-b687-c1355e32c86c
[comp01][WARNIN]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-1-8bc04df9-02d9-463b-b687-c1355e32c86c.service to /usr/lib/systemd/system/ceph-volume@.service.
[comp01][WARNIN] Running command: systemctl enable --runtime ceph-osd@1
[comp01][WARNIN]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@1.service to /usr/lib/systemd/system/ceph-osd@.service.
[comp01][WARNIN] Running command: systemctl start ceph-osd@1
[comp01][WARNIN] --> ceph-volume lvm activate successful for osd ID: 1
[comp01][WARNIN] --> ceph-volume lvm create successful for: /dev/sda
[comp01][INFO  ] checking OSD status...
[comp01][DEBUG ] find the location of an executable
[comp01][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host comp01 is now ready for osd use.
[root@cont03:/etc/ceph]# ceph -s
  cluster:
    id:     14dd185e-463b-438a-bfd5-dbdf13e925a7
    health: HEALTH_WARN
            OSD count 2 < osd_pool_default_size 3
  services:
    mon: 1 daemons, quorum cont03
    mgr: cont03(active)
    osd: 2 osds: 2 up, 2 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   2.00GiB used, 1.82TiB / 1.82TiB avail
    pgs:

3. Add the OSD on comp03
------------------------------------------------------------------------------
[root@cont03:/etc/ceph]# ceph-deploy  osd create comp03 --data /dev/sda
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create comp03 --data /dev/sda
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fc77b7ed680>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : comp03
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7fc77c06a9b0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sda
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sda
[comp03][DEBUG ] connected to host: comp03
[comp03][DEBUG ] detect platform information from remote host
[comp03][DEBUG ] detect machine type
[comp03][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.7.1908 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to comp03
[comp03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[comp03][DEBUG ] find the location of an executable
[comp03][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sda
[comp03][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[comp03][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 20bdeca7-08de-47ad-a527-0547712629cd
[comp03][WARNIN] Running command: vgcreate --force --yes ceph-81075c99-cdf2-42cd-867c-40bd54dace97 /dev/sda
[comp03][WARNIN]  stdout: Physical volume "/dev/sda" successfully created.
[comp03][WARNIN]  stdout: Volume group "ceph-81075c99-cdf2-42cd-867c-40bd54dace97" successfully created
[comp03][WARNIN] Running command: lvcreate --yes -l 100%FREE -n osd-block-20bdeca7-08de-47ad-a527-0547712629cd ceph-81075c99-cdf2-42cd-867c-40bd54dace97
[comp03][WARNIN]  stdout: Wiping vfat signature on /dev/ceph-81075c99-cdf2-42cd-867c-40bd54dace97/osd-block-20bdeca7-08de-47ad-a527-0547712629cd.
[comp03][WARNIN]  stdout: Wiping vfat signature on /dev/ceph-81075c99-cdf2-42cd-867c-40bd54dace97/osd-block-20bdeca7-08de-47ad-a527-0547712629cd.
[comp03][WARNIN]   Wiping vfat signature on /dev/ceph-81075c99-cdf2-42cd-867c-40bd54dace97/osd-block-20bdeca7-08de-47ad-a527-0547712629cd.
[comp03][WARNIN]  stdout: Logical volume "osd-block-20bdeca7-08de-47ad-a527-0547712629cd" created.
[comp03][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[comp03][WARNIN] Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
[comp03][WARNIN] Running command: restorecon /var/lib/ceph/osd/ceph-2
[comp03][WARNIN] Running command: chown -h ceph:ceph /dev/ceph-81075c99-cdf2-42cd-867c-40bd54dace97/osd-block-20bdeca7-08de-47ad-a527-0547712629cd
[comp03][WARNIN] Running command: chown -R ceph:ceph /dev/dm-2
[comp03][WARNIN] Running command: ln -s /dev/ceph-81075c99-cdf2-42cd-867c-40bd54dace97/osd-block-20bdeca7-08de-47ad-a527-0547712629cd /var/lib/ceph/osd/ceph-2/block
[comp03][WARNIN] Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
[comp03][WARNIN]  stderr: got monmap epoch 1
[comp03][WARNIN] Running command: ceph-authtool /var/lib/ceph/osd/ceph-2/keyring --create-keyring --name osd.2 --add-key AQD8ql1e37K9BxAAA0K+/5eaO1rxr1/sp0wVtA==
[comp03][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-2/keyring
[comp03][WARNIN] added entity osd.2 auth auth(auid = 18446744073709551615 key=AQD8ql1e37K9BxAAA0K+/5eaO1rxr1/sp0wVtA== with 0 caps)
[comp03][WARNIN] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
[comp03][WARNIN] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
[comp03][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 20bdeca7-08de-47ad-a527-0547712629cd --setuser ceph --setgroup ceph
[comp03][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sda
[comp03][WARNIN] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[comp03][WARNIN] Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-81075c99-cdf2-42cd-867c-40bd54dace97/osd-block-20bdeca7-08de-47ad-a527-0547712629cd --path /var/lib/ceph/osd/ceph-2
[comp03][WARNIN] Running command: ln -snf /dev/ceph-81075c99-cdf2-42cd-867c-40bd54dace97/osd-block-20bdeca7-08de-47ad-a527-0547712629cd /var/lib/ceph/osd/ceph-2/block
[comp03][WARNIN] Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
[comp03][WARNIN] Running command: chown -R ceph:ceph /dev/dm-2
[comp03][WARNIN] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[comp03][WARNIN] Running command: systemctl enable ceph-volume@lvm-2-20bdeca7-08de-47ad-a527-0547712629cd
[comp03][WARNIN]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-2-20bdeca7-08de-47ad-a527-0547712629cd.service to /usr/lib/systemd/system/ceph-volume@.service.
[comp03][WARNIN] Running command: systemctl enable --runtime ceph-osd@2
[comp03][WARNIN]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@2.service to /usr/lib/systemd/system/ceph-osd@.service.
[comp03][WARNIN] Running command: systemctl start ceph-osd@2
[comp03][WARNIN] --> ceph-volume lvm activate successful for osd ID: 2
[comp03][WARNIN] --> ceph-volume lvm create successful for: /dev/sda
[comp03][INFO  ] checking OSD status...
[comp03][DEBUG ] find the location of an executable
[comp03][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host comp03 is now ready for osd use.
[root@cont01:/etc/ceph]# ceph -s
  cluster:
    id:     bc616791-7d5a-4b1a-ab1d-30414312fcfd
    health: HEALTH_OK
  services:
    mon: 1 daemons, quorum cont01
    mgr: cont01(active)
    osd: 3 osds: 3 up, 3 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   3.01GiB used, 2.73TiB / 2.73TiB avail
    pgs:
[root@comp03:/etc/ceph]# ceph -s
  cluster:
    id:     bc616791-7d5a-4b1a-ab1d-30414312fcfd
    health: HEALTH_WARN
            1 osds down
            1 host (1 osds) down
  services:
    mon: 1 daemons, quorum cont01
    mgr: cont01(active)
    osd: 3 osds: 2 up, 3 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   3.01GiB used, 2.73TiB / 2.73TiB avail
    pgs:

1.6 Add two more mon + mgr

[root@cont03:/etc/ceph]#  ceph-deploy mon add cont01
[root@cont03:/etc/ceph]#  ceph-deploy mgr create cont01
[root@cont03:/etc/ceph]#  ceph-deploy mon add cont02
[root@cont03:/etc/ceph]#  ceph-deploy mgr create cont02
[root@cont03:/etc/ceph]# ceph -s
  cluster:
    id:     14dd185e-463b-438a-bfd5-dbdf13e925a7
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum cont01,cont02,cont03
    mgr: cont03(active), standbys: cont01, cont02
    osd: 3 osds: 3 up, 3 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   3.01GiB used, 2.73TiB / 2.73TiB avail
    pgs:

1.7 Start the mgr and check its status

// Check the status
[root@cont03:/etc/ceph]# systemctl status ceph-mgr@cont03 ceph-mgr@cont02 ceph-mgr@cont01
● ceph-mgr@cont03.service - Ceph cluster manager daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mgr@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-03-03 09:25:23 CST; 2h 4min ago
 Main PID: 2784 (ceph-mgr)
   CGroup: /system.slice/system-ceph\x2dmgr.slice/ceph-mgr@cont03.service
           └─2784 /usr/bin/ceph-mgr -f --cluster ceph --id cont03 --setuser ceph --setgroup ceph

Mar 03 09:25:23 cont03 systemd[1]: Started Ceph cluster manager daemon.

● ceph-mgr@cont02.service - Ceph cluster manager daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mgr@.service; enabled; vendor preset: disabled)
   Active: inactive (dead)

● ceph-mgr@cont01.service - Ceph cluster manager daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mgr@.service; enabled; vendor preset: disabled)
   Active: inactive (dead)

[root@cont03:/etc/ceph]# netstat -tunlp | grep mgr
tcp        0      0 192.168.10.23:6800      0.0.0.0:*      LISTEN     2784/ceph-mgr
[root@cont03:/etc/ceph]# ceph -s
  cluster:
    id:     14dd185e-463b-438a-bfd5-dbdf13e925a7
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum cont01,cont02,cont03
    mgr: cont03(active), standbys: cont01, cont02
    osd: 3 osds: 3 up, 3 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   3.01GiB used, 2.73TiB / 2.73TiB avail
    pgs:
-------------------------------------------------------------------
// The mgr modules can be listed with: ceph mgr module ls;
// The dashboard module is in the list of available modules by default but is not enabled; it has to be enabled manually
[root@cont03:/etc/ceph]# ceph mgr module enable dashboard
[root@cont03:/etc/ceph]# ceph mgr module ls
{"enabled_modules": ["balancer","dashboard","restful","status"],"disabled_modules": ["influx","localpool","prometheus","selftest","telemetry","zabbix"]
}
[root@cont02:/root]# ceph mgr module enable dashboard
[root@cont01:/root]# ceph mgr module enable dashboard
-----------------------------------------------------------------------
// The dashboard service is now enabled; by default it listens on TCP port 7000 on all addresses;
// To set the dashboard listen address and port explicitly:
// Set the listen address: ceph config-key put mgr/dashboard/server_addr x.x.x.x
// Set the listen port:    ceph config-key put mgr/dashboard/server_port x
[root@cont03:/etc/ceph]# netstat -tunlp | grep mgr
tcp        0      0 192.168.10.23:6800      0.0.0.0:*       LISTEN      2784/ceph-mgr
tcp6       0      0 :::7000                 :::*       LISTEN      2784/ceph-mgr
// Dashboard URL: http://192.168.10.23:7000/
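For example, to pin the dashboard of this deployment to the mon address on port 7000, a hedged sketch based on the config-key options above (the dashboard module usually has to be disabled and re-enabled, or the active mgr restarted, for the change to take effect):
[root@cont03:/etc/ceph]# ceph config-key put mgr/dashboard/server_addr 192.168.10.23
[root@cont03:/etc/ceph]# ceph config-key put mgr/dashboard/server_port 7000
[root@cont03:/etc/ceph]# ceph mgr module disable dashboard
[root@cont03:/etc/ceph]# ceph mgr module enable dashboard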

1.8 Check the Ceph cluster status

1. // Check monitor status
[root@cont03:/etc/ceph]# ceph mon stat
e3: 3 mons at {cont01=192.168.10.21:6789/0,cont02=192.168.10.22:6789/0,cont03=192.168.10.23:6789/0}, election epoch 16, leader 0 cont01, quorum 0,1,2 cont01,cont02,cont03
2. // Check Ceph status: ceph health (detail), ceph -s, ceph -w, etc.;
// The status shows the mgr daemons running in active-standby mode
[root@cont03:/etc/ceph]# ceph -s
  cluster:
    id:     14dd185e-463b-438a-bfd5-dbdf13e925a7
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum cont01,cont02,cont03
    mgr: cont03(active), standbys: cont02, cont01
    osd: 3 osds: 3 up, 3 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   3.01GiB used, 2.73TiB / 2.73TiB avail
    pgs:
3. // Authentication information can be viewed on each node
[root@cont03:/etc/ceph]# ceph auth list
installed auth entries:

osd.0
	key: AQDA3lxerVOmOhAAQc79JB8KOB25Fz9+hXc1Xg==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
osd.1
	key: AQDe4lxeRO8JOhAAfg9dSlkmY9DFY3iIQTjFXw==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
osd.2
	key: AQD8ql1e37K9BxAAA0K+/5eaO1rxr1/sp0wVtA==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
client.admin
	key: AQA81Vxe/zKVOxAA0Y7VQWCoY2Wb9opdeIbk8Q==
	caps: [mds] allow *
	caps: [mgr] allow *
	caps: [mon] allow *
	caps: [osd] allow *
client.bootstrap-mds
	key: AQA91VxelvtSNhAAX85/egnzqGpexqeOoZPHug==
	caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
	key: AQA+1VxeEYBAMxAAEJc/NcvzMmf8ddBz8RrFlg==
	caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
	key: AQA/1Vxe8TrwLxAA7guybVDfqOYes55tBlKfhQ==
	caps: [mon] allow profile bootstrap-osd
client.bootstrap-rgw
	key: AQBA1Vxe/uc6KxAA/aiHfdwROXHoRD/z1gts0w==
	caps: [mon] allow profile bootstrap-rgw
mgr.cont01
	key: AQBCy11e2+JrGBAAYo5Sz6UUG2avJVxn6l8Z1w==
	caps: [mds] allow *
	caps: [mon] allow profile mgr
	caps: [osd] allow *
mgr.cont02
	key: AQBxy11eQygnERAAus++r3SmI4QWxjZMU1WhEA==
	caps: [mds] allow *
	caps: [mon] allow profile mgr
	caps: [osd] allow *
mgr.cont03
	key: AQBt2FxeTR8yLRAAZfBKTe28eQJO8WpG8I8KtA==
	caps: [mds] allow *
	caps: [mon] allow profile mgr
	caps: [osd] allow *
4. // Check quorum status: ceph quorum_status --format json-pretty
5. // Show the detailed cluster configuration: ceph daemon mon.{CEPH-NODE} config show | more
6. // Show detailed mon status: ceph daemon mon.{CEPH-NODE} mon_status
7. // Show the Ceph log directory: ceph-conf --name mon.{CEPH-NODE} --show-config-value log_file
8. // Show the mon node's admin socket: ceph-conf --name mon.ceph01 --show-config-value admin_socket
[root@cont03:/etc/ceph]# ceph quorum_status --format json-pretty
[root@cont03:/etc/ceph]# ceph daemon mon.cont03 config show
[root@cont03:/etc/ceph]# ceph daemon mon.cont03 mon_status
----------------------------------------------------------------------
9. // OSDs live on the storage nodes; check the disks there directly, or run from the management node: ceph-deploy disk list NODE1 NODE2 … NODEN;
[root@cont03:/etc/ceph]# ceph-deploy disk list comp01 comp02 comp03
---------------------------------------------------------------------------
10. // Check OSD status from the management node
[root@cont03:/etc/ceph]# ceph osd stat
3 osds: 3 up, 3 in
[root@cont03:/etc/ceph]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       2.72910 root default
-5       0.90970     host comp01
 1   hdd 0.90970         osd.1       up  1.00000 1.00000
-3       0.90970     host comp02
 0   hdd 0.90970         osd.0       up  1.00000 1.00000
-7       0.90970     host comp03
 2   hdd 0.90970         osd.2       up  1.00000 1.00000
11. // Check capacity and usage from the management node
[root@cont03:/etc/ceph]# ceph df
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    2.73TiB     2.73TiB     3.01GiB      0.11
POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS

1.9 ceph-deploy command reference

ceph-deploy new [initial-monitor-node(s)]
    Start a new cluster deployment; generates the configuration file, a keyring, and a log file.
ceph-deploy install [HOST] [HOST…]
    Install the Ceph packages on remote hosts. --release selects the version; the default is firefly.
ceph-deploy mon create-initial
    Deploy the initial monitor members, i.e. the monitors listed under mon initial members in the configuration file. Wait until they form a quorum, then gather the keys, reporting monitor status along the way.
ceph-deploy mon create [HOST] [HOST…]
    Explicitly deploy monitors. If no host is given, the hosts in mon initial members are used.
ceph-deploy mon add [HOST]
    Add a monitor to the cluster.
ceph-deploy mon destroy [HOST]
    Completely remove the monitor from a host: stop the ceph-mon service, verify it has stopped, and create an archive directory mon-remove under /var/lib/ceph.
ceph-deploy gatherkeys [HOST] [HOST…]
    Fetch the authentication keys used for provisioning new nodes. These keys are used when new MON/OSD/MDS nodes join.
ceph-deploy disk list [HOST]
    List the disks on a remote host. Internally this calls ceph-disk.
ceph-deploy disk prepare [HOST:[DISK]]
    Prepare a directory or disk for an OSD: create a GPT partition, label it with the Ceph UUID, create a file system, and mark it as usable by Ceph.
ceph-deploy disk activate [HOST:[DISK]]
    Activate a prepared OSD partition: mount it to a temporary location, allocate an OSD ID, remount it at the correct location /var/lib/ceph/osd/ceph-{osd id}, and start ceph-osd.
ceph-deploy disk zap [HOST:[DISK]]
    Wipe the partition table and contents of a disk. Internally this calls sgdisk --zap-all to destroy the GPT and MBR so the disk can be repartitioned.
ceph-deploy osd prepare HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]…]
    Prepare a directory or disk for an OSD. It checks whether MAX PIDs is exceeded, reads the bootstrap-osd key (or writes one if none is found), then uses ceph-disk prepare to set up the disk and journal and deploy the OSD to the given host.
ceph-deploy osd activate HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]…]
    Activate the OSD prepared above. Internally this calls ceph-disk activate; the OSD then becomes up and in.
ceph-deploy osd create HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]…]
    Combination of the previous two commands.
ceph-deploy osd list HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]…]
    List OSD partitions.
ceph-deploy admin [HOST] [HOST…]
    Push the client.admin key to remote hosts: the client.admin keyring from the ceph-admin node is pushed to /etc/ceph/ on the remote hosts.
ceph-deploy config push [HOST] [HOST…]
    Push the ceph.conf file from the ceph-admin node to /etc/ceph/ on the target hosts. ceph-deploy config pull [HOST] is the reverse operation.
ceph-deploy uninstall [HOST] [HOST…]
    Uninstall the Ceph packages from remote hosts. Some packages, such as librbd1 and librados2, are not removed.
ceph-deploy purge [HOST] [HOST…]
    Like uninstall, but also removes data.
ceph-deploy purgedata [HOST] [HOST…]
    Delete the data under /var/lib/ceph; it also removes the contents of /etc/ceph.
ceph-deploy forgetkeys
    Delete all authentication keyrings in the local directory, including client.admin, monitor, and the bootstrap keyrings.
ceph-deploy pkg --install/--remove [PKGs] [HOST] [HOST…]
    Install or remove packages on remote hosts. [PKGs] is a comma-separated list of package names.

# Wipe logical volumes from a disk
ceph-volume lvm zap --destroy /dev/vdc                        # run locally on the node
ceph-deploy disk zap lab4 /dev/sdb                            # run remotely from the admin node
# Create an OSD
ceph-deploy osd create lab4 --fs-type btrfs --data vg1/lvol0

## Remove the OSD node node4
# List all OSDs on node4, e.g. osd.9 and osd.10:
ceph osd tree                # check the current cluster state
# Take all OSDs on node4 out of the cluster (run on node1):
ceph osd out osd.9
ceph osd out osd.10
# Stop all OSDs on node4 (run on node4):
service ceph stop osd.9
service ceph stop osd.10
# Verify that the OSDs on node4 are down and their weight is 0
ceph osd tree
# Remove all OSDs on node4 from the CRUSH map:
ceph osd crush remove osd.9
ceph osd crush remove osd.10
# Remove the node4 host entry:
ceph osd crush remove ceph-node4

## Replace a failed disk drive
# First run ceph osd tree to find the down OSD, then remove the failed OSD and its keys
ceph osd out osd.0           # all run on node1
ceph osd crush rm osd.0
ceph auth del osd.0
ceph osd rm osd.0
# Zap (clean) the new disk:
ceph-deploy disk zap node1 /dev/sdb
# Create a new OSD on the disk; Ceph will add it back as osd.0:
ceph-deploy --overwrite-conf osd create node1 --data /dev/sdb

1.10 Ceph troubleshooting

1-a. If the following error occurs

[root@cont01:/etc/ceph]# ceph-deploy osd create comp01 --data /dev/sda
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create comp01 --data /dev/sda
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f618b95c680>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : comp01
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f618c1d99b0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sda
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sda
[comp01][DEBUG ] connected to host: comp01
[comp01][DEBUG ] detect platform information from remote host
[comp01][DEBUG ] detect machine type
[comp01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.7.1908 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to comp01
[comp01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[comp01][WARNIN] osd keyring does not exist yet, creating one
[comp01][DEBUG ] create a keyring file
[comp01][DEBUG ] find the location of an executable
[comp01][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sda
[comp01][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[comp01][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f061d5c7-cb5b-448c-a75d-f10b2d21e480
[comp01][WARNIN] Running command: vgcreate --force --yes ceph-3fc07070-318b-4670-be61-d3296df2c90c /dev/sda
[comp01][WARNIN]  stderr: Physical volume '/dev/sda' is already in volume group 'ceph-7e07e091-3e20-4283-835c-5dbbe618b34c'
[comp01][WARNIN]   Unable to add physical volume '/dev/sda' to volume group 'ceph-7e07e091-3e20-4283-835c-5dbbe618b34c'
[comp01][WARNIN]   /dev/sda: physical volume not initialized.
[comp01][WARNIN] --> Was unable to complete a new OSD, will rollback changes
[comp01][WARNIN] --> OSD will be fully purged from the cluster, because the ID was generated
[comp01][WARNIN] Running command: ceph osd purge osd.0 --yes-i-really-mean-it
[comp01][WARNIN]  stderr: purged osd.0
[comp01][WARNIN] -->  RuntimeError: command returned non-zero exit status: 5
[comp01][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sda
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

1-b. Run the following

[root@comp01:/root]# ceph-disk activate-all
[root@comp01:/root]# parted /dev/sda mklabel gpt -s
[root@comp01:/root]# cd /etc/ceph/
[root@comp01:/etc/ceph]# ceph-volume lvm zap /dev/sda
--> Zapping: /dev/sda
--> --destroy was not specified, but zapping a whole device will remove the partition table
Running command: dd if=/dev/zero of=/dev/sda bs=1M count=10
 stderr: 10+0 records in
10+0 records out
10485760 bytes (10 MB) copied
 stderr: , 0.00997203 s, 1.1 GB/s
--> Zapping successful for: <Raw Device: /dev/sda>

2-a. If the following error occurs
The error "error: GPT headers found, they must be removed on: /dev/sdb" is resolved with "# sgdisk --zap-all /dev/sdb"

[comp02][WARNIN] ceph-volume lvm create: error: GPT headers found, they must be removed on: /dev/sda
[comp02][ERROR ] RuntimeError: command returned non-zero exit status: 2
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sda
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

2-b Perform the following steps

[root@comp02:/root]# sgdisk --zap-all /dev/sda
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.

2、Preliminary work for integrating OpenStack with Ceph

2.1 Ceph background

In an OpenStack environment, data storage falls into ephemeral storage and persistent storage.

Ephemeral storage: provided mainly by the local file system; it is used primarily for the local system disks and ephemeral data disks of Nova instances, and for holding the system images uploaded to Glance.

Persistent storage: mainly the block storage provided by Cinder and the object storage provided by Swift. Cinder block storage is the most widely used; block volumes are normally attached to virtual machines as cloud disks.

The three OpenStack projects that need data storage are Nova (virtual machine image files), Glance (shared template images) and Cinder (block storage).

The logic by which Cinder, Glance and Nova access the Ceph cluster is as follows:

  1. The integration mainly uses Ceph's RBD service; the bottom layer of Ceph is the RADOS storage cluster, which Ceph accesses through the librados library;
  2. Each OpenStack project's client calls librbd, and librbd in turn calls librados to reach the underlying RADOS;
  3. In practice, Nova uses the libvirt driver, so libvirt and QEMU call librbd on its behalf, while Cinder and Glance call librbd directly;
  4. Data written to the Ceph cluster is striped into objects; objects are mapped by a hash function onto PGs (which are grouped into pools), and PGs are then mapped roughly evenly onto the physical storage devices, the OSDs (an OSD sits on top of a file system such as XFS or ext4), by the CRUSH algorithm (see the example after this list).
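
As a rough illustration of the object-to-PG-to-OSD mapping in point 4, the cluster can be asked where it would place a given object name; the pool and object names below are only examples (the pools themselves are created in the next section):
# Show which PG and which OSDs the object name 'test-object' would map to via CRUSH
ceph osd map volumes test-object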

2.2 Create pools

//By default Ceph stores data in pools. A pool is a logical grouping of PGs; the objects in a PG are mapped to different OSDs, so a pool is spread across the whole cluster.
//Different kinds of data could be stored in a single pool, but that makes per-client data management awkward, so a separate pool is usually created for each client.
//Create pools for Cinder, Nova and Glance, named volumes, vms and images respectively.
//The volumes pool is persistent storage, vms is the ephemeral back end for instances, and images holds the images.
//With fewer than 5 OSDs, set pg_num to 128.
//With 5-10 OSDs, set pg_num to 512.
//With 10-50 OSDs, set pg_num to 4096.
//With more than 50 OSDs, use pgcalc to work out a value.
[root@cont01:/etc/ceph]# ceph osd pool create volumes 64
pool 'volumes' created
[root@cont01:/etc/ceph]# ceph osd pool create vms 32
pool 'vms' created
[root@cont01:/etc/ceph]# ceph osd pool create images 32
pool 'images' created
[root@cont01:/etc/ceph]# ceph pg stat
128 pgs: 128 active+clean; 0B data, 3.03GiB used, 2.73TiB / 2.73TiB avail
[root@cont01:/etc/ceph]# ceph osd lspools
1 volumes,2 vms,3 images,

Troubleshooting note: the volumes pool was initially created with too many PGs, so it had to be deleted and recreated. The deletion procedure:
[root@cont03:/etc/ceph]# ceph osd lspools
1 volumes,
Run [root@cont03:/etc/ceph]# ceph osd pool delete volumes
This fails with the following message:
Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool volumes.  If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.
Then run [root@cont03:/etc/ceph]# ceph osd pool delete volumes volumes --yes-i-really-really-mean-it
This also fails, with the following message:
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool
As prompted, mon_allow_pool_delete must be set to true. Its current (default) value can be checked:
Run [root@cont03:/etc/ceph]# ceph --show-config | grep mon_allow_pool_delete
mon_allow_pool_delete = false
Therefore edit ceph.conf on the mon nodes, add mon_allow_pool_delete = true, and restart the ceph-mon service.
[root@cont03:/etc/ceph]# vim /etc/ceph/ceph.conf
Add:
[mon]
mon_allow_pool_delete = true
Restart the ceph-mon service
[root@cont03:/etc/ceph]# systemctl restart ceph-mon.target
Finally, delete the volumes pool
[root@cont03:/etc/ceph]# ceph osd pool delete volumes volumes --yes-i-really-really-mean-it
pool 'volumes' removed
[root@cont03:/etc/ceph]# ceph osd lspools
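
As an alternative to editing ceph.conf and restarting the monitors, the same option can usually be flipped at runtime with injectargs; a sketch, assuming all monitors are reachable:
# Temporarily allow pool deletion, delete the pool, then switch the guard back on
ceph tell mon.* injectargs '--mon-allow-pool-delete=true'
ceph osd pool delete volumes volumes --yes-i-really-really-mean-it
ceph tell mon.* injectargs '--mon-allow-pool-delete=false'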

2.3 Install the Ceph clients

Note: nodes running glance-api need python-rbd; nodes running cinder-volume and nova-compute need ceph-common;
[root@cont03:/etc/ceph]# yum install python-rbd ceph-common -y
[root@cont02:/etc/ceph]# yum install python-rbd ceph-common -y
[root@cont01:/etc/ceph]# yum install python-rbd ceph-common -y
[root@comp01:/root]# yum install ceph-common -y
[root@comp02:/root]# yum install ceph-common -y
[root@comp03:/root]# yum install ceph-common -y

2.4 Authorization setup

2.4.1 Create users

//Ceph enables cephx authentication by default (see ceph.conf), so new users must be created and authorized for the Nova/Cinder and Glance clients;
//On the admin node, create the client.glance and client.cinder users for the nodes running glance-api and cinder-volume and set their permissions;
//Permissions are granted per pool; the pool names match the pools created above.
[root@cont01:/etc/ceph]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
[client.cinder]key = AQCKimxe0DVcIhAAlE9wbClvCmnu4HZ3c5kiyQ==
[root@cont01:/etc/ceph]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[client.glance]key = AQCWimxevKUPDBAA62aZBcrRAGsrBzRQfdtykA==
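
The users and the capabilities just granted can be checked at any time, for example:
# List the key and caps of the newly created users
ceph auth get client.cinder
ceph auth get client.glance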

2.4.2 Push the client.cinder keyring

//Push the keyring generated for the client.cinder user to the nodes running nova-compute
[root@cont01:/etc/ceph]# ceph auth get-or-create client.cinder | ssh root@comp01 tee /etc/ceph/ceph.client.cinder.keyring
[client.cinder]key = AQCKimxe0DVcIhAAlE9wbClvCmnu4HZ3c5kiyQ==
[root@cont01:/etc/ceph]# ceph auth get-or-create client.cinder | ssh root@comp02 tee /etc/ceph/ceph.client.cinder.keyring
[client.cinder]key = AQCKimxe0DVcIhAAlE9wbClvCmnu4HZ3c5kiyQ==
[root@cont01:/etc/ceph]# ceph auth get-or-create client.cinder | ssh root@comp03 tee /etc/ceph/ceph.client.cinder.keyring
[client.cinder]key = AQCKimxe0DVcIhAAlE9wbClvCmnu4HZ3c5kiyQ==
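
The client.glance keyring presumably also needs to be placed on the nodes running glance-api (the controllers in this layout) and made readable by the glance user; a sketch along the lines of the cinder push above, to be repeated for cont02 and cont03:
# Push the glance keyring to a controller and fix its ownership
ceph auth get-or-create client.glance | ssh root@cont01 tee /etc/ceph/ceph.client.glance.keyring
ssh root@cont01 chown glance:glance /etc/ceph/ceph.client.glance.keyring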

2.4.3 libvirt secret

//The nodes running nova-compute must store the client.cinder key in libvirt; when a Ceph-backed Cinder volume is attached to an instance, libvirt needs this key to access the Ceph cluster;
//From the admin node, push the client.cinder key file to the compute (storage) nodes; the file is temporary and can be deleted once the key has been added to libvirt
[root@cont01:/etc/ceph]# ceph auth get-key client.cinder | ssh root@comp01 tee /etc/ceph/client.cinder.key
AQCKimxe0DVcIhAAlE9wbClvCmnu4HZ3c5kiyQ==
[root@cont01:/etc/ceph]# ceph auth get-key client.cinder | ssh root@comp02 tee /etc/ceph/client.cinder.key
AQCKimxe0DVcIhAAlE9wbClvCmnu4HZ3c5kiyQ==
[root@cont01:/etc/ceph]# ceph auth get-key client.cinder | ssh root@comp03 tee /etc/ceph/client.cinder.key
AQCKimxe0DVcIhAAlE9wbClvCmnu4HZ3c5kiyQ==
//Add the key to libvirt on the compute (storage) nodes, using comp01 as the example;
//First generate a UUID; all compute (storage) nodes can share this UUID (do not repeat this step on the other nodes), but every compute node must create its own secret.xml
//The same UUID is used later in nova.conf, so keep it consistent
[root@comp01:/root]# uuidgen
c6fce3d0-f874-44c8-a26c-8f42bdba0991
[root@comp01:/root]# touch secret.xml
[root@comp01:/root]# vim secret.xml
<secret ephemeral='no' private='no'>
  <uuid>c6fce3d0-f874-44c8-a26c-8f42bdba0991</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
[root@comp01:/root]# virsh secret-define --file secret.xml
Secret c6fce3d0-f874-44c8-a26c-8f42bdba0991 created
[root@comp01:/etc/ceph]# virsh secret-set-value --secret c6fce3d0-f874-44c8-a26c-8f42bdba0991 --base64 $(cat /etc/ceph/client.cinder.key)
Secret value set
[root@comp02:/root]# touch secret.xml
[root@comp02:/root]# vim secret.xml
<secret ephemeral='no' private='no'>
  <uuid>c6fce3d0-f874-44c8-a26c-8f42bdba0991</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
[root@comp02:/root]# virsh secret-define --file secret.xml
Secret c6fce3d0-f874-44c8-a26c-8f42bdba0991 created
[root@comp02:/etc/ceph]# virsh secret-set-value --secret c6fce3d0-f874-44c8-a26c-8f42bdba0991 --base64 $(cat /etc/ceph/client.cinder.key)
Secret value set
[root@comp03:/root]# touch secret.xml
[root@comp03:/root]# vim secret.xml
<secret ephemeral='no' private='no'>
  <uuid>c6fce3d0-f874-44c8-a26c-8f42bdba0991</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
[root@comp03:/root]# virsh secret-define --file secret.xml
Secret c6fce3d0-f874-44c8-a26c-8f42bdba0991 created
[root@comp03:/etc/ceph]# virsh secret-set-value --secret c6fce3d0-f874-44c8-a26c-8f42bdba0991 --base64 $(cat /etc/ceph/client.cinder.key)
Secret value set
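
Before moving on, the secret can be verified on each compute node, for example:
# Confirm that libvirt knows the secret and that it holds the cinder key
virsh secret-list
virsh secret-get-value --secret c6fce3d0-f874-44c8-a26c-8f42bdba0991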

3、Glance integration with Ceph

3.1 Configure glance-api.conf

//Modify glance-api.conf on the nodes running glance-api (the three controller nodes)
Note: only the sections that need changes for integrating Glance with Ceph are listed below
[root@cont03:/etc/ceph]# vim /etc/glance/glance-api.conf
# Enable copy-on-write cloning of images
1 [DEFAULT]
343 show_image_direct_url = True
# Switch the default store from local files to Ceph RBD; comment out the original entries (lines 2007-2009)
2006 [glance_store]
2007 #stores = file,http
2008 #default_store = file
2009 #filesystem_store_datadir = /var/lib/glance/images/
2010 stores = rbd
2011 default_store = rbd
2012 rbd_store_chunk_size = 8
2013 rbd_store_pool = images
2014 rbd_store_user = glance
2015 rbd_store_ceph_conf = /etc/ceph/ceph.conf
[root@cont02:/etc/ceph]# vim /etc/glance/glance-api.conf
# Enable copy-on-write cloning of images
1 [DEFAULT]
343 show_image_direct_url = True
# Switch the default store from local files to Ceph RBD; comment out the original entries (lines 2007-2009)
2006 [glance_store]
2007 #stores = file,http
2008 #default_store = file
2009 #filesystem_store_datadir = /var/lib/glance/images/
2010 stores = rbd
2011 default_store = rbd
2012 rbd_store_chunk_size = 8
2013 rbd_store_pool = images
2014 rbd_store_user = glance
2015 rbd_store_ceph_conf = /etc/ceph/ceph.conf
[root@cont01:/etc/ceph]# vim /etc/glance/glance-api.conf
# Enable copy-on-write cloning of images
1 [DEFAULT]
343 show_image_direct_url = True
# Switch the default store from local files to Ceph RBD; comment out the original entries (lines 2007-2009)
2006 [glance_store]
2007 #stores = file,http
2008 #default_store = file
2009 #filesystem_store_datadir = /var/lib/glance/images/
2010 stores = rbd
2011 default_store = rbd
2012 rbd_store_chunk_size = 8
2013 rbd_store_pool = images
2014 rbd_store_user = glance
2015 rbd_store_ceph_conf = /etc/ceph/ceph.conf
# Restart and check the Glance services on all three controllers
systemctl restart openstack-glance-api.service openstack-glance-registry.service
systemctl status openstack-glance-api.service openstack-glance-registry.service

3.2 Upload an image

//After upload, the image is stored by default in the images pool of the Ceph cluster (addressed by its ID)
[root@cont01:/root]# source openrc
[root@cont01:/root]# openstack image create "cirros-qcow2" --file ~/cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | 9baf482045b9ce692f11d0d0f93cc5e0                     |
| container_format | bare                                                 |
| created_at       | 2020-03-14T08:06:49Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/5fcec4bd-ecc6-4f0a-b19c-7e9ab2a3d784/file |
| id               | 5fcec4bd-ecc6-4f0a-b19c-7e9ab2a3d784                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros-qcow2                                         |
| owner            | 04a11d4762be4cac9968b2fb0d8c438f                     |
| properties       | direct_url='rbd://bc616791-7d5a-4b1a-ab1d-30414312fcfd/images/5fcec4bd-ecc6-4f0a-b19c-7e9ab2a3d784/snap', os_hash_algo='sha512', os_hash_value='2b4d77c6904416962b044c0dd15c496a8ad2bbfb084fc2caa6f71276ee20616693326703c4b9faa29ef303cb6366a444500b3419b026943129d9fc896eec7538', os_hidden='False' |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 1042984                                              |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2020-03-14T08:06:52Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+
//Verify
[root@cont01:/root]# rbd ls images
5fcec4bd-ecc6-4f0a-b19c-7e9ab2a3d784
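
Because show_image_direct_url is enabled, the image lives in the images pool as an RBD image with a 'snap' snapshot (visible in the direct_url above) that later copy-on-write clones are based on; it can be inspected, for example, with:
# Inspect the image object and its snapshot inside the images pool
rbd info images/5fcec4bd-ecc6-4f0a-b19c-7e9ab2a3d784
rbd snap ls images/5fcec4bd-ecc6-4f0a-b19c-7e9ab2a3d784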

3.3 Set the pool application type

//After the images pool starts to be used, the Ceph cluster status becomes HEALTH_WARN
[root@cont01:/root]# ceph -s
  cluster:
    id:     bc616791-7d5a-4b1a-ab1d-30414312fcfd
    health: HEALTH_WARN
            application not enabled on 1 pool(s)
  services:
    mon: 3 daemons, quorum cont01,cont02,cont03
    mgr: cont01(active), standbys: cont02, cont03
    osd: 3 osds: 3 up, 3 in
  data:
    pools:   3 pools, 128 pgs
    objects: 7 objects, 1019KiB
    usage:   3.03GiB used, 2.73TiB / 2.73TiB avail
    pgs:     128 active+clean
//"ceph health detail" suggests a fix;
//no application type has been defined for the pool; it can be set to 'cephfs', 'rbd', 'rgw', etc.
[root@cont01:/root]# ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
    application not enabled on pool 'images'
    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
//Fix the warning; enable the application on the volumes and vms pools at the same time
[root@cont01:/root]# ceph osd pool application enable images rbd
enabled application 'rbd' on pool 'images'
[root@cont01:/root]# ceph osd pool application enable volumes rbd
enabled application 'rbd' on pool 'volumes'
[root@cont01:/root]# ceph osd pool application enable vms rbd
enabled application 'rbd' on pool 'vms'
//Verify
[root@cont01:/root]# ceph health detail
HEALTH_OK
[root@cont01:/root]# ceph osd pool application get images
{"rbd": {}
}
[root@cont01:/root]# ceph osd pool application get volumes
{"rbd": {}
}
[root@cont01:/root]# ceph osd pool application get vms
{"rbd": {}
}
[root@cont01:/root]# ceph -scluster:id:     bc616791-7d5a-4b1a-ab1d-30414312fcfdhealth: HEALTH_OKservices:mon: 3 daemons, quorum cont01,cont02,cont03mgr: cont01(active), standbys: cont02, cont03osd: 3 osds: 3 up, 3 indata:pools:   3 pools, 128 pgsobjects: 7 objects, 1019KiBusage:   3.03GiB used, 2.73TiB / 2.73TiB availpgs:     128 active+clean

4、Cinder integration with Ceph

4.1 Configure cinder.conf

//Cinder has a pluggable back-end architecture and supports multiple storage back ends at the same time;
//On the nodes running cinder-volume, configure the Ceph RBD driver in cinder.conf;
Note: only the sections that need changes for integrating Cinder with Ceph are listed below. Perform this on all three controller nodes.
[root@cont03:/root]# vim /etc/cinder/cinder.conf
# Use Ceph as the storage back end
1 [DEFAULT]
10 enabled_backends = ceph
# Add a new [ceph] section
4285 [ceph]
4286
4287 # Ceph RBD driver
4288 volume_driver = cinder.volume.drivers.rbd.RBDDriver
4289 rbd_pool = volumes
4290 rbd_ceph_conf = /etc/ceph/ceph.conf
4291 rbd_flatten_volume_from_snapshot = false
4292 rbd_max_clone_depth = 5
4293 rbd_store_chunk_size = 4
4294 rados_connect_timeout = -1
# With multiple back ends, "glance_api_version" must be set in the [DEFAULT] section
4295 glance_api_version = 2
4296 rbd_user = cinder
4297 rbd_secret_uuid = c6fce3d0-f874-44c8-a26c-8f42bdba0991
4298 volume_backend_name = ceph
[root@cont02:/root]# vim /etc/cinder/cinder.conf
# Use Ceph as the storage back end
1 [DEFAULT]
10 enabled_backends = ceph
# Add a new [ceph] section
4285 [ceph]
4286
4287 # Ceph RBD driver
4288 volume_driver = cinder.volume.drivers.rbd.RBDDriver
4289 rbd_pool = volumes
4290 rbd_ceph_conf = /etc/ceph/ceph.conf
4291 rbd_flatten_volume_from_snapshot = false
4292 rbd_max_clone_depth = 5
4293 rbd_store_chunk_size = 4
4294 rados_connect_timeout = -1
# With multiple back ends, "glance_api_version" must be set in the [DEFAULT] section
4295 glance_api_version = 2
4296 rbd_user = cinder
4297 rbd_secret_uuid = c6fce3d0-f874-44c8-a26c-8f42bdba0991
4298 volume_backend_name = ceph
[root@cont01:/root]# vim /etc/cinder/cinder.conf
# Use Ceph as the storage back end
1 [DEFAULT]
10 enabled_backends = ceph
# Add a new [ceph] section
4285 [ceph]
4286
4287 # Ceph RBD driver
4288 volume_driver = cinder.volume.drivers.rbd.RBDDriver
4289 rbd_pool = volumes
4290 rbd_ceph_conf = /etc/ceph/ceph.conf
4291 rbd_flatten_volume_from_snapshot = false
4292 rbd_max_clone_depth = 5
4293 rbd_store_chunk_size = 4
4294 rados_connect_timeout = -1
# With multiple back ends, "glance_api_version" must be set in the [DEFAULT] section
4295 glance_api_version = 2
4296 rbd_user = cinder
4297 rbd_secret_uuid = c6fce3d0-f874-44c8-a26c-8f42bdba0991
4298 volume_backend_name = ceph
# Restart the services
systemctl restart openstack-glance-api.service openstack-glance-registry.service
systemctl restart openstack-cinder-volume.service
systemctl status openstack-glance-api.service openstack-glance-registry.service
systemctl status openstack-cinder-volume.service

4.2 Verify the Cinder service

//Check the Cinder service status; after cinder-volume is integrated with Ceph, its state is "up";
[root@cont01:/root]# openstack volume service list
+------------------+-------------+------+---------+-------+----------------------------+
| Binary           | Host        | Zone | Status  | State | Updated At                 |
+------------------+-------------+------+---------+-------+----------------------------+
| cinder-scheduler | cont03      | nova | enabled | up    | 2020-03-14T08:20:38.000000 |
| cinder-scheduler | cont02      | nova | enabled | up    | 2020-03-14T08:20:39.000000 |
| cinder-scheduler | cont01      | nova | enabled | up    | 2020-03-14T08:20:36.000000 |
| cinder-volume    | cont03@ceph | nova | enabled | up    | 2020-03-14T08:20:39.000000 |
| cinder-volume    | cont02@ceph | nova | enabled | up    | 2020-03-14T08:20:35.000000 |
| cinder-volume    | cont01@ceph | nova | enabled | up    | 2020-03-14T08:20:42.000000 |
+------------------+-------------+------+---------+-------+----------------------------+

4.3 Create a volume

4.3.1 Set the volume type

//On a controller node, create a volume type for Cinder's Ceph back end; with multiple back ends this makes them distinguishable by type;
//It can be checked with "cinder type-list"
[root@cont01:/root]# cinder type-create ceph
+--------------------------------------+------+-------------+-----------+
| ID                                   | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| 1dd16f31-4449-440d-a97a-bdfedfb5acb2 | ceph | -           | True      |
+--------------------------------------+------+-------------+-----------+
// Set extra specs on the ceph type: key "volume_backend_name", value "ceph"
[root@cont01:/root]#  cinder type-key ceph set volume_backend_name=ceph
[root@cont01:/root]# cinder extra-specs-list
+--------------------------------------+------+---------------------------------+
| ID                                   | Name | extra_specs                     |
+--------------------------------------+------+---------------------------------+
| 1dd16f31-4449-440d-a97a-bdfedfb5acb2 | ceph | {'volume_backend_name': 'ceph'} |
+--------------------------------------+------+---------------------------------+

4.3.2 Create a volume

//Create the volume; the trailing "100" is the capacity in GB
[root@cont01:/root]# cinder create --volume-type ceph --name ceph-volume1 100
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2020-03-14T08:24:00.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 1bec4e01-19b9-4d5c-925a-5bcb6b655bd8 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | ceph-volume1                         |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 04a11d4762be4cac9968b2fb0d8c438f     |
| replication_status             | None                                 |
| size                           | 100                                  |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | 45fd61b53e2c45499e6e05037ce4e03c     |
| volume_type                    | ceph                                 |
+--------------------------------+--------------------------------------+
//Check the newly created volume
[root@cont01:/root]# openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID                                   | Name         | Status    | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| 1bec4e01-19b9-4d5c-925a-5bcb6b655bd8 | ceph-volume1 | available |  100 |             |
+--------------------------------------+--------------+-----------+------+-------------+
[root@cont01:/root]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name         | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 1bec4e01-19b9-4d5c-925a-5bcb6b655bd8 | available | ceph-volume1 | 100  | ceph        | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
//Check the volumes pool of the Ceph cluster
[root@cont01:/root]# rbd ls volumes
volume-1bec4e01-19b9-4d5c-925a-5bcb6b655bd8

5、Nova integration with Ceph

5.1 Configure ceph.conf

//To boot virtual machines from Ceph RBD, Ceph must be configured as Nova's ephemeral back end;
//Enabling the RBD cache in the compute nodes' configuration is recommended;
//Configure the admin socket so that every VM using Ceph RBD gets its own socket, which helps with performance analysis and troubleshooting;
//Only the [client] and [client.cinder] sections of ceph.conf on all compute nodes are involved
[root@comp01:/etc/ceph]# vim /etc/ceph/ceph.conf
[global]
fsid = bc616791-7d5a-4b1a-ab1d-30414312fcfd
mon_initial_members = cont01,cont02,cont03
mon_host = 192.168.10.21,192.168.10.22,192.168.10.23
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 3
public network = 192.168.10.0/24
cluster network = 192.168.7.0/24
[mon]
mon_allow_pool_delete = true
[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20
[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring
[root@comp02:/etc/ceph]# vim /etc/ceph/ceph.conf
[global]
fsid = bc616791-7d5a-4b1a-ab1d-30414312fcfd
mon_initial_members = cont01,cont02,cont03
mon_host = 192.168.10.21,192.168.10.22,192.168.10.23
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 3
public network = 192.168.10.0/24
cluster network = 192.168.7.0/24
[mon]
mon_allow_pool_delete = true
[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20
[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring
[root@comp03:/etc/ceph]# vim /etc/ceph/ceph.conf
[global]
fsid = bc616791-7d5a-4b1a-ab1d-30414312fcfd
mon_initial_members = cont01,cont02,cont03
mon_host = 192.168.10.21,192.168.10.22,192.168.10.23
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 3
public network = 192.168.10.0/24
cluster network = 192.168.7.0/24
[mon]
mon_allow_pool_delete = true
[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20
[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring
//Create the socket and log directories specified in ceph.conf and change their ownership
[root@comp01:/etc/ceph]# mkdir -p /var/run/ceph/guests/ /var/log/qemu/
[root@comp01:/etc/ceph]# chown qemu:libvirt /var/run/ceph/guests/ /var/log/qemu/
[root@comp02:/etc/ceph]# mkdir -p /var/run/ceph/guests/ /var/log/qemu/
[root@comp02:/etc/ceph]# chown qemu:libvirt /var/run/ceph/guests/ /var/log/qemu/
[root@comp03:/etc/ceph]# mkdir -p /var/run/ceph/guests/ /var/log/qemu/
[root@comp03:/etc/ceph]# chown qemu:libvirt /var/run/ceph/guests/ /var/log/qemu/
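
/var/run is normally a tmpfs, so the guests directory created above disappears after a reboot; one way to recreate it automatically (an addition to the original procedure, with an example file name) is a systemd-tmpfiles entry on each compute node:
# /etc/tmpfiles.d/ceph-guests.conf (example file name)
d /var/run/ceph/guests 0770 qemu libvirt -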

5.2 Configure nova.conf

// On all compute nodes, configure Nova to use the vms pool of the Ceph cluster as its back end
[root@comp01:/etc/ceph]# vim /etc/nova/nova.conf
6260 [libvirt]
6261 images_type = rbd
6262 images_rbd_pool = vms
6263 images_rbd_ceph_conf = /etc/ceph/ceph.conf
6264 rbd_user = cinder
6265 rbd_secret_uuid = c6fce3d0-f874-44c8-a26c-8f42bdba0991
6266 disk_cachemodes="network=writeback"
6267 live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
# Disable file injection
6268 inject_password = false
6269 inject_key = false
6270 inject_partition = -2
# Discard support for the instance's ephemeral root disk; with "unmap", space is released as soon as a scsi-type disk frees it
6271 hw_disk_discard = unmap
6272 virt_type = qemu
[root@comp01:/root]# egrep -v "^$|^#" /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
my_ip = 192.168.10.25
use_neutron = true
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:adminopenstack@cont01:5672,openstack:adminopenstack@cont02:5672,openstack:adminopenstack@cont03:5672
[api]
auth_strategy=keystone
[api_database]
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers=http://VirtualIP:9293
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://VirtualIP:5001/v3
memcached_servers=cont01:11211,cont02:11211,cont03:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova_typora
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = c6fce3d0-f874-44c8-a26c-8f42bdba0991
disk_cachemodes="network=writeback"
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
inject_password = false
inject_key = false
inject_partition = -2
hw_disk_discard = unmap
virt_type = qemu
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://VirtualIP:9697
auth_url = http://VirtualIP:5001
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron_typora
[notifications]
[osapi_v21]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://VirtualIP:5001/v3
username = placement
password = placement_typora
[placement_database]
[powervm]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled=true
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$my_ip
novncproxy_base_url=http://192.168.10.20:6081/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
[root@comp02:/etc/ceph]# vim /etc/nova/nova.conf
[root@comp02:/root]# egrep -v "^$|^#" /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
my_ip = 192.168.10.26
use_neutron = true
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:adminopenstack@cont01:5672,openstack:adminopenstack@cont02:5672,openstack:adminopenstack@cont03:5672
[api]
auth_strategy=keystone
[api_database]
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers=http://VirtualIP:9293
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://VirtualIP:5001/v3
memcached_servers=cont01:11211,cont02:11211,cont03:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova_typora
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = c6fce3d0-f874-44c8-a26c-8f42bdba0991
disk_cachemodes="network=writeback"
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
inject_password = false
inject_key = false
inject_partition = -2
hw_disk_discard = unmap
virt_type = qemu
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://VirtualIP:9697
auth_url = http://VirtualIP:5001
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron_typora
[notifications]
[osapi_v21]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://VirtualIP:5001/v3
username = placement
password = placement_typora
[placement_database]
[powervm]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled=true
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$my_ip
novncproxy_base_url=http://192.168.10.20:6081/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
[root@comp03:/root]# vim /etc/nova/nova.conf
[root@comp03:/root]# egrep -v "^$|^#" /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
my_ip = 192.168.10.27
use_neutron = true
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:adminopenstack@cont01:5672,openstack:adminopenstack@cont02:5672,openstack:adminopenstack@cont03:5672
[api]
auth_strategy=keystone
[api_database]
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers=http://VirtualIP:9293
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://VirtualIP:5001/v3
memcached_servers=cont01:11211,cont02:11211,cont03:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova_typora
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = c6fce3d0-f874-44c8-a26c-8f42bdba0991
disk_cachemodes="network=writeback"
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
inject_password = false
inject_key = false
inject_partition = -2
hw_disk_discard = unmap
virt_type = qemu
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://VirtualIP:9697
auth_url = http://VirtualIP:5001
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron_typora
[notifications]
[osapi_v21]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://VirtualIP:5001/v3
username = placement
password = placement_typora
[placement_database]
[powervm]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled=true
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$my_ip
novncproxy_base_url=http://192.168.10.20:6081/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]

5.3 Configure live migration

5.3.1 Modify /etc/libvirt/libvirtd.conf

//Perform on all compute nodes
[root@comp01:/etc/ceph]# vim /etc/libvirt/libvirtd.conf
# Uncomment the following three lines
22:listen_tls = 0
33:listen_tcp = 1
45:tcp_port = "16509"
# Uncomment and set the listen address
55:listen_addr = "192.168.10.25"
# Uncomment and disable authentication
158:auth_tcp = "none"
[root@comp02:/etc/ceph]# egrep -vn "^$|^#" /etc/libvirt/libvirtd.conf
22:listen_tls = 0
33:listen_tcp = 1
45:tcp_port = "16509"
55:listen_addr = "192.168.10.26"
158:auth_tcp = "none"[root@comp03:/etc/ceph]# egrep -vn "^$|^#" /etc/libvirt/libvirtd.conf
22:listen_tls = 0
33:listen_tcp = 1
45:tcp_port = "16509"
55:listen_addr = "192.168.10.27"
158:auth_tcp = "none"

5.3.2 Modify /etc/sysconfig/libvirtd

//Perform on all compute nodes
[root@comp01:/etc/ceph]# vim /etc/sysconfig/libvirtd
# Uncomment
9:LIBVIRTD_ARGS="--listen"
[root@comp02:/etc/ceph]# egrep -vn "^$|^#" /etc/sysconfig/libvirtd
9:LIBVIRTD_ARGS="--listen"
[root@comp03:/etc/ceph]# egrep -vn "^$|^#" /etc/sysconfig/libvirtd
9:LIBVIRTD_ARGS="--listen"

5.3.3 Restart the libvirtd and nova-compute services

Perform on all compute nodes


systemctl restart libvirtd.service openstack-nova-compute.service
systemctl status libvirtd.service openstack-nova-compute.service
netstat -tunlp | grep 16509 
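
With libvirtd listening on TCP 16509 on every compute node, live migration can be exercised once an instance exists; a sketch, where the instance name and target host are placeholders:
# Migrate a running instance to comp02 and confirm its new host
nova live-migration test-vm comp02
openstack server show test-vm -c OS-EXT-SRV-ATTR:host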

5.4 Verification

5.4.1 Create a bootable volume backed by Ceph

Note: when Nova boots an instance from RBD, the image must be in raw format; otherwise both glance-api and cinder report errors when the instance starts.
First convert the format, turning the *.img file into a *.raw file
[root@cont01:/root]# qemu-img convert -f qcow2 -O raw ~/cirros-0.4.0-x86_64-disk.img ~/cirros-0.4.0-x86_64-disk.raw
//Create the raw-format image
[root@cont01:/root]# openstack image create "cirros-raw" --file ~/cirros-0.4.0-x86_64-disk.raw --disk-format raw --container-format bare --public
//Create a CentOS 7 raw image
[root@cont01:/data]# ls
CentOS-7-x86_64-GenericCloud-1907.raw  CentOS-7-x86_64-GenericCloud-1907.raw.tar.gz
[root@cont01:/data]# openstack image create "CentOS-7" \
  --file /data/CentOS-7-x86_64-GenericCloud-1907.raw \
  --disk-format raw \
  --container-format bare \
  --public

//Create a bootable volume from the new image
[root@cont01:/root]# cinder create --image-id ade6c754-ff1e-4538-836a-46570e7a96ca --volume-type ceph --name ceph-bootable1 2
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2020-03-14T08:51:36.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 39f6a147-7319-477a-8a95-3bd9e85bd01b |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | ceph-bootable1                       |
| os-vol-host-attr:host          | cont02@ceph#ceph                     |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 04a11d4762be4cac9968b2fb0d8c438f     |
| replication_status             | None                                 |
| size                           | 2                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | 2020-03-14T08:51:37.000000           |
| user_id                        | 45fd61b53e2c45499e6e05037ce4e03c     |
| volume_type                    | ceph                                 |
+--------------------------------+--------------------------------------+
[root@cont01:/root]# cinder list
+--------------------------------------+-----------+----------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name           | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+----------------+------+-------------+----------+-------------+
| 1bec4e01-19b9-4d5c-925a-5bcb6b655bd8 | available | ceph-volume1   | 100  | ceph        | false    |             |
| 39f6a147-7319-477a-8a95-3bd9e85bd01b | available | ceph-bootable1 | 2    | ceph        | true     |             |
+--------------------------------------+-----------+----------------+------+-------------+----------+-------------+
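
The bootable volume can then be used to launch an instance whose root disk lives entirely in the volumes pool; a sketch, with the flavor and network names as placeholders:
# Boot an instance from the Ceph-backed bootable volume
openstack server create --volume 39f6a147-7319-477a-8a95-3bd9e85bd01b \
  --flavor m1.small --network vlan99 demo-from-ceph-volume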
