References

https://www.linuxidc.com/Linux/2017-09/146760.htm
https://www.cnblogs.com/luohaixian/p/8087591.html
http://docs.ceph.com/docs/master/start/quick-start-preflight/#rhel-centos

Introduction

Ceph's core components are Ceph OSD, Ceph Monitor, Ceph MDS, and Ceph RGW.
Ceph OSD: OSD stands for Object Storage Device. Its main jobs are storing, replicating, rebalancing, and recovering data, exchanging heartbeats with other OSDs, and reporting state changes to the Ceph Monitors. Normally one physical disk maps to one OSD, which manages that disk's storage, although a single partition can also back an OSD.
Ceph Monitor: as the name suggests, this is the monitor of the Ceph cluster. It watches cluster health and maintains the various maps of the cluster, such as the OSD Map, Monitor Map, PG Map, and CRUSH Map, collectively called the Cluster Map. The Cluster Map is the key data structure of RADOS: it records all cluster members, their relationships and attributes, and drives data placement. For example, when data is stored into the cluster, the client first fetches the latest maps from a Monitor and then computes the object's final location from the maps and the object id.
Ceph MDS: short for Ceph Metadata Server. It stores the metadata for the CephFS file system service; object storage and block storage do not need this service.
Ceph RGW: RGW stands for RADOS Gateway. Through RGW, Ceph provides object storage for cloud service providers. RGW sits on top of librados and exposes a RESTful API for applications to access the Ceph cluster, supporting both the Amazon S3 and OpenStack Swift interfaces. The simplest way to think of RGW is as a protocol translation layer: it converts S3- or Swift-compatible requests from upper-layer applications into RADOS requests and stores the data in the RADOS cluster.
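As a quick illustration of the CRUSH-based placement described above, once the cluster is up you can ask a monitor where a given object would land. The pool and object names below are only examples (Jewel creates a default rbd pool); the command prints the placement group and the acting OSD set for that object:

[root@idcv-ceph0 ~]# ceph osd map rbd test-object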

Architecture diagram

Installation and deployment

I. Base environment

0. Service layout

mon: ceph0, ceph2, ceph3 (note: the number of mons must be odd)
osd: ceph0, ceph1, ceph2, ceph3
rgw: ceph1
deploy: ceph0

1. Host name resolution

[root@idcv-ceph0 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
172.20.1.138 idcv-ceph0
172.20.1.139 idcv-ceph1
172.20.1.140 idcv-ceph2
172.20.1.141 idcv-ceph3
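The same entries have to resolve on every node, not just the deploy node. One simple way (a sketch, using the hostnames above) is to copy the file out from idcv-ceph0:

[root@idcv-ceph0 ~]# for i in idcv-ceph1 idcv-ceph2 idcv-ceph3; do scp /etc/hosts root@$i:/etc/hosts; done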

2. NTP time synchronization

[root@idcv-ceph0 ~]# ntpdate 172.20.0.63
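A one-shot ntpdate will drift again over time. A minimal sketch of keeping the nodes in sync (reusing the NTP server 172.20.0.63 from the command above) is a cron entry on every node, or running ntpd/chronyd against that server:

[root@idcv-ceph0 ~]# (crontab -l 2>/dev/null; echo '*/30 * * * * /usr/sbin/ntpdate 172.20.0.63') | crontab -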

3. Passwordless SSH login

[root@idcv-ceph0 ~]# ssh-keygen
[root@idcv-ceph0 ~]# ssh-copy-id root@idcv-ceph1
[root@idcv-ceph0 ~]# ssh-copy-id root@idcv-ceph2
[root@idcv-ceph0 ~]# ssh-copy-id root@idcv-ceph3
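To confirm the keys were installed correctly, each of these should print the remote hostname without prompting for a password:

[root@idcv-ceph0 ~]# for i in idcv-ceph1 idcv-ceph2 idcv-ceph3; do ssh root@$i hostname; done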

4. Update the system

[root@idcv-ceph0 ~]# yum update

5. Disable SELinux

[root@idcv-ceph0 ~]# sed -i 's/enforcing/disabled/g' /etc/selinux/config
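The sed only takes effect at the next boot; to also turn SELinux off in the running system before the reboot below:

[root@idcv-ceph0 ~]# setenforce 0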

6. Disable the firewall (firewalld)

[root@idcv-ceph0 ~]# systemctl disable firewalld
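systemctl disable only affects the next boot; to stop the running firewall immediately as well:

[root@idcv-ceph0 ~]# systemctl stop firewalld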

7. Reboot

[root@idcv-ceph0 ~]# reboot
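After the nodes come back up, it is worth checking that the SELinux and firewall changes actually stuck (expected output: Disabled, and disabled):

[root@idcv-ceph0 ~]# getenforce
[root@idcv-ceph0 ~]# systemctl is-enabled firewalld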

II. Install and set up the deploy node

1. Configure a domestic (Aliyun) yum mirror

[root@idcv-ceph0 ~]# cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
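This repo file only exists on the deploy node so far. A sketch of refreshing the yum cache and pushing the same repo to the other nodes (so the Ceph packages install from the Aliyun mirror everywhere) could look like this:

[root@idcv-ceph0 ~]# yum clean all && yum makecache
[root@idcv-ceph0 ~]# for i in idcv-ceph1 idcv-ceph2 idcv-ceph3; do scp /etc/yum.repos.d/ceph.repo root@$i:/etc/yum.repos.d/; done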

2. Install ceph-deploy

[root@idcv-ceph0 ~]# yum install ceph-deploy
[root@idcv-ceph0 ~]# ceph-deploy --version
1.5.39
[root@idcv-ceph0 ~]# ceph -v
ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)

3. Create a deployment directory and initialize the cluster

[root@idcv-ceph0 ~]# mkdir cluster
[root@idcv-ceph0 ~]# cd cluster
[root@idcv-ceph0 cluster]# ceph-deploy new idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy new idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] func : <function new at 0x7f7c607aa5f0>
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f7c5ff1bcf8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : ['idcv-ceph0', 'idcv-ceph1', 'idcv-ceph2', 'idcv-ceph3']
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[idcv-ceph0][DEBUG ] find the location of an executable
[idcv-ceph0][INFO ] Running command: /usr/sbin/ip link show
[idcv-ceph0][INFO ] Running command: /usr/sbin/ip addr show
[idcv-ceph0][DEBUG ] IP addresses found: [u'172.20.1.138']
[ceph_deploy.new][DEBUG ] Resolving host idcv-ceph0
[ceph_deploy.new][DEBUG ] Monitor idcv-ceph0 at 172.20.1.138
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph1][INFO ] Running command: ssh -CT -o BatchMode=yes idcv-ceph1
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[idcv-ceph1][DEBUG ] find the location of an executable
[idcv-ceph1][INFO ] Running command: sudo /usr/sbin/ip link show
[idcv-ceph1][INFO ] Running command: sudo /usr/sbin/ip addr show
[idcv-ceph1][DEBUG ] IP addresses found: [u'172.20.1.139']
[ceph_deploy.new][DEBUG ] Resolving host idcv-ceph1
[ceph_deploy.new][DEBUG ] Monitor idcv-ceph1 at 172.20.1.139
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[idcv-ceph2][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph2][INFO ] Running command: ssh -CT -o BatchMode=yes idcv-ceph2
[idcv-ceph2][DEBUG ] connection detected need for sudo
[idcv-ceph2][DEBUG ] connected to host: idcv-ceph2
[idcv-ceph2][DEBUG ] detect platform information from remote host
[idcv-ceph2][DEBUG ] detect machine type
[idcv-ceph2][DEBUG ] find the location of an executable
[idcv-ceph2][INFO ] Running command: sudo /usr/sbin/ip link show
[idcv-ceph2][INFO ] Running command: sudo /usr/sbin/ip addr show
[idcv-ceph2][DEBUG ] IP addresses found: [u'172.20.1.140']
[ceph_deploy.new][DEBUG ] Resolving host idcv-ceph2
[ceph_deploy.new][DEBUG ] Monitor idcv-ceph2 at 172.20.1.140
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[idcv-ceph3][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph3][INFO ] Running command: ssh -CT -o BatchMode=yes idcv-ceph3
[idcv-ceph3][DEBUG ] connection detected need for sudo
[idcv-ceph3][DEBUG ] connected to host: idcv-ceph3
[idcv-ceph3][DEBUG ] detect platform information from remote host
[idcv-ceph3][DEBUG ] detect machine type
[idcv-ceph3][DEBUG ] find the location of an executable
[idcv-ceph3][INFO ] Running command: sudo /usr/sbin/ip link show
[idcv-ceph3][INFO ] Running command: sudo /usr/sbin/ip addr show
[idcv-ceph3][DEBUG ] IP addresses found: [u'172.20.1.141']
[ceph_deploy.new][DEBUG ] Resolving host idcv-ceph3
[ceph_deploy.new][DEBUG ] Monitor idcv-ceph3 at 172.20.1.141
[ceph_deploy.new][DEBUG ] Monitor initial members are ['idcv-ceph0', 'idcv-ceph1', 'idcv-ceph2', 'idcv-ceph3']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['172.20.1.138', '172.20.1.139', '172.20.1.140', '172.20.1.141']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

III. Install the mon service

1. Edit the ceph.conf file
Note that the number of mons must be odd; if it is even, one of them will not be installed. Also set public_network, and slightly raise the allowed clock drift between mons (the default is 0.05 s; here it is changed to 2 s).

[root@idcv-ceph0 cluster]# cat ceph.conf
[global]
fsid = 812d3acb-eaa8-4355-9a74-64f2cd5209b3
mon_initial_members = idcv-ceph0, idcv-ceph1, idcv-ceph2, idcv-ceph3
mon_host = 172.20.1.138,172.20.1.139,172.20.1.140,172.20.1.141
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 172.20.0.0/20
mon_clock_drift_allowed = 2
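Whenever ceph.conf is edited again later, the copy under /etc/ceph on each node must match the one in the cluster directory, otherwise ceph-deploy aborts with the "exists with different content" error shown further below. One way to keep them in sync is to push the config out explicitly:

[root@idcv-ceph0 cluster]# ceph-deploy --overwrite-conf config push idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3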

2. Deploy the mon service

[root@idcv-ceph0 cluster]# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fd263377368>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x7fd26335c6e0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3
[ceph_deploy.mon][DEBUG ] detecting platform for host idcv-ceph0 ...
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[idcv-ceph0][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph0][DEBUG ] determining if provided host has same hostname in remote
[idcv-ceph0][DEBUG ] get remote short hostname
[idcv-ceph0][DEBUG ] deploying mon to idcv-ceph0
[idcv-ceph0][DEBUG ] get remote short hostname
[idcv-ceph0][DEBUG ] remote hostname: idcv-ceph0
[idcv-ceph0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph0][DEBUG ] create the mon path if it does not exist
[idcv-ceph0][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph0/done
[idcv-ceph0][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-idcv-ceph0/done
[idcv-ceph0][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-idcv-ceph0.mon.keyring
[idcv-ceph0][DEBUG ] create the monitor keyring file
[idcv-ceph0][INFO ] Running command: ceph-mon --cluster ceph --mkfs -i idcv-ceph0 --keyring /var/lib/ceph/tmp/ceph-idcv-ceph0.mon.keyring --setuser 167 --setgroup 167
[idcv-ceph0][DEBUG ] ceph-mon: renaming mon.noname-a 172.20.1.138:6789/0 to mon.idcv-ceph0
[idcv-ceph0][DEBUG ] ceph-mon: set fsid to 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[idcv-ceph0][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-idcv-ceph0 for mon.idcv-ceph0
[idcv-ceph0][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-idcv-ceph0.mon.keyring
[idcv-ceph0][DEBUG ] create a done file to avoid re-doing the mon deployment
[idcv-ceph0][DEBUG ] create the init path if it does not exist
[idcv-ceph0][INFO ] Running command: systemctl enable ceph.target
[idcv-ceph0][INFO ] Running command: systemctl enable ceph-mon@idcv-ceph0
[idcv-ceph0][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@idcv-ceph0.service to /usr/lib/systemd/system/ceph-mon@.service.
[idcv-ceph0][INFO ] Running command: systemctl start ceph-mon@idcv-ceph0
[idcv-ceph0][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status
[idcv-ceph0][DEBUG ] ****
[idcv-ceph0][DEBUG ] status for monitor: mon.idcv-ceph0
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "election_epoch": 0,
[idcv-ceph0][DEBUG ] "extra_probe_peers": [
[idcv-ceph0][DEBUG ] "172.20.1.139:6789/0",
[idcv-ceph0][DEBUG ] "172.20.1.140:6789/0",
[idcv-ceph0][DEBUG ] "172.20.1.141:6789/0"
[idcv-ceph0][DEBUG ] ],
[idcv-ceph0][DEBUG ] "monmap": {
[idcv-ceph0][DEBUG ] "created": "2018-07-03 11:06:12.249491",
[idcv-ceph0][DEBUG ] "epoch": 0,
[idcv-ceph0][DEBUG ] "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3",
[idcv-ceph0][DEBUG ] "modified": "2018-07-03 11:06:12.249491",
[idcv-ceph0][DEBUG ] "mons": [
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "addr": "172.20.1.138:6789/0",
[idcv-ceph0][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph0][DEBUG ] "rank": 0
[idcv-ceph0][DEBUG ] },
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "addr": "0.0.0.0:0/1",
[idcv-ceph0][DEBUG ] "name": "idcv-ceph1",
[idcv-ceph0][DEBUG ] "rank": 1
[idcv-ceph0][DEBUG ] },
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "addr": "0.0.0.0:0/2",
[idcv-ceph0][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph0][DEBUG ] "rank": 2
[idcv-ceph0][DEBUG ] },
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "addr": "0.0.0.0:0/3",
[idcv-ceph0][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph0][DEBUG ] "rank": 3
[idcv-ceph0][DEBUG ] }
[idcv-ceph0][DEBUG ] ]
[idcv-ceph0][DEBUG ] },
[idcv-ceph0][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph0][DEBUG ] "outside_quorum": [
[idcv-ceph0][DEBUG ] "idcv-ceph0"
[idcv-ceph0][DEBUG ] ],
[idcv-ceph0][DEBUG ] "quorum": [],
[idcv-ceph0][DEBUG ] "rank": 0,
[idcv-ceph0][DEBUG ] "state": "probing",
[idcv-ceph0][DEBUG ] "sync_provider": []
[idcv-ceph0][DEBUG ] }
[idcv-ceph0][DEBUG ] ****
[idcv-ceph0][INFO ] monitor: mon.idcv-ceph0 is running
[idcv-ceph0][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host idcv-ceph1 ...
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[idcv-ceph1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph1][DEBUG ] determining if provided host has same hostname in remote
[idcv-ceph1][DEBUG ] get remote short hostname
[idcv-ceph1][DEBUG ] deploying mon to idcv-ceph1
[idcv-ceph1][DEBUG ] get remote short hostname
[idcv-ceph1][DEBUG ] remote hostname: idcv-ceph1
[idcv-ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.mon][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite
[ceph_deploy.mon][DEBUG ] detecting platform for host idcv-ceph2 ...
[idcv-ceph2][DEBUG ] connection detected need for sudo
[idcv-ceph2][DEBUG ] connected to host: idcv-ceph2
[idcv-ceph2][DEBUG ] detect platform information from remote host
[idcv-ceph2][DEBUG ] detect machine type
[idcv-ceph2][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph2][DEBUG ] determining if provided host has same hostname in remote
[idcv-ceph2][DEBUG ] get remote short hostname
[idcv-ceph2][DEBUG ] deploying mon to idcv-ceph2
[idcv-ceph2][DEBUG ] get remote short hostname
[idcv-ceph2][DEBUG ] remote hostname: idcv-ceph2
[idcv-ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph2][DEBUG ] create the mon path if it does not exist
[idcv-ceph2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph2/done
[idcv-ceph2][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-idcv-ceph2/done
[idcv-ceph2][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-idcv-ceph2.mon.keyring
[idcv-ceph2][DEBUG ] create the monitor keyring file
[idcv-ceph2][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i idcv-ceph2 --keyring /var/lib/ceph/tmp/ceph-idcv-ceph2.mon.keyring --setuser 167 --setgroup 167
[idcv-ceph2][DEBUG ] ceph-mon: renaming mon.noname-c 172.20.1.140:6789/0 to mon.idcv-ceph2
[idcv-ceph2][DEBUG ] ceph-mon: set fsid to 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[idcv-ceph2][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-idcv-ceph2 for mon.idcv-ceph2
[idcv-ceph2][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-idcv-ceph2.mon.keyring
[idcv-ceph2][DEBUG ] create a done file to avoid re-doing the mon deployment
[idcv-ceph2][DEBUG ] create the init path if it does not exist
[idcv-ceph2][INFO ] Running command: sudo systemctl enable ceph.target
[idcv-ceph2][INFO ] Running command: sudo systemctl enable ceph-mon@idcv-ceph2
[idcv-ceph2][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@idcv-ceph2.service to /usr/lib/systemd/system/ceph-mon@.service.
[idcv-ceph2][INFO ] Running command: sudo systemctl start ceph-mon@idcv-ceph2
[idcv-ceph2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph2.asok mon_status
[idcv-ceph2][DEBUG ] ****
[idcv-ceph2][DEBUG ] status for monitor: mon.idcv-ceph2
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "election_epoch": 0,
[idcv-ceph2][DEBUG ] "extra_probe_peers": [
[idcv-ceph2][DEBUG ] "172.20.1.138:6789/0",
[idcv-ceph2][DEBUG ] "172.20.1.139:6789/0",
[idcv-ceph2][DEBUG ] "172.20.1.141:6789/0"
[idcv-ceph2][DEBUG ] ],
[idcv-ceph2][DEBUG ] "monmap": {
[idcv-ceph2][DEBUG ] "created": "2018-07-03 11:06:15.703352",
[idcv-ceph2][DEBUG ] "epoch": 0,
[idcv-ceph2][DEBUG ] "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3",
[idcv-ceph2][DEBUG ] "modified": "2018-07-03 11:06:15.703352",
[idcv-ceph2][DEBUG ] "mons": [
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "addr": "172.20.1.138:6789/0",
[idcv-ceph2][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph2][DEBUG ] "rank": 0
[idcv-ceph2][DEBUG ] },
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "addr": "172.20.1.140:6789/0",
[idcv-ceph2][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph2][DEBUG ] "rank": 1
[idcv-ceph2][DEBUG ] },
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "addr": "0.0.0.0:0/2",
[idcv-ceph2][DEBUG ] "name": "idcv-ceph1",
[idcv-ceph2][DEBUG ] "rank": 2
[idcv-ceph2][DEBUG ] },
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "addr": "0.0.0.0:0/3",
[idcv-ceph2][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph2][DEBUG ] "rank": 3
[idcv-ceph2][DEBUG ] }
[idcv-ceph2][DEBUG ] ]
[idcv-ceph2][DEBUG ] },
[idcv-ceph2][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph2][DEBUG ] "outside_quorum": [
[idcv-ceph2][DEBUG ] "idcv-ceph0",
[idcv-ceph2][DEBUG ] "idcv-ceph2"
[idcv-ceph2][DEBUG ] ],
[idcv-ceph2][DEBUG ] "quorum": [],
[idcv-ceph2][DEBUG ] "rank": 1,
[idcv-ceph2][DEBUG ] "state": "probing",
[idcv-ceph2][DEBUG ] "sync_provider": []
[idcv-ceph2][DEBUG ] }
[idcv-ceph2][DEBUG ] ****
[idcv-ceph2][INFO ] monitor: mon.idcv-ceph2 is running
[idcv-ceph2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph2.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host idcv-ceph3 ...
[idcv-ceph3][DEBUG ] connection detected need for sudo
[idcv-ceph3][DEBUG ] connected to host: idcv-ceph3
[idcv-ceph3][DEBUG ] detect platform information from remote host
[idcv-ceph3][DEBUG ] detect machine type
[idcv-ceph3][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph3][DEBUG ] determining if provided host has same hostname in remote
[idcv-ceph3][DEBUG ] get remote short hostname
[idcv-ceph3][DEBUG ] deploying mon to idcv-ceph3
[idcv-ceph3][DEBUG ] get remote short hostname
[idcv-ceph3][DEBUG ] remote hostname: idcv-ceph3
[idcv-ceph3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph3][DEBUG ] create the mon path if it does not exist
[idcv-ceph3][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph3/done
[idcv-ceph3][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-idcv-ceph3/done
[idcv-ceph3][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-idcv-ceph3.mon.keyring
[idcv-ceph3][DEBUG ] create the monitor keyring file
[idcv-ceph3][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i idcv-ceph3 --keyring /var/lib/ceph/tmp/ceph-idcv-ceph3.mon.keyring --setuser 167 --setgroup 167
[idcv-ceph3][DEBUG ] ceph-mon: renaming mon.noname-d 172.20.1.141:6789/0 to mon.idcv-ceph3
[idcv-ceph3][DEBUG ] ceph-mon: set fsid to 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[idcv-ceph3][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-idcv-ceph3 for mon.idcv-ceph3
[idcv-ceph3][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-idcv-ceph3.mon.keyring
[idcv-ceph3][DEBUG ] create a done file to avoid re-doing the mon deployment
[idcv-ceph3][DEBUG ] create the init path if it does not exist
[idcv-ceph3][INFO ] Running command: sudo systemctl enable ceph.target
[idcv-ceph3][INFO ] Running command: sudo systemctl enable ceph-mon@idcv-ceph3
[idcv-ceph3][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@idcv-ceph3.service to /usr/lib/systemd/system/ceph-mon@.service.
[idcv-ceph3][INFO ] Running command: sudo systemctl start ceph-mon@idcv-ceph3
[idcv-ceph3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph3.asok mon_status
[idcv-ceph3][DEBUG ] ****
[idcv-ceph3][DEBUG ] status for monitor: mon.idcv-ceph3
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "election_epoch": 1,
[idcv-ceph3][DEBUG ] "extra_probe_peers": [
[idcv-ceph3][DEBUG ] "172.20.1.138:6789/0",
[idcv-ceph3][DEBUG ] "172.20.1.139:6789/0",
[idcv-ceph3][DEBUG ] "172.20.1.140:6789/0"
[idcv-ceph3][DEBUG ] ],
[idcv-ceph3][DEBUG ] "monmap": {
[idcv-ceph3][DEBUG ] "created": "2018-07-03 11:06:18.695039",
[idcv-ceph3][DEBUG ] "epoch": 0,
[idcv-ceph3][DEBUG ] "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3",
[idcv-ceph3][DEBUG ] "modified": "2018-07-03 11:06:18.695039",
[idcv-ceph3][DEBUG ] "mons": [
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "addr": "172.20.1.138:6789/0",
[idcv-ceph3][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph3][DEBUG ] "rank": 0
[idcv-ceph3][DEBUG ] },
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "addr": "172.20.1.140:6789/0",
[idcv-ceph3][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph3][DEBUG ] "rank": 1
[idcv-ceph3][DEBUG ] },
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "addr": "172.20.1.141:6789/0",
[idcv-ceph3][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph3][DEBUG ] "rank": 2
[idcv-ceph3][DEBUG ] },
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "addr": "0.0.0.0:0/2",
[idcv-ceph3][DEBUG ] "name": "idcv-ceph1",
[idcv-ceph3][DEBUG ] "rank": 3
[idcv-ceph3][DEBUG ] }
[idcv-ceph3][DEBUG ] ]
[idcv-ceph3][DEBUG ] },
[idcv-ceph3][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph3][DEBUG ] "outside_quorum": [],
[idcv-ceph3][DEBUG ] "quorum": [],
[idcv-ceph3][DEBUG ] "rank": 2,
[idcv-ceph3][DEBUG ] "state": "electing",
[idcv-ceph3][DEBUG ] "sync_provider": []
[idcv-ceph3][DEBUG ] }
[idcv-ceph3][DEBUG ] ****
[idcv-ceph3][INFO ] monitor: mon.idcv-ceph3 is running
[idcv-ceph3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph3.asok mon_status
[ceph_deploy][ERROR ] GenericError: Failed to create 1 monitors

3. Note that the number of mon nodes must be odd. From the error above, one node failed to get the mon service installed, so idcv-ceph1 needs to be removed.

[root@idcv-ceph0 cluster]# cat ceph.conf
[global]
fsid = 812d3acb-eaa8-4355-9a74-64f2cd5209b3
mon_initial_members = idcv-ceph0, idcv-ceph1, idcv-ceph2, idcv-ceph3
mon_host = 172.20.1.138,172.20.1.139,172.20.1.140,172.20.1.141
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 172.20.0.0/20
mon_clock_drift_allowed = 2
[root@idcv-ceph0 cluster]# ceph mon remove idcv-ceph1
removing mon.idcv-ceph1 at 0.0.0.0:0/1, there will be 3 monitors
[root@idcv-ceph0 cluster]# ceph -s
cluster 812d3acb-eaa8-4355-9a74-64f2cd5209b3
health HEALTH_ERR
64 pgs are stuck inactive for more than 300 seconds
64 pgs stuck inactive
64 pgs stuck unclean
no osds
monmap e2: 3 mons at {idcv-ceph0=172.20.1.138:6789/0,idcv-ceph2=172.20.1.140:6789/0,idcv-ceph3=172.20.1.141:6789/0}
election epoch 8, quorum 0,1,2 idcv-ceph0,idcv-ceph2,idcv-ceph3
osdmap e1: 0 osds: 0 up, 0 in
flags sortbitwise,require_jewel_osds
pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
0 kB used, 0 kB / 0 kB avail
64 creating
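To double-check that the three remaining monitors really formed a quorum, the monitor map and quorum status can be queried directly:

[root@idcv-ceph0 cluster]# ceph mon stat
[root@idcv-ceph0 cluster]# ceph quorum_status --format json-pretty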

4. Alternatively, edit ceph.conf and run the deployment again with --overwrite-conf

[root@idcv-ceph0 cluster]# cat ceph.conf
[global]
fsid = 812d3acb-eaa8-4355-9a74-64f2cd5209b3
mon_initial_members = idcv-ceph0, idcv-ceph2, idcv-ceph3
mon_host = 172.20.1.138,172.20.1.140,172.20.1.141
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 172.20.0.0/20
mon_clock_drift_allowed = 2

[root@idcv-ceph0 cluster]# ceph-deploy --overwrite-conf mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy --overwrite-conf mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : True
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fce9cf7a368>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x7fce9cf5f6e0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts idcv-ceph0 idcv-ceph2 idcv-ceph3
[ceph_deploy.mon][DEBUG ] detecting platform for host idcv-ceph0 ...
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[idcv-ceph0][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph0][DEBUG ] determining if provided host has same hostname in remote
[idcv-ceph0][DEBUG ] get remote short hostname
[idcv-ceph0][DEBUG ] deploying mon to idcv-ceph0
[idcv-ceph0][DEBUG ] get remote short hostname
[idcv-ceph0][DEBUG ] remote hostname: idcv-ceph0
[idcv-ceph0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph0][DEBUG ] create the mon path if it does not exist
[idcv-ceph0][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph0/done
[idcv-ceph0][DEBUG ] create a done file to avoid re-doing the mon deployment
[idcv-ceph0][DEBUG ] create the init path if it does not exist
[idcv-ceph0][INFO ] Running command: systemctl enable ceph.target
[idcv-ceph0][INFO ] Running command: systemctl enable ceph-mon@idcv-ceph0
[idcv-ceph0][INFO ] Running command: systemctl start ceph-mon@idcv-ceph0
[idcv-ceph0][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status
[idcv-ceph0][DEBUG ] ****
[idcv-ceph0][DEBUG ] status for monitor: mon.idcv-ceph0
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "election_epoch": 8,
[idcv-ceph0][DEBUG ] "extra_probe_peers": [
[idcv-ceph0][DEBUG ] "172.20.1.139:6789/0",
[idcv-ceph0][DEBUG ] "172.20.1.140:6789/0",
[idcv-ceph0][DEBUG ] "172.20.1.141:6789/0"
[idcv-ceph0][DEBUG ] ],
[idcv-ceph0][DEBUG ] "monmap": {
[idcv-ceph0][DEBUG ] "created": "2018-07-03 11:06:12.249491",
[idcv-ceph0][DEBUG ] "epoch": 2,
[idcv-ceph0][DEBUG ] "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3",
[idcv-ceph0][DEBUG ] "modified": "2018-07-03 11:21:27.254076",
[idcv-ceph0][DEBUG ] "mons": [
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "addr": "172.20.1.138:6789/0",
[idcv-ceph0][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph0][DEBUG ] "rank": 0
[idcv-ceph0][DEBUG ] },
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "addr": "172.20.1.140:6789/0",
[idcv-ceph0][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph0][DEBUG ] "rank": 1
[idcv-ceph0][DEBUG ] },
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "addr": "172.20.1.141:6789/0",
[idcv-ceph0][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph0][DEBUG ] "rank": 2
[idcv-ceph0][DEBUG ] }
[idcv-ceph0][DEBUG ] ]
[idcv-ceph0][DEBUG ] },
[idcv-ceph0][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph0][DEBUG ] "outside_quorum": [],
[idcv-ceph0][DEBUG ] "quorum": [
[idcv-ceph0][DEBUG ] 0,
[idcv-ceph0][DEBUG ] 1,
[idcv-ceph0][DEBUG ] 2
[idcv-ceph0][DEBUG ] ],
[idcv-ceph0][DEBUG ] "rank": 0,
[idcv-ceph0][DEBUG ] "state": "leader",
[idcv-ceph0][DEBUG ] "sync_provider": []
[idcv-ceph0][DEBUG ] }
[idcv-ceph0][DEBUG ] ****
[idcv-ceph0][INFO ] monitor: mon.idcv-ceph0 is running
[idcv-ceph0][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host idcv-ceph2 ...
[idcv-ceph2][DEBUG ] connection detected need for sudo
[idcv-ceph2][DEBUG ] connected to host: idcv-ceph2
[idcv-ceph2][DEBUG ] detect platform information from remote host
[idcv-ceph2][DEBUG ] detect machine type
[idcv-ceph2][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph2][DEBUG ] determining if provided host has same hostname in remote
[idcv-ceph2][DEBUG ] get remote short hostname
[idcv-ceph2][DEBUG ] deploying mon to idcv-ceph2
[idcv-ceph2][DEBUG ] get remote short hostname
[idcv-ceph2][DEBUG ] remote hostname: idcv-ceph2
[idcv-ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph2][DEBUG ] create the mon path if it does not exist
[idcv-ceph2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph2/done
[idcv-ceph2][DEBUG ] create a done file to avoid re-doing the mon deployment
[idcv-ceph2][DEBUG ] create the init path if it does not exist
[idcv-ceph2][INFO ] Running command: sudo systemctl enable ceph.target
[idcv-ceph2][INFO ] Running command: sudo systemctl enable ceph-mon@idcv-ceph2
[idcv-ceph2][INFO ] Running command: sudo systemctl start ceph-mon@idcv-ceph2
[idcv-ceph2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph2.asok mon_status
[idcv-ceph2][DEBUG ] ****
[idcv-ceph2][DEBUG ] status for monitor: mon.idcv-ceph2
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "election_epoch": 8,
[idcv-ceph2][DEBUG ] "extra_probe_peers": [
[idcv-ceph2][DEBUG ] "172.20.1.138:6789/0",
[idcv-ceph2][DEBUG ] "172.20.1.139:6789/0",
[idcv-ceph2][DEBUG ] "172.20.1.141:6789/0"
[idcv-ceph2][DEBUG ] ],
[idcv-ceph2][DEBUG ] "monmap": {
[idcv-ceph2][DEBUG ] "created": "2018-07-03 11:06:12.249491",
[idcv-ceph2][DEBUG ] "epoch": 2,
[idcv-ceph2][DEBUG ] "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3",
[idcv-ceph2][DEBUG ] "modified": "2018-07-03 11:21:27.254076",
[idcv-ceph2][DEBUG ] "mons": [
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "addr": "172.20.1.138:6789/0",
[idcv-ceph2][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph2][DEBUG ] "rank": 0
[idcv-ceph2][DEBUG ] },
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "addr": "172.20.1.140:6789/0",
[idcv-ceph2][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph2][DEBUG ] "rank": 1
[idcv-ceph2][DEBUG ] },
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "addr": "172.20.1.141:6789/0",
[idcv-ceph2][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph2][DEBUG ] "rank": 2
[idcv-ceph2][DEBUG ] }
[idcv-ceph2][DEBUG ] ]
[idcv-ceph2][DEBUG ] },
[idcv-ceph2][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph2][DEBUG ] "outside_quorum": [],
[idcv-ceph2][DEBUG ] "quorum": [
[idcv-ceph2][DEBUG ] 0,
[idcv-ceph2][DEBUG ] 1,
[idcv-ceph2][DEBUG ] 2
[idcv-ceph2][DEBUG ] ],
[idcv-ceph2][DEBUG ] "rank": 1,
[idcv-ceph2][DEBUG ] "state": "peon",
[idcv-ceph2][DEBUG ] "sync_provider": []
[idcv-ceph2][DEBUG ] }
[idcv-ceph2][DEBUG ] ****
[idcv-ceph2][INFO ] monitor: mon.idcv-ceph2 is running
[idcv-ceph2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph2.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host idcv-ceph3 ...
[idcv-ceph3][DEBUG ] connection detected need for sudo
[idcv-ceph3][DEBUG ] connected to host: idcv-ceph3
[idcv-ceph3][DEBUG ] detect platform information from remote host
[idcv-ceph3][DEBUG ] detect machine type
[idcv-ceph3][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph3][DEBUG ] determining if provided host has same hostname in remote
[idcv-ceph3][DEBUG ] get remote short hostname
[idcv-ceph3][DEBUG ] deploying mon to idcv-ceph3
[idcv-ceph3][DEBUG ] get remote short hostname
[idcv-ceph3][DEBUG ] remote hostname: idcv-ceph3
[idcv-ceph3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph3][DEBUG ] create the mon path if it does not exist
[idcv-ceph3][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph3/done
[idcv-ceph3][DEBUG ] create a done file to avoid re-doing the mon deployment
[idcv-ceph3][DEBUG ] create the init path if it does not exist
[idcv-ceph3][INFO ] Running command: sudo systemctl enable ceph.target
[idcv-ceph3][INFO ] Running command: sudo systemctl enable ceph-mon@idcv-ceph3
[idcv-ceph3][INFO ] Running command: sudo systemctl start ceph-mon@idcv-ceph3
[idcv-ceph3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph3.asok mon_status
[idcv-ceph3][DEBUG ] ****
[idcv-ceph3][DEBUG ] status for monitor: mon.idcv-ceph3
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "election_epoch": 8,
[idcv-ceph3][DEBUG ] "extra_probe_peers": [
[idcv-ceph3][DEBUG ] "172.20.1.138:6789/0",
[idcv-ceph3][DEBUG ] "172.20.1.139:6789/0",
[idcv-ceph3][DEBUG ] "172.20.1.140:6789/0"
[idcv-ceph3][DEBUG ] ],
[idcv-ceph3][DEBUG ] "monmap": {
[idcv-ceph3][DEBUG ] "created": "2018-07-03 11:06:12.249491",
[idcv-ceph3][DEBUG ] "epoch": 2,
[idcv-ceph3][DEBUG ] "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3",
[idcv-ceph3][DEBUG ] "modified": "2018-07-03 11:21:27.254076",
[idcv-ceph3][DEBUG ] "mons": [
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "addr": "172.20.1.138:6789/0",
[idcv-ceph3][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph3][DEBUG ] "rank": 0
[idcv-ceph3][DEBUG ] },
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "addr": "172.20.1.140:6789/0",
[idcv-ceph3][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph3][DEBUG ] "rank": 1
[idcv-ceph3][DEBUG ] },
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "addr": "172.20.1.141:6789/0",
[idcv-ceph3][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph3][DEBUG ] "rank": 2
[idcv-ceph3][DEBUG ] }
[idcv-ceph3][DEBUG ] ]
[idcv-ceph3][DEBUG ] },
[idcv-ceph3][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph3][DEBUG ] "outside_quorum": [],
[idcv-ceph3][DEBUG ] "quorum": [
[idcv-ceph3][DEBUG ] 0,
[idcv-ceph3][DEBUG ] 1,
[idcv-ceph3][DEBUG ] 2
[idcv-ceph3][DEBUG ] ],
[idcv-ceph3][DEBUG ] "rank": 2,
[idcv-ceph3][DEBUG ] "state": "peon",
[idcv-ceph3][DEBUG ] "sync_provider": []
[idcv-ceph3][DEBUG ] }
[idcv-ceph3][DEBUG ] ****
[idcv-ceph3][INFO ] monitor: mon.idcv-ceph3 is running
[idcv-ceph3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph3.asok mon_status
[ceph_deploy.mon][INFO ] processing monitor mon.idcv-ceph0
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[idcv-ceph0][DEBUG ] find the location of an executable
[idcv-ceph0][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status
[ceph_deploy.mon][INFO ] mon.idcv-ceph0 monitor has reached quorum!
[ceph_deploy.mon][INFO ] processing monitor mon.idcv-ceph2
[idcv-ceph2][DEBUG ] connection detected need for sudo
[idcv-ceph2][DEBUG ] connected to host: idcv-ceph2
[idcv-ceph2][DEBUG ] detect platform information from remote host
[idcv-ceph2][DEBUG ] detect machine type
[idcv-ceph2][DEBUG ] find the location of an executable
[idcv-ceph2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph2.asok mon_status
[ceph_deploy.mon][INFO ] mon.idcv-ceph2 monitor has reached quorum!
[ceph_deploy.mon][INFO ] processing monitor mon.idcv-ceph3
[idcv-ceph3][DEBUG ] connection detected need for sudo
[idcv-ceph3][DEBUG ] connected to host: idcv-ceph3
[idcv-ceph3][DEBUG ] detect platform information from remote host
[idcv-ceph3][DEBUG ] detect machine type
[idcv-ceph3][DEBUG ] find the location of an executable
[idcv-ceph3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph3.asok mon_status
[ceph_deploy.mon][INFO ] mon.idcv-ceph3 monitor has reached quorum!
[ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory /tmp/tmpBqY1be
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[idcv-ceph0][DEBUG ] get remote short hostname
[idcv-ceph0][DEBUG ] fetch remote file
[idcv-ceph0][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status
[idcv-ceph0][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get client.admin
[idcv-ceph0][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get client.bootstrap-mds
[idcv-ceph0][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get client.bootstrap-mgr
[idcv-ceph0][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get-or-create client.bootstrap-mgr mon allow profile bootstrap-mgr
[idcv-ceph0][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get client.bootstrap-osd
[idcv-ceph0][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpBqY1be

[root@idcv-ceph0 cluster]# ls
ceph.bootstrap-mds.keyring ceph.bootstrap-osd.keyring ceph.client.admin.keyring ceph-deploy-ceph.lo
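With the keyrings gathered, a common follow-up (not shown in the original output) is to push the admin keyring and ceph.conf to every node so that plain ceph commands work on each of them, and to make the keyring readable:

[root@idcv-ceph0 cluster]# ceph-deploy admin idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3
[root@idcv-ceph0 cluster]# chmod +r /etc/ceph/ceph.client.admin.keyring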

V. Deploy the OSD role

Prepare the disks first, then activate them:
ceph-deploy --overwrite-conf osd prepare idcv-ceph0:/dev/sdb idcv-ceph1:/dev/sdb idcv-ceph2:/dev/sdb idcv-ceph3:/dev/sdb --zap-disk
ceph-deploy --overwrite-conf osd activate idcv-ceph0:/dev/sdb1 idcv-ceph1:/dev/sdb1 idcv-ceph2:/dev/sdb1 idcv-ceph3:/dev/sdb1
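Before preparing, it is worth confirming on each node that /dev/sdb really is the spare disk (lsblk should show it with no partitions or mounts); after the activate step, the new OSDs can be verified from the deploy node. These checks are supplementary and not part of the original run:

[root@idcv-ceph1 ~]# lsblk
[root@idcv-ceph0 cluster]# ceph osd tree
[root@idcv-ceph0 cluster]# ceph -s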

[root@idcv-ceph0 cluster]# ceph-deploy --overwrite-conf osd prepare idcv-ceph0:/dev/sdb idcv-ceph1:/dev/sdb idcv-ceph2:/dev/sdb idcv-ceph3:/dev/sdb --zap-disk
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy --overwrite-conf osd prepare idcv-ceph0:/dev/sdb idcv-ceph1:/dev/sdb idcv-ceph2:/dev/sdb idcv-ceph3:/dev/sdb --zap-disk
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] block_db : None
[ceph_deploy.cli][INFO ] disk : [('idcv-ceph0', '/dev/sdb', None), ('idcv-ceph1', '/dev/sdb', None), ('idcv-ceph2', '/dev/sdb', None), ('idcv-ceph3', '/dev/sdb', None)]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] block_wal : None
[ceph_deploy.cli][INFO ] overwrite_conf : True
[ceph_deploy.cli][INFO ] subcommand : prepare
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f103c7f35a8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] filestore : None
[ceph_deploy.cli][INFO ] func : <function osd at 0x7f103c846f50>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : True
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks idcv-ceph0:/dev/sdb: idcv-ceph1:/dev/sdb: idcv-ceph2:/dev/sdb: idcv-ceph3:/dev/sdb:
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[idcv-ceph0][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to idcv-ceph0
[idcv-ceph0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host idcv-ceph0 disk /dev/sdb journal None activate False
[idcv-ceph0][DEBUG ] find the location of an executable
[idcv-ceph0][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] set_type: Will colocate journal with data on /dev/sdb
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] zap: Zapping partition table on /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --zap-all -- /dev/sdb
[idcv-ceph0][WARNIN] Caution: invalid backup GPT header, but valid main header; regenerating
[idcv-ceph0][WARNIN] backup header from main header.
[idcv-ceph0][WARNIN]
[idcv-ceph0][DEBUG ] ****
[idcv-ceph0][DEBUG ] Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
[idcv-ceph0][DEBUG ] verification and recovery are STRONGLY recommended.
[idcv-ceph0][DEBUG ] ****
[idcv-ceph0][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[idcv-ceph0][DEBUG ] other utilities.
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --clear --mbrtogpt -- /dev/sdb
[idcv-ceph0][DEBUG ] Creating new GPT entries.
[idcv-ceph0][DEBUG ] The operation has completed successfully.
[idcv-ceph0][WARNIN] update_partition: Calling partprobe on zapped device /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] ptype_tobe_for_name: name = journal
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] create_partition: Creating journal partition num 2 size 5120 on /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:ca6594bd-a4b2-4be7-9aa5-69ba91ce7441 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb
[idcv-ceph0][DEBUG ] The operation has completed successfully.
[idcv-ceph0][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is /sys/dev/block/8:18/dm/uuid
[idcv-ceph0][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/ca6594bd-a4b2-4be7-9aa5-69ba91ce7441
[idcv-ceph0][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/ca6594bd-a4b2-4be7-9aa5-69ba91ce7441
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] set_data_partition: Creating osd partition on /dev/sdb
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] ptype_tobe_for_name: name = data
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:3b210c8e-b2ac-4266-9e59-623c031ebb89 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdb
[idcv-ceph0][DEBUG ] The operation has completed successfully.
[idcv-ceph0][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[idcv-ceph0][WARNIN] populate_data_path_device: Creating xfs fs on /dev/sdb1
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[idcv-ceph0][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=6225855 blks
[idcv-ceph0][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[idcv-ceph0][DEBUG ] = crc=1 finobt=0, sparse=0
[idcv-ceph0][DEBUG ] data = bsize=4096 blocks=24903419, imaxpct=25
[idcv-ceph0][DEBUG ] = sunit=0 swidth=0 blks
[idcv-ceph0][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[idcv-ceph0][DEBUG ] log =internal log bsize=4096 blocks=12159, version=2
[idcv-ceph0][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[idcv-ceph0][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[idcv-ceph0][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.kvs_nq with options noatime,inode64
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.kvs_nq
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.kvs_nq
[idcv-ceph0][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.kvs_nq
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.kvs_nq/ceph_fsid.2933.tmp
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.kvs_nq/ceph_fsid.2933.tmp
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.kvs_nq/fsid.2933.tmp
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.kvs_nq/fsid.2933.tmp
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.kvs_nq/magic.2933.tmp
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.kvs_nq/magic.2933.tmp
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.kvs_nq/journal_uuid.2933.tmp
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.kvs_nq/journal_uuid.2933.tmp
[idcv-ceph0][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.kvs_nq/journal -> /dev/disk/by-partuuid/ca6594bd-a4b2-4be7-9aa5-69ba91ce7441
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.kvs_nq
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.kvs_nq
[idcv-ceph0][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.kvs_nq
[idcv-ceph0][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.kvs_nq
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[idcv-ceph0][DEBUG ] Warning: The kernel is still using the old partition table.
[idcv-ceph0][DEBUG ] The new table will be used at the next reboot.
[idcv-ceph0][DEBUG ] The operation has completed successfully.
[idcv-ceph0][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdb1
[idcv-ceph0][INFO ] checking OSD status...
[idcv-ceph0][DEBUG ] find the location of an executable
[idcv-ceph0][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host idcv-ceph0 is now ready for osd use.
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[idcv-ceph1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to idcv-ceph1
[idcv-ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host idcv-ceph1 disk /dev/sdb journal None activate False
[idcv-ceph1][DEBUG ] find the location of an executable
[idcv-ceph1][INFO ] Running command: sudo /usr/sbin/ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] set_type: Will colocate journal with data on /dev/sdb
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] zap: Zapping partition table on /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --zap-all -- /dev/sdb
[idcv-ceph1][DEBUG ] Creating new GPT entries.
[idcv-ceph1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[idcv-ceph1][DEBUG ] other utilities.
[idcv-ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --clear --mbrtogpt -- /dev/sdb
[idcv-ceph1][DEBUG ] Creating new GPT entries.
[idcv-ceph1][DEBUG ] The operation has completed successfully.
[idcv-ceph1][WARNIN] update_partition: Calling partprobe on zapped device /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] ptype_tobe_for_name: name = journal
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] create_partition: Creating journal partition num 2 size 5120 on /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:09dad07a-985e-4733-a228-f7b1105b7385 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb
[idcv-ceph1][DEBUG ] The operation has completed successfully.
[idcv-ceph1][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is /sys/dev/block/8:18/dm/uuid
[idcv-ceph1][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/09dad07a-985e-4733-a228-f7b1105b7385
[idcv-ceph1][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/09dad07a-985e-4733-a228-f7b1105b7385
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] set_data_partition: Creating osd partition on /dev/sdb
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] ptype_tobe_for_name: name = data
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:2809f370-e6ad-4d29-bf6b-57fe1f2004c6 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdb
[idcv-ceph1][DEBUG ] The operation has completed successfully.
[idcv-ceph1][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[idcv-ceph1][WARNIN] populate_data_path_device: Creating xfs fs on /dev/sdb1
[idcv-ceph1][WARNIN] command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[idcv-ceph1][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=6225855 blks
[idcv-ceph1][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[idcv-ceph1][DEBUG ] = crc=1 finobt=0, sparse=0
[idcv-ceph1][DEBUG ] data = bsize=4096 blocks=24903419, imaxpct=25
[idcv-ceph1][DEBUG ] = sunit=0 swidth=0 blks
[idcv-ceph1][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[idcv-ceph1][DEBUG ] log =internal log bsize=4096 blocks=12159, version=2
[idcv-ceph1][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[idcv-ceph1][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[idcv-ceph1][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.HAg1vC with options noatime,inode64
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.HAg1vC
[idcv-ceph1][WARNIN] command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.HAg1vC
[idcv-ceph1][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.HAg1vC
[idcv-ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.HAg1vC/ceph_fsid.2415.tmp
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HAg1vC/ceph_fsid.2415.tmp
[idcv-ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.HAg1vC/fsid.2415.tmp
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HAg1vC/fsid.2415.tmp
[idcv-ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.HAg1vC/magic.2415.tmp
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HAg1vC/magic.2415.tmp
[idcv-ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.HAg1vC/journal_uuid.2415.tmp
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HAg1vC/journal_uuid.2415.tmp
[idcv-ceph1][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.HAg1vC/journal -> /dev/disk/by-partuuid/09dad07a-985e-4733-a228-f7b1105b7385
[idcv-ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.HAg1vC
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HAg1vC
[idcv-ceph1][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.HAg1vC
[idcv-ceph1][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.HAg1vC
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[idcv-ceph1][DEBUG ] The operation has completed successfully.
[idcv-ceph1][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdb1
[idcv-ceph1][INFO ] checking OSD status...
[idcv-ceph1][DEBUG ] find the location of an executable
[idcv-ceph1][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host idcv-ceph1 is now ready for osd use.
[idcv-ceph2][DEBUG ] connection detected need for sudo
[idcv-ceph2][DEBUG ] connected to host: idcv-ceph2
[idcv-ceph2][DEBUG ] detect platform information from remote host
[idcv-ceph2][DEBUG ] detect machine type
[idcv-ceph2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to idcv-ceph2
[idcv-ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host idcv-ceph2 disk /dev/sdb journal None activate False
[idcv-ceph2][DEBUG ] find the location of an executable
[idcv-ceph2][INFO ] Running command: sudo /usr/sbin/ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] set_type: Will colocate journal with data on /dev/sdb
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] zap: Zapping partition table on /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /sbin/sgdisk --zap-all -- /dev/sdb
[idcv-ceph2][DEBUG ] Creating new GPT entries.
[idcv-ceph2][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[idcv-ceph2][DEBUG ] other utilities.
[idcv-ceph2][WARNIN] command_check_call: Running command: /sbin/sgdisk --clear --mbrtogpt -- /dev/sdb
[idcv-ceph2][DEBUG ] Creating new GPT entries.
[idcv-ceph2][DEBUG ] The operation has completed successfully.
[idcv-ceph2][WARNIN] update_partition: Calling partprobe on zapped device /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] ptype_tobe_for_name: name = journal
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] create_partition: Creating journal partition num 2 size 5120 on /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:857f0966-30d5-4ad1-9e0c-abff0fbbbc4e --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb
[idcv-ceph2][DEBUG ] The operation has completed successfully.
[idcv-ceph2][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is /sys/dev/block/8:18/dm/uuid
[idcv-ceph2][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/857f0966-30d5-4ad1-9e0c-abff0fbbbc4e
[idcv-ceph2][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/857f0966-30d5-4ad1-9e0c-abff0fbbbc4e
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] set_data_partition: Creating osd partition on /dev/sdb
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] ptype_tobe_for_name: name = data
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:dac63cc2-6876-4004-ba3b-7786be39d392 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdb
[idcv-ceph2][DEBUG ] The operation has completed successfully.
[idcv-ceph2][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[idcv-ceph2][WARNIN] populate_data_path_device: Creating xfs fs on /dev/sdb1
[idcv-ceph2][WARNIN] command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[idcv-ceph2][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=6225855 blks
[idcv-ceph2][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[idcv-ceph2][DEBUG ] = crc=1 finobt=0, sparse=0
[idcv-ceph2][DEBUG ] data = bsize=4096 blocks=24903419, imaxpct=25
[idcv-ceph2][DEBUG ] = sunit=0 swidth=0 blks
[idcv-ceph2][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[idcv-ceph2][DEBUG ] log =internal log bsize=4096 blocks=12159, version=2
[idcv-ceph2][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[idcv-ceph2][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.jhzVmR with options noatime,inode64
[idcv-ceph2][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.jhzVmR
[idcv-ceph2][WARNIN] command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.jhzVmR
[idcv-ceph2][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.jhzVmR
[idcv-ceph2][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jhzVmR/ceph_fsid.2354.tmp
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jhzVmR/ceph_fsid.2354.tmp
[idcv-ceph2][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jhzVmR/fsid.2354.tmp
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jhzVmR/fsid.2354.tmp
[idcv-ceph2][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jhzVmR/magic.2354.tmp
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jhzVmR/magic.2354.tmp
[idcv-ceph2][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jhzVmR/journal_uuid.2354.tmp
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jhzVmR/journal_uuid.2354.tmp
[idcv-ceph2][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.jhzVmR/journal -> /dev/disk/by-partuuid/857f0966-30d5-4ad1-9e0c-abff0fbbbc4e
[idcv-ceph2][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jhzVmR
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jhzVmR
[idcv-ceph2][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.jhzVmR
[idcv-ceph2][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.jhzVmR
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[idcv-ceph2][DEBUG ] Warning: The kernel is still using the old partition table.
[idcv-ceph2][DEBUG ] The new table will be used at the next reboot.
[idcv-ceph2][DEBUG ] The operation has completed successfully.
[idcv-ceph2][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdb1
[idcv-ceph2][INFO ] checking OSD status...
[idcv-ceph2][DEBUG ] find the location of an executable
[idcv-ceph2][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host idcv-ceph2 is now ready for osd use.
[idcv-ceph3][DEBUG ] connection detected need for sudo
[idcv-ceph3][DEBUG ] connected to host: idcv-ceph3
[idcv-ceph3][DEBUG ] detect platform information from remote host
[idcv-ceph3][DEBUG ] detect machine type
[idcv-ceph3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to idcv-ceph3
[idcv-ceph3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host idcv-ceph3 disk /dev/sdb journal None activate False
[idcv-ceph3][DEBUG ] find the location of an executable
[idcv-ceph3][INFO ] Running command: sudo /usr/sbin/ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] set_type: Will colocate journal with data on /dev/sdb
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] zap: Zapping partition table on /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /sbin/sgdisk --zap-all -- /dev/sdb
[idcv-ceph3][DEBUG ] Creating new GPT entries.
[idcv-ceph3][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[idcv-ceph3][DEBUG ] other utilities.
[idcv-ceph3][WARNIN] command_check_call: Running command: /sbin/sgdisk --clear --mbrtogpt -- /dev/sdb
[idcv-ceph3][DEBUG ] Creating new GPT entries.
[idcv-ceph3][DEBUG ] The operation has completed successfully.
[idcv-ceph3][WARNIN] update_partition: Calling partprobe on zapped device /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] ptype_tobe_for_name: name = journal
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] create_partition: Creating journal partition num 2 size 5120 on /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:52677a68-3cf4-4d9a-b2d4-8c823e1cb901 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb
[idcv-ceph3][DEBUG ] The operation has completed successfully.
[idcv-ceph3][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is /sys/dev/block/8:18/dm/uuid
[idcv-ceph3][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/52677a68-3cf4-4d9a-b2d4-8c823e1cb901
[idcv-ceph3][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/52677a68-3cf4-4d9a-b2d4-8c823e1cb901
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] set_data_partition: Creating osd partition on /dev/sdb
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] ptype_tobe_for_name: name = data
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:a85b0288-85ce-4887-8249-497ba880fe10 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdb
[idcv-ceph3][DEBUG ] The operation has completed successfully.
[idcv-ceph3][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[idcv-ceph3][WARNIN] populate_data_path_device: Creating xfs fs on /dev/sdb1
[idcv-ceph3][WARNIN] command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[idcv-ceph3][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=6225855 blks
[idcv-ceph3][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[idcv-ceph3][DEBUG ] = crc=1 finobt=0, sparse=0
[idcv-ceph3][DEBUG ] data = bsize=4096 blocks=24903419, imaxpct=25
[idcv-ceph3][DEBUG ] = sunit=0 swidth=0 blks
[idcv-ceph3][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[idcv-ceph3][DEBUG ] log =internal log bsize=4096 blocks=12159, version=2
[idcv-ceph3][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[idcv-ceph3][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[idcv-ceph3][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.gjITlj with options noatime,inode64
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.gjITlj
[idcv-ceph3][WARNIN] command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.gjITlj
[idcv-ceph3][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.gjITlj
[idcv-ceph3][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjITlj/ceph_fsid.2372.tmp
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjITlj/ceph_fsid.2372.tmp
[idcv-ceph3][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjITlj/fsid.2372.tmp
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjITlj/fsid.2372.tmp
[idcv-ceph3][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjITlj/magic.2372.tmp
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjITlj/magic.2372.tmp
[idcv-ceph3][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjITlj/journal_uuid.2372.tmp
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjITlj/journal_uuid.2372.tmp
[idcv-ceph3][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.gjITlj/journal -> /dev/disk/by-partuuid/52677a68-3cf4-4d9a-b2d4-8c823e1cb901
[idcv-ceph3][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjITlj
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjITlj
[idcv-ceph3][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.gjITlj
[idcv-ceph3][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.gjITlj
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[idcv-ceph3][DEBUG ] Warning: The kernel is still using the old partition table.
[idcv-ceph3][DEBUG ] The new table will be used at the next reboot.
[idcv-ceph3][DEBUG ] The operation has completed successfully.
[idcv-ceph3][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdb1
[idcv-ceph3][INFO ] checking OSD status...
[idcv-ceph3][DEBUG ] find the location of an executable
[idcv-ceph3][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host idcv-ceph3 is now ready for osd use.
[root@idcv-ceph0 cluster]# ceph-deploy --overwrite-conf osd activate idcv-ceph0:/dev/sdb1 idcv-ceph1:/dev/sdb1 idcv-ceph2:/dev/sdb1 idcv-ceph3:/dev/sdb1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy --overwrite-conf osd activate idcv-ceph0:/dev/sdb1 idcv-ceph1:/dev/sdb1 idcv-ceph2:/dev/sdb1 idcv-ceph3:/dev/sdb1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : True
[ceph_deploy.cli][INFO ] subcommand : activate
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fc94a47f5a8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function osd at 0x7fc94a4d2f50>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('idcv-ceph0', '/dev/sdb1', None), ('idcv-ceph1', '/dev/sdb1', None), ('idcv-ceph2', '/dev/sdb1', None), ('idcv-ceph3', '/dev/sdb1', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks idcv-ceph0:/dev/sdb1: idcv-ceph1:/dev/sdb1: idcv-ceph2:/dev/sdb1: idcv-ceph3:/dev/sdb1:
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[idcv-ceph0][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] activating host idcv-ceph0 disk /dev/sdb1
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[idcv-ceph0][DEBUG ] find the location of an executable
[idcv-ceph0][INFO ] Running command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdb1
[idcv-ceph0][WARNIN] main_activate: path = /dev/sdb1
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/blkid -o udev -p /dev/sdb1
[idcv-ceph0][WARNIN] command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sdb1
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[idcv-ceph0][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.X6wbv9 with options noatime,inode64
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.X6wbv9
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.X6wbv9
[idcv-ceph0][WARNIN] activate: Cluster uuid is 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[idcv-ceph0][WARNIN] activate: Cluster name is ceph
[idcv-ceph0][WARNIN] activate: OSD uuid is 3b210c8e-b2ac-4266-9e59-623c031ebb89
[idcv-ceph0][WARNIN] activate: OSD id is 0
[idcv-ceph0][WARNIN] activate: Marking with init system systemd
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.X6wbv9/systemd
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.X6wbv9/systemd
[idcv-ceph0][WARNIN] activate: ceph osd.0 data dir is ready at /var/lib/ceph/tmp/mnt.X6wbv9
[idcv-ceph0][WARNIN] mount_activate: ceph osd.0 already mounted in position; unmounting ours.
[idcv-ceph0][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.X6wbv9
[idcv-ceph0][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.X6wbv9
[idcv-ceph0][WARNIN] start_daemon: Starting ceph osd.0...
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/systemctl disable ceph-osd@0
[idcv-ceph0][WARNIN] Removed symlink /etc/systemd/system/ceph-osd.target.wants/ceph-osd@0.service.
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/systemctl disable ceph-osd@0 --runtime
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/systemctl enable ceph-osd@0
[idcv-ceph0][WARNIN] Created symlink from /etc/systemd/system/ceph-osd.target.wants/ceph-osd@0.service to /usr/lib/systemd/system/ceph-osd@.service.
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/systemctl start ceph-osd@0
[idcv-ceph0][INFO ] checking OSD status...
[idcv-ceph0][DEBUG ] find the location of an executable
[idcv-ceph0][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[idcv-ceph0][INFO ] Running command: systemctl enable ceph.target
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[idcv-ceph1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] activating host idcv-ceph1 disk /dev/sdb1
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[idcv-ceph1][DEBUG ] find the location of an executable
[idcv-ceph1][INFO ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdb1
[idcv-ceph1][WARNIN] main_activate: path = /dev/sdb1
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[idcv-ceph1][WARNIN] command: Running command: /sbin/blkid -o udev -p /dev/sdb1
[idcv-ceph1][WARNIN] command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sdb1
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[idcv-ceph1][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.zUV3_1 with options noatime,inode64
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.zUV3_1
[idcv-ceph1][WARNIN] command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.zUV3_1
[idcv-ceph1][WARNIN] activate: Cluster uuid is 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[idcv-ceph1][WARNIN] activate: Cluster name is ceph
[idcv-ceph1][WARNIN] activate: OSD uuid is 2809f370-e6ad-4d29-bf6b-57fe1f2004c6
[idcv-ceph1][WARNIN] allocate_osd_id: Allocating OSD id...
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise 2809f370-e6ad-4d29-bf6b-57fe1f2004c6
[idcv-ceph1][WARNIN] mount_activate: Failed to activate
[idcv-ceph1][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.zUV3_1
[idcv-ceph1][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.zUV3_1
[idcv-ceph1][WARNIN] Traceback (most recent call last):
[idcv-ceph1][WARNIN] File "/usr/sbin/ceph-disk", line 9, in <module>
[idcv-ceph1][WARNIN] load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[idcv-ceph1][WARNIN] File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5371, in run
[idcv-ceph1][WARNIN] main(sys.argv[1:])
[idcv-ceph1][WARNIN] File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5322, in main
[idcv-ceph1][WARNIN] args.func(args)
[idcv-ceph1][WARNIN] File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3445, in main_activate
[idcv-ceph1][WARNIN] reactivate=args.reactivate,
[idcv-ceph1][WARNIN] File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3202, in mount_activate
[idcv-ceph1][WARNIN] (osd_id, cluster) = activate(path, activate_key_template, init)
[idcv-ceph1][WARNIN] File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3365, in activate
[idcv-ceph1][WARNIN] keyring=keyring,
[idcv-ceph1][WARNIN] File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1013, in allocate_osd_id
[idcv-ceph1][WARNIN] raise Error('ceph osd create failed', e, e.output)
[idcv-ceph1][WARNIN] ceph_disk.main.Error: Error: ceph osd create failed: Command '/usr/bin/ceph' returned non-zero exit status 1: 2018-07-03 11:47:35.463545 7f8310450700 0 librados: client.bootstrap-osd authentication error (1) Operation not permitted
[idcv-ceph1][WARNIN] Error connecting to cluster: PermissionError
[idcv-ceph1][WARNIN]
[idcv-ceph1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdb1

2、Checked and confirmed that the OSD on idcv-ceph1 was not added

[root@idcv-ceph0 cluster]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 99.5G 0 part
└─centos-root 253:0 0 99.5G 0 lvm /
sdb 8:16 0 100G 0 disk
├─sdb1 8:17 0 95G 0 part /var/lib/ceph/osd/ceph-0
└─sdb2 8:18 0 5G 0 part
sr0 11:0 1 1024M 0 rom
[root@idcv-ceph0 cluster]# ceph -s
cluster 812d3acb-eaa8-4355-9a74-64f2cd5209b3
health HEALTH_OK
monmap e2: 3 mons at {idcv-ceph0=172.20.1.138:6789/0,idcv-ceph2=172.20.1.140:6789/0,idcv-ceph3=172.20.1.141:6789/0}
election epoch 8, quorum 0,1,2 idcv-ceph0,idcv-ceph2,idcv-ceph3
osdmap e14: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds
pgmap v27: 64 pgs, 1 pools, 0 bytes data, 0 objects
100 MB used, 284 GB / 284 GB avail
64 active+clean
[root@idcv-ceph0 cluster]#
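
The failure on idcv-ceph1 above is an authentication problem: client.bootstrap-osd on that node was rejected by the monitors ("Operation not permitted"), which usually means the bootstrap-osd keyring on the node does not match the key registered in the cluster. Before wiping the node, one option is to compare the two keys and, if they differ, collect and push the keyring again from the deploy node. A minimal sketch (run on idcv-ceph0; the keyring path is the one ceph-disk used in the log above):

# key registered in the cluster
ceph auth get client.bootstrap-osd
# key actually present on the failing node
ssh idcv-ceph1 cat /var/lib/ceph/bootstrap-osd/ceph.keyring
# if they differ, re-collect the keys into the deploy directory and push them back
ceph-deploy gatherkeys idcv-ceph0
scp ceph.bootstrap-osd.keyring idcv-ceph1:/var/lib/ceph/bootstrap-osd/ceph.keyring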

3、Tried assigning the OSD role to the node with the following method

[root@idcv-ceph0 cluster]# ceph-deploy install --no-adjust-repos --osd idcv-ceph1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy install --no-adjust-repos --osd idcv-ceph1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f19c0ebd440>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : False
[ceph_deploy.cli][INFO ] func : <function install at 0x7f19c1f96d70>
[ceph_deploy.cli][INFO ] install_mgr : False
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : ['idcv-ceph1']
[ceph_deploy.cli][INFO ] install_rgw : False
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : True
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] nogpgcheck : False
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts idcv-ceph1
[ceph_deploy.install][DEBUG ] Detecting platform for host idcv-ceph1 ...
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph1][INFO ] installing Ceph on idcv-ceph1
[idcv-ceph1][INFO ] Running command: sudo yum clean all
[idcv-ceph1][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph1][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[idcv-ceph1][DEBUG ] Cleaning up everything
[idcv-ceph1][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[idcv-ceph1][DEBUG ] Cleaning up list of fastest mirrors
[idcv-ceph1][INFO ] Running command: sudo yum -y install ceph
[idcv-ceph1][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph1][DEBUG ] Determining fastest mirrors
[idcv-ceph1][DEBUG ] base: mirrors.tuna.tsinghua.edu.cn
[idcv-ceph1][DEBUG ] epel: mirrors.huaweicloud.com
[idcv-ceph1][DEBUG ] extras: mirror.bit.edu.cn
[idcv-ceph1][DEBUG ] updates: mirrors.huaweicloud.com
[idcv-ceph1][DEBUG ] 12 packages excluded due to repository priority protections
[idcv-ceph1][DEBUG ] Package 1:ceph-10.2.10-0.el7.x86_64 already installed and latest version
[idcv-ceph1][DEBUG ] Nothing to do
[idcv-ceph1][INFO ] Running command: sudo ceph --version
[idcv-ceph1][DEBUG ] ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)

4、The OSD role still could not be brought up on node ceph1, so the plan was to wipe ceph1 and add it back from scratch:

ceph-deploy purge <node>
ceph-deploy purgedata <node>
(these remove the installed packages and the residual data)
ceph-deploy install --no-adjust-repos --osd ceph1
(this reinstalls the packages and assigns the OSD storage role, after which the OSD can be added again)
The concrete steps were:
ceph-deploy purge idcv-ceph1
ceph-deploy purgedata idcv-ceph1
ceph-deploy --overwrite-conf osd prepare idcv-ceph1:/dev/sdb
ceph-deploy --overwrite-conf osd activate idcv-ceph1:/dev/sdb1
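
Note that /dev/sdb on idcv-ceph1 had already been partitioned by the earlier failed attempt, so prepare may refuse to reuse it; in that case the disk can be wiped first with the ceph-deploy disk subcommand before running the prepare/activate steps above:

ceph-deploy disk zap idcv-ceph1:/dev/sdb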

5、After the OSD was deployed successfully, check the cluster status

[root@idcv-ceph0 cluster]# ceph -s
cluster 812d3acb-eaa8-4355-9a74-64f2cd5209b3
health HEALTH_OK
monmap e2: 3 mons at {idcv-ceph0=172.20.1.138:6789/0,idcv-ceph2=172.20.1.140:6789/0,idcv-ceph3=172.20.1.141:6789/0}
election epoch 8, quorum 0,1,2 idcv-ceph0,idcv-ceph2,idcv-ceph3
osdmap e27: 4 osds: 4 up, 4 in
flags sortbitwise,require_jewel_osds
pgmap v64: 104 pgs, 6 pools, 1588 bytes data, 171 objects
138 MB used, 379 GB / 379 GB avail
104 active+clean
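
Besides ceph -s, the placement of the four OSDs can be checked with the OSD tree, which should show one OSD under each of the four hosts, all with status "up" (output omitted here):

ceph osd tree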

六、Deploy the RGW service

1、Deploy idcv-ceph1 as the object gateway

[root@idcv-ceph0 cluster]# ceph-deploy install --no-adjust-repos --rgw idcv-ceph1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy install --no-adjust-repos --rgw idcv-ceph1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fba6af12440>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : False
[ceph_deploy.cli][INFO ] func : <function install at 0x7fba6bfe9d70>
[ceph_deploy.cli][INFO ] install_mgr : False
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : ['idcv-ceph1']
[ceph_deploy.cli][INFO ] install_rgw : True
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : False
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] nogpgcheck : False
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts idcv-ceph1
[ceph_deploy.install][DEBUG ] Detecting platform for host idcv-ceph1 ...
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph1][INFO ] installing Ceph on idcv-ceph1
[idcv-ceph1][INFO ] Running command: sudo yum clean all
[idcv-ceph1][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph1][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[idcv-ceph1][DEBUG ] Cleaning up everything
[idcv-ceph1][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[idcv-ceph1][DEBUG ] Cleaning up list of fastest mirrors
[idcv-ceph1][INFO ] Running command: sudo yum -y install ceph-radosgw
[idcv-ceph1][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph1][DEBUG ] Determining fastest mirrors
[idcv-ceph1][DEBUG ] base: mirrors.aliyun.com
[idcv-ceph1][DEBUG ] epel: mirrors.aliyun.com
[idcv-ceph1][DEBUG ] extras: mirrors.aliyun.com
[idcv-ceph1][DEBUG ] updates: mirror.bit.edu.cn
[idcv-ceph1][DEBUG ] 12 packages excluded due to repository priority protections
[idcv-ceph1][DEBUG ] Resolving Dependencies
[idcv-ceph1][DEBUG ] --> Running transaction check
[idcv-ceph1][DEBUG ] ---> Package ceph-radosgw.x86_64 1:10.2.10-0.el7 will be installed
[idcv-ceph1][DEBUG ] --> Finished Dependency Resolution
[idcv-ceph1][DEBUG ]
[idcv-ceph1][DEBUG ] Dependencies Resolved
[idcv-ceph1][DEBUG ]
[idcv-ceph1][DEBUG ] ================================================================================
[idcv-ceph1][DEBUG ] Package Arch Version Repository Size
[idcv-ceph1][DEBUG ] ================================================================================
[idcv-ceph1][DEBUG ] Installing:
[idcv-ceph1][DEBUG ] ceph-radosgw x86_64 1:10.2.10-0.el7 Ceph 266 k
[idcv-ceph1][DEBUG ]
[idcv-ceph1][DEBUG ] Transaction Summary
[idcv-ceph1][DEBUG ] ================================================================================
[idcv-ceph1][DEBUG ] Install 1 Package
[idcv-ceph1][DEBUG ]
[idcv-ceph1][DEBUG ] Total download size: 266 k
[idcv-ceph1][DEBUG ] Installed size: 795 k
[idcv-ceph1][DEBUG ] Downloading packages:
[idcv-ceph1][DEBUG ] Running transaction check
[idcv-ceph1][DEBUG ] Running transaction test
[idcv-ceph1][DEBUG ] Transaction test succeeded
[idcv-ceph1][DEBUG ] Running transaction
[idcv-ceph1][DEBUG ] Installing : 1:ceph-radosgw-10.2.10-0.el7.x86_64 1/1
[idcv-ceph1][DEBUG ] Verifying : 1:ceph-radosgw-10.2.10-0.el7.x86_64 1/1
[idcv-ceph1][DEBUG ]
[idcv-ceph1][DEBUG ] Installed:
[idcv-ceph1][DEBUG ] ceph-radosgw.x86_64 1:10.2.10-0.el7
[idcv-ceph1][DEBUG ]
[idcv-ceph1][DEBUG ] Complete!
[idcv-ceph1][INFO ] Running command: sudo ceph --version
[idcv-ceph1][DEBUG ] ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)

2、Give idcv-ceph1 admin access (push the admin keyring and cluster config)

[root@idcv-ceph0 cluster]# ceph-deploy admin idcv-ceph1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy admin idcv-ceph1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f5f91222fc8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['idcv-ceph1']
[ceph_deploy.cli][INFO ] func : <function admin at 0x7f5f9234f9b0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to idcv-ceph1
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[idcv-ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
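
If ceph commands on idcv-ceph1 later fail with a keyring permission error, the Ceph quick-start guide suggests making the pushed admin keyring readable on that node, for example:

ssh idcv-ceph1 chmod +r /etc/ceph/ceph.client.admin.keyring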

3、Create the gateway instance on idcv-ceph1

[root@idcv-ceph0 cluster]# ceph-deploy rgw create idcv-ceph1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy rgw create idcv-ceph1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] rgw : [('idcv-ceph1', 'rgw.idcv-ceph1')]
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f6c86f85128>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function rgw at 0x7f6c8805a7d0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts idcv-ceph1:rgw.idcv-ceph1
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to idcv-ceph1
[idcv-ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph1][WARNIN] rgw keyring does not exist yet, creating one
[idcv-ceph1][DEBUG ] create a keyring file
[idcv-ceph1][DEBUG ] create path recursively if it doesn't exist
[idcv-ceph1][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.idcv-ceph1 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.idcv-ceph1/keyring
[idcv-ceph1][INFO ] Running command: sudo systemctl enable ceph-radosgw@rgw.idcv-ceph1
[idcv-ceph1][WARNIN] Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.idcv-ceph1.service to /usr/lib/systemd/system/ceph-radosgw@.service.
[idcv-ceph1][INFO ] Running command: sudo systemctl start ceph-radosgw@rgw.idcv-ceph1
[idcv-ceph1][INFO ] Running command: sudo systemctl enable ceph.target
[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host idcv-ceph1 and default port 7480
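
As the last log line says, the gateway listens on civetweb's default port 7480. If a different port is preferred (port 80, for example), a common approach is to add an rgw_frontends setting for this instance to ceph.conf in the deploy directory, push the config, and restart the gateway; a sketch, assuming the instance name rgw.idcv-ceph1 created above:

# appended to /root/cluster/ceph.conf on the deploy node
[client.rgw.idcv-ceph1]
rgw_frontends = "civetweb port=80"

# push the new config and restart the gateway service
ceph-deploy --overwrite-conf config push idcv-ceph1
ssh idcv-ceph1 systemctl restart ceph-radosgw@rgw.idcv-ceph1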

4、Test the gateway service

[root@idcv-ceph0 cluster]# curl 172.20.1.139:7480
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
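
The XML above is the anonymous S3 "list all buckets" response, so the gateway is answering S3 requests. To actually read and write objects, an S3 user with access/secret keys has to be created with radosgw-admin; the uid and display name below are arbitrary examples:

radosgw-admin user create --uid=testuser --display-name="Test User"

The command prints the generated access_key and secret_key, which can then be used from s3cmd or an S3 SDK; the object storage tests themselves are covered in the follow-up article linked in the summary below.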

Summary

All of the required services are now deployed. If you understand ceph.conf well and set the right parameters, the deployment should go fairly smoothly. The next article will test the OSD block storage and the RGW object storage features: https://blog.51cto.com/jerrymin/2139046 .

Reposted from: https://blog.51cto.com/jerrymin/2139045
