1. Basic Environment Preparation

Host IP        Hostname     Services deployed         Notes
192.168.0.91   admin-node   ceph, ceph-deploy, mon    the mon node is also called the master node
192.168.0.92   ceph01       ceph osd
192.168.0.93   ceph02       ceph osd

Ceph version: 10.2.11 (Jewel) | ceph-deploy version: 1.5.39 | OS: CentOS 7.6.1810 | Kernel: 3.10.0-957.el7.x86_64

Disable the firewall and SELinux on every node:
[root@admin-node ~]# systemctl stop firewalld
[root@admin-node ~]# systemctl disable firewalld
[root@admin-node ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@admin-node ~]# setenforce 0

Set up time synchronization on every node:
[root@admin-node ~]# yum -y install ntp ntpdate ntp-doc
[root@admin-node ~]# systemctl enable ntpd
[root@admin-node ~]# systemctl start ntpd
[root@admin-node ~]# /usr/sbin/ntpdate ntp1.aliyun.com
[root@admin-node ~]# hwclock --systohc
[root@admin-node ~]# timedatectl set-timezone Asia/Shanghai
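Optionally, verify that ntpd is actually syncing (an extra sanity check, not part of the original steps; the peer line starting with * is the server currently selected for synchronization):
[root@admin-node ~]# ntpq -p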
Set the hostname on each node:
[root@admin-node ~]# hostnamectl set-hostname admin-node
[root@ceph01 ~]# hostnamectl set-hostname ceph01
[root@ceph02 ~]# hostnamectl set-hostname ceph02

Add host entries to /etc/hosts on each node:
[root@admin-node ~]# cat >> /etc/hosts <<EOF
192.168.0.91 admin-node
192.168.0.92 ceph01
192.168.0.93 ceph02
EOF

Configure the yum repositories:
[root@admin-node ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@admin-node ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
[root@admin-node ~]# yum clean all && yum makecache

Add the Ceph repository:
[root@admin-node ~]# cat > /etc/yum.repos.d/ceph.repo <<EOF
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
gpgcheck=0
priority=1
EOF

Rebuild the yum cache:
[root@admin-node ~]# yum clean all && yum makecache

Create the cephuser account and grant it sudo rights on every node:
[root@admin-node ~]# useradd -d /home/cephuser -m cephuser
[root@admin-node ~]# echo "cephuser"|passwd --stdin cephuser
[root@admin-node ~]# echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
[root@admin-node ~]# chmod 0440 /etc/sudoers.d/cephuser
[root@admin-node ~]# sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers

Test cephuser's sudo access:
[root@admin-node ~]# su - cephuser
[cephuser@admin-node ~]$ sudo su -

Because ceph-deploy cannot prompt for passwords, passwordless SSH must be configured from the master node to every Ceph node:
[root@admin-node ~]# su - cephuser
[cephuser@admin-node ~]$ ssh-keygen -t rsa
[cephuser@admin-node ~]$ ssh-copy-id cephuser@admin-node
[cephuser@admin-node ~]$ ssh-copy-id cephuser@ceph01
[cephuser@admin-node ~]$ ssh-copy-id cephuser@ceph02
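A quick way to confirm the key distribution worked - each command below should print the remote hostname without asking for a password (a sanity check added here, not part of the original steps):
[cephuser@admin-node ~]$ ssh ceph01 hostname
[cephuser@admin-node ~]$ ssh ceph02 hostname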

2. Prepare the Disks

Add a 30 GB disk to each of the three hosts. I did this in VMware, which makes it simple.

After adding the disk, inspect it with fdisk:
[cephuser@admin-node ~]$ sudo fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a93ad

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048   104857599    52427776   83  Linux

Disk /dev/sdb: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Partition the disk: enter "n", "p", "1", press Enter twice to accept the defaults, then enter "w".
[cephuser@admin-node ~]$ sudo fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x8aec5f5a.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-62914559, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-62914559, default 62914559):
Using default value 62914559
Partition 1 of type Linux and of size 30 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

fdisk -l now shows the partitioned disk:
[cephuser@admin-node ~]$ sudo fdisk -l

Disk /dev/sdb: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x8aec5f5a

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    62914559    31456256   83  Linux

Format the partition:
[cephuser@admin-node ~]$ sudo mkfs.xfs /dev/sdb1
meta-data=/dev/sdb1              isize=512    agcount=4, agsize=1966016 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=7864064, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=3839, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Add it to /etc/fstab:
[root@admin-node ~]# echo '/dev/sdb1 /ceph xfs defaults 0 0' >> /etc/fstab
[root@admin-node ~]# mkdir /ceph
[root@admin-node ~]# mount -a
[root@admin-node ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        50G  1.7G   49G   4% /
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G   12M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
tmpfs           378M     0  378M   0% /run/user/0
/dev/sdb1        30G   33M   30G   1% /ceph

3. Deployment

Use the ceph-deploy tool to deploy the Ceph cluster: create a new cluster on the master node and manage all three nodes with ceph-deploy.

[root@admin-node ~]# su - cephuser
[cephuser@admin-node ~]$ sudo yum -y install ceph ceph-deploy

Create a cluster directory:

[cephuser@admin-node ~]$ mkdir cluster
[cephuser@admin-node ~]$ cd cluster/

Create the monitor on the master node:

[cephuser@admin-node cluster]$ ceph-deploy new admin-node
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /bin/ceph-deploy new admin-node
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f27b483f5f0>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f27b3fbc5a8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['admin-node']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[admin-node][DEBUG ] connection detected need for sudo
[admin-node][DEBUG ] connected to host: admin-node
[admin-node][DEBUG ] detect platform information from remote host
[admin-node][DEBUG ] detect machine type
[admin-node][DEBUG ] find the location of an executable
[admin-node][INFO  ] Running command: sudo /usr/sbin/ip link show
[admin-node][INFO  ] Running command: sudo /usr/sbin/ip addr show
[admin-node][DEBUG ] IP addresses found: [u'192.168.0.91']
[ceph_deploy.new][DEBUG ] Resolving host admin-node
[ceph_deploy.new][DEBUG ] Monitor admin-node at 192.168.0.91
[ceph_deploy.new][DEBUG ] Monitor initial members are ['admin-node']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.0.91']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

After this command runs, admin-node has become the monitor node. If you want several mon nodes that can back each other up, add the other nodes here as well; those nodes also need ceph-deploy installed.
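For example (a sketch only - this walkthrough keeps a single mon), a cluster with three monitors would be bootstrapped by listing all the hosts up front:
[cephuser@admin-node cluster]$ ceph-deploy new admin-node ceph01 ceph02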

When the step above finishes, a ceph.conf file appears in the cluster directory. Edit ceph.conf (note: mon_host must be on the same subnet as public network) and add the following four lines:

osd pool default size = 3
rbd_default_features = 1
public network = 192.168.0.0/24
osd journal size = 2000

Explanation:

# osd pool default size = 3
Sets the default replica count to 3. With one replica down, the OSDs holding the other two replicas keep serving data. Note that with a replica count of 3, the total number of OSDs should be a multiple of 3.

# rbd_default_features = 1
Adding rbd_default_features changes the default permanently. Note: because this setting is permanent, remember to apply it on every node in the cluster.

# public network = 192.168.0.0/24
The subnet the cluster lives on.

# osd journal size = 2000
The OSD journal size, in MB.
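For reference, after the edit the ceph.conf in the cluster directory looks roughly like this - a sketch only, since the fsid and monitor entries are generated by your own ceph-deploy new run:

[global]
fsid = e76f5c31-89d6-4d5f-a302-8f1faf17655e
mon_initial_members = admin-node
mon_host = 192.168.0.91
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 3
rbd_default_features = 1
public network = 192.168.0.0/24
osd journal size = 2000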

Install ceph

This step takes quite a while, so be patient.

[cephuser@admin-node cluster]$ ceph-deploy install admin-node ceph01 ceph02
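Optionally, confirm afterwards that all three nodes ended up on the same release (a quick check added here, relying on the passwordless SSH configured earlier):
[cephuser@admin-node cluster]$ for h in admin-node ceph01 ceph02; do ssh $h ceph --version; done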

Initialize the mon node and collect the keys on the master node:

[cephuser@admin-node cluster]$ ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fbc00e79ef0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7fbc00e566e0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts admin-node
[ceph_deploy.mon][DEBUG ] detecting platform for host admin-node ...
[admin-node][DEBUG ] connection detected need for sudo
[admin-node][DEBUG ] connected to host: admin-node
......
[admin-node][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-admin-node/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpmLTa1b
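The gathered keyrings are written into the cluster directory; per the log above, a listing should show these files (exact output may vary):
[cephuser@admin-node cluster]$ ls -1 *.keyring
ceph.bootstrap-mds.keyring
ceph.bootstrap-mgr.keyring
ceph.bootstrap-osd.keyring
ceph.bootstrap-rgw.keyring
ceph.client.admin.keyring
ceph.mon.keyring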

Create the OSDs on the master node:

Create the OSDs:
[cephuser@admin-node cluster]$ ceph-deploy osd prepare admin-node:/ceph ceph01:/ceph ceph02:/ceph

Activate the OSDs:
Before activating, change the ownership of the /ceph directory to the ceph user. Since the Infernalis (I) release, Ceph daemons run as the ceph user instead of root, and an OSD needs its disk mounted at the target directory before prepare and activate, yet a directory such as /ceph may well have been created by a non-ceph user, e.g. root.
[cephuser@admin-node cluster]$ sudo chown -R ceph:ceph /ceph
[cephuser@admin-node cluster]$ ceph-deploy osd activate admin-node:/ceph ceph01:/ceph ceph02:/ceph

Create the mon node from the master - it monitors the cluster state and manages the cluster and the mds

[cephuser@admin-node cluster]$ ceph-deploy mon create admin-node
If you need to manage cluster nodes, install ceph-deploy on them, since ceph-deploy is the tool for administering Ceph nodes.
To add more mon hosts, run: ceph-deploy mon add <hostname>
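For instance (hypothetical here, since this walkthrough keeps a single mon), ceph01 could be added as a second monitor like this; keep in mind monitor counts are usually kept odd so quorum stays unambiguous:
[cephuser@admin-node cluster]$ ceph-deploy mon add ceph01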

Fix the keyring permissions:

[cephuser@admin-node cluster]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring 

Check the Ceph cluster health:

[cephuser@admin-node cluster]$ sudo ceph health
HEALTH_OK
[cephuser@admin-node cluster]$ sudo ceph -s
    cluster e76f5c31-89d6-4d5f-a302-8f1faf17655e
     health HEALTH_OK
     monmap e1: 1 mons at {admin-node=192.168.0.91:6789/0}
            election epoch 4, quorum 0 admin-node
     osdmap e15: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v31: 64 pgs, 1 pools, 0 bytes data, 0 objects
            6321 MB used, 85790 MB / 92112 MB avail
                  64 active+clean

Check the OSD status:

[cephuser@admin-node cluster]$ ceph osd stat
     osdmap e15: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds

Check the cluster's disk usage:

[cephuser@admin-node cluster]$ ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    92112M     85790M        6321M          6.86
POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS
    rbd      0         0         0        27061M           0

Here the three servers, each with a 30 GB disk, add up to roughly 90 GB.

Check the OSD node status:

[cephuser@admin-node cluster]$ ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.08789 root default
-2 0.02930     host admin-node
 0 0.02930         osd.0            up  1.00000          1.00000
-3 0.02930     host ceph01
 1 0.02930         osd.1            up  1.00000          1.00000
-4 0.02930     host ceph02
 2 0.02930         osd.2            up  1.00000          1.00000

Check the OSD state and where the data is stored:

[cephuser@admin-node cluster]$ ceph-deploy osd list admin-node ceph01 ceph02
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /bin/ceph-deploy osd list admin-node ceph01 ceph02
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f614b4ba170>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f614b4fdf50>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('admin-node', None, None), ('ceph01', None, None), ('ceph02', None, None)]
[admin-node][DEBUG ] connection detected need for sudo
[admin-node][DEBUG ] connected to host: admin-node
[admin-node][DEBUG ] detect platform information from remote host
[admin-node][DEBUG ] detect machine type
[admin-node][DEBUG ] find the location of an executable
[admin-node][DEBUG ] find the location of an executable
[admin-node][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd tree --format=json
[admin-node][DEBUG ] connection detected need for sudo
[admin-node][DEBUG ] connected to host: admin-node
[admin-node][DEBUG ] detect platform information from remote host
[admin-node][DEBUG ] detect machine type
[admin-node][DEBUG ] find the location of an executable
[admin-node][INFO  ] Running command: sudo /usr/sbin/ceph-disk list
[admin-node][INFO  ] ----------------------------------------
[admin-node][INFO  ] ceph-0
[admin-node][INFO  ] ----------------------------------------
[admin-node][INFO  ] Path           /var/lib/ceph/osd/ceph-0
[admin-node][INFO  ] ID             0
[admin-node][INFO  ] Name           osd.0
[admin-node][INFO  ] Status         up
[admin-node][INFO  ] Reweight       1.0
[admin-node][INFO  ] Active         ok
[admin-node][INFO  ] Magic          ceph osd volume v026
[admin-node][INFO  ] Whoami         0
[admin-node][INFO  ] Journal path   /ceph/journal
[admin-node][INFO  ] ----------------------------------------
[ceph01][DEBUG ] connection detected need for sudo
[ceph01][DEBUG ] connected to host: ceph01
[ceph01][DEBUG ] detect platform information from remote host
[ceph01][DEBUG ] detect machine type
[ceph01][DEBUG ] find the location of an executable
[ceph01][INFO  ] Running command: sudo /usr/sbin/ceph-disk list
[ceph01][INFO  ] ----------------------------------------
[ceph01][INFO  ] ceph-1
[ceph01][INFO  ] ----------------------------------------
[ceph01][INFO  ] Path           /var/lib/ceph/osd/ceph-1
[ceph01][INFO  ] ID             1
[ceph01][INFO  ] Name           osd.1
[ceph01][INFO  ] Status         up
[ceph01][INFO  ] Reweight       1.0
[ceph01][INFO  ] Active         ok
[ceph01][INFO  ] Magic          ceph osd volume v026
[ceph01][INFO  ] Whoami         1
[ceph01][INFO  ] Journal path   /ceph/journal
[ceph01][INFO  ] ----------------------------------------
[ceph02][DEBUG ] connection detected need for sudo
[ceph02][DEBUG ] connected to host: ceph02
[ceph02][DEBUG ] detect platform information from remote host
[ceph02][DEBUG ] detect machine type
[ceph02][DEBUG ] find the location of an executable
[ceph02][INFO  ] Running command: sudo /usr/sbin/ceph-disk list
[ceph02][INFO  ] ----------------------------------------
[ceph02][INFO  ] ceph-2
[ceph02][INFO  ] ----------------------------------------
[ceph02][INFO  ] Path           /var/lib/ceph/osd/ceph-2
[ceph02][INFO  ] ID             2
[ceph02][INFO  ] Name           osd.2
[ceph02][INFO  ] Status         up
[ceph02][INFO  ] Reweight       1.0
[ceph02][INFO  ] Active         ok
[ceph02][INFO  ] Magic          ceph osd volume v026
[ceph02][INFO  ] Whoami         2
[ceph02][INFO  ] Journal path   /ceph/journal
[ceph02][INFO  ] ----------------------------------------

Check the monitor quorum status:

[cephuser@admin-node cluster]$ ceph quorum_status --format json-pretty
{
    "election_epoch": 4,
    "quorum": [
        0
    ],
    "quorum_names": [
        "admin-node"
    ],
    "quorum_leader_name": "admin-node",
    "monmap": {
        "epoch": 1,
        "fsid": "e76f5c31-89d6-4d5f-a302-8f1faf17655e",
        "modified": "2020-09-28 15:18:13.294522",
        "created": "2020-09-28 15:18:13.294522",
        "mons": [
            {
                "rank": 0,
                "name": "admin-node",
                "addr": "192.168.0.91:6789\/0"
            }
        ]
    }
}

Create the CephFS filesystem

First check the mds (metadata server) status; by default there is none:
[cephuser@admin-node ~]$ ceph mds stat
e1:

Create the mds node (admin-node serves as the mds node).
Note: without an mds node, clients will not be able to mount the Ceph cluster!!
[cephuser@admin-node ~]$ cd /home/cephuser/cluster/
[cephuser@admin-node cluster]$ ceph-deploy mds create admin-node

Check the mds status again; it is now coming up:
[cephuser@admin-node cluster]$ ceph mds stat
e2:, 1 up:standby

Create the pools. A pool is the logical partition Ceph stores data in; it acts as a namespace.
[cephuser@admin-node cluster]$ ceph osd lspools      # list the existing pools first
0 rbd,

A freshly created cluster has only the rbd pool, so a new pool is needed:
[cephuser@admin-node cluster]$ ceph osd pool create cephfs_data 128    # the trailing number is the PG count
pool 'cephfs_data' created

## Command reference ##
# create a pool
ceph osd pool create [pool name] 128
# delete a pool (the name must be given twice, plus a confirmation flag)
ceph osd pool delete [pool name] [pool name] --yes-i-really-really-mean-it
# change the replica count
ceph osd pool set [pool name] size 2
###

[cephuser@admin-node cluster]$ ceph osd pool create cephfs_metadata 128    # pool for the metadata
pool 'cephfs_metadata' created

[cephuser@admin-node cluster]$ ceph fs new mycephfs cephfs_metadata cephfs_data
new fs with metadata pool 2 and data pool 1

Check the pools again:
[cephuser@admin-node cluster]$ ceph osd lspools
0 rbd,1 cephfs_data,2 cephfs_metadata,

Check the mds status:
[cephuser@admin-node cluster]$ ceph mds stat
e5: 1/1/1 up {0=admin-node=up:active}

Check the cluster status:
[cephuser@admin-node cluster]$ ceph -s
    cluster e76f5c31-89d6-4d5f-a302-8f1faf17655e
     health HEALTH_WARN
            too many PGs per OSD (320 > max 300)
     monmap e1: 1 mons at {admin-node=192.168.0.91:6789/0}
            election epoch 5, quorum 0 admin-node
      fsmap e5: 1/1/1 up {0=admin-node=up:active}
     osdmap e29: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v248: 320 pgs, 3 pools, 2068 bytes data, 20 objects
            6323 MB used, 85788 MB / 92112 MB avail
                 320 active+clean
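The HEALTH_WARN comes from creating two 128-PG pools on top of the 64-PG rbd pool with only 3 OSDs: 320 PGs replicated 3 ways across 3 OSDs puts roughly 320 PGs on each OSD, above the default warning threshold of 300. For a small lab cluster you could either pick a smaller pg_num (e.g. 32 or 64) when creating the pools, or raise the threshold - a sketch, assuming you accept the densely packed PGs:

# add to the [global] section of /etc/ceph/ceph.conf on the mon node, then restart the mon
mon_pg_warn_max_per_osd = 400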

Mount it into a directory on a server

Since my CephFS storage holds the logs of Java applications running in pods, mounting the CephFS on one server makes it easy to inspect those logs.

Look up and record the admin user's key; the admin user exists by default and does not need to be created:
[cephuser@admin-node cluster]$ ceph auth get-key client.admin
AQA2jnFfBT8QBRAAd504A08U+Jlk/pt6TkcS4Q==

Check the user's permissions:
[cephuser@admin-node cluster]$ ceph auth get client.admin
exported keyring for client.admin
[client.admin]
        key = AQA2jnFfBT8QBRAAd504A08U+Jlk/pt6TkcS4Q==
        caps mds = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"

Check all Ceph authorizations:
[cephuser@admin-node cluster]$ ceph auth list
installed auth entries:

mds.admin-node
        key: AQCj+XNfUd6qLBAAERbdVdkbeV3AmKyxvDLDyA==
        caps: [mds] allow
        caps: [mon] allow profile mds
        caps: [osd] allow rwx
osd.0
        key: AQAZqHJfsM7fCxAAX4BRw+HxNAPMezGpKKTafw==
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.1
        key: AQDMqHJfJxAIAhAAyccoTUr5PciteZ5X/vTRLw==
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.2
        key: AQDXqHJfkilwCBAAtw2QqKZ6LATMijK/8Ggi7Q==
        caps: [mon] allow profile osd
        caps: [osd] allow *
client.admin
        key: AQA2jnFfBT8QBRAAd504A08U+Jlk/pt6TkcS4Q==
        caps: [mds] allow *
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-mds
        key: AQA2jnFfRdOMLRAA9LuyGs5YsZYfH+Mu4wQrsA==
        caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
        key: AQA5jnFfp87tDxAAtx4rvkE7OIobER2iKQDyew==
        caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
        key: AQA2jnFfocEJFRAArMJYW9Bjd8B3zT/PpsRZJg==
        caps: [mon] allow profile bootstrap-osd
client.bootstrap-rgw
        key: AQA2jnFfrOQqIRAAmH3v7pvVNBmy5CqxtxeNsg==
        caps: [mon] allow profile bootstrap-rgw

Mount on the client machine:
The client needs ceph installed; the yum repo setup was covered above, so it is not repeated here.
[root@localhost ~]# yum -y install ceph
[root@localhost ~]# mkdir /mnt/cephfs
[root@localhost ~]# mount.ceph 192.168.0.91:6789:/  /mnt/cephfs/ -o name=admin,secret=AQA2jnFfBT8QBRAAd504A08U+Jlk/pt6TkcS4Q==
[root@localhost ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/sda1             50G  2.0G   48G   4% /
devtmpfs             1.9G     0  1.9G   0% /dev
tmpfs                1.9G     0  1.9G   0% /dev/shm
tmpfs                1.9G   12M  1.9G   1% /run
tmpfs                1.9G     0  1.9G   0% /sys/fs/cgroup
tmpfs                378M     0  378M   0% /run/user/0
192.168.0.91:6789:/   90G  6.2G   84G   7% /mnt/cephfs
Then add the following line to /etc/fstab:
192.168.0.91:6789:/  /mnt/cephfs/ ceph     name=admin,secret=AQA2jnFfBT8QBRAAd504A08U+Jlk/pt6TkcS4Q==   0   0
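A small hardening note: mount.ceph also accepts secretfile= instead of secret=, which keeps the key out of /etc/fstab. A sketch, assuming the key is stored in /etc/ceph/admin.secret (a path chosen here purely for illustration):
[root@localhost ~]# ceph auth get-key client.admin > /etc/ceph/admin.secret    # run where the admin keyring is available
[root@localhost ~]# chmod 600 /etc/ceph/admin.secret
192.168.0.91:6789:/  /mnt/cephfs/  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev  0  0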

Create the Provisioner

Things to note:
1. Ceph must be installed on both the k8s master and worker nodes. Since I will use the StorageClass mode later, ceph-common must also be installed on the k8s master node, otherwise it errors out.
2. Only cephfs (i.e. file-based storage) supports ReadWriteMany, which is why the CephFS filesystem was created above.
3. When using cephfs storage in k8s, the nodes must have ceph-fuse installed, otherwise it errors out; see the install sketch after this list.
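A sketch of the package installs those notes imply (run on each k8s node, using the Ceph yum repo configured earlier; adjust to your master/worker split):
yum -y install ceph-common ceph-fuse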
Create the secret:
ceph auth get-key client.admin > /opt/secret
kubectl create ns cephfs
kubectl create secret generic ceph-secret-admin --from-file=/opt/secret --namespace=cephfs

Deploy the Provisioner:
git clone https://github.com/kubernetes-retired/external-storage.git
cd external-storage/ceph/cephfs/deploy
NAMESPACE=cephfs
sed -r -i "s/namespace: [^ ]+/namespace: $NAMESPACE/g" ./rbac/*.yaml
sed -r -i "N;s/(name: PROVISIONER_SECRET_NAMESPACE.*\n[[:space:]]*)value:.*/\1value: $NAMESPACE/" ./rbac/deployment.yaml
kubectl -n $NAMESPACE apply -f ./rbac

Check that the pods are running:
kubectl get pods -n cephfs

Create the StorageClass:
cat > storageclass.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 192.168.0.91:6789    # replace with your own Ceph monitor address
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: cephfs
  claimRoot: /logs
EOF
kubectl apply -f storageclass.yaml

Create the PVC:
cat > pvc.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
spec:
  storageClassName: cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
EOF
kubectl apply -f pvc.yaml

Check the PVC:
kubectl get pvc
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim1   Bound    pvc-7c1740a3-a3ab-4889-9d94-e50c736701fd   2Gi        RWX            cephfs         27m

With the StorageClass and PVC created above, my Java application can mount this PVC directly for its log directory, and multiple pods can use the same PVC at the same time, which makes log management and collection much easier; a sketch of such a workload follows below.
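As an illustration of how a workload would consume the claim (the name, image, and mount path below are placeholders, not from the original setup):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-app                    # hypothetical name
spec:
  replicas: 2                       # several pods can share one RWX claim
  selector:
    matchLabels:
      app: java-app
  template:
    metadata:
      labels:
        app: java-app
    spec:
      containers:
      - name: app
        image: openjdk:8-jre        # placeholder image
        volumeMounts:
        - name: logs
          mountPath: /app/logs      # placeholder log directory
      volumes:
      - name: logs
        persistentVolumeClaim:
          claimName: claim1         # the PVC created above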

Note: to stress it once more, only Ceph's file storage, cephfs, supports ReadWriteMany.

