Environment

  • For using Kolla-Ansible, see 《使用 Kolla-Ansible 在 CentOS 7 单节点上部署 OpenStack Pike》;
  • For deploying the Ceph cluster, see 《Ceph 学习笔记 1 - Mimic 版本多节点部署》.

Configuring Ceph

  • Log in as the osdev user:
$ ssh osdev@osdev01
$ cd /opt/ceph/deploy/
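  • Before creating pools it is worth confirming that the cluster is healthy (an optional check, assuming the admin credentials set up in the previous article are available on this node):
$ ceph -s
$ ceph osd tree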

Creating Pools

Creating the Image Pool

  • Used to store Glance images:
$ ceph osd pool create images 32 32
pool 'images' created

Creating the Volume Pools

  • Used to store Cinder volumes:
$ ceph osd pool create volumes 32 32
pool 'volumes' created
  • Used to store Cinder volume backups:
$ ceph osd pool create backups 32 32
pool 'backups' created

Creating the VM Pool

  • Used to store virtual machine system disks:
$ ceph osd pool create vms 32 32
pool 'vms' created

Listing Pools

$ ceph osd lspools
1 .rgw.root
2 default.rgw.control
3 default.rgw.meta
4 default.rgw.log
6 rbd
8 images
9 volumes
10 backups
11 vms
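  • Since Luminous/Mimic, Ceph reports a health warning for pools that have no application tag. A hedged extra step, not part of the original transcript, is to tag the four new pools for RBD use:
$ ceph osd pool application enable images rbd
$ ceph osd pool application enable volumes rbd
$ ceph osd pool application enable backups rbd
$ ceph osd pool application enable vms rbd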

Creating Users

Listing Users

  • List all users:
$ ceph auth list
installed auth entries:
mds.osdev01
    key: AQCabn5b18tHExAAkZ6Aq3IQ4/aqYEBBey5O3Q==
    caps: [mds] allow
    caps: [mon] allow profile mds
    caps: [osd] allow rwx
mds.osdev02
    key: AQCbbn5bcq4yJRAAUfhoqPNfyp2m/ORu/7vHBA==
    caps: [mds] allow
    caps: [mon] allow profile mds
    caps: [osd] allow rwx
mds.osdev03
    key: AQCcbn5bTAIdORAApGu9NJvC3AmS+L3EWXLMdw==
    caps: [mds] allow
    caps: [mon] allow profile mds
    caps: [osd] allow rwx
osd.0
    key: AQCyJH5bG2ZBHRAAsDaLHcoOxv/mLCHwITA7JQ==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.1
    key: AQDTJH5bjvQ8HxAA4cyLttvZwiqFq1srFoSXWg==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.2
    key: AQD9JH5bbPi6IRAA7DbwaCh5JBaa6RfWPoe9VQ==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
client.admin
    key: AQA1In5bZkxwGBAA9bBLE5/NKstK1CRMzfGgKQ==
    caps: [mds] allow *
    caps: [mgr] allow *
    caps: [mon] allow *
    caps: [osd] allow *
client.bootstrap-mds
    key: AQA1In5boIRwGBAAgj5OccvTGYkuB+btlgL0BQ==
    caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
    key: AQA1In5bS6pwGBAA379v3LXJrdURLmA1gnTaLQ==
    caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
    key: AQA1In5bnMpwGBAAXohUfa4rGS0Rd2weMl4dPg==
    caps: [mon] allow profile bootstrap-osd
client.bootstrap-rbd
    key: AQA1In5buelwGBAANQSalrSzH3yslSc4rYPu1g==
    caps: [mon] allow profile bootstrap-rbd
client.bootstrap-rgw
    key: AQA1In5b0ghxGBAAIGK3WmBSkKZMnSEfvnEQow==
    caps: [mon] allow profile bootstrap-rgw
client.rgw.osdev01
    key: AQDZbn5b6aChEBAAzRuX4UWlxyws+aX1i+D26Q==
    caps: [mon] allow rw
    caps: [osd] allow rwx
client.rgw.osdev02
    key: AQDabn5bypCDJBAAt18L5ppG5lEg6NkGQLYs5w==
    caps: [mon] allow rw
    caps: [osd] allow rwx
client.rgw.osdev03
    key: AQDbbn5bbEVNNBAArX+/AKQu9q3hCRn/05Ya3A==
    caps: [mon] allow rw
    caps: [osd] allow rwx
mgr.osdev01
    key: AQDPIn5beqPTORAAEzcX3fMCCclLR2RiPyvugw==
    caps: [mds] allow *
    caps: [mon] allow profile mgr
    caps: [osd] allow *
mgr.osdev02
    key: AQDRIn5bLRVqDxAA/yWXO8pX6fQynJNyCcoNww==
    caps: [mds] allow *
    caps: [mon] allow profile mgr
    caps: [osd] allow *
mgr.osdev03
    key: AQDSIn5bGyrhHxAAvtAEOveovRxmdDlF45i2Cg==
    caps: [mds] allow *
    caps: [mon] allow profile mgr
    caps: [osd] allow *
  • Show a specific user:
$ ceph auth get client.admin
exported keyring for client.admin
[client.admin]
    key = AQA1In5bZkxwGBAA9bBLE5/NKstK1CRMzfGgKQ==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"

Creating the Glance User

  • Create the glance user and grant it access to the images pool:
$ ceph auth get-or-create client.glance
[client.glance]
    key = AQBQq4NboVHdGxAAlfK2WJkiZMolluATpvOviQ==
$ ceph auth caps client.glance mon 'allow r' osd 'allow rwx pool=images'
updated caps for client.glance
  • Review the glance user's keyring and save it to a file:
$ ceph auth get client.glance
exported keyring for client.glance
[client.glance]
    key = AQBQq4NboVHdGxAAlfK2WJkiZMolluATpvOviQ==
    caps mon = "allow r"
    caps osd = "allow rwx pool=images"
$ ceph auth get client.glance -o /opt/ceph/deploy/ceph.client.glance.keyring
exported keyring for client.glance
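  • As an optional sanity check (not part of the original walkthrough), the new credentials can be exercised directly with the rbd CLI; assuming /etc/ceph/ceph.conf is present on this node, the images pool should list as empty rather than return a permission error:
$ rbd --id glance --keyring /opt/ceph/deploy/ceph.client.glance.keyring -p images ls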

Creating the Cinder Users

  • Create the cinder-volume user and grant it access to the volumes pool:
$ ceph auth get-or-create client.cinder-volume
[client.cinder-volume]
    key = AQBKt4NbqROVIxAACnH+pVv141+wOpgWj14RjA==
$ ceph auth caps client.cinder-volume mon 'allow r' osd 'allow rwx pool=volumes'
updated caps for client.cinder-volume
  • Review the cinder-volume user's keyring and save it to a file:
$ ceph auth get client.cinder-volume
exported keyring for client.cinder-volume
[client.cinder-volume]
    key = AQBKt4NbqROVIxAACnH+pVv141+wOpgWj14RjA==
    caps mon = "allow r"
    caps osd = "allow rwx pool=volumes"
$ ceph auth get client.cinder-volume -o /opt/ceph/deploy/ceph.client.cinder-volume.keyring
exported keyring for client.cinder-volume
  • Create the cinder-backup user and grant it access to the volumes and backups pools:
$ ceph auth get-or-create client.cinder-backup
[client.cinder-backup]
    key = AQBit4NbN0rvLRAAYoa4SBM0qvwY8kPo5Md0og==
$ ceph auth caps client.cinder-backup mon 'allow r' osd 'allow rwx pool=volumes, allow rwx pool=backups'
updated caps for client.cinder-backup
  • Review the cinder-backup user's keyring and save it to a file:
$ ceph auth get client.cinder-backup
exported keyring for client.cinder-backup
[client.cinder-backup]
    key = AQBit4NbN0rvLRAAYoa4SBM0qvwY8kPo5Md0og==
    caps mon = "allow r"
    caps osd = "allow rwx pool=volumes, allow rwx pool=backups"
$ ceph auth get client.cinder-backup -o /opt/ceph/deploy/ceph.client.cinder-backup.keyring
exported keyring for client.cinder-backup

Creating the Nova User

  • Create the nova user and grant it access to the vms pool:
$ ceph auth get-or-create client.nova
[client.nova]
    key = AQD7tINb4A58GRAA7CsAM9EAwFwtIpTdQFGO7A==
$ ceph auth caps client.nova mon 'allow r' osd 'allow rwx pool=vms'
updated caps for client.nova
  • Review the nova user's keyring and save it to a file:
$ ceph auth get client.nova
exported keyring for client.nova
[client.nova]
    key = AQD7tINb4A58GRAA7CsAM9EAwFwtIpTdQFGO7A==
    caps mon = "allow r"
    caps osd = "allow rwx pool=vms"
$ ceph auth get client.nova -o /opt/ceph/deploy/ceph.client.nova.keyring
exported keyring for client.nova
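  • Optionally (again not part of the original transcript), the remaining credentials can be exercised the same way; each command should return an empty listing rather than a permission error:
$ rbd --id cinder-volume --keyring /opt/ceph/deploy/ceph.client.cinder-volume.keyring -p volumes ls
$ rbd --id cinder-backup --keyring /opt/ceph/deploy/ceph.client.cinder-backup.keyring -p backups ls
$ rbd --id nova --keyring /opt/ceph/deploy/ceph.client.nova.keyring -p vms ls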

Configuring Kolla-Ansible

  • Log in to the deployment node osdev01 as the root user and set the environment variables:
$ ssh root@osdev01
$ export KOLLA_ROOT=/opt/kolla
$ cd ${KOLLA_ROOT}/myconfig

Global Configuration

  • Edit globals.yml and disable the built-in Ceph deployment (we use the external cluster):
enable_ceph: "no"
  • Enable the Cinder service, and enable the Ceph backend for Glance, Cinder, and Nova:
enable_cinder: "yes"
glance_backend_ceph: "yes"
cinder_backend_ceph: "yes"
nova_backend_ceph: "yes"
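  • A quick way to confirm the relevant switches (a convenience check, assuming globals.yml sits in the current ${KOLLA_ROOT}/myconfig directory):
$ grep -E '^(enable_ceph|enable_cinder|glance_backend_ceph|cinder_backend_ceph|nova_backend_ceph)' globals.yml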

Configuring Glance

  • Configure Glance to use the Ceph images pool as the glance user:
$ mkdir -pv config/glance
mkdir: created directory 'config/glance'
$ vi config/glance/glance-api.conf
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
  • Add the Ceph client configuration for Glance and the glance user's keyring file:
$ vi config/glance/ceph.conf
[global]
fsid = 383237bd-becf-49d5-9bd6-deb0bc35ab2a
mon_initial_members = osdev01, osdev02, osdev03
mon_host = 172.29.101.166,172.29.101.167,172.29.101.168
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
$ cp -v /opt/ceph/deploy/ceph.client.glance.keyring config/glance/ceph.client.glance.keyring
"/opt/ceph/deploy/ceph.client.glance.keyring" -> "config/glance/ceph.client.glance.keyring"

Configuring Cinder

  • Configure the Cinder volume service to use the Ceph volumes pool as the cinder-volume user, and the Cinder backup service to use the backups pool as the cinder-backup user:
$ mkdir -pv config/cinder/
mkdir: created directory 'config/cinder/'
$ vi config/cinder/cinder-volume.conf
[DEFAULT]
enabled_backends=rbd-1

[rbd-1]
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder-volume
backend_host=rbd:volumes
rbd_pool=volumes
volume_backend_name=rbd-1
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}
$ vi config/cinder/cinder-backup.conf
[DEFAULT]
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool=backups
backup_driver = cinder.backup.drivers.ceph
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
  • Add the Ceph client configuration and keyring files for the Cinder volume and backup services:
$ cp config/glance/ceph.conf config/cinder/ceph.conf
$ mkdir -pv config/cinder/cinder-backup/ config/cinder/cinder-volume/
mkdir: created directory 'config/cinder/cinder-backup/'
mkdir: created directory 'config/cinder/cinder-volume/'
$ cp -v /opt/ceph/deploy/ceph.client.cinder-volume.keyring config/cinder/cinder-backup/ceph.client.cinder-volume.keyring
"/opt/ceph/deploy/ceph.client.cinder-volume.keyring" -> "config/cinder/cinder-backup/ceph.client.cinder-volume.keyring"
$ cp -v /opt/ceph/deploy/ceph.client.cinder-backup.keyring config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
"/opt/ceph/deploy/ceph.client.cinder-backup.keyring" -> "config/cinder/cinder-backup/ceph.client.cinder-backup.keyring"
$ cp -v /opt/ceph/deploy/ceph.client.cinder-volume.keyring config/cinder/cinder-volume/ceph.client.cinder-volume.keyring
"/opt/ceph/deploy/ceph.client.cinder-volume.keyring" -> "config/cinder/cinder-volume/ceph.client.cinder-volume.keyring"

Configuring Nova

  • Configure Nova to use the Ceph vms pool as the nova user:
$ vi config/nova/nova-compute.conf
[libvirt]
images_rbd_pool=vms
images_type=rbd
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=nova
  • Add the Ceph client configuration for Nova, the nova user's keyring, and a copy of the cinder-volume keyring saved as ceph.client.cinder.keyring:
$ cp -v config/glance/ceph.conf config/nova/ceph.conf
"config/glance/ceph.conf" -> "config/nova/ceph.conf"
$ cp -v /opt/ceph/deploy/ceph.client.nova.keyring config/nova/ceph.client.nova.keyring
"/opt/ceph/deploy/ceph.client.nova.keyring" -> "config/nova/ceph.client.nova.keyring"
$ cp -v /opt/ceph/deploy/ceph.client.cinder-volume.keyring config/nova/ceph.client.cinder.keyring
"/opt/ceph/deploy/ceph.client.cinder-volume.keyring" -> "config/nova/ceph.client.cinder.keyring"

Deployment and Testing

Deploying

  • Edit the deployment script osdev.sh:
#!/bin/bash

set -uexv

usage()
{
    echo -e "usage : \n$0 <action>"
    echo -e "  \$1 action"
}

if [ $# -lt 1 ]; then
    usage
    exit 1
fi

${KOLLA_ROOT}/kolla-ansible/tools/kolla-ansible --configdir ${KOLLA_ROOT}/myconfig --passwords ${KOLLA_ROOT}/myconfig/passwords.yml --inventory ${KOLLA_ROOT}/myconfig/mynodes.conf $1
  • Make the script executable:
$ chmod a+x osdev.sh
  • Deploy the OpenStack cluster:
$ ./osdev.sh bootstrap-servers
$ ./osdev.sh prechecks
$ ./osdev.sh pull
$ ./osdev.sh deploy
$ ./osdev.sh post-deploy
# ./osdev.sh "destroy --yes-i-really-really-mean-it"
  • Check the deployed services:
$ openstack service list
+----------------------------------+-------------+----------------+
| ID                               | Name        | Type           |
+----------------------------------+-------------+----------------+
| 304c9c5073f14f4a97ca1c3cf5e1b49e | neutron     | network        |
| 46de4440a5cf4a5697fa94b2d0424ba9 | heat        | orchestration  |
| 60b46b491ce7403aaec0c064384dde49 | heat-cfn    | cloudformation |
| 7726ab5d41c5450d954f073f1a9aff28 | cinderv2    | volumev2       |
| 7a4bd5fc12904cc7b5c3810412f98c57 | gnocchi     | metric         |
| 7ae6f98018fb4d509e862e45ebf10145 | glance      | image          |
| a0ec333149284c09ac0e157753205fd6 | nova        | compute        |
| b15e90c382864723945b15c37d3317a6 | placement   | placement      |
| b5eaa49c50d64316b583eb1c0c4f9ce2 | cinderv3    | volumev3       |
| c6474640f5d9424da0ec51c70c1e6e01 | nova_legacy | compute_legacy |
| db27eb8524be4db3be12b9dd0dab16b8 | keystone    | identity       |
| edf5c8b894a74a69b65bb49d8e014fff | cinder      | volume         |
+----------------------------------+-------------+----------------+
$ openstack volume service list
+------------------+-------------------+------+---------+-------+----------------------------+
| Binary           | Host              | Zone | Status  | State | Updated At                 |
+------------------+-------------------+------+---------+-------+----------------------------+
| cinder-scheduler | osdev02           | nova | enabled | up    | 2018-08-27T11:33:27.000000 |
| cinder-volume    | rbd:volumes@rbd-1 | nova | enabled | up    | 2018-08-27T11:33:18.000000 |
| cinder-backup    | osdev02           | nova | enabled | up    | 2018-08-27T11:33:17.000000 |
+------------------+-------------------+------+---------+-------+----------------------------+

Initializing the Environment

  • Check the initial RBD pools; they are all empty:
$ rbd -p images ls
$ rbd -p volumes ls
$ rbd -p vms ls
  • Set the environment variables and initialize the OpenStack environment:
$ . ${KOLLA_ROOT}/myconfig/admin-openrc.sh
$ ${KOLLA_ROOT}/myconfig/init-runonce
  • Check the newly added image:
$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 293b25bb-30be-4839-b4e2-1dba3c43a56a | cirros | active |
+--------------------------------------+--------+--------+
$ openstack image show 293b25bb-30be-4839-b4e2-1dba3c43a56a
+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                                    |
+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum         | 443b7623e27ecf03dc9e01ee93f67afe                                                                                                                         |
| container_format | bare                                                                                                                                                     |
| created_at       | 2018-08-27T11:25:29Z                                                                                                                                     |
| disk_format      | qcow2                                                                                                                                                    |
| file             | /v2/images/293b25bb-30be-4839-b4e2-1dba3c43a56a/file                                                                                                     |
| id               | 293b25bb-30be-4839-b4e2-1dba3c43a56a                                                                                                                     |
| min_disk         | 0                                                                                                                                                        |
| min_ram          | 0                                                                                                                                                        |
| name             | cirros                                                                                                                                                   |
| owner            | 68ada1726a864e2081a56be0a2dca3a0                                                                                                                         |
| properties       | locations='[{u'url': u'rbd://383237bd-becf-49d5-9bd6-deb0bc35ab2a/images/293b25bb-30be-4839-b4e2-1dba3c43a56a/snap', u'metadata': {}}]', os_type='linux' |
| protected        | False                                                                                                                                                    |
| schema           | /v2/schemas/image                                                                                                                                        |
| size             | 12716032                                                                                                                                                 |
| status           | active                                                                                                                                                   |
| tags             |                                                                                                                                                          |
| updated_at       | 2018-08-27T11:25:30Z                                                                                                                                     |
| virtual_size     | None                                                                                                                                                     |
| visibility       | public                                                                                                                                                   |
+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
  • Check the RBD pools again; the image is stored in the images pool and has a snapshot:
$ rbd -p images ls
293b25bb-30be-4839-b4e2-1dba3c43a56a
$ rbd -p volumes ls
$ rbd -p vms ls
$ rbd -p images info 293b25bb-30be-4839-b4e2-1dba3c43a56a
rbd image '293b25bb-30be-4839-b4e2-1dba3c43a56a':
    size 12 MiB in 2 objects
    order 23 (8 MiB objects)
    id: 178f4008d95
    block_name_prefix: rbd_data.178f4008d95
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features:
    flags:
    create_timestamp: Mon Aug 27 19:25:29 2018
$ rbd -p images snap list 293b25bb-30be-4839-b4e2-1dba3c43a56a
SNAPID NAME   SIZE TIMESTAMP
     6 snap 12 MiB Mon Aug 27 19:25:30 2018

Creating a Virtual Machine

  • Create a virtual machine:
$ openstack server create --image cirros --flavor m1.tiny --key-name mykey --nic net-id=9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 demo1
+-------------------------------------+-----------------------------------------------+
| Field                               | Value                                         |
+-------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                        |
| OS-EXT-AZ:availability_zone         |                                               |
| OS-EXT-SRV-ATTR:host                | None                                          |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                          |
| OS-EXT-SRV-ATTR:instance_name       |                                               |
| OS-EXT-STS:power_state              | NOSTATE                                       |
| OS-EXT-STS:task_state               | scheduling                                    |
| OS-EXT-STS:vm_state                 | building                                      |
| OS-SRV-USG:launched_at              | None                                          |
| OS-SRV-USG:terminated_at            | None                                          |
| accessIPv4                          |                                               |
| accessIPv6                          |                                               |
| addresses                           |                                               |
| adminPass                           | 65cVBJ7S6yaD                                  |
| config_drive                        |                                               |
| created                             | 2018-08-27T11:29:03Z                          |
| flavor                              | m1.tiny (1)                                   |
| hostId                              |                                               |
| id                                  | 309f1364-4d58-413d-a865-dfc37ff04308          |
| image                               | cirros (293b25bb-30be-4839-b4e2-1dba3c43a56a) |
| key_name                            | mykey                                         |
| name                                | demo1                                         |
| progress                            | 0                                             |
| project_id                          | 68ada1726a864e2081a56be0a2dca3a0              |
| properties                          |                                               |
| security_groups                     | name='default'                                |
| status                              | BUILD                                         |
| updated                             | 2018-08-27T11:29:03Z                          |
| user_id                             | c7111728fbbd4fd79bdd2b60e7d7cb42              |
| volumes_attached                    |                                               |
+-------------------------------------+-----------------------------------------------+
$ openstack server show 309f1364-4d58-413d-a865-dfc37ff04308
+-------------------------------------+----------------------------------------------------------+
| Field                               | Value                                                    |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                   |
| OS-EXT-AZ:availability_zone         | nova                                                     |
| OS-EXT-SRV-ATTR:host                | osdev03                                                  |
| OS-EXT-SRV-ATTR:hypervisor_hostname | osdev03                                                  |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000001                                        |
| OS-EXT-STS:power_state              | Running                                                  |
| OS-EXT-STS:task_state               | None                                                     |
| OS-EXT-STS:vm_state                 | active                                                   |
| OS-SRV-USG:launched_at              | 2018-08-27T11:29:16.000000                               |
| OS-SRV-USG:terminated_at            | None                                                     |
| accessIPv4                          |                                                          |
| accessIPv6                          |                                                          |
| addresses                           | demo-net=10.0.0.11                                       |
| config_drive                        |                                                          |
| created                             | 2018-08-27T11:29:03Z                                     |
| flavor                              | m1.tiny (1)                                              |
| hostId                              | 4e345dd9f770f63f80d3eafe97c20d97746e890b2971a8398e26db86 |
| id                                  | 309f1364-4d58-413d-a865-dfc37ff04308                     |
| image                               | cirros (293b25bb-30be-4839-b4e2-1dba3c43a56a)            |
| key_name                            | mykey                                                    |
| name                                | demo1                                                    |
| progress                            | 0                                                        |
| project_id                          | 68ada1726a864e2081a56be0a2dca3a0                         |
| properties                          |                                                          |
| security_groups                     | name='default'                                           |
| status                              | ACTIVE                                                   |
| updated                             | 2018-08-27T11:29:16Z                                     |
| user_id                             | c7111728fbbd4fd79bdd2b60e7d7cb42                         |
| volumes_attached                    |                                                          |
+-------------------------------------+----------------------------------------------------------+
  • The virtual machine's disk was created in the vms pool (see the additional check after this listing):
$ rbd -p images ls
293b25bb-30be-4839-b4e2-1dba3c43a56a
$ rbd -p volumes ls
$ rbd -p backups ls
$ rbd -p vms ls
309f1364-4d58-413d-a865-dfc37ff04308_disk
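  • The disk in vms is expected to be a copy-on-write clone of the Glance image snapshot; if Glance exposes image locations (Kolla-Ansible typically enables this for the Ceph backend), the info output should include a parent pointing at images/...@snap. Using the IDs from this run, an optional check would be:
$ rbd -p vms info 309f1364-4d58-413d-a865-dfc37ff04308_disk
$ rbd children images/293b25bb-30be-4839-b4e2-1dba3c43a56a@snap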
  • Log in to the node hosting the VM. The VM's system disk is the one created in the vms pool, and the qemu process arguments show that qemu accesses the RBD block device directly through Ceph's librbd library:
$ ssh osdev@osdev03
$ sudo docker exec -it nova_libvirt virsh list
 Id    Name                           State
----------------------------------------------------
 1     instance-00000001              running
$ sudo docker exec -it nova_libvirt virsh dumpxml 1
...
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <auth username='nova'>
        <secret type='ceph' uuid='2ea5db42-c8f1-4601-927c-3c64426907aa'/>
      </auth>
      <source protocol='rbd' name='vms/309f1364-4d58-413d-a865-dfc37ff04308_disk'>
        <host name='172.29.101.166' port='6789'/>
        <host name='172.29.101.167' port='6789'/>
        <host name='172.29.101.168' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
...
$ ps -aux | grep qemu
42436    2678909  4.6  0.0 1341144 171404 ?      Sl   19:29   0:08 /usr/libexec/qemu-kvm -name guest=instance-00000001,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-instance-00000001/master-key.aes -machine pc-i440fx-rhel7.4.0,accel=kvm,usb=off,dump-guest-core=off -cpu Skylake-Client-IBRS,ss=on,hypervisor=on,tsc_adjust=on,avx512f=on,avx512dq=on,clflushopt=on,clwb=on,avx512cd=on,avx512bw=on,avx512vl=on,pku=on,stibp=on,pdpe1gb=on -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 309f1364-4d58-413d-a865-dfc37ff04308 -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=17.0.2,serial=74bf926c-70b7-03df-b211-d21d6016081a,uuid=309f1364-4d58-413d-a865-dfc37ff04308,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-instance-00000001/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -object secret,id=virtio-disk0-secret0,data=zNy84nlNYigA4vjbuOxcGQa1/hh8w28i/WoJbO1Xsl4=,keyid=masterKey0,iv=OhX+FApyFyq2XLWq0ff/Ew==,format=base64 -drive file=rbd:vms/309f1364-4d58-413d-a865-dfc37ff04308_disk:id=nova:auth_supported=cephx\;none:mon_host=172.29.101.166\:6789\;172.29.101.167\:6789\;172.29.101.168\:6789,file.password-secret=virtio-disk0-secret0,format=raw,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=79,id=hostnet0,vhost=on,vhostfd=80 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:04:e8:e9,bus=pci.0,addr=0x3 -chardev pty,id=charserial0,logfile=/var/lib/nova/instances/309f1364-4d58-413d-a865-dfc37ff04308/console.log,logappend=off -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 172.29.101.168:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
$ ldd /usr/libexec/qemu-kvm | grep -e ceph -e rbd
    librbd.so.1 => /lib64/librbd.so.1 (0x00007fde38815000)
    libceph-common.so.0 => /usr/lib64/ceph/libceph-common.so.0 (0x00007fde28247000)

Creating a Volume

  • Create a volume:
$ openstack volume create --size 1 volume1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2018-08-27T11:33:52.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 3ccca300-bee3-4b5a-b89b-32e6b8b806d9 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | volume1                              |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | None                                 |
| updated_at          | None                                 |
| user_id             | c7111728fbbd4fd79bdd2b60e7d7cb42     |
+---------------------+--------------------------------------+
  • Check the pools; the newly created volume is placed in the volumes pool:
$ rbd -p images ls
293b25bb-30be-4839-b4e2-1dba3c43a56a
$ rbd -p volumes ls
volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9
$ rbd -p backups ls
$ rbd -p vms ls
309f1364-4d58-413d-a865-dfc37ff04308_disk

Creating a Backup

  • Create a volume backup; it is created in the backups pool:
$ openstack volume backup create 3ccca300-bee3-4b5a-b89b-32e6b8b806d9
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | f2321578-88d5-4337-b93c-798855b817ce |
| name  | None                                 |
+-------+--------------------------------------+
$ openstack volume backup list
+--------------------------------------+------+-------------+-----------+------+
| ID                                   | Name | Description | Status    | Size |
+--------------------------------------+------+-------------+-----------+------+
| f2321578-88d5-4337-b93c-798855b817ce | None | None        | available |    1 |
+--------------------------------------+------+-------------+-----------+------+
$ openstack volume backup show f2321578-88d5-4337-b93c-798855b817ce
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| availability_zone     | nova                                 |
| container             | backups                              |
| created_at            | 2018-08-27T11:39:40.000000           |
| data_timestamp        | 2018-08-27T11:39:40.000000           |
| description           | None                                 |
| fail_reason           | None                                 |
| has_dependent_backups | False                                |
| id                    | f2321578-88d5-4337-b93c-798855b817ce |
| is_incremental        | False                                |
| name                  | None                                 |
| object_count          | 0                                    |
| size                  | 1                                    |
| snapshot_id           | None                                 |
| status                | available                            |
| updated_at            | 2018-08-27T11:39:46.000000           |
| volume_id             | 3ccca300-bee3-4b5a-b89b-32e6b8b806d9 |
+-----------------------+--------------------------------------+
$ rbd -p backups ls
volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base
  • Create a second backup; the backups pool does not change, only a new snapshot is added to the existing backup base image:
$ openstack volume backup create 3ccca300-bee3-4b5a-b89b-32e6b8b806d9
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | 07132063-9bdb-4391-addd-a791dae2cfea |
| name  | None                                 |
+-------+--------------------------------------+
$ rbd -p backups ls
volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base
$ rbd -p backups snap list volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base
SNAPID NAME                                                            SIZE TIMESTAMP
     4 backup.f2321578-88d5-4337-b93c-798855b817ce.snap.1535369984.08 1 GiB Mon Aug 27 19:39:46 2018
     5 backup.07132063-9bdb-4391-addd-a791dae2cfea.snap.1535370126.76 1 GiB Mon Aug 27 19:42:08 2018
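  • The Ceph backup driver keeps one base image per volume in the backups pool and records every backup as a snapshot on it, which is why the second backup only added a snapshot. Actual space usage can be inspected (an optional check) with:
$ rbd du backups/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base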

Attaching the Volume

  • Attach the new volume to the VM created earlier:
$ openstack server add volume demo1 volume1
$ openstack volume show volume1
+--------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                          | Value                                                                                                                                                                                                                                                                                                                        |
+--------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| attachments                    | [{u'server_id': u'309f1364-4d58-413d-a865-dfc37ff04308', u'attachment_id': u'fb4d9ec0-8a33-4ed0-8845-09e6f17aac81', u'attached_at': u'2018-08-27T11:44:51.000000', u'host_name': u'osdev03', u'volume_id': u'3ccca300-bee3-4b5a-b89b-32e6b8b806d9', u'device': u'/dev/vdb', u'id': u'3ccca300-bee3-4b5a-b89b-32e6b8b806d9'}] |
| availability_zone              | nova                                                                                                                                                                                                                                                                                                                         |
| bootable                       | false                                                                                                                                                                                                                                                                                                                        |
| consistencygroup_id            | None                                                                                                                                                                                                                                                                                                                         |
| created_at                     | 2018-08-27T11:33:52.000000                                                                                                                                                                                                                                                                                                   |
| description                    | None                                                                                                                                                                                                                                                                                                                         |
| encrypted                      | False                                                                                                                                                                                                                                                                                                                        |
| id                             | 3ccca300-bee3-4b5a-b89b-32e6b8b806d9                                                                                                                                                                                                                                                                                         |
| migration_status               | None                                                                                                                                                                                                                                                                                                                         |
| multiattach                    | False                                                                                                                                                                                                                                                                                                                        |
| name                           | volume1                                                                                                                                                                                                                                                                                                                      |
| os-vol-host-attr:host          | rbd:volumes@rbd-1#rbd-1                                                                                                                                                                                                                                                                                                      |
| os-vol-mig-status-attr:migstat | None                                                                                                                                                                                                                                                                                                                         |
| os-vol-mig-status-attr:name_id | None                                                                                                                                                                                                                                                                                                                         |
| os-vol-tenant-attr:tenant_id   | 68ada1726a864e2081a56be0a2dca3a0                                                                                                                                                                                                                                                                                             |
| properties                     | attached_mode='rw'                                                                                                                                                                                                                                                                                                           |
| replication_status             | None                                                                                                                                                                                                                                                                                                                         |
| size                           | 1                                                                                                                                                                                                                                                                                                                            |
| snapshot_id                    | None                                                                                                                                                                                                                                                                                                                         |
| source_volid                   | None                                                                                                                                                                                                                                                                                                                         |
| status                         | in-use                                                                                                                                                                                                                                                                                                                       |
| type                           | None                                                                                                                                                                                                                                                                                                                         |
| updated_at                     | 2018-08-27T11:44:52.000000                                                                                                                                                                                                                                                                                                   |
| user_id                        | c7111728fbbd4fd79bdd2b60e7d7cb42                                                                                                                                                                                                                                                                                             |
+--------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
  • On the node hosting the VM, check the libvirt domain definition again; a second RBD disk has been added:
$ sudo docker exec -it nova_libvirt virsh dumpxml 1
...
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <auth username='nova'>
        <secret type='ceph' uuid='2ea5db42-c8f1-4601-927c-3c64426907aa'/>
      </auth>
      <source protocol='rbd' name='vms/309f1364-4d58-413d-a865-dfc37ff04308_disk'>
        <host name='172.29.101.166' port='6789'/>
        <host name='172.29.101.167' port='6789'/>
        <host name='172.29.101.168' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none' discard='unmap'/>
      <auth username='cinder-volume'>
        <secret type='ceph' uuid='3fa55f7c-b556-4095-9253-b908d5408ec8'/>
      </auth>
      <source protocol='rbd' name='volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9'>
        <host name='172.29.101.166' port='6789'/>
        <host name='172.29.101.167' port='6789'/>
        <host name='172.29.101.168' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
      <serial>3ccca300-bee3-4b5a-b89b-32e6b8b806d9</serial>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
...
  • Create a floating IP for the VM and log in to it via SSH:
$ openstack console url show demo1
+-------+-------------------------------------------------------------------------------------+
| Field | Value                                                                               |
+-------+-------------------------------------------------------------------------------------+
| type  | novnc                                                                               |
| url   | http://172.29.101.167:6080/vnc_auto.html?token=9f835216-1c53-41ae-849a-44a85429a334 |
+-------+-------------------------------------------------------------------------------------+
$ openstack floating ip create public1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| created_at          | 2018-08-27T11:49:02Z                 |
| description         |                                      |
| fixed_ip_address    | None                                 |
| floating_ip_address | 192.168.162.52                       |
| floating_network_id | ff69b3ff-c2c4-4474-a7ba-952fa99df919 |
| id                  | 2aa86075-9c62-49f5-84ac-e7b6353c9591 |
| name                | 192.168.162.52                       |
| port_id             | None                                 |
| project_id          | 68ada1726a864e2081a56be0a2dca3a0     |
| qos_policy_id       | None                                 |
| revision_number     | 0                                    |
| router_id           | None                                 |
| status              | DOWN                                 |
| subnet_id           | None                                 |
| tags                | []                                   |
| updated_at          | 2018-08-27T11:49:02Z                 |
+---------------------+--------------------------------------+
$ openstack server add floating ip demo1 192.168.162.52
$ openstack server list
+--------------------------------------+-------+--------+------------------------------------+--------+---------+
| ID                                   | Name  | Status | Networks                           | Image  | Flavor  |
+--------------------------------------+-------+--------+------------------------------------+--------+---------+
| 309f1364-4d58-413d-a865-dfc37ff04308 | demo1 | ACTIVE | demo-net=10.0.0.11, 192.168.162.52 | cirros | m1.tiny |
+--------------------------------------+-------+--------+------------------------------------+--------+---------+
$ ssh root@osdev02
$ ip netns
qrouter-65759e60-6e20-41cc-a79c-fc492232b127 (id: 1)
qdhcp-9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 (id: 0)
$ ip netns exec qrouter-65759e60-6e20-41cc-a79c-fc492232b127 ping 192.168.162.50
$ ip netns exec qdhcp-9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 ping 10.0.0.9
(log in as user "cirros" with password "gocubsgo")
$ ip netns exec qrouter-65759e60-6e20-41cc-a79c-fc492232b127 ssh cirros@192.168.162.52
$ ip netns exec qdhcp-9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 ssh cirros@10.0.0.11
$ sudo passwd root
Changing password for root
New password:
Bad password: too weak
Retype password:
Password for root changed by root
$ su -
Password:
  • Create a filesystem on the new disk, write a test file, and finally unmount it:
# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     253:0    0    1G  0 disk
|-vda1  253:1    0 1015M  0 part /
`-vda15 253:15   0    8M  0 part
vdb     253:16   0    1G  0 disk
# mkfs.ext4 /dev/vdb
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: ede8d366-bfbc-4b9a-9d3f-306104f410d7
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
# mount /dev/vdb /mnt
# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev                    240.1M         0    240.1M   0% /dev
/dev/vda1               978.9M     23.9M    914.1M   3% /
tmpfs                   244.2M         0    244.2M   0% /dev/shm
tmpfs                   244.2M     92.0K    244.1M   0% /run
/dev/vdb                975.9M      1.3M    907.4M   0% /mnt
# echo "hello openstack, volume test." > /mnt/ceph_rbd_test
# umount /mnt
# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev                    240.1M         0    240.1M   0% /dev
/dev/vda1               978.9M     23.9M    914.1M   3% /
tmpfs                   244.2M         0    244.2M   0% /dev/shm
tmpfs                   244.2M     92.0K    244.1M   0% /run

Detaching the Volume

  • Detach the volume and check the change inside the VM:
$ openstack server remove volume demo1 volume1
# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     253:0    0    1G  0 disk
|-vda1  253:1    0 1015M  0 part /
`-vda15 253:15   0    8M  0 part
  • Map and mount the RBD volume on the host, then check the file created earlier inside the VM; the content is identical:
$ rbd showmapped
id pool image    snap device
0  rbd  rbd_test -    /dev/rbd0
$ rbd feature disable volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9 object-map fast-diff deep-flatten
$ rbd map volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9
/dev/rbd1
$ mkdir /mnt/volume1
$ mount /dev/rbd1 /mnt/volume1/
$ ls /mnt/volume1/
ceph_rbd_test  lost+found/
$ cat /mnt/volume1/ceph_rbd_test
hello openstack, volume test.
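  • When done, a reasonable cleanup (not shown in the original transcript) is to unmount and unmap the volume and re-enable the image features that were disabled for the kernel client; note that deep-flatten cannot be re-enabled once disabled:
$ umount /mnt/volume1
$ rbd unmap /dev/rbd1
$ rbd feature enable volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9 object-map fast-diff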

References

  1. External Ceph
