1. In the provider network architecture, all instances attach directly to the provider network. In the self-service (private) network architecture, instances can attach to either a self-service network or a provider network. Self-service networks can reside entirely within OpenStack or provide a degree of external network access through NAT over a provider network.

The example architecture assumes the following networks:

  • Management on 10.0.0.0/24 with gateway 10.0.0.1

    This network requires a gateway to provide Internet access to all nodes for administrative purposes such as package installation, security updates, DNS, and NTP.

  • Provider on 203.0.113.0/24 with gateway 203.0.113.1

    This network requires a gateway to provide Internet access to instances in the OpenStack environment.

    Network configuration for this lab:

    Three VMware virtual machines:


    eth0 (provider network): 192.168.10.0/24, with Internet access
    eth1 (management network): 192.168.2.0/24
    [root@controller ~]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.2.101 controller
    192.168.2.102 nova
    192.168.2.103 cinder

II. Environment Setup and Installation

1. Configure the NTP service

Controller node:
yum install chrony
vim /etc/chrony.conf
server ntp1.aliyun.com iburst      # the controller syncs from an upstream NTP server
allow 192.168.2.0/24               # allow the other nodes to sync from the controller over the management network
systemctl enable chronyd.service
systemctl start chronyd.service

Compute node:
yum install chrony
vim /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ntp1.aliyun.com iburst
systemctl enable chronyd.service
systemctl start chronyd.service

Storage node:
yum install chrony
vim /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server controller_ip iburst        # controller_ip is the controller's management address, 192.168.2.101 in this lab
systemctl enable chronyd.service
systemctl start chronyd.service
chronyc sources      # check time synchronization; a line starting with ^* means that source is synchronized
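A quick way to confirm synchronization on each node once chronyd has been running for a minute or two (a hedged sketch; chronyc ships with the chrony package installed above):
chronyc sources -v     # the source marked ^* is the one currently synchronized
chronyc tracking       # shows the reference source, stratum, and current offset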


2. Install the service packages

To install the Rocky release, run:

yum -y install centos-release-openstack-rocky
After the installation completes, upgrade the packages on all nodes:
yum -y upgrade
yum -y install python-openstackclient
yum -y install openstack-selinux

3. Install the database

yum -y install mariadb mariadb-server python2-PyMySQL
Create and edit the file: vim /etc/my.cnf
[mysqld]
bind-address = 192.168.2.101        # the controller's management IP address
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Start the database service and configure it to start when the system boots:
systemctl enable mariadb.service
systemctl start mariadb.service
  1. Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account:

    mysql_secure_installation
    # Set the root password to 123456 (it is used later in this guide) and answer n to the remaining prompts.
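A minimal follow-up check that the root password works and that the listener is up, assuming the password 123456 chosen above:
mysql -u root -p123456 -e "SHOW DATABASES;"    # should list the default databases without errors
ss -tnlp | grep 3306                           # mysqld should be listening on the management address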
    

4. Install RabbitMQ

Install the package:
yum -y install rabbitmq-server
Start the message queue service and configure it to start when the system boots:
systemctl enable rabbitmq-server.service
systemctl restart rabbitmq-server.service
Add the openstack user:
rabbitmqctl add_user openstack 123456
Permit configuration, write, and read access for the openstack user:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
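A hedged check that the user and its permissions were created as intended:
rabbitmqctl list_users           # the list should include: openstack
rabbitmqctl list_permissions     # openstack should show ".*" for configure, write, and read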

5. Install Memcached

yum -y install memcached python-memcached
vim /etc/sysconfig/memcached
# Configure the service to use the management IP address of the controller node, so that other nodes can reach it over the management network:
OPTIONS="-l 127.0.0.1,::1,controller"
systemctl enable memcached.service
systemctl restart memcached.service
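To verify that memcached answers on the management address, a small sketch (nc comes from the nmap-ncat package and is an assumption if it is not already installed):
ss -tnlp | grep 11211                          # memcached should listen on 127.0.0.1, ::1, and controller
echo stats | nc controller 11211 | head -5     # a few STAT lines confirm the service responds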

6. Install Etcd

yum -y install etcd
Set the service to the controller node's management IP address so that other nodes can reach it over the management network:
vim /etc/etcd/etcd.conf
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.2.101:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.2.101:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.2.101:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.2.101:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.2.101:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
Start the service and enable it at boot:
systemctl enable etcd
systemctl start etcd
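A hedged health check of the single etcd member (the v3 etcdctl syntax below is an assumption; with the v2 API use etcdctl cluster-health instead):
export ETCDCTL_API=3
etcdctl --endpoints=http://192.168.2.101:2379 endpoint health    # should report the endpoint as healthy
etcdctl --endpoints=http://192.168.2.101:2379 member list        # should list the controller member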

7. Install Keystone on the controller node

# On the controller, run the following commands to create the database
mysql -u root -p
Enter Password:   # enter 123456, the root password set during the MariaDB installation
# Create the keystone database
MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.001 sec)
# Create a keystone user with the password keystone, used only for accessing the keystone database
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
Query OK, 0 rows affected (0.002 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.000 sec)
Install and configure the components:
yum -y install openstack-keystone httpd mod_wsgi
Edit the file and make the following two changes:
vim /etc/keystone/keystone.conf
[database]
# ...
connection = mysql+pymysql://keystone:keystone@controller/keystone
In the [token] section, configure the Fernet token provider:
[token]
# ...
provider = fernet
Populate the Identity service database:
su -s /bin/sh -c "keystone-manage db_sync" keystone
Initialize the Fernet key repositories:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Bootstrap the Identity service:
keystone-manage bootstrap --bootstrap-password 123456 \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
Configure the Apache HTTP server:
vim /etc/httpd/conf/httpd.conf     # add the following line
ServerName controller
Create a link to the /usr/share/keystone/wsgi-keystone.conf file:
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Start the Apache HTTP service and configure it to start when the system boots:
systemctl enable httpd.service
systemctl start httpd.service
Configure the administrative account:
vim admin.sh
export OS_USERNAME=admin
export OS_PASSWORD=123456 # the bootstrap-password used when bootstrapping the API above
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
Source the admin.sh script to load the OpenStack environment variables:
source admin.sh
# Create a domain, projects, a user, and a role
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password=myuser myuser # the password is also myuser for easy recall
openstack role create myrole
openstack role add --project myproject --user myuser myrole
# Verify that Keystone was installed successfully
unset OS_AUTH_URL OS_PASSWORD
# Try to obtain a token as the admin user
# When prompted, enter the password 123456 set earlier
[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue
Password:
Password:
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2022-10-11T09:26:08+0000                                                                                                                                                                |
| id         | gAAAAABjRSigaPucQTW0CaeM-FnbKG8e7-HS7p6EayeG0dJwwFrHp6cHK6vmGhORVTmox2UBFJ2_SWSP4nGmmBZlyk24BOYHA8dos0lH2dj_xs5EYw0hTVdVVPobO0d7ak1UW3Q6bJdcYeiScGqwTp1wHnLW63k6aDrMqghLjfX8tIc41vTOIH0 |
| project_id | 0f516fa722364217bb032174435fc4fe                                                                                                                                                        |
| user_id    | 53dad147bad846abb5a96daf13bf0c50                                                                                                                                                        |
+------------+--------------------------------------------------------------------------------------------------------------------------------------------
# Try to obtain a token as the myuser user
openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name myproject --os-username myuser token issue
# the password is myuser
# On the controller, create two credential files
vim ~/admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456   # the admin password set during bootstrap
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

vim ~/demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=myuser
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Try loading admin-openrc:
. ~/admin-openrc
[root@controller ~]# openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2022-10-11T09:30:42+0000                                                                                                                                                                |
| id         | gAAAAABjRSmyNQ01ZAUmjtVyZqUQoq22_AH1OAwWJRtOcp6hI9NoqtsxRl6kUIDk2XDJeS5seii_lXYBk4S0r-P8A2y4ZmfUgyswMSgEy94C1uVSvI4fd-EQpOjQ9-HyabvTGtvGofR_Nf6hMuXmTdKzD7tbpoRP7a8WYld6ewLCLqq8uOgX2fs |
| project_id | 0f516fa722364217bb032174435fc4fe                                                                                                                                                        |
| user_id    | 53dad147bad846abb5a96daf13bf0c50                                                                                                                                                        |
# Keystone installation is now complete

8. Install Glance on the controller node

mysql
MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
Query OK, 0 rows affected (0.002 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.000 sec)
# Load the admin credentials (created during the Keystone installation, so that step cannot be skipped)
. ~/admin-openrc
# Create the glance user, service, and endpoints
openstack user create --domain default --password=glance glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
Install the package:
yum -y install openstack-glance
Edit the /etc/glance/glance-api.conf file and complete the following actions:
[database]
connection = mysql+pymysql://glance:glance@controller/glance
[keystone_authtoken]
www_authenticate_uri  = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Edit the /etc/glance/glance-registry.conf file and complete the following actions:
[database]
connection = mysql+pymysql://glance:glance@controller/glance
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone
Populate the Image service database:
su -s /bin/sh -c "glance-manage db_sync" glance
Start the Image services and configure them to start when the system boots:
systemctl enable openstack-glance-api.service  openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
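The verification step below expects the CirrOS test image to be present in the current directory; a hedged sketch for fetching it first (the URL is the upstream CirrOS download site):
yum -y install wget     # only if wget is not already installed
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img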
Verify operation:
openstack image create "cirros" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
[root@controller ~]# glance image-list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| a518d73c-f7c6-4301-bd99-180fa810ef6b | cirros |
+--------------------------------------+--------+

9. Install Placement on the controller node

mysql
MariaDB [(none)]> CREATE DATABASE placement;    # the database must exist before granting privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'placement';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'placement';
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.000 sec)
# Create the placement user, service, and endpoints
openstack user create --domain default --password=placement placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
Install the package:
yum -y install openstack-placement-api
Edit the /etc/placement/placement.conf file and complete the following actions:
[placement_database]
# ...
connection = mysql+pymysql://placement:placement@controller/placement
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = placement
# Populate the placement database:
su -s /bin/sh -c "placement-manage db sync" placement
Restart the httpd service:
systemctl restart httpd
# Verify that Placement was installed successfully
[root@controller ~]# placement-status upgrade check
+----------------------------------+
| Upgrade Check Results            |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
| Check: Incomplete Consumers      |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
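Two further hedged checks: the bare API port should answer with a small JSON version document, and the catalog should list the endpoints created above:
curl http://controller:8778
openstack endpoint list --service placement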

10. Install Nova

1. Install Nova on the controller node

mysql
MariaDB [(none)]> CREATE DATABASE nova_api;
Query OK, 1 row affected (0.001 sec)
MariaDB [(none)]> CREATE DATABASE nova;
Query OK, 1 row affected (0.001 sec)
MariaDB [(none)]> CREATE DATABASE nova_cell0;
Query OK, 1 row affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';
# Create the user, role assignment, service, and endpoints
. ~/admin-openrc
openstack user create --domain default --password=nova nova # the nova user's password is also nova
openstack role add --project service --user nova admin  # add the nova user to the admin role
openstack service create --name nova --description "OpenStack Compute" compute # create the service entity
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 # create the API endpoints
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
# Download, install, and configure Nova (the controller also runs nova-compute in this lab)
yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-compute
yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api
Edit the /etc/nova/nova.conf file and complete the following actions:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
[api_database]
# ...
connection = mysql+pymysql://nova:nova@controller/nova_api
[database]
# ...
connection = mysql+pymysql://nova:nova@controller/nova
[DEFAULT]
# ...
transport_url = rabbit://openstack:123456@controller:5672/
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova
[DEFAULT]
# ...
my_ip = 192.168.2.101
[vnc]
enabled = true
# ...
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.2.101:6080/vnc_auto.html
[glance]
# ...
api_servers = http://controller:9292
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement
Populate the nova-api database:
su -s /bin/sh -c "nova-manage api_db sync" nova
Register the cell0 database:
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell:
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
Populate the nova database:
su -s /bin/sh -c "nova-manage db sync" nova
Verify that the nova cell0 and cell1 cells are registered correctly:
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
+-------+--------------------------------------+------------------------------------------+-------------------------------------------------+----------+
| Name  |                 UUID                 |              Transport URL               |               Database Connection              | Disabled |
+-------+--------------------------------------+------------------------------------------+-------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 |                  none:/                  | mysql+pymysql://nova:****@controller/nova_cell0 |  False   |
| cell1 | ae43cc15-7b3b-4aa0-9a8c-3e00d939d24c | rabbit://openstack:****@controller:5672/ |    mysql+pymysql://nova:****@controller/nova    |  False   |
Start the Compute services and configure them to start when the system boots:
systemctl restart libvirtd && systemctl enable libvirtd
systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Verify Nova:
[root@controller ~]# nova service-list
+--------------------------------------+----------------+------------+----------+---------+-------+----------------------------+-----------------+-------------+
| Id                                   | Binary         | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason | Forced down |
+--------------------------------------+----------------+------------+----------+---------+-------+----------------------------+-----------------+-------------+
| a1d598da-dc56-4d68-8181-4b0f9a03caa0 | nova-conductor | controller | internal | enabled | up    | 2022-10-12T02:04:39.000000 | -               | False       |
| 2264205d-95df-4b5f-a91a-c89b51625c2a | nova-scheduler | controller | internal | enabled | up    | 2022-10-12T02:04:40.000000 | -               | False       |
| dec9972b-5b90-45a8-8dfc-e0a66340937b | nova-compute   | controller | nova     | enabled | up    | 2022-10-12T02:04:40.000000 | -               | False       |
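As a hedged complement to nova service-list, the upgrade status tool and the unified client give the same picture:
nova-status upgrade check          # every check should report Success
openstack compute service list     # same services as above, via the openstack client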

2. Install Nova on the compute node

yum -y install openstack-nova-compute
Edit the /etc/nova/nova.conf file and complete the following actions:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.2.102
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova
[vnc]
# ...
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.2.101:6080/vnc_auto.html
[glance]
# ...
api_servers = http://controller:9292
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement
[libvirt]
# ...
virt_type = qemu
Start the Compute service, including its dependencies, and configure them to start automatically when the system boots:
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
Verify that the Nova compute node is available:
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': ae43cc15-7b3b-4aa0-9a8c-3e00d939d24c
Checking host mapping for compute host 'nova': c133069a-aa00-444e-80b7-7d5d10dd90c6
Creating host mapping for compute host 'nova': c133069a-aa00-444e-80b7-7d5d10dd90c6
Checking host mapping for compute host 'controller': 98de87cd-1f23-436d-a0e9-27c19fb86a3f
Creating host mapping for compute host 'controller': 98de87cd-1f23-436d-a0e9-27c19fb86a3f
Found 2 unmapped computes in cell: ae43cc15-7b3b-4aa0-9a8c-3e00d939d24c
## When you add new compute nodes, you must run nova-manage cell_v2 discover_hosts on the controller node to register them. Alternatively, set an appropriate discovery interval in /etc/nova/nova.conf:
vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300
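After discovery, a hedged check from the controller that the new compute host registered and its service is up:
openstack compute service list --service nova-compute    # both 'controller' and 'nova' should be enabled/up
openstack hypervisor list                                 # the new host should appear with its management IP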

11. Install Neutron

1. Install Neutron on the controller node

MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
# Create the user and role
. ~/admin-openrc
openstack user create --domain default --password=neutron neutron # set the password to neutron for easy recall
openstack role add --project service --user neutron admin # add the neutron user to the admin role
openstack service create --name neutron --description "OpenStack Networking" network # create the service entity
openstack endpoint create --region RegionOne network public http://controller:9696 # create the three endpoints as usual
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
# If you see "Multiple service matches found for 'network', use an ID to be more specific.":
# openstack service list
# openstack service delete <ID>     # delete the redundant service
# The official documentation offers two network architectures: provider networks (option 1) and self-service networks (option 2). The self-service architecture includes all of the provider architecture's functionality plus two extra components, so this guide deploys option 2, self-service networks.
yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
Edit the /etc/neutron/neutron.conf file and complete the following actions:
[database]
# ...
connection = mysql+pymysql://neutron:neutron@controller/neutron
[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[nova]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
Configure the Modular Layer 2 (ML2) plug-in.
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:
[ml2]
# ...
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population    # l2population is required because the Linux bridge agent enables l2_population below
extension_drivers = port_security
[ml2_type_flat]
# ...
flat_networks = provider
[ml2_type_vxlan]
# ...
vni_ranges = 1:1000
[securitygroup]
# ...
enable_ipset = true
Configure the Linux bridge agent:
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:
[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = true
local_ip = 192.168.2.101
l2_population = true
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
# Load the bridge netfilter module
modprobe br_netfilter
# Check that the module is loaded
lsmod | grep br_netfilter
[root@controller ~]# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
sysctl -p
Configure the DHCP agent.
Edit the /etc/neutron/dhcp_agent.ini file and complete the following actions:
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Configure the layer-3 agent.
Edit the /etc/neutron/l3_agent.ini file and complete the following actions:
[DEFAULT]
# ...
interface_driver = linuxbridge
Configure the metadata agent.
Edit the /etc/neutron/metadata_agent.ini file and complete the following actions:
[DEFAULT]
# ...
nova_metadata_host = controller
metadata_proxy_shared_secret = metadata
Configure the Compute service to use the Networking service.
Edit the /etc/nova/nova.conf file and complete the following actions:
[neutron]
# ...
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = metadata
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the Compute API service:
systemctl restart openstack-nova-api.service
Start the Networking services and configure them to start when the system boots:
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service
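A hedged verification that all agents started and registered (run on the controller with the admin credentials loaded):
. ~/admin-openrc
openstack network agent list    # expect Metadata, DHCP, L3, and Linux bridge agents, all with Alive = :-)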

2. Install Neutron on the nova (compute) node

yum -y install openstack-neutron-linuxbridge ebtables ipset
Edit the /etc/neutron/neutron.conf file and complete the following actions:
[database]
# ...
connection = mysql+pymysql://neutron:neutron@controller/neutron
[DEFAULT]
# ...
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[nova]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
Configure the Linux bridge agent.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:
[DEFAULT]
[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = true
local_ip = 192.168.2.102
l2_population = true
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
# Load the bridge netfilter module
modprobe br_netfilter
# Check that the module is loaded
lsmod | grep br_netfilter
[root@controller ~]# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
Edit the /etc/nova/nova.conf file and complete the following actions:
[neutron]
# ...
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
Start the Linux bridge agent and configure it to start when the system boots:
systemctl enable neutron-linuxbridge-agent.service
systemctl restart neutron-linuxbridge-agent.service
(The L3 agent runs only on the controller node in this architecture, so it is not installed or started here.)
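From the controller, the compute node's Linux bridge agent should now be visible as well (a hedged check):
openstack network agent list --host nova    # the Linux bridge agent on host 'nova' should be UP and alive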

12. Install the Horizon dashboard on the controller

yum install openstack-dashboard
Edit the /etc/openstack-dashboard/local_settings file and complete the following actions:
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['one.example.com', 'two.example.com', '192.168.2.228']   ## hosts allowed to access the dashboard
## Configure the memcached session storage service:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
WEBROOT = '/dashboard/'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
## Enable the Identity API version 3
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST   # Keystone listens on port 5000 in this deployment
## Enable support for domains
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
## Configure the API versions:
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_fip_topology_check': False,
}
(These False values come from the provider-network example; since this lab deploys self-service networks and manages routers from the dashboard, 'enable_router' can be left at its default of True.)
If it is not already present, add the following line to /etc/httpd/conf.d/openstack-dashboard.conf:
WSGIApplicationGroup %{GLOBAL}
Restart the web server and session storage service:
systemctl restart httpd.service memcached.service
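A hedged smoke test of the dashboard URL from the controller; a 200 response or a redirect to the login page indicates the WSGI application is being served under /dashboard:
curl -I http://controller/dashboard/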

13. Install Cinder

1. Deploy Cinder on the controller node

mysql
MariaDB [(none)]> CREATE DATABASE cinder;
Query OK, 1 row affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
# Load the admin credentials to gain access to admin-only CLI commands:
. ~/admin-openrc
# To create the service credentials, complete these steps:
# Create the cinder user with the password cinder:
openstack user create --domain default --password=cinder cinder
# Add the admin role to the cinder user:
openstack role add --project service --user cinder admin
# Create the cinderv2 and cinderv3 service entities:
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
# Create the Block Storage service API endpoints:
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
Install the packages:
yum -y install openstack-cinder
Edit the /etc/cinder/cinder.conf file and complete the following actions:
[database]
# ...
connection = mysql+pymysql://cinder:cinder@controller/cinder
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
my_ip = 192.168.2.101
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
Populate the Block Storage database:
su -s /bin/sh -c "cinder-manage db sync" cinder
Configure Compute to use Block Storage:
vim /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
Restart the Compute API service:
systemctl restart openstack-nova-api.service
Start the Block Storage services and configure them to start when the system boots:
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
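At this point only the API and scheduler are running; a hedged check that they registered (the cinder-volume service appears once the storage node is configured below):
. ~/admin-openrc
openstack volume service list    # expect cinder-scheduler on controller with State up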

2. Install Cinder on the storage node

Install the LVM packages:

yum -y install lvm2 device-mapper-persistent-data
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
Create the LVM physical volume /dev/sdb:
pvcreate /dev/sdb
Create the LVM volume group cinder-volumes:
vgcreate cinder-volumes /dev/sdb
In the devices section of /etc/lvm/lvm.conf, add a filter that accepts the /dev/sdb device and rejects all other devices:
filter = [ "a/sdb/", "r/.*/"]
## If the storage node uses LVM on the operating system disk, you must also add the associated device to the filter. For example, if /dev/sda contains the operating system:
filter = [ "a/sda/", "a/sdb/", "r/.*/"]
Similarly, if a compute node uses LVM on the operating system disk, you must also modify the filter in /etc/lvm/lvm.conf on that node to include only the operating system disk. For example, if /dev/sda contains the operating system:
filter = [ "a/sda/", "r/.*/"]
Install the packages:
yum -y install openstack-cinder targetcli python-keystone
Edit the /etc/cinder/cinder.conf file and complete the following actions:
[database]
# ...
connection = mysql+pymysql://cinder:cinder@controller/cinder
[DEFAULT]
auth_strategy = keystone
my_ip = 192.168.2.103
enabled_backends = lvm
glance_api_servers = http://controller:9292
transport_url = rabbit://openstack:123456@controller
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
Start the Block Storage volume service, including its dependencies, and configure them to start when the system boots:
systemctl enable openstack-cinder-volume.service target.service
systemctl restart openstack-cinder-volume.service target.service
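A hedged end-to-end check from the controller: the LVM backend should report up, and a small throwaway volume should reach the available status:
openstack volume service list               # cinder-volume on cinder@lvm should be up
openstack volume create --size 1 test-vol
openstack volume list                       # test-vol should become 'available'
openstack volume delete test-vol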

14. Create a test instance

1. Create the networks

1.1 Create the provider network
Create the network:
openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider
Create a subnet on the network:
openstack subnet create --network provider --allocation-pool start=START_IP_ADDRESS,end=END_IP_ADDRESS --dns-nameserver DNS_RESOLVER --gateway PROVIDER_NETWORK_GATEWAY --subnet-range PROVIDER_NETWORK_CIDR provider
Replace PROVIDER_NETWORK_CIDR with the subnet of the provider physical network in CIDR notation.
Replace START_IP_ADDRESS and END_IP_ADDRESS with the first and last IP addresses of the range within the subnet to allocate to instances. This range must not include any existing active IP addresses. For example:
openstack subnet create --network provider --allocation-pool start=192.168.10.101,end=192.168.10.250 --dns-nameserver 8.8.4.4 --gateway 192.168.10.2 --subnet-range 192.168.10.0/24 provider
1.2 Create the self-service network
Create the network:
openstack network create selfservice
Create a subnet on the network (the subnet range can be customized):
openstack subnet create --network selfservice --dns-nameserver 8.8.4.4 --gateway 172.16.1.1 --subnet-range 172.16.1.0/24 selfservice
Create a router:
openstack router create router
Add the self-service network subnet as an interface on the router:
openstack router add subnet router selfservice
Set a gateway on the provider network on the router:
openstack router set router --external-gateway provider
Verify operation:
List the network namespaces. You should see one qrouter namespace and two qdhcp namespaces:
[root@controller ~]# ip netns
qdhcp-94e5902a-667f-453b-a527-b490544bf4d8 (id: 2)
qrouter-6e63bb4c-1610-45fa-af2b-ca32982e3bde (id: 1)
qdhcp-2d09ee71-019d-4d81-a82a-17dcdf32d628 (id: 0)
List the ports on the router to determine the gateway IP address on the provider network:
[root@controller ~]# openstack port list --router router
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------+--------+
| ID                                   | Name | MAC Address       | Fixed IP Addresses                                                            | Status |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------+--------+
| 9c5d898e-7515-4b25-b50d-2e9fbef9ce44 |      | fa:16:3e:0f:2e:71 | ip_address='172.16.1.1', subnet_id='bac6824a-2355-4760-b471-b51b85408528'     | ACTIVE |
| c5cfcf84-2385-419a-8a25-3dfd5607b999 |      | fa:16:3e:fd:08:11 | ip_address='192.168.10.129', subnet_id='6257aead-9338-4e26-8f2d-bebf6e6827b8' | ACTIVE
Ping this IP address from the controller node or from any host on the physical provider network:
[root@controller ~]# ping -c 4 192.168.10.129
PING 192.168.10.129 (192.168.10.129) 56(84) bytes of data.
64 bytes from 192.168.10.129: icmp_seq=1 ttl=64 time=4.40 ms
64 bytes from 192.168.10.129: icmp_seq=2 ttl=64 time=2.47 ms
64 bytes from 192.168.10.129: icmp_seq=3 ttl=64 time=0.070 ms
64 bytes from 192.168.10.129: icmp_seq=4 ttl=64 time=0.066 ms
Add security group rules:
By default, the default security group applies to all instances and includes firewall rules that deny remote access to instances. For Linux images such as CirrOS, we recommend allowing at least ICMP (ping) and secure shell (SSH):
openstack security group rule create --proto icmp default
Permit secure shell (SSH) access:
openstack security group rule create --proto tcp --dst-port 22 default

Create the instances: both nodes can be scheduled normally.
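The instances in this lab were launched from the dashboard; the sketch below is a hedged CLI equivalent. The flavor name m1.nano and the instance name selfservice-instance are assumptions, and the self-service network ID must be substituted:
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
openstack network list                              # note the ID of the selfservice network
openstack server create --flavor m1.nano --image cirros \
  --nic net-id=SELFSERVICE_NET_ID --security-group default selfservice-instance
openstack server list                               # wait for the status to become ACTIVE
openstack console url show selfservice-instance     # URL for VNC login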

Remote login succeeds, and the virtual machine can also reach the Internet.



VNC login:


2. Within the same VPC, all subnets can reach each other

1. Create two different subnets on the VXLAN network


2. Create routing for each of the two subnets and add internal interfaces


The NIC IP of the created vm1 virtual machine:


Verify whether the two virtual machines can reach each other over the network:


3. Subnets on different networks cannot reach each other; static routes must be added


Test connectivity between the virtual machines:


15. Common errors and fixes

1. Unable to log in to the dashboard; the page returns a 400 error


The backend log shows:


The configuration file is wrong:

vim /etc/openstack-dashboard/local_settings


ALLOWED_HOSTS = ['*']             ## allow all hosts to access the dashboard
Restart the web server and session storage service:
systemctl restart httpd.service memcached.service

2. Dashboard login returns Not Found


Check the log:


The configuration file is wrong:

vim /etc/openstack-dashboard/local_settings


Add the following configuration item:

WEBROOT = '/dashboard/'

Restart the web server and session storage service:
systemctl restart httpd.service memcached.service

16. Host aggregates and server groups

1. Host aggregates (virtual machines in different availability zones cannot be cold- or live-migrated)
A host aggregate lets hardware be grouped logically, and users can target a group when creating instances. Typically, compute nodes with the same hardware profile are put in one group, for example nodes with high CPU clock speeds, and some metadata is then defined on the group so that a user creating an instance can choose to place it on the high-clock-speed nodes.
1. Only compute nodes can be placed in host aggregates.
2. When creating a host aggregate you must define an availability zone.
3. When a host aggregate is created, an availability_zone metadata key is added automatically; its value defaults to the aggregate's name.
4. A compute node can belong to only one availability zone; it cannot belong to two or more zones at the same time.
5. To delete a host aggregate you must first remove all compute nodes from it; otherwise the deletion fails.
6. Host aggregates allow instances to be created on designated compute nodes. Normally the scheduler decides which compute node an instance lands on, but for troubleshooting or when orchestration needs precise resource placement, a host aggregate can be used to create the instance directly on a chosen node.
Host aggregates let administrators define compute resources in a way that is transparent to end users while grouping them logically by purpose. When an aggregate's metadata matches a flavor's metadata and an instance is launched with that flavor, the scheduler places it on a compute node in that aggregate.
An availability zone is a logical partition of compute resources. Users can pick an availability zone when launching instances; if no custom zone is defined, the default zone is nova. With multiple availability zones, a user creating several instances can spread them across zones to avoid the impact of a single zone failing.


Server groups:
Server groups and host aggregates act on different objects: a host aggregate targets the physical compute nodes, while a server group targets the virtual machine instances.
Server groups manage the instances a user creates according to four policies: affinity, anti-affinity, soft affinity, and soft anti-affinity, constraining how instances are placed relative to one another on physical hosts.
1. Affinity: create the instances on the same compute node; creation fails when that node runs out of resources. For example, with 2 compute nodes, creating a batch of 3 instances in an "affinity" group places them all on one of the nodes, never the other; if resources are insufficient the request fails. Instances in an affinity group do not support migration.
2. Anti-affinity: create the instances on different compute nodes; once the number of instances in the "anti-affinity" group equals the number of compute nodes, creating another instance in that group fails.
3. Soft affinity: try to create the instances on the same compute node; if that node is judged short of resources the request does not fail and the instance lands on another node.
4. Soft anti-affinity: try to create the instances on different compute nodes; when there are not enough compute nodes, instances end up on the same node.
OpenStack's scheduling policy is implemented by FilterScheduler by default. The scheduling algorithm:
1. filters out the compute nodes that cannot satisfy the flavor's requirements;
2. computes a weight for each remaining compute node;
3. returns the compute node with the best weight.
In nova.conf on the controller node, available_filters configures the filters that may be used; by default every filter built into Nova is available. enabled_filters specifies the filters that are actually applied, and filter_scheduler applies them in the order listed in enabled_filters.
[filter_scheduler]
available_filters = nova.scheduler.filters.all_filters
enabled_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
1. RetryFilter filters out nodes on which scheduling has already been attempted and failed.
2. AvailabilityZoneFilter: to improve resilience and isolation, compute nodes can be divided into different availability zones. By default OpenStack has only one zone, named "nova", and all compute nodes are placed in it; custom zones can be created through host aggregates and selected when launching instances.
3. RamFilter filters out compute nodes that cannot satisfy the flavor's memory requirement. To raise resource utilization, OpenStack allows compute nodes to overcommit memory, i.e. to allocate more than the physical amount, controlled by ram_allocation_ratio (the initial default initial_ram_allocation_ratio is 1.5). CPU and disk overcommit can be configured the same way:
cpu_allocation_ratio = 1.0
ram_allocation_ratio = 1.0
initial_cpu_allocation_ratio = 16.0
initial_ram_allocation_ratio = 1.5
4. ComputeFilter ensures that only healthy compute nodes can be scheduled by nova-scheduler; it is a mandatory filter.
5. ComputeCapabilitiesFilter filters by the compute node's characteristics. For example, with both x86 and ARM nodes, to place an instance on an x86_64 node, use ComputeCapabilitiesFilter and add the metadata Architecture=x86_64 to the flavor.
6. ImagePropertiesFilter filters compute nodes by the selected image's properties; like flavors, images carry metadata that specifies their properties.
7. ServerGroupAffinityFilter (affinity) tries to create instances on the same compute node; used together with server groups, it requires creating an "affinity" or "soft affinity" group.
8. ServerGroupAntiAffinityFilter (anti-affinity) is the opposite of ServerGroupAffinityFilter: it spreads instances across different compute nodes; used together with server groups, it requires an "anti-affinity" or "soft anti-affinity" group.
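A hedged CLI sketch of both concepts; the names agg-highcpu, az1, the high_cpu property, and anti-group are illustrative assumptions, and m1.nano reuses the flavor from the earlier sketch:
# Host aggregate exposed as its own availability zone
openstack aggregate create --zone az1 agg-highcpu
openstack aggregate add host agg-highcpu nova                 # 'nova' is the compute host in this lab
openstack aggregate set --property high_cpu=true agg-highcpu
openstack flavor set --property high_cpu=true m1.nano         # a flavor with matching metadata schedules into the aggregate
# Server group with an anti-affinity policy
openstack server group create --policy anti-affinity anti-group
openstack server create --flavor m1.nano --image cirros \
  --nic net-id=SELFSERVICE_NET_ID \
  --hint group=$(openstack server group show anti-group -f value -c id) instance-a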
