Multi-node OpenStack Charms Deployment Guide 0.0.1.dev -- 42 -- Deploying the openstack-base-78 bundle with single-NIC OpenStack networking, plus a warning: never start a hostname with a digit
References:
OpenStack Charms Deployment Guide-001dev416
Openstack Base #78
I was recently looking at the new openstack-base-78 and, while at it, read through OpenStack Charms Deployment Guide-001dev416 and found that the network structure has changed considerably.
The Install MAAS chapter of OpenStack Charms Deployment Guide-001dev416 lists these requirements:
- 1 x MAAS system: 8GiB RAM, 2 CPUs, 1 NIC, 1 x 40GiB storage
- 1 x Juju controller node: 4GiB RAM, 2 CPUs, 1 NIC, 1 x 40GiB storage
- 4 x cloud nodes: 8GiB RAM, 2 CPUs, 1 NIC, 3 x 80GiB storage
Note: in practice, with Openstack Base #78 three cloud nodes are enough, and two disks per node suffice for storage. The Juju controller can also be placed in an LXD container on the MAAS node.
Original note: since MAAS (v2.9.2) added support for Open vSwitch bridges, the traditional requirement of two network interfaces on each of the four cloud nodes has been dropped. This bridge type supports more complex network topologies than a Linux bridge; in particular, for Charmed OpenStack, the ovn-chassis units no longer require a dedicated network device.
1 Out-of-band IPMI management mode
The topology can be varied into something close to the Reference Architecture—Canonical Charmed OpenStack (Ussuri) on Dell EMC Hardware document jointly published by Dell EMC and Canonical.
Port eno1 connects to the out-of-band (OOB) management switch and reaches the Internet through the OOB management router, carrying IPMI power management.
Port eno2 connects to the ToR switch. OpenStack uses the 10.0.0.0/20 subnet, the external network (i.e. floating IPs) uses the 10.0.9.1-10.0.9.250 range within it, and VMs running on top of OpenStack use the 172.16.0.0 range.
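Those ranges can be sanity-checked quickly. A minimal sketch (the pattern below simply hard-codes the fact that 10.0.0.0/20 spans 10.0.0.0-10.0.15.255, i.e. third octets 0..15):

```shell
# 10.0.0.0/20 covers third octets 0..15; the floating-IP range must fall
# inside it, while the VM range (172.16.0.0) must not.
in_mgmt_subnet() {
  case "$1" in
    10.0.[0-9].*|10.0.1[0-5].*) return 0 ;;
    *) return 1 ;;
  esac
}

in_mgmt_subnet 10.0.9.1   && echo "10.0.9.1: inside 10.0.0.0/20"
in_mgmt_subnet 10.0.9.250 && echo "10.0.9.250: inside 10.0.0.0/20"
in_mgmt_subnet 172.16.0.1 || echo "172.16.0.1: outside (VM range is separate)"
```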
2 In-band IPMI management mode:
What I actually use is in-band management, with all traffic going through eno1.
This involves the following configuration changes:
1 Changes on the MAAS server side:
After enlisting, renaming, and tagging the nodes, an OVS bridge needs to be created.
In the subnet configuration, the 10.0.0.0/20 subnet also needs to be moved to fabric-1.
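For reference, the OVS bridge can also be created through the MAAS CLI rather than the web UI. This is a hedged sketch, not runnable as-is: `$PROFILE`, `$SYSTEM_ID`, and `$PARENT_INTERFACE_ID` are placeholders you must look up for your own MAAS, and `bridge_type=OVS` requires MAAS 2.9 or later:

```shell
# Placeholders throughout; log in first with `maas login $PROFILE ...`.
# Find the machine's system_id and the parent NIC's interface id:
maas $PROFILE machines read
maas $PROFILE interfaces read $SYSTEM_ID

# Create an Open vSwitch bridge br-ex on top of the parent interface:
maas $PROFILE interfaces create-bridge $SYSTEM_ID name=br-ex \
    parent=$PARENT_INTERFACE_ID bridge_type=OVS
```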
2 Changes in the Openstack Base #78 configuration.
First, git clone openstack-base-78:
git clone https://github.com/openstack-charmers/openstack-bundles
Edit bundle.yaml:
vim ~/openstack-bundles/stable/openstack-base/bundle.yaml
In the variables: section, change data-port: &data-port br-ex:eno2 to use eno1.
In the ovn-chassis section, the interface must likewise be set to eno1:
bridge-interface-mappings: br-ex:eno1
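Both occurrences can be switched in one pass with sed. A sketch: the file below is a hypothetical minimal excerpt of bundle.yaml, not the full bundle, and the path is illustrative; point sed at your real checkout instead.

```shell
# Hypothetical minimal excerpt of bundle.yaml for demonstration only.
cat > /tmp/bundle-snippet.yaml <<'EOF'
variables:
  data-port: &data-port br-ex:eno2
ovn-chassis:
  options:
    bridge-interface-mappings: br-ex:eno2
EOF

# Rewrite every br-ex:eno2 mapping to use eno1 instead.
sed -i 's/br-ex:eno2/br-ex:eno1/g' /tmp/bundle-snippet.yaml

grep 'br-ex:eno1' /tmp/bundle-snippet.yaml
```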
The detailed configuration can be found in Install OpenStack.
The full file follows:
# Please refer to the OpenStack Charms Deployment Guide for more information.
# https://docs.openstack.org/project-deploy-guide/charm-deployment-guide
#
# NOTE: Please review the value for the configuration option
# `bridge-interface-mappings` for the `ovn-chassis` charm (see `data-port` variable).
# Refer to the [Open Virtual Network (OVN)](https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-ovn.html)
# section of the [OpenStack Charms Deployment Guide](https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/)
# for more information.

series: focal

variables:
  openstack-origin: &openstack-origin cloud:focal-xena
  data-port: &data-port br-ex:eno1
  worker-multiplier: &worker-multiplier 0.25
  osd-devices: &osd-devices /dev/sdb /dev/vdb
  expected-osd-count: &expected-osd-count 3
  expected-mon-count: &expected-mon-count 3

machines:
  '0':
  '1':
  '2':
relations:
- - nova-compute:amqp
  - rabbitmq-server:amqp
- - nova-cloud-controller:identity-service
  - keystone:identity-service
- - glance:identity-service
  - keystone:identity-service
- - neutron-api:identity-service
  - keystone:identity-service
- - neutron-api:amqp
  - rabbitmq-server:amqp
- - glance:amqp
  - rabbitmq-server:amqp
- - nova-cloud-controller:image-service
  - glance:image-service
- - nova-compute:image-service
  - glance:image-service
- - nova-cloud-controller:cloud-compute
  - nova-compute:cloud-compute
- - nova-cloud-controller:amqp
  - rabbitmq-server:amqp
- - openstack-dashboard:identity-service
  - keystone:identity-service
- - nova-cloud-controller:neutron-api
  - neutron-api:neutron-api
- - cinder:image-service
  - glance:image-service
- - cinder:amqp
  - rabbitmq-server:amqp
- - cinder:identity-service
  - keystone:identity-service
- - cinder:cinder-volume-service
  - nova-cloud-controller:cinder-volume-service
- - cinder-ceph:storage-backend
  - cinder:storage-backend
- - ceph-mon:client
  - nova-compute:ceph
- - nova-compute:ceph-access
  - cinder-ceph:ceph-access
- - ceph-mon:client
  - cinder-ceph:ceph
- - ceph-mon:client
  - glance:ceph
- - ceph-osd:mon
  - ceph-mon:osd
- - ntp:juju-info
  - nova-compute:juju-info
- - ceph-radosgw:mon
  - ceph-mon:radosgw
- - ceph-radosgw:identity-service
  - keystone:identity-service
- - placement:identity-service
  - keystone:identity-service
- - placement:placement
  - nova-cloud-controller:placement
- - keystone:shared-db
  - keystone-mysql-router:shared-db
- - cinder:shared-db
  - cinder-mysql-router:shared-db
- - glance:shared-db
  - glance-mysql-router:shared-db
- - nova-cloud-controller:shared-db
  - nova-mysql-router:shared-db
- - neutron-api:shared-db
  - neutron-mysql-router:shared-db
- - openstack-dashboard:shared-db
  - dashboard-mysql-router:shared-db
- - placement:shared-db
  - placement-mysql-router:shared-db
- - vault:shared-db
  - vault-mysql-router:shared-db
- - keystone-mysql-router:db-router
  - mysql-innodb-cluster:db-router
- - cinder-mysql-router:db-router
  - mysql-innodb-cluster:db-router
- - nova-mysql-router:db-router
  - mysql-innodb-cluster:db-router
- - glance-mysql-router:db-router
  - mysql-innodb-cluster:db-router
- - neutron-mysql-router:db-router
  - mysql-innodb-cluster:db-router
- - dashboard-mysql-router:db-router
  - mysql-innodb-cluster:db-router
- - placement-mysql-router:db-router
  - mysql-innodb-cluster:db-router
- - vault-mysql-router:db-router
  - mysql-innodb-cluster:db-router
- - neutron-api-plugin-ovn:neutron-plugin
  - neutron-api:neutron-plugin-api-subordinate
- - ovn-central:certificates
  - vault:certificates
- - ovn-central:ovsdb-cms
  - neutron-api-plugin-ovn:ovsdb-cms
- - neutron-api:certificates
  - vault:certificates
- - ovn-chassis:nova-compute
  - nova-compute:neutron-plugin
- - ovn-chassis:certificates
  - vault:certificates
- - ovn-chassis:ovsdb
  - ovn-central:ovsdb
- - vault:certificates
  - neutron-api-plugin-ovn:certificates
- - vault:certificates
  - cinder:certificates
- - vault:certificates
  - glance:certificates
- - vault:certificates
  - keystone:certificates
- - vault:certificates
  - nova-cloud-controller:certificates
- - vault:certificates
  - openstack-dashboard:certificates
- - vault:certificates
  - placement:certificates
- - vault:certificates
  - ceph-radosgw:certificates
- - vault:certificates
  - mysql-innodb-cluster:certificates
applications:
  ceph-mon:
    annotations:
      gui-x: '790'
      gui-y: '1540'
    charm: cs:ceph-mon-61
    num_units: 3
    options:
      expected-osd-count: *expected-osd-count
      monitor-count: *expected-mon-count
      source: *openstack-origin
    to:
    - lxd:0
    - lxd:1
    - lxd:2
  ceph-osd:
    annotations:
      gui-x: '1065'
      gui-y: '1540'
    charm: cs:ceph-osd-315
    num_units: 3
    options:
      osd-devices: *osd-devices
      source: *openstack-origin
    to:
    - '0'
    - '1'
    - '2'
  ceph-radosgw:
    annotations:
      gui-x: '850'
      gui-y: '900'
    charm: cs:ceph-radosgw-300
    num_units: 1
    options:
      source: *openstack-origin
    to:
    - lxd:0
  cinder-mysql-router:
    annotations:
      gui-x: '900'
      gui-y: '1400'
    charm: cs:mysql-router-15
  cinder:
    annotations:
      gui-x: '980'
      gui-y: '1270'
    charm: cs:cinder-317
    num_units: 1
    options:
      block-device: None
      glance-api-version: 2
      worker-multiplier: *worker-multiplier
      openstack-origin: *openstack-origin
    to:
    - lxd:1
  cinder-ceph:
    annotations:
      gui-x: '1120'
      gui-y: '1400'
    charm: cs:cinder-ceph-268
    num_units: 0
  glance-mysql-router:
    annotations:
      gui-x: '-290'
      gui-y: '1400'
    charm: cs:mysql-router-15
  glance:
    annotations:
      gui-x: '-230'
      gui-y: '1270'
    charm: cs:glance-312
    num_units: 1
    options:
      worker-multiplier: *worker-multiplier
      openstack-origin: *openstack-origin
    to:
    - lxd:2
  keystone-mysql-router:
    annotations:
      gui-x: '230'
      gui-y: '1400'
    charm: cs:mysql-router-15
  keystone:
    annotations:
      gui-x: '300'
      gui-y: '1270'
    charm: cs:keystone-329
    num_units: 1
    options:
      worker-multiplier: *worker-multiplier
      openstack-origin: *openstack-origin
    to:
    - lxd:0
  neutron-mysql-router:
    annotations:
      gui-x: '505'
      gui-y: '1385'
    charm: cs:mysql-router-15
  neutron-api-plugin-ovn:
    annotations:
      gui-x: '690'
      gui-y: '1385'
    charm: cs:neutron-api-plugin-ovn-10
  neutron-api:
    annotations:
      gui-x: '580'
      gui-y: '1270'
    charm: cs:neutron-api-302
    num_units: 1
    options:
      neutron-security-groups: true
      flat-network-providers: physnet1
      worker-multiplier: *worker-multiplier
      openstack-origin: *openstack-origin
    to:
    - lxd:1
  placement-mysql-router:
    annotations:
      gui-x: '1320'
      gui-y: '1385'
    charm: cs:mysql-router-15
  placement:
    annotations:
      gui-x: '1320'
      gui-y: '1270'
    charm: cs:placement-31
    num_units: 1
    options:
      worker-multiplier: *worker-multiplier
      openstack-origin: *openstack-origin
    to:
    - lxd:2
  nova-mysql-router:
    annotations:
      gui-x: '-30'
      gui-y: '1385'
    charm: cs:mysql-router-15
  nova-cloud-controller:
    annotations:
      gui-x: '35'
      gui-y: '1270'
    charm: cs:nova-cloud-controller-361
    num_units: 1
    options:
      network-manager: Neutron
      worker-multiplier: *worker-multiplier
      openstack-origin: *openstack-origin
    to:
    - lxd:0
  nova-compute:
    annotations:
      gui-x: '190'
      gui-y: '890'
    charm: cs:nova-compute-337
    num_units: 3
    options:
      config-flags: default_ephemeral_format=ext4
      enable-live-migration: true
      enable-resize: true
      migration-auth-type: ssh
      openstack-origin: *openstack-origin
    to:
    - '0'
    - '1'
    - '2'
  ntp:
    annotations:
      gui-x: '315'
      gui-y: '1030'
    charm: cs:ntp-47
    num_units: 0
  dashboard-mysql-router:
    annotations:
      gui-x: '510'
      gui-y: '1030'
    charm: cs:mysql-router-15
  openstack-dashboard:
    annotations:
      gui-x: '585'
      gui-y: '900'
    charm: cs:openstack-dashboard-318
    num_units: 1
    options:
      openstack-origin: *openstack-origin
    to:
    - lxd:1
  rabbitmq-server:
    annotations:
      gui-x: '300'
      gui-y: '1550'
    charm: cs:rabbitmq-server-117
    num_units: 1
    to:
    - lxd:2
  mysql-innodb-cluster:
    annotations:
      gui-x: '535'
      gui-y: '1550'
    charm: cs:mysql-innodb-cluster-15
    num_units: 3
    to:
    - lxd:0
    - lxd:1
    - lxd:2
  ovn-central:
    annotations:
      gui-x: '70'
      gui-y: '1550'
    charm: cs:ovn-central-15
    num_units: 3
    options:
      source: *openstack-origin
    to:
    - lxd:0
    - lxd:1
    - lxd:2
  ovn-chassis:
    annotations:
      gui-x: '120'
      gui-y: '1030'
    charm: cs:ovn-chassis-21
    # Please update the `bridge-interface-mappings` to values suitable for the
    # hardware used in your deployment. See the referenced documentation at the
    # top of this file.
    options:
      ovn-bridge-mappings: physnet1:br-ex
      bridge-interface-mappings: br-ex:eno1
  vault-mysql-router:
    annotations:
      gui-x: '1535'
      gui-y: '1560'
    charm: cs:mysql-router-15
  vault:
    annotations:
      gui-x: '1610'
      gui-y: '1430'
    charm: cs:vault-54
    num_units: 1
    to:
    - lxd:0
3 Then deploy Openstack Base #78:
juju deploy ~/openstack-bundles/stable/openstack-base/bundle.yaml --debug
Then ran juju status and was surprised to find nova-compute blocked:
nova-compute/0   blocked  idle  0  10.0.0.159  Services not running that should be: nova-compute, nova-api-metadata
  ntp/2          active   idle     10.0.0.159  123/udp  chrony: Ready
  ovn-chassis/2  active   idle     10.0.0.159           Unit is ready
nova-compute/1   blocked  idle  1  10.0.0.157  Services not running that should be: nova-compute, nova-api-metadata
  ntp/1          active   idle     10.0.0.157  123/udp  chrony: Ready
  ovn-chassis/1  active   idle     10.0.0.157           Unit is ready
nova-compute/2*  blocked  idle  2  10.0.0.158  Services not running that should be: nova-compute, nova-api-metadata
  ntp/0*         active   idle     10.0.0.158  123/udp  chrony: Ready
  ovn-chassis/0* active   idle     10.0.0.158           Unit is ready
Checked the logs.
Log in to nova-compute/0:
juju ssh nova-compute/0
Inspect the logs and the nova services:
systemctl status nova-compute
systemctl status nova-api-metadata
journalctl -xe
Output:
1
systemctl status nova-compute
● nova-compute.service - OpenStack Compute
     Loaded: loaded (/lib/systemd/system/nova-compute.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Tue 2021-11-09 06:13:26 UTC; 12s ago
    Process: 614284 ExecStart=/etc/init.d/nova-compute systemd-start (code=exited, status=1/FAILURE)
   Main PID: 614284 (code=exited, status=1/FAILURE)

Nov 09 06:13:26 158 systemd[1]: nova-compute.service: Scheduled restart job, restart counter is at 5.
Nov 09 06:13:26 158 systemd[1]: Stopped OpenStack Compute.
Nov 09 06:13:26 158 systemd[1]: nova-compute.service: Start request repeated too quickly.
Nov 09 06:13:26 158 systemd[1]: nova-compute.service: Failed with result 'exit-code'.
Nov 09 06:13:26 158 systemd[1]: Failed to start OpenStack Compute.

2
systemctl status nova-api-metadata
● nova-api-metadata.service - OpenStack Compute metadata API
     Loaded: loaded (/lib/systemd/system/nova-api-metadata.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Tue 2021-11-09 06:11:43 UTC; 2min 11s ago
       Docs: man:nova-api-metadata(1)
    Process: 612584 ExecStart=/etc/init.d/nova-api-metadata systemd-start (code=exited, status=1/FAILURE)
   Main PID: 612584 (code=exited, status=1/FAILURE)

Nov 09 06:11:43 158 systemd[1]: nova-api-metadata.service: Scheduled restart job, restart counter is at 5.
Nov 09 06:11:43 158 systemd[1]: Stopped OpenStack Compute metadata API.
Nov 09 06:11:43 158 systemd[1]: nova-api-metadata.service: Start request repeated too quickly.
Nov 09 06:11:43 158 systemd[1]: nova-api-metadata.service: Failed with result 'exit-code'.
Nov 09 06:11:43 158 systemd[1]: Failed to start OpenStack Compute metadata API.

3
less /var/log/nova/nova-compute.log
/var/log/nova/nova-compute.log: No such file or directory
less /var/log/nova/nova-api-metadata.log
/var/log/nova/nova-api-metadata.log: No such file or directory

4
journalctl -xe
--
-- The job identifier is 148372.
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: Traceback (most recent call last):
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: File "/usr/lib/python3/dist-packages/oslo_config/types.py", line 843, in __call__
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: value = self.ip_address(value)
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: File "/usr/lib/python3/dist-packages/oslo_config/types.py", line 728, in __call__
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: self.version_checker(value)
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: File "/usr/lib/python3/dist-packages/oslo_config/types.py", line 748, in _check_both_versions
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: raise ValueError("%s is not IPv4 or IPv6 address" % address)
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: ValueError: 158 is not IPv4 or IPv6 address
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: During handling of the above exception, another exception occurred:
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: Traceback (most recent call last):
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: File "/usr/lib/python3/dist-packages/oslo_config/types.py", line 846, in __call__
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: value = self.hostname(value)
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: File "/usr/lib/python3/dist-packages/oslo_config/types.py", line 795, in __call__
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: raise ValueError('%s contains no non-numeric characters in the '
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: ValueError: 158 contains no non-numeric characters in the top-level domain part of the host name and is invalid
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: During handling of the above exception, another exception occurred:
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: Traceback (most recent call last):
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 611, in _check_default
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: self.type(self.default)
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: File "/usr/lib/python3/dist-packages/oslo_config/types.py", line 848, in __call__
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: raise ValueError(
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: ValueError: 158 is not a valid host address
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: During handling of the above exception, another exception occurred:
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: Traceback (most recent call last):
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: File "/usr/bin/neutron-ovn-metadata-agent", line 6, in <module>
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: from neutron.cmd.eventlet.agents.ovn_metadata import main
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: File "/usr/lib/python3/dist-packages/neutron/cmd/eventlet/agents/ovn_metadata.py", line 15, in <module>
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: from neutron.agent.ovn import metadata_agent
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: File "/usr/lib/python3/dist-packages/neutron/agent/ovn/metadata_agent.py", line 16, in <module>
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: from neutron.common import config
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: File "/usr/lib/python3/dist-packages/neutron/common/config.py", line 33, in <module>
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: from neutron.conf import common as common_config
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: File "/usr/lib/python3/dist-packages/neutron/conf/common.py", line 94, in <module>
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: cfg.HostAddressOpt('host', default=net.get_hostname(),
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 1173, in __init__
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: super(HostAddressOpt, self).__init__(name,
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 595, in __init__
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: self._check_default()
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 613, in _check_default
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: raise DefaultValueError("Error processing default value "
Nov 09 06:22:47 158 neutron-ovn-metadata-agent[631648]: oslo_config.cfg.DefaultValueError: Error processing default value 158 for Opt type of HostAddress.
root@158:~#
root@158:~# journalctl -xe
Nov 09 06:23:27 158 neutron-ovn-metadata-agent[632922]: super(HostAddressOpt, self).__init__(name,
Nov 09 06:23:27 158 neutron-ovn-metadata-agent[632922]: File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 595, in __init__
Nov 09 06:23:27 158 neutron-ovn-metadata-agent[632922]: self._check_default()
Nov 09 06:23:27 158 neutron-ovn-metadata-agent[632922]: File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 613, in _check_default
Nov 09 06:23:27 158 neutron-ovn-metadata-agent[632922]: raise DefaultValueError("Error processing default value "
Nov 09 06:23:27 158 neutron-ovn-metadata-agent[632922]: oslo_config.cfg.DefaultValueError: Error processing default value 158 for Opt type of HostAddress.
Nov 09 06:23:27 158 systemd[1]: neutron-ovn-metadata-agent.service: Main process exited, code=exited, status=1/FAILURE
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- An ExecStart= process belonging to unit neutron-ovn-metadata-agent.service has exited.
--
-- The process' exit code is 'exited' and its exit status is 1.
Nov 09 06:23:27 158 systemd[1]: neutron-ovn-metadata-agent.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit neutron-ovn-metadata-agent.service has entered the 'failed' state with result 'exit-code'.
Nov 09 06:23:27 158 systemd[1]: neutron-ovn-metadata-agent.service: Scheduled restart job, restart counter is at 1845.
-- Subject: Automatic restarting of a unit has been scheduled
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Automatic restarting of the unit neutron-ovn-metadata-agent.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
Nov 09 06:23:27 158 systemd[1]: Stopped Neutron OVN Metadata Agent.
-- Subject: A stop job for unit neutron-ovn-metadata-agent.service has finished
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A stop job for unit neutron-ovn-metadata-agent.service has finished.
--
-- The job identifier is 149316 and the job result is done.
Nov 09 06:23:27 158 systemd[1]: Started Neutron OVN Metadata Agent.
-- Subject: A start job for unit neutron-ovn-metadata-agent.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit neutron-ovn-metadata-agent.service has finished successfully.
--
-- The job identifier is 149316.
The forum answer was:
It looks like the hostnames of the metal machines you deployed the bundle to were named 157, 158, and 159. The OVN metadata agent log shows that it does not handle this gracefully: it expects an alphabetic character in the leading position if the value is a hostname, and a full dotted-quad address if it is an IPv4 address. This post explains that although RFC 1123 relaxed RFC 952's constraint that a hostname must begin with an alphabetic character, implementation problems led the RFC 1178 guidelines on choosing a valid hostname to advise:
Don't use digits at the beginning of the name. Many programs accept a numerical Internet address as well as a name. Unfortunately, some programs do not correctly distinguish between the two and may be fooled by, for example, a string beginning with a decimal digit.
Names consisting entirely of hexadecimal digits, such as "beef", are also problematic, since they can be interpreted entirely as hexadecimal numbers as well as alphabetic strings.
I suggest renaming the hosts to start with a character between A and Z, conforming to hostname RFC 952, and then redeploying the bundle.
So the root cause was my own laziness: in MAAS I had renamed the hosts to bare numbers...
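The rule is easy to check mechanically before naming nodes. A minimal sketch (an RFC 952-style check with grep, not oslo_config's exact implementation): a hostname label must start with a letter, contain only letters, digits, and hyphens, and not end with a hyphen.

```shell
# RFC 952-style label check (sketch; not oslo_config's exact logic):
# leading letter, then letters/digits/hyphens, no trailing hyphen.
is_valid_label() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z]([A-Za-z0-9-]*[A-Za-z0-9])?$'
}

is_valid_label node158 && echo "node158: ok"
is_valid_label 158     || echo "158: rejected (all-numeric, which is what broke this deploy)"
```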
After renaming the nodes in MAAS to a format like node156, redeploying, and unsealing vault, juju status shows:
Model      Controller       Cloud/Region    Version  SLA          Timestamp
openstack  maas-controller  mymaas/default  2.9.18   unsupported  09:57:02+08:00

App                     Version  Status  Scale  Charm                   Store       Channel  Rev  OS      Message
ceph-mon 16.2.6 active 3 ceph-mon charmstore stable 61 ubuntu Unit is ready and clustered
ceph-osd 16.2.6 active 3 ceph-osd charmstore stable 315 ubuntu Unit is ready (1 OSD)
ceph-radosgw 16.2.6 active 1 ceph-radosgw charmstore stable 300 ubuntu Unit is ready
cinder 19.0.0 active 1 cinder charmstore stable 317 ubuntu Unit is ready
cinder-ceph 19.0.0 active 1 cinder-ceph charmstore stable 268 ubuntu Unit is ready
cinder-mysql-router 8.0.27 active 1 mysql-router charmstore stable 15 ubuntu Unit is ready
dashboard-mysql-router 8.0.27 active 1 mysql-router charmstore stable 15 ubuntu Unit is ready
glance 23.0.0 active 1 glance charmstore stable 312 ubuntu Unit is ready
glance-mysql-router 8.0.27 active 1 mysql-router charmstore stable 15 ubuntu Unit is ready
keystone 20.0.0 active 1 keystone charmstore stable 329 ubuntu Application Ready
keystone-mysql-router 8.0.27 active 1 mysql-router charmstore stable 15 ubuntu Unit is ready
mysql-innodb-cluster 8.0.27 active 3 mysql-innodb-cluster charmstore stable 15 ubuntu Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
neutron-api 19.0.0 active 1 neutron-api charmstore stable 302 ubuntu Unit is ready
neutron-api-plugin-ovn 19.0.0 active 1 neutron-api-plugin-ovn charmstore stable 10 ubuntu Unit is ready
neutron-mysql-router 8.0.27 active 1 mysql-router charmstore stable 15 ubuntu Unit is ready
nova-cloud-controller 24.0.0 active 1 nova-cloud-controller charmstore stable 361 ubuntu Unit is ready
nova-compute 24.0.0 active 3 nova-compute charmstore stable 337 ubuntu Unit is ready
nova-mysql-router 8.0.27 active 1 mysql-router charmstore stable 15 ubuntu Unit is ready
ntp 3.5 active 3 ntp charmstore stable 47 ubuntu chrony: Ready
openstack-dashboard 20.1.0 active 1 openstack-dashboard charmstore stable 318 ubuntu Unit is ready
ovn-central 21.09.0 active 3 ovn-central charmstore stable 15 ubuntu Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
ovn-chassis 21.09.0 active 3 ovn-chassis charmstore stable 21 ubuntu Unit is ready
placement 6.0.0 active 1 placement charmstore stable 31 ubuntu Unit is ready
placement-mysql-router 8.0.27 active 1 mysql-router charmstore stable 15 ubuntu Unit is ready
rabbitmq-server 3.8.2 active 1 rabbitmq-server charmstore stable 117 ubuntu Unit is ready
vault 1.5.9 active 1 vault charmstore stable 54 ubuntu Unit is ready (active: true, mlock: disabled)
vault-mysql-router      8.0.27   active      1  mysql-router            charmstore  stable    15  ubuntu  Unit is ready

Unit                        Workload  Agent  Machine  Public address  Ports              Message
ceph-mon/0* active idle 0/lxd/0 10.0.9.255 Unit is ready and clustered
ceph-mon/1 active idle 1/lxd/0 10.0.0.254 Unit is ready and clustered
ceph-mon/2 active idle 2/lxd/0 10.0.10.9 Unit is ready and clustered
ceph-osd/0 active idle 0 10.0.0.157 Unit is ready (1 OSD)
ceph-osd/1* active idle 1 10.0.0.158 Unit is ready (1 OSD)
ceph-osd/2 active idle 2 10.0.0.159 Unit is ready (1 OSD)
ceph-radosgw/0* active idle 0/lxd/1 10.0.9.254 80/tcp Unit is ready
cinder/0*                   active    idle   1/lxd/1  10.0.1.0        8776/tcp           Unit is ready
  cinder-ceph/0*            active    idle            10.0.1.0                           Unit is ready
  cinder-mysql-router/0*    active    idle            10.0.1.0                           Unit is ready
glance/0*                   active    idle   2/lxd/1  10.0.10.8       9292/tcp           Unit is ready
  glance-mysql-router/0*    active    idle            10.0.10.8                          Unit is ready
keystone/0*                 active    idle   0/lxd/2  10.0.10.0       5000/tcp           Unit is ready
  keystone-mysql-router/0*  active    idle            10.0.10.0                          Unit is ready
mysql-innodb-cluster/0* active idle 0/lxd/3 10.0.10.3 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/1 active idle 1/lxd/2 10.0.9.252 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/2 active idle 2/lxd/2 10.0.10.6 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
neutron-api/0*               active   idle   1/lxd/3  10.0.0.253      9696/tcp           Unit is ready
  neutron-api-plugin-ovn/0*  active   idle            10.0.0.253                         Unit is ready
  neutron-mysql-router/0*    active   idle            10.0.0.253                         Unit is ready
nova-cloud-controller/0*     active   idle   0/lxd/4  10.0.10.2       8774/tcp,8775/tcp  Unit is ready
  nova-mysql-router/0*       active   idle            10.0.10.2                          Unit is ready
nova-compute/0               active   idle   0        10.0.0.157                         Unit is ready
  ntp/1                      active   idle            10.0.0.157      123/udp            chrony: Ready
  ovn-chassis/1              active   idle            10.0.0.157                         Unit is ready
nova-compute/1*              active   idle   1        10.0.0.158                         Unit is ready
  ntp/0*                     active   idle            10.0.0.158      123/udp            chrony: Ready
  ovn-chassis/0*             active   idle            10.0.0.158                         Unit is ready
nova-compute/2               active   idle   2        10.0.0.159                         Unit is ready
  ntp/2                      active   idle            10.0.0.159      123/udp            chrony: Ready
  ovn-chassis/2              active   idle            10.0.0.159                         Unit is ready
openstack-dashboard/0*       active   idle   1/lxd/4  10.0.9.251      80/tcp,443/tcp     Unit is ready
  dashboard-mysql-router/0*  active   idle            10.0.9.251                         Unit is ready
ovn-central/0* active idle 0/lxd/5 10.0.9.253 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
ovn-central/1 active idle 1/lxd/5 10.0.0.255 6641/tcp,6642/tcp Unit is ready
ovn-central/2 active idle 2/lxd/3 10.0.10.4 6641/tcp,6642/tcp Unit is ready
placement/0*                 active   idle   2/lxd/4  10.0.10.5       8778/tcp           Unit is ready
  placement-mysql-router/0*  active   idle            10.0.10.5                          Unit is ready
rabbitmq-server/0* active idle 2/lxd/5 10.0.10.7 5672/tcp Unit is ready
vault/0*                     active   idle   0/lxd/6  10.0.10.1       8200/tcp           Unit is ready (active: true, mlock: disabled)
  vault-mysql-router/0*      active   idle            10.0.10.1                          Unit is ready

Machine  State    DNS         Inst id              Series  AZ       Message
0 started 10.0.0.157 node157 focal default Deployed
0/lxd/0 started 10.0.9.255 juju-7569b7-0-lxd-0 focal default Container started
0/lxd/1 started 10.0.9.254 juju-7569b7-0-lxd-1 focal default Container started
0/lxd/2 started 10.0.10.0 juju-7569b7-0-lxd-2 focal default Container started
0/lxd/3 started 10.0.10.3 juju-7569b7-0-lxd-3 focal default Container started
0/lxd/4 started 10.0.10.2 juju-7569b7-0-lxd-4 focal default Container started
0/lxd/5 started 10.0.9.253 juju-7569b7-0-lxd-5 focal default Container started
0/lxd/6 started 10.0.10.1 juju-7569b7-0-lxd-6 focal default Container started
1 started 10.0.0.158 node158 focal default Deployed
1/lxd/0 started 10.0.0.254 juju-7569b7-1-lxd-0 focal default Container started
1/lxd/1 started 10.0.1.0 juju-7569b7-1-lxd-1 focal default Container started
1/lxd/2 started 10.0.9.252 juju-7569b7-1-lxd-2 focal default Container started
1/lxd/3 started 10.0.0.253 juju-7569b7-1-lxd-3 focal default Container started
1/lxd/4 started 10.0.9.251 juju-7569b7-1-lxd-4 focal default Container started
1/lxd/5 started 10.0.0.255 juju-7569b7-1-lxd-5 focal default Container started
2 started 10.0.0.159 node159 focal default Deployed
2/lxd/0 started 10.0.10.9 juju-7569b7-2-lxd-0 focal default Container started
2/lxd/1 started 10.0.10.8 juju-7569b7-2-lxd-1 focal default Container started
2/lxd/2 started 10.0.10.6 juju-7569b7-2-lxd-2 focal default Container started
2/lxd/3 started 10.0.10.4 juju-7569b7-2-lxd-3 focal default Container started
2/lxd/4 started 10.0.10.5 juju-7569b7-2-lxd-4 focal default Container started
2/lxd/5 started 10.0.10.7 juju-7569b7-2-lxd-5 focal default Container started