Part 1: Manual Deployment of OpenStack Rocky, Two Nodes (1) - Basic Services
Previous: Manual Deployment of OpenStack Rocky, Two Nodes (4) - Nova
Next: Manual Deployment of OpenStack Rocky, Two Nodes (6) - Horizon

Table of Contents

  • References
  • About the hostname change
    • controller / tony-controller network configuration
    • compute1 / tony-compute1 network configuration
  • neutron (controller)
    • Create the neutron account and its authentication information
    • Install packages
    • Edit the neutron configuration file
    • Edit the ml2 configuration file
    • Edit the openvswitch configuration file
    • Edit the l3_agent configuration file
    • Edit the dhcp agent configuration file
    • Edit the metadata agent configuration file
    • Edit the nova configuration file
    • Create the plugin symlink
    • Open vSwitch
    • Add bridge and port
    • Create the neutron database
    • Initialize the neutron database
    • Start the neutron services
    • Check the network configuration after the services start
  • neutron (compute1)
    • Install packages
    • Edit the neutron-related configuration files
    • Open vSwitch
    • Restart the nova-compute service
    • Enable and start the openvswitch agent service
    • Check the network configuration on compute1
    • Verify the openvswitch agent is working
  • Conclusion

References

Manual Deployment of OpenStack Rocky, Two Nodes

About the hostname change

Due to a network conflict in the lab, the machines were renamed as follows.

Old hostname    New hostname       /etc/hosts entry
controller      tony-controller    172.18.22.231 controller
compute         tony-compute1      172.18.22.232 compute1
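To confirm that both short names still resolve on a node, the entries can be written and checked as below. This is a sketch against a scratch file; on the real machines the entries live in /etc/hosts, and resolution is verified with `getent hosts controller compute1`.

```shell
# Sketch: the /etc/hosts entries from the table above, written to a scratch
# file (on a real node, append them to /etc/hosts instead).
cat > /tmp/hosts.sample <<'EOF'
172.18.22.231 controller
172.18.22.232 compute1
EOF
# On a live node: getent hosts controller compute1
grep -c '^172\.18\.22\.' /tmp/hosts.sample
```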

controller / tony-controller network configuration

[tony@tony-controller ~]$ sudo ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
    inet 172.18.22.231/24 brd 172.18.22.255 scope global noprefixroute enp0s3
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
    inet 10.0.0.1/24 brd 10.0.0.255 scope global noprefixroute enp0s8
4: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
    inet 10.238.156.138/23 brd 10.238.157.255 scope global noprefixroute dynamic enp0s9

compute1 / tony-compute1 network configuration

[tony@tony-compute1 ~]$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
    inet 172.18.22.232/24 brd 172.18.22.255 scope global noprefixroute enp0s3
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
    inet 10.0.0.2/24 brd 10.0.0.255 scope global noprefixroute enp0s8
4: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
    inet 10.238.157.84/23 brd 10.238.157.255 scope global noprefixroute dynamic enp0s9

neutron (controller)

Create the neutron account and its authentication information

# Oops, forgot to load the admin credentials
[tony@tony-controller ~]$ openstack service create --name neutron --description "OpenStack Networking" network
Missing value auth-url required for auth plugin password
# Load the admin credentials
[tony@tony-controller ~]$ source adminrc
# Create the network service, named neutron
[tony@tony-controller ~]$ openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | 603f059a82a34c9f84ccf6aa40619e7e |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+
# Create the neutron user
[tony@tony-controller ~]$ openstack user create --domain default --password-prompt neutron
User Password: <Enter password>
Repeat User Password: <Repeat password>
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 5cc6dddbf4cf49cb8cbe07dd45a2ba37 |
| name                | neutron                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[tony@tony-controller ~]$
# Add the neutron user to the admin role in the service project
[tony@tony-controller ~]$ openstack role add --project service --user neutron admin
# Create the public endpoint
[tony@tony-controller ~]$ openstack endpoint create --region RegionOne network public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 7c400b1cd37646bfb8c6105e61b0fecc |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 603f059a82a34c9f84ccf6aa40619e7e |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
# Create the internal endpoint
[tony@tony-controller ~]$ openstack endpoint create --region RegionOne network internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | de6d2c721df7458b89abe0a4b47d8b9c |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 603f059a82a34c9f84ccf6aa40619e7e |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
# Create the admin endpoint
[tony@tony-controller ~]$ openstack endpoint create --region RegionOne network admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 5683397959764e4ba97854f3a2996c54 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 603f059a82a34c9f84ccf6aa40619e7e |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
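All three endpoints should point at port 9696. One way to sanity-check them is to feed `openstack endpoint list --service network -f value -c Interface -c URL` through a small filter; the sketch below is fed sample lines, since the real command assumes a live controller.

```shell
# Count endpoints whose URL is exactly http://controller:9696.
check_endpoints() {
  local ok=0
  while read -r iface url; do
    [ "$url" = "http://controller:9696" ] && ok=$((ok + 1))
  done
  echo "$ok endpoints OK"
}
# Sample input mirroring the three endpoints created above; on a live system:
#   openstack endpoint list --service network -f value -c Interface -c URL | check_endpoints
check_endpoints <<'EOF'
public http://controller:9696
internal http://controller:9696
admin http://controller:9696
EOF
# prints: 3 endpoints OK
```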

Install packages

[tony@tony-controller ~]$ sudo yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

Edit the neutron configuration file

neutron configuration file: /etc/neutron/neutron.conf

  • Original configuration file

    All configuration options are commented out.

[tony@tony-controller ~]$ sudo cat /etc/neutron/neutron.conf  | grep -v -E '^$|^#'
[DEFAULT]
[agent]
[cors]
[database]
[keystone_authtoken]
[matchmaker_redis]
[nova]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]
  • Modified configuration file
[tony@tony-controller ~]$ sudo cat /etc/neutron/neutron.conf | grep -v -E '^#|^$'
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:Netvista123@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[agent]
[cors]
[database]
connection = mysql+pymysql://neutron:Netvista123@controller/neutron
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = Netvista123
[matchmaker_redis]
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = Netvista123
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]
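Instead of editing the file by hand, the same options can be applied from a script. On RDO systems the usual tool for this is `openstack-config --set` (from crudini); the helper below only sketches the idea with plain sed, demonstrated on a scratch copy rather than the real /etc/neutron/neutron.conf.

```shell
# ini_set FILE SECTION KEY VALUE - insert "KEY = VALUE" right after [SECTION].
# A deliberately naive sketch: unlike crudini, it does not update existing keys.
ini_set() {
  sed -i "/^\[$2\]/a $3 = $4" "$1"
}

# Scratch skeleton with empty sections, as shipped by the package.
printf '[DEFAULT]\n[database]\n' > /tmp/neutron.conf
ini_set /tmp/neutron.conf DEFAULT core_plugin ml2
ini_set /tmp/neutron.conf DEFAULT service_plugins router
ini_set /tmp/neutron.conf database connection "mysql+pymysql://neutron:Netvista123@controller/neutron"
cat /tmp/neutron.conf
```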

Edit the ml2 configuration file

ml2 configuration file: /etc/neutron/plugins/ml2/ml2_conf.ini

  • Original configuration file

    All configuration options are commented out.

[tony@tony-controller ~]$ sudo cat /etc/neutron/plugins/ml2/ml2_conf.ini | grep -v -E '^#|^$'
[DEFAULT]
[l2pop]
[ml2]
[ml2_type_flat]
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
[securitygroup]
  • Modified configuration file
[tony@tony-controller ~]$ sudo cat /etc/neutron/plugins/ml2/ml2_conf.ini | grep -v -E '^#|^$'
[DEFAULT]
[l2pop]
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
extension_drivers = port_security
mechanism_drivers = openvswitch,l2population
[ml2_type_flat]
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true

Edit the openvswitch configuration file

openvswitch configuration file: /etc/neutron/plugins/ml2/openvswitch_agent.ini

  • Original configuration file

    All configuration options are commented out.

[tony@tony-controller ~]$ sudo cat /etc/neutron/plugins/ml2/openvswitch_agent.ini | grep -v -E '^#|^$'
[DEFAULT]
[agent]
[network_log]
[ovs]
[securitygroup]
[xenapi]
  • 修改后的配置文件[tony@tony-controller ~]$ sudo cat /etc/neutron/plugins/ml2/openvswitch_agent.ini | grep -v -E ‘#|$’
[DEFAULT]
[agent]
tunnel_types = vxlan
l2_population = True
[network_log]
[ovs]
bridge_mappings = provider:br-provider
local_ip = 10.0.0.1
[securitygroup]
firewall_driver = iptables_hybrid
[xenapi]

Edit the l3_agent configuration file

Configuration file: /etc/neutron/l3_agent.ini

  • Original configuration file

    All configuration options are commented out.

[tony@tony-controller ~]$ sudo cat /etc/neutron/l3_agent.ini | grep -v -E '^#|^$'
[DEFAULT]
[agent]
[ovs]
  • Modified configuration file
# The external_network_bridge option is intentionally set to an empty value.
[tony@tony-controller ~]$ sudo cat /etc/neutron/l3_agent.ini | grep -v -E '^#|^$'
[DEFAULT]
interface_driver = openvswitch
external_network_bridge =
[agent]
[ovs]

Edit the dhcp agent configuration file

Configuration file: /etc/neutron/dhcp_agent.ini

  • Original configuration file
[tony@tony-controller ~]$ sudo cat /etc/neutron/dhcp_agent.ini | grep -v -E '^#|^$'
[DEFAULT]
[agent]
[ovs]
  • Modified configuration file
[tony@tony-controller ~]$ sudo cat /etc/neutron/dhcp_agent.ini | grep -v -E '^#|^$'
[DEFAULT]
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
[agent]
[ovs]

Edit the metadata agent configuration file

Configuration file: /etc/neutron/metadata_agent.ini

  • Original configuration file

All configuration options are commented out.

[tony@tony-controller ~]$ sudo cat /etc/neutron/metadata_agent.ini | grep -v -E '^#|^$'
[DEFAULT]
[agent]
[cache]
  • Modified configuration file
[tony@tony-controller ~]$ sudo cat /etc/neutron/metadata_agent.ini | grep -v -E '^#|^$'
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = Netvista123
[agent]
[cache]
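metadata_proxy_shared_secret here must match the value later placed in nova.conf's [neutron] section byte for byte, or instance metadata requests are rejected. The checker below is a sketch shown against scratch copies; on the node, point secret_of at the real /etc/neutron/metadata_agent.ini and /etc/nova/nova.conf.

```shell
# Extract the shared secret from an ini-style file.
secret_of() { awk -F' *= *' '$1 == "metadata_proxy_shared_secret" {print $2}' "$1"; }

# Scratch copies standing in for the two real configuration files.
printf '[DEFAULT]\nmetadata_proxy_shared_secret = Netvista123\n' > /tmp/metadata_agent.ini
printf '[neutron]\nmetadata_proxy_shared_secret = Netvista123\n' > /tmp/nova.conf

[ "$(secret_of /tmp/metadata_agent.ini)" = "$(secret_of /tmp/nova.conf)" ] \
  && echo "secrets match" || echo "SECRET MISMATCH"
# prints: secrets match
```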

Edit the nova configuration file

Configuration file: /etc/nova/nova.conf

  • Original configuration file
[tony@tony-controller ~]$ sudo cat /etc/nova/nova.conf | grep -v -E '^#|^$'
[DEFAULT]
my_ip = 172.18.22.231
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:Netvista123@controller
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver
instances_path = /var/lib/nova/instances
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:Netvista123@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
connection = mysql+pymysql://nova:Netvista123@controller/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = Netvista123
[libvirt]
virt_type = qemu
[matchmaker_redis]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
user_domain_name = Default
auth_type = password
auth_url = http://controller:5000/v3
username = placement
password = Netvista123
[placement_database]
connection = mysql+pymysql://placement:Netvista123@controller/placement
[powervm]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
  • Modified configuration file
    Enable the [neutron] section by adding the configuration options shown below.
[tony@tony-controller ~]$ sudo cat /etc/nova/nova.conf | grep -v -E '^#|^$'

[DEFAULT]
my_ip = 172.18.22.231
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:Netvista123@controller
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver
instances_path = /var/lib/nova/instances
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:Netvista123@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
connection = mysql+pymysql://nova:Netvista123@controller/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = Netvista123
[libvirt]
virt_type = qemu
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = Netvista123
service_metadata_proxy = true
metadata_proxy_shared_secret = Netvista123
[notifications]
[osapi_v21]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
user_domain_name = Default
auth_type = password
auth_url = http://controller:5000/v3
username = placement
password = Netvista123
[placement_database]
connection = mysql+pymysql://placement:Netvista123@controller/placement
[powervm]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]

Create the plugin symlink

# No symlink yet
[tony@tony-controller ~]$ ls -l /etc/neutron/
total 128
drwxr-xr-x. 11 root root      260 Apr 13 11:56 conf.d
-rw-r-----.  1 root neutron 10867 Apr 13 13:20 dhcp_agent.ini
-rw-r-----.  1 root neutron 14206 Apr 13 13:16 l3_agent.ini
-rw-r-----.  1 root neutron 11389 Apr 13 13:24 metadata_agent.ini
-rw-r-----.  1 root neutron 72079 Apr 13 12:58 neutron.conf
drwxr-xr-x.  3 root root       17 Apr 13 11:56 plugins
-rw-r-----.  1 root neutron 12153 Nov  6 14:12 policy.json
-rw-r--r--.  1 root root     1195 Nov  6 14:12 rootwrap.conf
# Create a symlink named plugin.ini pointing to ml2_conf.ini
[tony@tony-controller ~]$ sudo ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
# Symlink created successfully
[tony@tony-controller ~]$ ls -l /etc/neutron/
total 128
drwxr-xr-x. 11 root root      260 Apr 13 11:56 conf.d
-rw-r-----.  1 root neutron 10867 Apr 13 13:20 dhcp_agent.ini
-rw-r-----.  1 root neutron 14206 Apr 13 13:16 l3_agent.ini
-rw-r-----.  1 root neutron 11389 Apr 13 13:24 metadata_agent.ini
-rw-r-----.  1 root neutron 72079 Apr 13 12:58 neutron.conf
lrwxrwxrwx.  1 root root       37 Apr 13 13:33 plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini
drwxr-xr-x.  3 root root       17 Apr 13 11:56 plugins
-rw-r-----.  1 root neutron 12153 Nov  6 14:12 policy.json
-rw-r--r--.  1 root root     1195 Nov  6 14:12 rootwrap.conf

Open vSwitch

# Enable the openvswitch service
[tony@tony-controller ~]$ sudo systemctl enable openvswitch
Created symlink from /etc/systemd/system/multi-user.target.wants/openvswitch.service to /usr/lib/systemd/system/openvswitch.service.
# Start the service
[tony@tony-controller ~]$ sudo systemctl start openvswitch
# Check the service status
[tony@tony-controller ~]$ sudo systemctl status openvswitch
● openvswitch.service - Open vSwitch
   Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; enabled; vendor preset: disabled)
   Active: active (exited) since Sat 2019-04-13 13:36:05 CST; 6s ago
  Process: 782 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 782 (code=exited, status=0/SUCCESS)

Apr 13 13:36:05 tony-controller systemd[1]: Starting Open vSwitch...
Apr 13 13:36:05 tony-controller systemd[1]: Started Open vSwitch.

Add bridge and port

# Check the network configuration before adding the bridge
[tony@tony-controller ~]$ sudo ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:51:a8:c5 brd ff:ff:ff:ff:ff:ff
    inet 172.18.22.231/24 brd 172.18.22.255 scope global noprefixroute enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe51:a8c5/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:2a:dd:4b brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global noprefixroute enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe2a:dd4b/64 scope link
       valid_lft forever preferred_lft forever
4: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:70:f2:73 brd ff:ff:ff:ff:ff:ff
    inet 10.238.156.138/23 brd 10.238.157.255 scope global noprefixroute dynamic enp0s9
       valid_lft 31972sec preferred_lft 31972sec
    inet6 fe80::a00:27ff:fe70:f273/64 scope link
       valid_lft forever preferred_lft forever
# Add a bridge named br-provider
[tony@tony-controller ~]$ sudo ovs-vsctl add-br br-provider
# Check the network configuration again; items 5 and 6 below are ovs-system and br-provider
[tony@tony-controller ~]$ sudo ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:51:a8:c5 brd ff:ff:ff:ff:ff:ff
    inet 172.18.22.231/24 brd 172.18.22.255 scope global noprefixroute enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe51:a8c5/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:2a:dd:4b brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global noprefixroute enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe2a:dd4b/64 scope link
       valid_lft forever preferred_lft forever
4: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:70:f2:73 brd ff:ff:ff:ff:ff:ff
    inet 10.238.156.138/23 brd 10.238.157.255 scope global noprefixroute dynamic enp0s9
       valid_lft 31722sec preferred_lft 31722sec
    inet6 fe80::a00:27ff:fe70:f273/64 scope link
       valid_lft forever preferred_lft forever
5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether aa:d2:a5:c5:27:9d brd ff:ff:ff:ff:ff:ff
6: br-provider: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0a:1e:81:c9:b6:45 brd ff:ff:ff:ff:ff:ff
# Add enp0s9 as a port on br-provider
[tony@tony-controller ~]$ sudo ovs-vsctl add-port br-provider enp0s9
# Note: after this command, enp0s9 no longer answers pings and the ssh connection drops.
# Fix: log in on the machine's console and run
# $ sudo dhclient br-provider
# so that br-provider obtains an address via DHCP.
# Then ssh back in through br-provider's IP address and continue with the remaining steps.
# Show the bridge configuration
[tony@tony-controller ~]$ sudo ovs-vsctl show
c3928675-2e0a-42c1-83f7-6d8bee8fee1d
    Bridge br-provider
        Port br-provider
            Interface br-provider
                type: internal
        Port "enp0s9"
            Interface "enp0s9"
    ovs_version: "2.10.1"
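The dhclient workaround above does not survive a reboot. One way to make the address on br-provider persistent is via the Open vSwitch extensions to the CentOS 7 network scripts; the file names and keys below (TYPE=OVSPort / TYPE=OVSBridge, DEVICETYPE=ovs) come from those extensions, but treat this as a sketch to adapt, not the exact configuration used in this deployment.

```shell
# Sketch: persistent ifcfg files, written to a scratch directory here.
# On the real controller they belong in /etc/sysconfig/network-scripts/.
mkdir -p /tmp/net-scripts
cat > /tmp/net-scripts/ifcfg-enp0s9 <<'EOF'
DEVICE=enp0s9
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-provider
ONBOOT=yes
EOF
cat > /tmp/net-scripts/ifcfg-br-provider <<'EOF'
DEVICE=br-provider
TYPE=OVSBridge
DEVICETYPE=ovs
BOOTPROTO=dhcp
ONBOOT=yes
EOF
```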

Create the neutron database


# Log in to the mysql database
[tony@tony-controller ~]$ sudo mysql -u root -p
Enter password: <Enter password>
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 469
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> create database neutron;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> grant all privileges on neutron.* to 'neutron'@'localhost' identified by 'Netvista123' ;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> grant all privileges on neutron.* to 'neutron'@'%' identified by 'Netvista123' ;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| glance             |
| information_schema |
| keystone           |
| mysql              |
| neutron            |
| nova               |
| nova_api           |
| nova_cell0         |
| performance_schema |
| placement          |
+--------------------+
10 rows in set (0.01 sec)

MariaDB [(none)]> quit
Bye
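The interactive session above can also be replayed non-interactively, which is handy when rebuilding the node. A sketch: the SQL is written to a scratch file here, and the commented-out mysql invocation assumes the same MariaDB root password prompt as above.

```shell
# Sketch: the same three statements as a batch file.
cat > /tmp/neutron_db.sql <<'EOF'
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'Netvista123';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'Netvista123';
EOF
# On the controller: sudo mysql -u root -p < /tmp/neutron_db.sql
grep -c '^GRANT' /tmp/neutron_db.sql
# prints: 2
```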

Initialize the neutron database

[tony@tony-controller ~]$ sudo -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Running upgrade for neutron ...
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> kilo
INFO  [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225
INFO  [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151
...
INFO  [alembic.runtime.migration] Running upgrade c415aab1c048 -> a963b38d82f4
INFO  [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99
INFO  [alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada
...
INFO  [alembic.runtime.migration] Running upgrade 594422d373ee -> 61663558142c
INFO  [alembic.runtime.migration] Running upgrade 61663558142c -> 867d39095bf4, port forwarding
...
INFO  [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d
OK

Start the neutron services

# Enable the 4 neutron-related services
[tony@tony-controller ~]$ sudo systemctl enable neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-server.service to /usr/lib/systemd/system/neutron-server.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-openvswitch-agent.service to /usr/lib/systemd/system/neutron-openvswitch-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service to /usr/lib/systemd/system/neutron-dhcp-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service to /usr/lib/systemd/system/neutron-metadata-agent.service.
# Restart the nova-api service
[root@tony-controller ~]# sudo systemctl start openstack-nova-api.service
# Start the 4 neutron services
[root@tony-controller ~]# sudo systemctl start neutron-server.service
[root@tony-controller ~]# sudo systemctl start neutron-openvswitch-agent.service
[root@tony-controller ~]# sudo systemctl start neutron-dhcp-agent.service
[root@tony-controller ~]# sudo systemctl start neutron-metadata-agent.service
[root@tony-controller ~]#
# Check the service status
[root@tony-controller ~]# sudo systemctl status neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service  neutron-metadata-agent.service
● neutron-server.service - OpenStack Neutron Server
   Loaded: loaded (/usr/lib/systemd/system/neutron-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2019-04-13 14:54:21 CST; 1min 40s ago
 Main PID: 22322 (neutron-server)
    Tasks: 5
   CGroup: /system.slice/neutron-server.service
           ├─22322 /usr/bin/python2 /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf -...
           ├─22336 /usr/bin/python2 /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf -...
           ├─22337 /usr/bin/python2 /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf -...
           ├─22338 /usr/bin/python2 /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf -...
           └─22339 /usr/bin/python2 /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf -...

Apr 13 14:54:19 tony-controller systemd[1]: Starting OpenStack Neutron Server...
Apr 13 14:54:21 tony-controller systemd[1]: Started OpenStack Neutron Server.

● neutron-openvswitch-agent.service - OpenStack Neutron Open vSwitch Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-openvswitch-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2019-04-13 14:54:43 CST; 1min 18s ago
  Process: 22382 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
 Main PID: 22387 (neutron-openvsw)
    Tasks: 3
   CGroup: /system.slice/neutron-openvswitch-agent.service
           ├─22387 /usr/bin/python2 /usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-...
           ├─22457 ovsdb-client monitor tcp:127.0.0.1:6640 Interface name,ofport,external_ids --format=json
           └─22459 ovsdb-client monitor tcp:127.0.0.1:6640 Bridge name --format=json

Apr 13 14:54:43 tony-controller systemd[1]: Starting OpenStack Neutron Open vSwitch Agent...
Apr 13 14:54:43 tony-controller neutron-enable-bridge-firewall.sh[22382]: net.bridge.bridge-nf-call-iptables = 1
Apr 13 14:54:43 tony-controller neutron-enable-bridge-firewall.sh[22382]: net.bridge.bridge-nf-call-ip6tables = 1
Apr 13 14:54:43 tony-controller systemd[1]: Started OpenStack Neutron Open vSwitch Agent.
Apr 13 14:54:45 tony-controller sudo[22402]:  neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutr...conf

● neutron-dhcp-agent.service - OpenStack Neutron DHCP Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-dhcp-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2019-04-13 14:55:04 CST; 57s ago
 Main PID: 22492 (neutron-dhcp-ag)
    Tasks: 1
   CGroup: /system.slice/neutron-dhcp-agent.service
           └─22492 /usr/bin/python2 /usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.co...

Apr 13 14:55:04 tony-controller systemd[1]: Started OpenStack Neutron DHCP Agent.

● neutron-metadata-agent.service - OpenStack Neutron Metadata Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-metadata-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2019-04-13 14:55:20 CST; 42s ago
 Main PID: 22532 (neutron-metadat)
    Tasks: 1
   CGroup: /system.slice/neutron-metadata-agent.service
           └─22532 /usr/bin/python2 /usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dis...

Apr 13 14:55:20 tony-controller systemd[1]: Started OpenStack Neutron Metadata Agent.
Hint: Some lines were ellipsized, use -l to show in full.
[root@tony-controller ~]#
# Enable the neutron-l3-agent service
[root@tony-controller ~]# sudo systemctl enable neutron-l3-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-l3-agent.service to /usr/lib/systemd/system/neutron-l3-agent.service.
# Start it and check its status
[root@tony-controller ~]# sudo systemctl start neutron-l3-agent.service
[root@tony-controller ~]# sudo systemctl status neutron-l3-agent.service
● neutron-l3-agent.service - OpenStack Neutron Layer 3 Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-l3-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2019-04-13 14:56:32 CST; 5s ago
 Main PID: 22672 (neutron-l3-agen)
    Tasks: 1
   CGroup: /system.slice/neutron-l3-agent.service
           └─22672 /usr/bin/python2 /usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf...

Apr 13 14:56:32 tony-controller systemd[1]: Started OpenStack Neutron Layer 3 Agent.
Apr 13 14:56:37 tony-controller sudo[22690]:  neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-r...
Hint: Some lines were ellipsized, use -l to show in full.

Check the network configuration after the services start

[root@tony-controller ~]# sudo ovs-vsctl show
c3928675-2e0a-42c1-83f7-6d8bee8fee1d
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-provider
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port phy-br-provider
            Interface phy-br-provider
                type: patch
                options: {peer=int-br-provider}
        Port br-provider
            Interface br-provider
                type: internal
        Port "enp0s9"
            Interface "enp0s9"
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port int-br-provider
            Interface int-br-provider
                type: patch
                options: {peer=phy-br-provider}
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    ovs_version: "2.10.1"

neutron (compute1)

Install packages

[tony@tony-compute1 ~]$ sudo yum install -y openstack-neutron-openvswitch ipset

Edit the neutron-related configuration files

[tony@tony-compute1 ~]$ sudo cat /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:Netvista123@controller
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = Netvista123
[tony@tony-compute1 ~]$ sudo cat /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
local_ip = 10.0.0.2
[agent]
tunnel_types = vxlan
l2_population = True
[tony@tony-compute1 ~]$ sudo vim /etc/nova/nova.conf
...
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = Netvista123
...

Open vSwitch

[tony@tony-compute1 ~]$ sudo systemctl enable openvswitch
Created symlink from /etc/systemd/system/multi-user.target.wants/openvswitch.service to /usr/lib/systemd/system/openvswitch.service.
[tony@tony-compute1 ~]$ sudo systemctl start openvswitch
[tony@tony-compute1 ~]$ sudo systemctl status openvswitch
● openvswitch.service - Open vSwitch
   Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; enabled; vendor preset: disabled)
   Active: active (exited) since Sat 2019-04-13 15:12:55 CST; 4s ago
  Process: 10626 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 10626 (code=exited, status=0/SUCCESS)

Apr 13 15:12:55 tony-compute1 systemd[1]: Starting Open vSwitch...
Apr 13 15:12:55 tony-compute1 systemd[1]: Started Open vSwitch.

Restart the nova-compute service

[tony@tony-compute1 ~]$ sudo systemctl restart openstack-nova-compute.service

Enable and start the openvswitch agent service

[tony@tony-compute1 ~]$ sudo systemctl enable neutron-openvswitch-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-openvswitch-agent.service to /usr/lib/systemd/system/neutron-openvswitch-agent.service.
[tony@tony-compute1 ~]$ sudo systemctl start neutron-openvswitch-agent.service
[tony@tony-compute1 ~]$ sudo systemctl status neutron-openvswitch-agent.service
● neutron-openvswitch-agent.service - OpenStack Neutron Open vSwitch Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-openvswitch-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2019-04-13 16:27:52 CST; 20s ago
  Process: 13259 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
 Main PID: 13264 (neutron-openvsw)
    Tasks: 3
   CGroup: /system.slice/neutron-openvswitch-agent.service
           ├─13264 /usr/bin/python2 /usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/ne...
           ├─13333 ovsdb-client monitor tcp:127.0.0.1:6640 Interface name,ofport,external_ids --format=jso...
           └─13335 ovsdb-client monitor tcp:127.0.0.1:6640 Bridge name --format=json

Apr 13 16:27:52 tony-compute1 systemd[1]: Starting OpenStack Neutron Open vSwitch Agent...
Apr 13 16:27:52 tony-compute1 neutron-enable-bridge-firewall.sh[13259]: net.bridge.bridge-nf-call-iptable...1
Apr 13 16:27:52 tony-compute1 neutron-enable-bridge-firewall.sh[13259]: net.bridge.bridge-nf-call-ip6tabl...1
Apr 13 16:27:52 tony-compute1 systemd[1]: Started OpenStack Neutron Open vSwitch Agent.
Apr 13 16:27:54 tony-compute1 sudo[13279]:  neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/n...conf
Hint: Some lines were ellipsized, use -l to show in full.

Note: if a Permission Denied error similar to the one below occurs, first check whether the openstack-selinux package is installed; if that does not help, disable SELinux.

$ sudo yum install -y openstack-selinux
# or
$ sudo setenforce 0

2019-04-13 15:32:16.486 11776 INFO neutron.common.config [-] Logging enabled!
2019-04-13 15:32:16.486 11776 INFO neutron.common.config [-] /usr/bin/neutron-openvswitch-agent version 13.0.2
2019-04-13 15:32:16.486 11776 INFO ryu.base.app_manager [-] loading app neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp
2019-04-13 15:32:16.860 11776 INFO ryu.base.app_manager [-] loading app ryu.app.ofctl.service
2019-04-13 15:32:16.861 11776 INFO ryu.base.app_manager [-] loading app ryu.controller.ofp_handler
2019-04-13 15:32:16.861 11776 INFO ryu.base.app_manager [-] instantiating app neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp of OVSNeutronAgentRyuApp
2019-04-13 15:32:16.862 11776 INFO ryu.base.app_manager [-] instantiating app ryu.controller.ofp_handler of OFPHandler
2019-04-13 15:32:16.862 11776 INFO ryu.base.app_manager [-] instantiating app ryu.app.ofctl.service of OfctlService
2019-04-13 15:32:16.863 11776 INFO neutron.agent.agent_extensions_manager [-] Loaded agent extensions: []
2019-04-13 15:32:16.875 11776 ERROR ryu.lib.hub [-] hub: uncaught exception: Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ryu/lib/hub.py", line 59, in _launch
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ryu/controller/controller.py", line 153, in __call__
    self.ofp_ssl_listen_port)
  File "/usr/lib/python2.7/site-packages/ryu/controller/controller.py", line 187, in server_loop
    datapath_connection_factory)
  File "/usr/lib/python2.7/site-packages/ryu/lib/hub.py", line 126, in __init__
    self.server = eventlet.listen(listen_info)
  File "/usr/lib/python2.7/site-packages/eventlet/convenience.py", line 46, in listen
    sock.bind(addr)
  File "/usr/lib64/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
error: [Errno 13] Permission denied
: error: [Errno 13] Permission denied
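The bind failure above is typically SELinux denying the agent its OpenFlow listener. As a hedged sketch, the persistent SELinux mode can be read by parsing the config file; a sample file is used here so the snippet runs anywhere, and on the real host you would read /etc/selinux/config instead:

```shell
# Sample standing in for /etc/selinux/config.
cfg=/tmp/selinux-config-sample
cat > "$cfg" <<'EOF'
# This file controls the state of SELinux on the system.
SELINUX=enforcing
SELINUXTYPE=targeted
EOF

# Extract the SELINUX= value (comment lines contain no "=" key match).
mode=$(awk -F= '$1 == "SELINUX" { print $2 }' "$cfg")
echo "persistent SELinux mode: $mode"
if [ "$mode" = "enforcing" ]; then
  # Runtime change: sudo setenforce 0; persistent: edit SELINUX= in the real file.
  echo "consider installing openstack-selinux or setting SELINUX=permissive"
fi
```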

Check the network configuration on compute1

[tony@tony-compute1 ~]$ sudo ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 172.18.22.232/24 brd 172.18.22.255 scope global noprefixroute enp0s3
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 10.0.0.2/24 brd 10.0.0.255 scope global noprefixroute enp0s8
4: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
inet 10.238.157.84/23 brd 10.238.157.255 scope global noprefixroute dynamic
5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 9e:23:18:bd:e0:4a brd ff:ff:ff:ff:ff:ff
6: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 42:30:63:9b:0f:41 brd ff:ff:ff:ff:ff:ff
7: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether aa:8d:70:af:56:4d brd ff:ff:ff:ff:ff:ff

# The service is listening on port 6633
[tony@tony-compute1 ~]$ sudo ovs-vsctl show
6e9ac464-afae-4179-90bb-e665dbba5fb2
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.10.1"
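The `ovs-vsctl show` output above can also be machine-checked. A minimal sketch that pulls the bridge names out of a captured copy (the heredoc below is a trimmed sample; on the real host, redirect `sudo ovs-vsctl show` into the file instead):

```shell
# Trimmed sample of `ovs-vsctl show` output.
cat > /tmp/ovs-show-sample.txt <<'EOF'
6e9ac464-afae-4179-90bb-e665dbba5fb2
    Manager "ptcp:6640:127.0.0.1"
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
        fail_mode: secure
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
        fail_mode: secure
    ovs_version: "2.10.1"
EOF

# List the bridges; after the agent starts, both br-int and br-tun should exist.
awk '/^ *Bridge / { print $2 }' /tmp/ovs-show-sample.txt
```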

Verify that the openvswitch agent is working

Run the following command on the controller; if compute1 appears in the output, the Open vSwitch agent is working.

[tony@tony-controller ~]$ openstack network agent list

+--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host            | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+
| 67d969a3-0361-46ea-9e63-c57eabbf5bfc | Open vSwitch agent | tony-compute1   | None              | :-)   | UP    | neutron-openvswitch-agent |
| 8ef64698-2b12-42e7-ae06-c3be746047d1 | L3 agent           | tony-controller | nova              | :-)   | UP    | neutron-l3-agent          |
| c6df225b-0d74-439b-b027-0521b057716d | Metadata agent     | tony-controller | None              | :-)   | UP    | neutron-metadata-agent    |
| ceb81b9d-143f-472a-ba28-78cee7247520 | DHCP agent         | tony-controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
| eb0370da-6509-4be8-92ae-01b5f93a30f3 | Open vSwitch agent | tony-controller | None              | :-)   | UP    | neutron-openvswitch-agent |
+--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+
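For scripted checks, the CLI can emit CSV (`openstack network agent list -f csv`), which is easier to grep than the table. A sketch against a simplified, hypothetical CSV capture (the real output has more columns, such as Availability Zone and Binary):

```shell
# Simplified, hypothetical capture of `openstack network agent list -f csv`.
cat > /tmp/agents.csv <<'EOF'
"ID","Agent Type","Host","Alive","State"
"67d969a3","Open vSwitch agent","tony-compute1",":-)","UP"
"eb0370da","Open vSwitch agent","tony-controller",":-)","UP"
EOF

# Check that the Open vSwitch agent on a given host reports alive (":-)").
host=tony-compute1
if grep -q "\"Open vSwitch agent\",\"$host\",\":-)\"" /tmp/agents.csv; then
  echo "Open vSwitch agent on $host is alive"
fi
```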

Conclusion

With this, the deployment of the Neutron module is complete.
The /var/log/neutron directory contains the log files for the various Neutron services; if anything goes wrong, analyzing these logs is the place to start.
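A quick way to triage those logs is to count ERROR records. A sketch over a sample log (on the real host, substitute a file under /var/log/neutron, e.g. the openvswitch agent log):

```shell
# Sample log standing in for a file under /var/log/neutron.
log=/tmp/neutron-sample.log
cat > "$log" <<'EOF'
2019-04-13 15:32:16.486 11776 INFO neutron.common.config [-] Logging enabled!
2019-04-13 15:32:16.875 11776 ERROR ryu.lib.hub [-] hub: uncaught exception
EOF

# Count ERROR records; anything non-zero deserves a closer look.
grep -c ' ERROR ' "$log"   # prints: 1
```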

