OpenStack Pike (Part 2)
The previous post covered installing and configuring the controller node; this one walks through setting up the remaining OpenStack components.
Configuring nova-compute on the Compute Nodes
1. Edit the nova configuration file on node2:
[root@openstack-node2 ~]# egrep -v "^$|^#" /etc/nova/nova.conf
[DEFAULT]
use_neutron=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url= rabbit://openstack:openstack@192.168.10.11
[api]
auth_strategy=keystone
[glance]
api_servers=http://192.168.10.11:9292
[keystone_authtoken]
auth_uri = http://192.168.10.11:5000
auth_url = http://192.168.10.11:35357
memcached_servers = 192.168.10.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://192.168.10.11:35357/v3
username = placement
password = placement
[vnc]
enabled=true
vncserver_listen=192.168.10.12
vncserver_proxyclient_address=192.168.10.12
novncproxy_base_url=http://192.168.10.11:6080/vnc_auto.html
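As a quick sanity check after editing, a short Python sketch (a hypothetical helper, not part of OpenStack) can verify that the essential options above are present before restarting services:

```python
import configparser

# Sections/options this guide's nova.conf is expected to contain
# (an illustrative subset, not an exhaustive list).
REQUIRED = {
    "DEFAULT": ["transport_url", "use_neutron"],
    "vnc": ["vncserver_listen", "vncserver_proxyclient_address"],
    "placement": ["auth_url", "username", "password"],
}

def missing_options(conf_text):
    """Return a list of 'section/option' entries absent from the config text."""
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    missing = []
    for section, options in REQUIRED.items():
        # DEFAULT is special-cased because configparser stores it separately
        present = cp[section] if (section == "DEFAULT" or cp.has_section(section)) else {}
        for option in options:
            if option not in present:
                missing.append(f"{section}/{option}")
    return missing

sample = """\
[DEFAULT]
use_neutron=true
transport_url=rabbit://openstack:openstack@192.168.10.11
[vnc]
vncserver_listen=192.168.10.12
vncserver_proxyclient_address=192.168.10.12
[placement]
auth_url = http://192.168.10.11:35357/v3
username = placement
password = placement
"""
print(missing_options(sample))  # []
```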
2. Check whether the host has hardware virtualization enabled; a return value of 0 means it is not supported, and qemu must be used:
egrep -c '(vmx|svm)' /proc/cpuinfo
If the command returns 0, configure /etc/nova/nova.conf to use qemu:
[libvirt]
# ...
virt_type = qemu
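The same decision logic as the egrep check can be sketched in Python (the flags lines below are fabricated samples, not from a real host):

```python
import re

def recommended_virt_type(cpuinfo_text):
    """Return 'kvm' when the CPU advertises VT-x (vmx) or AMD-V (svm), else 'qemu'."""
    return "kvm" if re.search(r"\b(vmx|svm)\b", cpuinfo_text) else "qemu"

# Fabricated /proc/cpuinfo flags lines for illustration:
print(recommended_virt_type("flags : fpu vme de pse vmx"))  # kvm
print(recommended_virt_type("flags : fpu vme de pse"))      # qemu
```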
3. Enable and start the services:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
4. Configure node3 the same way as node2; only the following two parameters in nova.conf differ:
[root@openstack-node3 ~]# grep 192.168.10.13 /etc/nova/nova.conf
vncserver_listen=192.168.10.13
vncserver_proxyclient_address=192.168.10.13
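Since only these two values differ between compute nodes, the idea of stamping each node's address into a shared template can be sketched as follows (a hypothetical helper for illustration):

```python
import configparser
import io

def set_vnc_address(conf_text, node_ip):
    """Rewrite the two per-node VNC options in a nova.conf-style text."""
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    cp["vnc"]["vncserver_listen"] = node_ip
    cp["vnc"]["vncserver_proxyclient_address"] = node_ip
    buf = io.StringIO()
    cp.write(buf)
    return buf.getvalue()

# A minimal [vnc] section taken from the node2 configuration above:
template = (
    "[vnc]\n"
    "enabled=true\n"
    "vncserver_listen=192.168.10.12\n"
    "vncserver_proxyclient_address=192.168.10.12\n"
)
node3 = set_vnc_address(template, "192.168.10.13")
print("192.168.10.13" in node3)  # True
```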
5. Verify from the controller node:
[root@openstack-node1 ~]# source admin-openstack.sh
[root@openstack-node1 ~]# openstack compute service list
+----+------------------+-----------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+-----------------+----------+---------+-------+----------------------------+
| 1 | nova-consoleauth | openstack-node1 | internal | enabled | up | 2018-01-10T10:08:08.000000 |
| 2 | nova-scheduler | openstack-node1 | internal | enabled | up | 2018-01-10T10:08:09.000000 |
| 3 | nova-conductor | openstack-node1 | internal | enabled | up | 2018-01-10T10:08:09.000000 |
| 7 | nova-compute | openstack-node2 | nova | enabled | up | 2018-01-10T10:08:11.000000 |
| 8 | nova-compute | openstack-node3 | nova | enabled | up | 2018-01-10T10:08:14.000000 |
+----+------------------+-----------------+----------+---------+-------+----------------------------+
6. Discover the compute hosts so they are mapped into the cell:
[root@openstack-node1 ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': ddc4df46-fd96-4778-b312-95e8ad37e3d3
Found 2 unmapped computes in cell: ddc4df46-fd96-4778-b312-95e8ad37e3d3
Checking host mapping for compute host 'openstack-node2': eae670ef-c799-4517-8232-525a550e2658
Creating host mapping for compute host 'openstack-node2': eae670ef-c799-4517-8232-525a550e2658
Checking host mapping for compute host 'openstack-node3': f6444b6a-5850-49d1-9aff-1dd0fcab594a
Creating host mapping for compute host 'openstack-node3': f6444b6a-5850-49d1-9aff-1dd0fcab594a
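If you would rather not run discover_hosts by hand each time a compute node is added, nova-scheduler can discover new hosts periodically. Setting the interval (in seconds) in the [scheduler] section of the controller's nova.conf enables this; the default, a negative value, leaves it disabled:

```ini
[scheduler]
discover_hosts_in_cells_interval = 300
```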
Configuring Neutron on the Controller Node
1. Create the neutron user with the password neutron:
[root@openstack-node1 ~]# source admin-openstack.sh
[root@openstack-node1 ~]# openstack user create --domain default --password neutron neutron
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | d97218e6fab1493583f2a39dba60c3d7 |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
2. Add the neutron user to the service project and grant it the admin role:
# openstack role add --project service --user neutron admin
3. Create the neutron service entry:
# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | a0a7183cb0b3448d887e0a3e1308a1c3 |
| name | neutron |
| type | network |
+-------------+----------------------------------+
4. Create the public, internal, and admin endpoints:
# openstack endpoint create --region RegionOne network public http://192.168.10.11:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | ca6dd31fee654983b216264d7851e1f6 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | a0a7183cb0b3448d887e0a3e1308a1c3 |
| service_name | neutron |
| service_type | network |
| url | http://192.168.10.11:9696 |
+--------------+----------------------------------+
# openstack endpoint create --region RegionOne network internal http://192.168.10.11:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 21fc021061a249feb934f5f94977d848 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | a0a7183cb0b3448d887e0a3e1308a1c3 |
| service_name | neutron |
| service_type | network |
| url | http://192.168.10.11:9696 |
+--------------+----------------------------------+
# openstack endpoint create --region RegionOne network admin http://192.168.10.11:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 87b0af4ecd1c4d8fa5b08e3d1855ab2f |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | a0a7183cb0b3448d887e0a3e1308a1c3 |
| service_name | neutron |
| service_type | network |
| url | http://192.168.10.11:9696 |
+--------------+----------------------------------+
5. To support a provider network, edit /etc/neutron/neutron.conf as follows:
[root@openstack-node1 ~]# egrep -v "^$|^#" /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
core_plugin = ml2
service_plugins =
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
transport_url = rabbit://openstack:openstack@192.168.10.11
[database]
connection = mysql+pymysql://neutron:neutron@192.168.10.11/neutron
[keystone_authtoken]
auth_uri = http://192.168.10.11:5000
auth_url = http://192.168.10.11:35357
memcached_servers = 192.168.10.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[nova]
auth_url = http://192.168.10.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
6. Configure /etc/neutron/plugins/ml2/ml2_conf.ini:
[root@openstack-node1 ~]# egrep -v "^$|^#" /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = local,flat,vlan,gre,vxlan,geneve
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[securitygroup]
enable_ipset = true
7. Configure the Linux bridge agent in /etc/neutron/plugins/ml2/linuxbridge_agent.ini:
[root@openstack-node1 ~]# egrep -v "^$|^#" /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth0
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = false
8. Configure the DHCP agent in /etc/neutron/dhcp_agent.ini:
[root@openstack-node1 ~]# egrep -v "^$|^#" /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
9. Configure the metadata agent in /etc/neutron/metadata_agent.ini:
[root@openstack-node1 ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
# ...
nova_metadata_host = 192.168.10.11
metadata_proxy_shared_secret = openstack # shared secret for the metadata proxy
10. Point the Compute service at the Networking service:
[root@openstack-node1 ~]# vim /etc/nova/nova.conf
[neutron]
url = http://192.168.10.11:9696
auth_url = http://192.168.10.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = openstack # shared secret for the metadata proxy
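The shared secret must be identical in metadata_agent.ini and nova.conf, or the metadata proxy will reject requests. A quick sketch of the comparison (a hypothetical helper, shown here with inline strings instead of the real files):

```python
import configparser

def secrets_match(metadata_agent_text, nova_conf_text):
    """True when both config texts carry the same metadata_proxy_shared_secret."""
    agent = configparser.ConfigParser()
    agent.read_string(metadata_agent_text)
    nova = configparser.ConfigParser()
    nova.read_string(nova_conf_text)
    return (agent["DEFAULT"]["metadata_proxy_shared_secret"]
            == nova["neutron"]["metadata_proxy_shared_secret"])

agent_ini = "[DEFAULT]\nmetadata_proxy_shared_secret = openstack\n"
nova_ini = "[neutron]\nmetadata_proxy_shared_secret = openstack\n"
print(secrets_match(agent_ini, nova_ini))  # True
```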
11. Create a symbolic link:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
12. Populate the database:
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
13. Restart the nova-api service:
# systemctl restart openstack-nova-api.service
14. Enable and start the Neutron services:
# systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
# systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
Configuring Neutron on the Compute Nodes
1. Copy neutron.conf from the controller to the compute nodes, and remove the [database] section from the copies:
scp -p /etc/neutron/neutron.conf 192.168.10.12:/etc/neutron/
scp -p /etc/neutron/neutron.conf 192.168.10.13:/etc/neutron/
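Dropping the [database] section can be scripted; a minimal sketch (a hypothetical helper, shown with an inline sample rather than the real file):

```python
import configparser
import io

def strip_database_section(conf_text):
    """Return the config text with its [database] section removed."""
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    cp.remove_section("database")  # no-op if the section is absent
    buf = io.StringIO()
    cp.write(buf)
    return buf.getvalue()

controller_conf = (
    "[DEFAULT]\ncore_plugin = ml2\n"
    "[database]\nconnection = mysql+pymysql://neutron:neutron@192.168.10.11/neutron\n"
)
compute_conf = strip_database_section(controller_conf)
print("[database]" in compute_conf)  # False
```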
2. Copy /etc/neutron/plugins/ml2/linuxbridge_agent.ini from the controller to the compute nodes:
scp -p /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.10.12:/etc/neutron/plugins/ml2/
scp -p /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.10.13:/etc/neutron/plugins/ml2/
3. On the compute nodes, configure the [neutron] section of nova.conf:
# vim /etc/nova/nova.conf
[neutron]
url = http://192.168.10.11:9696
auth_url = http://192.168.10.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
4. With the configuration in place, restart and start the services:
# systemctl restart openstack-nova-compute.service
# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service
5. Verify from the controller node:
[root@openstack-node1 ~]# openstack network agent list
+--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+
| 11416545-1330-47fe-bf87-250094e31b5a | Metadata agent | openstack-node1 | None | :-) | UP | neutron-metadata-agent |
| 28227c23-5871-405e-b773-7a63117aae5d | Linux bridge agent | openstack-node2 | None | :-) | UP | neutron-linuxbridge-agent |
| 5058627e-1674-4332-87d9-bc4d957162bc | DHCP agent | openstack-node1 | nova | :-) | UP | neutron-dhcp-agent |
| 709cbca7-501d-48e8-8115-9a6b076bde60 | Linux bridge agent | openstack-node1 | None | :-) | UP | neutron-linuxbridge-agent |
| c713812b-77ac-47cb-8807-6f4b0c6fe99f | Linux bridge agent | openstack-node3 | None | :-) | UP | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+
Launching an Instance for Verification
1. Create a provider network:
[root@openstack-node1 ~]# source admin-openstack.sh
[root@openstack-node1 ~]# openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2018-01-11T01:55:44Z |
| description | |
| dns_domain | None |
| id | 09b8f3e8-14a2-40af-a62d-94d6c19462d8 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | None |
| is_vlan_transparent | None |
| mtu | 1500 |
| name | provider |
| port_security_enabled | True |
| project_id | 0daaf987a867495fa0937a16b359c729 |
| provider:network_type | flat |
| provider:physical_network | provider |
| provider:segmentation_id | None |
| qos_policy_id | None |
| revision_number | 3 |
| router:external | External |
| segments | None |
| shared | True |
| status | ACTIVE |
| subnets | |
| tags | |
| updated_at | 2018-01-11T01:55:44Z |
+---------------------------+--------------------------------------+
2. Create a subnet:
openstack subnet create --network provider \
  --allocation-pool start=192.168.10.100,end=192.168.10.150 \
  --dns-nameserver 192.168.10.2 --gateway 192.168.10.2 \
  --subnet-range 192.168.10.0/24 provider
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| allocation_pools | 192.168.10.100-192.168.10.150 |
| cidr | 192.168.10.0/24 |
| created_at | 2018-01-11T02:00:36Z |
| description | |
| dns_nameservers | 192.168.10.2 |
| enable_dhcp | True |
| gateway_ip | 192.168.10.2 |
| host_routes | |
| id | a826d8c2-2c3b-4b36-8c84-1b2a69b3bd06 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | provider |
| network_id | 09b8f3e8-14a2-40af-a62d-94d6c19462d8 |
| project_id | 0daaf987a867495fa0937a16b359c729 |
| revision_number | 0 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| tags | |
| updated_at | 2018-01-11T02:00:36Z |
| use_default_subnet_pool | None |
+-------------------------+--------------------------------------+
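The allocation pool and gateway must fall inside the subnet's CIDR; Python's stdlib ipaddress module makes that easy to double-check before running the command:

```python
import ipaddress

# Values from the subnet create command above
subnet = ipaddress.ip_network("192.168.10.0/24")
pool_start = ipaddress.ip_address("192.168.10.100")
pool_end = ipaddress.ip_address("192.168.10.150")
gateway = ipaddress.ip_address("192.168.10.2")

# All three addresses must belong to the subnet
assert pool_start in subnet and pool_end in subnet and gateway in subnet

# Number of addresses the DHCP pool can hand out to instances
pool_size = int(pool_end) - int(pool_start) + 1
print(pool_size)  # 51
```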
3. Create an m1.nano flavor:
[root@openstack-node1 ~]# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+----------------------------+---------+
| Field | Value |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 1 |
| id | 0 |
| name | m1.nano |
| os-flavor-access:is_public | True |
| properties | |
| ram | 64 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+---------+
4. Using the demo credentials, generate a key pair and register it:
[root@openstack-node1 ~]# source demo-openstack.sh
[root@openstack-node1 ~]# ssh-keygen -q -N ""
[root@openstack-node1 ~]# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | ec:90:b7:c3:9b:bf:27:ff:89:0e:38:a8:5d:ce:57:fe |
| name | mykey |
| user_id | 8c10323be99e4597a099db1ba3b79627 |
+-------------+-------------------------------------------------+
[root@openstack-node1 ~]# openstack keypair list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| mykey | ec:90:b7:c3:9b:bf:27:ff:89:0e:38:a8:5d:ce:57:fe |
+-------+-------------------------------------------------+
5. Still as the demo user, add rules to the default security group permitting ICMP and TCP port 22:
[root@openstack-node1 ~]# openstack security group rule create --proto icmp default
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2018-01-11T02:09:03Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 34a781c5-627e-4349-8fde-f569348989eb |
| name | None |
| port_range_max | None |
| port_range_min | None |
| project_id | d63f87c94e634aefbdf3fa48d4f43b18 |
| protocol | icmp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | 401f7dea-eb96-4e1f-b199-adc63b742f19 |
| updated_at | 2018-01-11T02:09:03Z |
+-------------------+--------------------------------------+
[root@openstack-node1 ~]# openstack security group rule create --proto tcp --dst-port 22 default
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2018-01-11T02:09:29Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | fd6d34e0-0615-4bc2-b371-84d33701250d |
| name | None |
| port_range_max | 22 |
| port_range_min | 22 |
| project_id | d63f87c94e634aefbdf3fa48d4f43b18 |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | 401f7dea-eb96-4e1f-b199-adc63b742f19 |
| updated_at | 2018-01-11T02:09:29Z |
+-------------------+--------------------------------------+
6. Review the environment before launching an instance:
[root@openstack-node1 ~]# source demo-openstack.sh
[root@openstack-node1 ~]# openstack flavor list
+----+---------+-----+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+-----+------+-----------+-------+-----------+
| 0 | m1.nano | 64 | 1 | 0 | 1 | True |
+----+---------+-----+------+-----------+-------+-----------+
[root@openstack-node1 ~]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| dc655534-2821-47c1-b9c4-8687b52dfdbc | cirros | active |
+--------------------------------------+--------+--------+
[root@openstack-node1 ~]# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+----------+--------------------------------------+
| 09b8f3e8-14a2-40af-a62d-94d6c19462d8 | provider | a826d8c2-2c3b-4b36-8c84-1b2a69b3bd06 |
+--------------------------------------+----------+--------------------------------------+
[root@openstack-node1 ~]# openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+
| ID | Name | Description | Project |
+--------------------------------------+---------+------------------------+----------------------------------+
| 401f7dea-eb96-4e1f-b199-adc63b742f19 | default | Default security group | d63f87c94e634aefbdf3fa48d4f43b18 |
+--------------------------------------+---------+------------------------+----------------------------------+
7. Create and boot an instance, passing the provider network ID as net-id:
openstack server create --flavor m1.nano --image cirros \
  --nic net-id=09b8f3e8-14a2-40af-a62d-94d6c19462d8 --security-group default \
  --key-name mykey provider-instance
+-----------------------------+-----------------------------------------------+
| Field | Value |
+-----------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | 4H3dqBvxcRim |
| config_drive | |
| created | 2018-01-11T02:17:37Z |
| flavor | m1.nano (0) |
| hostId | |
| id | 91b256f0-54f2-4df1-8ae4-3649670c7813 |
| image | cirros (dc655534-2821-47c1-b9c4-8687b52dfdbc) |
| key_name | mykey |
| name | provider-instance |
| progress | 0 |
| project_id | d63f87c94e634aefbdf3fa48d4f43b18 |
| properties | |
| security_groups | name='401f7dea-eb96-4e1f-b199-adc63b742f19' |
| status | BUILD |
| updated | 2018-01-11T02:17:37Z |
| user_id | 8c10323be99e4597a099db1ba3b79627 |
| volumes_attached | |
+-----------------------------+-----------------------------------------------+
Note: once the instance is running, the host's network layout changes. eth0 no longer carries an IP address; a bridge and a tap device for the instance are created, and the controller node's network likewise switches to bridged mode.
8. Check whether the instance was created successfully; a status of ACTIVE means it was:
[root@openstack-node1 ~]# openstack server list
+--------------------------------------+-------------------+--------+-------------------------+--------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-------------------+--------+-------------------------+--------+---------+
| 91b256f0-54f2-4df1-8ae4-3649670c7813 | provider-instance | ACTIVE | provider=192.168.10.104 | cirros | m1.nano |
+--------------------------------------+-------------------+--------+-------------------------+--------+---------+
9. Get the noVNC web console URL for the provider-instance VM:
[root@openstack-node1 ~]# openstack console url show provider-instance
+-------+------------------------------------------------------------------------------------+
| Field | Value |
+-------+------------------------------------------------------------------------------------+
| type | novnc |
| url | http://192.168.10.11:6080/vnc_auto.html?token=0b5414ee-0928-4a37-9922-eb4c8512da88 |
+-------+------------------------------------------------------------------------------------+
10. Open this URL in a browser to check the instance's network status.
11. Log in to the instance from the controller node:
[root@openstack-node1 ~]# ping 192.168.10.104
PING 192.168.10.104 (192.168.10.104) 56(84) bytes of data.
64 bytes from 192.168.10.104: icmp_seq=1 ttl=64 time=0.967 ms
[root@openstack-node1 ~]# ssh cirros@192.168.10.104
...
Configuring Horizon on the Controller Node
Horizon can be installed on any node: it only needs to be pointed at the Memcached instance, and httpd started on whichever node hosts it. Here the dashboard is installed on the controller node.
1. Install openstack-dashboard:
# yum install openstack-dashboard -y
2. Edit /etc/openstack-dashboard/local_settings with the following settings:
OPENSTACK_HOST = "192.168.10.11"
ALLOWED_HOSTS = ['*']
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '192.168.10.11:11211',
    },
}
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
3. Restart the services:
# systemctl restart httpd.service memcached.service
4. Log in at http://192.168.10.11/dashboard with admin/admin or demo/demo. Logged in as the demo user, you can create instances from the web UI.
Reprinted from: https://blog.51cto.com/tryingstuff/2059765