Building a Cloud Computing Environment Based on OpenStack

  • I. Basic Environment
    • 1. Review of basic environment information
    • 2. Basic environment network test
  • II. Implementation Process
    • 1. Configure the Aliyun yum repository (all nodes)
    • 2. Install the NTP time service (all nodes)
    • 3. Install and configure OpenStack packages (all nodes)
    • 4. Install the database (controller node)
    • 5. Install and configure RabbitMQ (controller node)
    • 6. Install the Memcached cache service (controller node)
    • 7. Install the Etcd service (controller node)
    • 8. Install the Keystone component (controller node)
    • 9. Configure the Apache HTTP service (controller node)
    • 10. Create domain, projects, users, roles (controller node)
    • 11. Verify the operations (controller node)
    • 12. Create OpenStack client environment scripts (controller node)
    • 13. Install the Glance service (controller node)
    • 14. Install and configure components (controller node)
    • 15. Verify the operations (controller node)
    • 16. Install and configure the Compute service (controller node)
    • 17. Install and configure components (controller node)
    • 18. Install and configure the compute node service (compute node)
    • 19. Verify the Compute service on the controller node (controller node)
    • 20. Install and configure Neutron networking on the controller node (controller node)
    • 21. Configure the networking components (controller node)
    • 22. Configure the ML2 layer-2 plug-in (controller node)
    • 23. Configure the Linux bridge agent (controller node)
    • 24. Configure the DHCP agent (controller node)
    • 25. Configure the metadata agent (controller node)
    • 26. Configure the Compute service to use the Networking service (controller node)
    • 27. Finalize the installation (controller node)
    • 28. Configure the networking service on the compute node (compute node)
    • 29. Configure the network (compute node)
    • 30. Configure the compute node to use the Networking service (compute node)
    • 31. Finalize the installation (compute node)
    • 32. Install the Horizon service on the controller node (controller node)
    • 33. Access the OpenStack web dashboard

I. Basic Environment

The OpenStack cloud environment is built on top of a multi-node Linux network environment running in virtual machines, so the cluster environment must be set up first. (For the base environment, see: Building a Multi-Node Linux Network Environment on Virtual Machines.)

1. Review of basic environment information

Operating system: CentOS 7

controller node IP: 192.168.43.199
compute node IP: 192.168.43.74
neutron node IP: 192.168.43.180

Note: the IPs here differ from those in Building a Multi-Node Linux Network Environment on Virtual Machines because I rebuilt the environment and the IPs were reassigned, so the IPs listed above are the ones used throughout the rest of this article.
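Because the configuration files later in this article refer to the nodes by hostname, every node's /etc/hosts should map those hostnames to the IPs above. A minimal sketch, assuming the hostnames controller, compute, and neutron carried over from the base-environment article:

# /etc/hosts on every node (hostnames are assumptions from the base environment)
192.168.43.199  controller
192.168.43.74   compute
192.168.43.180  neutron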

2. Basic environment network test

First, test network connectivity between the nodes; for the detailed procedure see: Building a Multi-Node Linux Network Environment on Virtual Machines.
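A minimal reachability check, assuming the /etc/hosts mappings above are in place, might look like this:

# Run from each node; every host should answer
ping -c 4 controller
ping -c 4 compute
ping -c 4 neutron
ping -c 4 mirrors.aliyun.com   # outbound access is needed for the yum mirrors below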

If the network is fine, we can happily move on to the next part!

II. Implementation Process

1. Configure the Aliyun yum repository (all nodes)

Back up the existing repository file:

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

Download the Aliyun repository file:

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# or
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

2. Install the NTP time service (all nodes)

  • controller node

Install the package:

yum install chrony -y

Edit /etc/chrony.conf and configure the time synchronization server:

server  controller  iburst      ## all nodes sync time from the controller node
allow 192.168.43.0/24           ## subnet that is allowed to sync time from this server

Enable the NTP service at boot and start it:

systemctl enable chronyd.service
systemctl start chronyd.service
  • compute node and neutron node

Install the package:

yum install chrony -y

Edit /etc/chrony.conf and point the node at the controller for time synchronization:

server  controller  iburst

Enable the NTP service at boot and start it:

systemctl enable chronyd.service
systemctl start chronyd.service
  • Verify the time synchronization service
chronyc sources



3. Install and configure OpenStack packages (all nodes)

Install the OpenStack repository (Queens release):

yum install centos-release-openstack-queens -y

Upgrade the packages on all nodes:

yum upgrade

Install the OpenStack client:

yum install python-openstackclient -y

Install openstack-selinux:

yum install openstack-selinux -y

4. Install the database (controller node)

Install the packages:

yum install mariadb mariadb-server python2-PyMySQL -y

Edit /etc/my.cnf.d/mariadb-server.cnf and complete the following configuration (note: bind-address is the controller node's management IP):

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
bind-address = 192.168.43.199
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Enable the service at boot and start it:

systemctl enable mariadb.service
systemctl start mariadb.service

Secure the database service by running the mysql_secure_installation script. (Note: the script first asks for the current root password; since none is set yet, just press Enter to continue.)
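The invocation is simply the following; the script is interactive, and the prompts may vary slightly between MariaDB versions:

# Harden the fresh MariaDB installation interactively:
# press Enter at the "current password for root" prompt (empty on a new install),
# then set a root password and accept the remaining defaults.
mysql_secure_installation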

5. Install and configure RabbitMQ (controller node)

Install the message queue component:

yum install rabbitmq-server -y

Enable the service at boot and start it:

systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

Add the openstack user:

rabbitmqctl add_user openstack 123456

Grant the openstack user configure, write, and read permissions:

rabbitmqctl set_permissions openstack ".*" ".*" ".*"
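To confirm the user and its permissions were created, the standard rabbitmqctl listing commands can be used (a quick check, not part of the original steps):

rabbitmqctl list_users          # should show the openstack user
rabbitmqctl list_permissions    # should show ".*" ".*" ".*" for openstack on the default vhost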

6. Install the Memcached cache service (controller node)

Install and configure the components:

yum install memcached python-memcached -y

Edit /etc/sysconfig/memcached:

OPTIONS="-l 192.168.43.199,::1,controller"

Enable the service at boot and start it:

systemctl enable memcached.service
systemctl start memcached.service

7. Install the Etcd service (controller node)

Install the package:

yum install etcd -y

Edit /etc/etcd/etcd.conf, setting ETCD_INITIAL_CLUSTER, ETCD_INITIAL_ADVERTISE_PEER_URLS, ETCD_ADVERTISE_CLIENT_URLS, and ETCD_LISTEN_CLIENT_URLS to the controller node's management IP:

#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.43.199:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.43.199:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.43.199:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.43.199:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.43.199:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"

Enable the service at boot and start it:

systemctl enable etcd
systemctl start etcd
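To confirm etcd is answering on the client URL configured above, its HTTP health endpoint can be queried (a quick check, not part of the original steps):

# a healthy member responds with a JSON body reporting health as true
curl http://192.168.43.199:2379/health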

8. Install the Keystone component (controller node)

Create the keystone database and grant privileges:

mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';

Install and configure the components:

yum install openstack-keystone httpd mod_wsgi -y

Edit /etc/keystone/keystone.conf:

[database]
connection = mysql+pymysql://keystone:123456@controller/keystone
[token]
provider = fernet

Populate the keystone database:

su -s /bin/sh -c "keystone-manage db_sync" keystone

Initialize the Fernet key repositories:

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Bootstrap the Identity service:

keystone-manage bootstrap --bootstrap-password 123456 --bootstrap-admin-url http://controller:35357/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne

9. Configure the Apache HTTP service (controller node)

Edit /etc/httpd/conf/httpd.conf and set the ServerName parameter:

ServerName controller

Create a link to the /usr/share/keystone/wsgi-keystone.conf file:

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

Enable the service at boot and start it:

systemctl enable httpd.service
systemctl start httpd.service

Configure the administrative account environment variables:

export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3

10. Create domain, projects, users, roles (controller node)

Create a domain:

openstack domain create --description "Domain" example

Create the service project:

openstack project create --domain default   --description "Service Project" service

Create the demo project:

openstack project create --domain default --description "Demo Project" demo

Create the demo user:

openstack user create --domain default  --password-prompt demo

Create the user role:

openstack role create user

Add the user role to the demo project and user:

openstack role add --project demo --user demo user

11. Verify the operations (controller node)

Unset the temporary environment variables:

unset OS_AUTH_URL OS_PASSWORD

Request an authentication token as the admin user:

openstack --os-auth-url http://controller:35357/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue

Request an authentication token as the demo user:

openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name demo --os-username demo token issue

12. Create OpenStack client environment scripts (controller node)

Create the admin-openrc script (vim admin-openrc):

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Create the demo-openrc script (vim demo-openrc):

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Source one of the scripts and request an authentication token:

openstack token issue

13. Install the Glance service (controller node)

Create the glance database and grant privileges:

mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%'  IDENTIFIED BY '123456';

Load the admin user's environment variables in order to create the service credentials:

. admin-openrc
Note: there is a space after the dot.
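To confirm the variables were loaded into the current shell (a quick check, not part of the original steps):

env | grep '^OS_'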

Create the glance user:

openstack user create --domain default --password-prompt glance

Add the admin role to the glance user and the service project:

openstack role add --project service --user glance admin

Create the glance service entity:

openstack service create --name glance  --description "OpenStack Image" image

Create the Image service API endpoints:

openstack endpoint create --region RegionOne  image public http://controller:9292
openstack endpoint create --region RegionOne  image internal http://controller:9292
openstack endpoint create --region RegionOne  image admin http://controller:9292


14. Install and configure components (controller node)

Install the package:

yum install openstack-glance -y

Edit /etc/glance/glance-api.conf:

[database]
connection = mysql+pymysql://glance:123456@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

Edit /etc/glance/glance-registry.conf:

[database]
connection = mysql+pymysql://glance:123456@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456

[paste_deploy]
flavor = keystone

Populate the Image service database:

su -s /bin/sh -c "glance-manage db_sync" glance

Enable the services at boot and start them:

systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service  openstack-glance-registry.service

15. Verify the operations (controller node)

Verify the Image service using CirrOS, a small Linux image that helps you test your OpenStack deployment.
For more information on how to download and build images, see the OpenStack Virtual Machine Image Guide: https://docs.openstack.org/image-guide/
For information on how to manage images, see the OpenStack End User Guide: https://docs.openstack.org/queens/user/

Load the admin user's environment variables and download the image:

. admin-openrc
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

Upload the image to the Image service using the QCOW2 disk format, the bare container format, and public visibility so that all projects can access it:

openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img  --disk-format qcow2 --container-format bare  --public

List the uploaded images:

openstack image list


Note: the full set of Glance configuration options is documented at https://docs.openstack.org/glance/queens/configuration/index.html

16. Install and configure the Compute service (controller node)

Create the nova_api, nova, and nova_cell0 databases:

mysql -u root -p
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;

Grant access to the databases:

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';

Create the nova user:

. admin-openrc
openstack user create --domain default --password-prompt nova

Add the admin role to the nova user:

openstack role add --project service --user nova admin

Create the nova service entity:

openstack service create --name nova --description "OpenStack Compute" compute

Create the Compute API service endpoints:

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

Create a placement service user:

openstack user create --domain default --password-prompt placement

Add the placement user to the service project with the admin role:

openstack role add --project service --user placement admin

Create the Placement API entry in the service catalog:

openstack service create --name placement --description "Placement API" placement

Create the Placement API service endpoints:

openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778


17. Install and configure components (controller node)

Install the packages:

yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api

Edit /etc/nova/nova.conf:

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.43.199
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
connection = mysql+pymysql://nova:123456@controller/nova_api

[database]
connection = mysql+pymysql://nova:123456@controller/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 123456

Due to a packaging bug, the following configuration must be added to /etc/httpd/conf.d/00-nova-placement-api.conf:

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>

Restart the httpd service:

systemctl restart httpd

Populate the nova-api database:

su -s /bin/sh -c "nova-manage api_db sync" nova

Register the cell0 database:

su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

Create the cell1 cell:

su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

Populate the nova database:

su -s /bin/sh -c "nova-manage db sync" nova

Verify that the nova cell0 and cell1 cells are registered correctly:

nova-manage cell_v2 list_cells

Enable the services at boot and start them:

systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service  openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service  openstack-nova-consoleauth.service openstack-nova-scheduler.service  openstack-nova-conductor.service openstack-nova-novncproxy.service

18. Install and configure the compute node service (compute node)

Install the package:

yum install openstack-nova-compute

Edit /etc/nova/nova.conf:

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.43.74
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456

[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 123456

Enable the services at boot and start them:

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

19. Verify the Compute service on the controller node (controller node)

Add the compute node to the cell database; first check which compute hosts are already registered:

. admin-openrc
openstack compute service list --service nova-compute

Discover the compute hosts:

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

List the service components:

. admin-openrc
openstack compute service list

List the API endpoints in the Identity service to verify connectivity with the Identity service:

openstack catalog list

List the images:

openstack image list

Check that the cells and the Placement API are working properly:

nova-status upgrade check

20. Install and configure Neutron networking on the controller node (controller node)

Create the neutron database and grant privileges:

mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost'   IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%'   IDENTIFIED BY '123456';

Load the admin credentials and create the neutron user:

. admin-openrc
openstack user create --domain default --password-prompt neutron

Add the admin role to the neutron user:

openstack role add --project service --user neutron admin

Create the neutron service entity:

openstack service create --name neutron   --description "OpenStack Networking" network

Create the Networking service API endpoints:

openstack endpoint create --region RegionOne  network public http://controller:9696
openstack endpoint create --region RegionOne  network internal http://controller:9696
openstack endpoint create --region RegionOne  network admin http://controller:9696

21. Configure the networking components (controller node)

Install the components:

yum install openstack-neutron openstack-neutron-ml2  openstack-neutron-linuxbridge ebtables

Configure the server component; edit /etc/neutron/neutron.conf:

[database]
connection = mysql+pymysql://neutron:123456@controller/neutron

[DEFAULT]
auth_strategy = keystone
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:123456@controller
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

22. Configure the ML2 layer-2 plug-in (controller node)

Edit /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[securitygroup]
enable_ipset = true

23. Configure the Linux bridge agent (controller node)

Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

[linux_bridge]
physical_interface_mappings = provider:eno16777736

[vxlan]
enable_vxlan = false

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

24. Configure the DHCP agent (controller node)

Edit /etc/neutron/dhcp_agent.ini:

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

25. Configure the metadata agent (controller node)

Edit /etc/neutron/metadata_agent.ini:

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 123456

26. Configure the Compute service to use the Networking service (controller node)

Edit /etc/nova/nova.conf:

[neutron]
url =  http://controller:9696
auth_url =  http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456

27. Finalize the installation (controller node)

Create a symbolic link for the plug-in configuration:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Populate the database:

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf   --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Restart the Compute API service:

systemctl restart openstack-nova-api.service

Enable the networking services at boot and start them:

systemctl enable neutron-server.service   neutron-linuxbridge-agent.service neutron-dhcp-agent.service   neutron-metadata-agent.service
systemctl start neutron-server.service   neutron-linuxbridge-agent.service neutron-dhcp-agent.service   neutron-metadata-agent.service

28. Configure the networking service on the compute node (compute node)

Install the components:

yum install openstack-neutron-linuxbridge ebtables ipset

Configure the common component; edit /etc/neutron/neutron.conf:

[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:123456@controller

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

29. Configure the network (compute node)

Configure the Linux bridge agent; edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

[linux_bridge]
physical_interface_mappings = provider:eno16777736

[vxlan]
enable_vxlan = false

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

30. Configure the compute node to use the Networking service (compute node)

Edit /etc/nova/nova.conf:

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456

31. Finalize the installation (compute node)

Restart the Compute service:

systemctl restart openstack-nova-compute.service

Enable the Linux bridge agent at boot and start it:

systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service

32. Install the Horizon service on the controller node (controller node)

Install the package:

yum install openstack-dashboard -y

Edit /etc/openstack-dashboard/local_settings:

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']

Configure the memcached session storage:

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

Enable the Identity API version 3:

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

Enable support for domains:

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

Configure the API versions:

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}

To finish the installation, restart the web server and the session storage service:

systemctl restart httpd.service memcached.service

33. Access the OpenStack web dashboard

Domain: default
Username: admin
Password: 123456

Open http://192.168.43.199/dashboard/ (or http://controller/dashboard/) in a browser to access the OpenStack web dashboard.
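Before opening a browser, a quick check from the controller that Horizon is being served (a sketch, assuming the dashboard URL above):

curl -I http://controller/dashboard/
# an HTTP 200 response (or a redirect to the login page) means httpd is serving Horizon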

At this point, we have successfully built the OpenStack cloud computing environment!

Well done, everyone! That was great work, so go reward yourself with a big drumstick!!!

References:
Detailed deployment guide for the community OpenStack Queens release (main reference)
openstack-dashboard shows an error after login (I hit this problem and this solution fixed it)
OpenStack official documentation
OpenStack architecture build and explanation
OpenStack installation: preparing the base environment
Internal Server Error when logging in to the dashboard
