XianDian OpenStack Deployment

This deployment uses a two-node installation: a controller node and a compute node. enp8s0 is the internal management network and enp9s0 is the external network. When installing the operating system on the storage node, reserve two blank partitions (sda and sdb are used as examples) to serve as the cinder and swift storage disks. An FTP server is set up on the controller as the yum repository for building the cloud platform. Passwords in the configuration files must be set according to the actual environment.

1.1 Notes on Installing CentOS 7

[CentOS 7 version]
Use the 1804 release of CentOS 7: CentOS-7-x86_64-DVD-1804.iso
[Creating the blank partitions]
Installing CentOS 7 differs noticeably from installing CentOS 6.5. During CentOS 7 installation every partition must be given a mount point, which makes it impossible to leave two blank disk partitions to use as the cinder and swift storage disks.
Therefore, leave enough unallocated disk space during installation. After the system is installed, use parted to create the new partitions and mkfs.xfs to format them, producing the blank partitions. The commands are as follows:
[root@compute ~]# parted /dev/md126
(parted) mkpart swift 702G 803G //create the swift partition, from 702G to 803G
[root@compute ~]# mkfs.xfs /dev/md126p5
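For reference, a fuller sequence might look like the sketch below. The 601G/702G/803G offsets, the partition names, and the resulting partition numbers md126p4/md126p5 are assumptions and must be adapted to the actual disk; only the swift partition is formatted here, since the cinder partition is consumed by pvcreate later (section 7.10):
[root@compute ~]# parted /dev/md126
(parted) mkpart cinder 601G 702G //blank partition later used as the cinder disk (e.g. md126p4)
(parted) mkpart swift 702G 803G  //blank partition later used as the swift disk (e.g. md126p5)
(parted) quit
[root@compute ~]# mkfs.xfs /dev/md126p5
[root@compute ~]# lsblk          //confirm the new partitions are visible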
1.2 Configuring the Network and Hostname
Modify or create the /etc/sysconfig/network-scripts/ifcfg-enp* file (one per interface).
(1) controller node
Network configuration:
enp8s0: 192.168.100.10
DEVICE=enp8s0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=192.168.100.10
PREFIX=24
GATEWAY=192.168.100.1

enp9s0: 192.168.200.10
DEVICE=enp9s0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=192.168.200.10
PREFIX=24
Hostname configuration:

hostnamectl set-hostname controller

Press Ctrl+D to log out, then log back in.

(2) compute node
Network configuration:
enp8s0: 192.168.100.20
DEVICE=enp8s0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=192.168.100.20
PREFIX=24
GATEWAY=192.168.100.1

enp9s0: 192.168.200.20
DEVICE=enp9s0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=192.168.200.20
PREFIX=24

Hostname configuration:

hostnamectl set-hostname compute

Press Ctrl+D to log out, then log back in.

1.3 Configuring the yum Repositories
#Controller and compute nodes
(1) Back up the existing repository files
#mv /etc/yum.repos.d/* /opt/
(2) Create the repo file
[controller]
Create the centos.repo file in /etc/yum.repos.d:
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
[iaas]
name=iaas
baseurl=file:///opt/iaas-repo
gpgcheck=0
enabled=1

[compute]
Create the centos.repo file in /etc/yum.repos.d:
[centos]
name=centos
baseurl=ftp://192.168.100.10/centos
gpgcheck=0
enabled=1
[iaas]
name=iaas
baseurl=ftp://192.168.100.10/iaas-repo
gpgcheck=0
enabled=1

(3) Mount the ISO files
[Mount CentOS-7-x86_64-DVD-1804.iso]
[root@controller ~]# mount -o loop CentOS-7-x86_64-DVD-1804.iso /mnt/
[root@controller ~]# mkdir /opt/centos
[root@controller ~]# cp -rvf /mnt/* /opt/centos/
[root@controller ~]# umount /mnt/

[Mount XianDian-IaaS-v2.4.iso]
[root@controller ~]# mount -o loop XianDian-IaaS-v2.4.iso /mnt/
[root@controller ~]# cp -rvf /mnt/* /opt/
[root@controller ~]# umount /mnt/

(4) Set up the FTP server, start it, and enable it at boot
[root@controller ~]# yum install vsftpd -y
[root@controller ~]# vi /etc/vsftpd/vsftpd.conf
Add anon_root=/opt/
Save and exit.

[root@controller ~]# systemctl start vsftpd
[root@controller ~]# systemctl enable vsftpd

(5) Configure the firewall and SELinux
[controller/compute]
Edit the selinux config file

vi /etc/selinux/config

SELINUX=permissive
Stop the firewall and disable it at boot

systemctl stop firewalld.service

systemctl disable firewalld.service

yum remove -y NetworkManager firewalld

yum -y install iptables-services

systemctl enable iptables

systemctl restart iptables

iptables -F

iptables -X

iptables -Z

service iptables save

(6) Clean the cache and verify the yum repositories
[controller/compute]

yum clean all

yum list
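Optionally, yum repolist can be used as an extra check; both the centos and iaas repositories should appear with a non-zero package count:

yum repolist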

1.4 Editing the Environment Variables

controller and compute nodes

yum install iaas-xiandian -y

Edit the file /etc/xiandian/openrc.sh. It contains all the parameters used during installation; set each one according to its description and the actual server environment.
HOST_IP=192.168.100.10
HOST_PASS=000000
HOST_NAME=controller
HOST_IP_NODE=192.168.100.20
HOST_PASS_NODE=000000
HOST_NAME_NODE=compute
network_segment_IP=192.168.100.0/24
RABBIT_USER=openstack
RABBIT_PASS=000000
DB_PASS=000000
DOMAIN_NAME=demo
ADMIN_PASS=000000
DEMO_PASS=000000
KEYSTONE_DBPASS=000000
GLANCE_DBPASS=000000
GLANCE_PASS=000000
NOVA_DBPASS=000000
NOVA_PASS=000000
NEUTRON_DBPASS=000000
NEUTRON_PASS=000000
METADATA_SECRET=000000
INTERFACE_IP=192.168.100.10/192.168.100.20 (controller IP / compute IP)
INTERFACE_NAME=enp9s0 (name of the external network interface)
Physical_NAME=provider (name of the external provider network)
minvlan=101 (first VLAN ID of the VLAN range)
maxvlan=200 (last VLAN ID of the VLAN range)
CINDER_DBPASS=000000
CINDER_PASS=000000
BLOCK_DISK=md126p4 (blank partition)
SWIFT_PASS=000000
OBJECT_DISK=md126p5 (blank partition)
STORAGE_LOCAL_NET_IP=192.168.100.20
HEAT_DBPASS=000000
HEAT_PASS=000000
ZUN_DBPASS=000000
ZUN_PASS=000000
KURYR_DBPASS=000000
KURYR_PASS=000000
CEILOMETER_DBPASS=000000
CEILOMETER_PASS=000000
AODH_DBPASS=000000
AODH_PASS=000000
1.5 Installing the Services with a Script
The basic configuration steps in sections 1.6-1.8 have been packaged into a shell script for one-step installation, as follows:

Controller node and Compute node

Run the script iaas-pre-host.sh to install:
[root@controller ~]# iaas-pre-host.sh

After the installation completes, reboot both nodes at the same time.

[root@controller ~]# reboot
1.6 Installing the OpenStack Packages

controller and compute nodes

yum -y install openstack-utils openstack-selinux python-openstackclient

yum upgrade

1.7 Configuring Name Resolution
Modify /etc/hosts and add the following entries
(1) controller node
192.168.100.10 controller
192.168.100.20 compute
(2) compute node
192.168.100.10 controller
192.168.100.20 compute
1.8 Installing the chrony Service
(1) controller and compute nodes

yum install -y chrony

(2) Configure the controller node
Edit the /etc/chrony.conf file
Add the following lines (and delete the default server entries)
server controller iburst
allow 192.168.100.0/24
local stratum 10
Start the NTP (chrony) service

systemctl restart chronyd

systemctl enable chronyd

(3) Configure the compute node
Edit the /etc/chrony.conf file
Add the following line (and delete the default server entries)
server controller iburst
Start the NTP (chrony) service

systemctl restart chronyd

systemctl enable chronyd
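To confirm that time synchronization is working (optional check), run the following on the compute node; the controller should be listed as a time source:

chronyc sources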

1.9 Installing the Database Services with a Script
The basic service steps in sections 1.10-1.13 have been packaged into a shell script for one-step installation, as follows:

Controller node

Run the script iaas-install-mysql.sh to install.

1.10 Installing the MySQL Database Service
(1) Install the MySQL service

yum install -y mariadb mariadb-server python2-PyMySQL

(2) Modify the MySQL configuration parameters
Add the following under [mysqld] in /etc/my.cnf
max_connections=10000
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
(3) Start the service
#systemctl enable mariadb.service
#systemctl start mariadb.service
(4) Modify the /usr/lib/systemd/system/mariadb.service file
[Service]
Add the following two new parameters:
LimitNOFILE=10000
LimitNPROC=10000
(5) Modify the /etc/my.cnf.d/auth_gssapi.cnf file
[mariadb]
Comment out the following parameter:
#plugin-load-add=auth_gssapi.so
(6) Reload the systemd units and restart the mariadb service

systemctl daemon-reload

service mariadb restart

(7) Configure MySQL

mysql_secure_installation

Press Enter to confirm, then set the database root password
Remove anonymous users? [Y/n] y
Disallow root login remotely? [Y/n] n
Remove test database and access to it? [Y/n] y
Reload privilege tables now? [Y/n] y
(8) compute node
#yum -y install MySQL-python
1.11 Installing the RabbitMQ Service

yum install -y rabbitmq-server

systemctl enable rabbitmq-server.service

systemctl restart rabbitmq-server.service

rabbitmqctl add_user $RABBIT_USER $RABBIT_PASS

rabbitmqctl set_permissions $RABBIT_USER ".*" ".*" ".*"
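An optional check that the account and its permissions were created:

rabbitmqctl list_users
rabbitmqctl list_permissions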

1.12 Installing the memcached Service

yum install memcached python-memcached

systemctl enable memcached.service

systemctl restart memcached.service

1.13 Installing the etcd Service

yum install etcd -y

(1) Modify the /etc/etcd/etcd.conf configuration file and add the following:
ETCD_LISTEN_PEER_URLS="http://192.168.100.10:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.100.10:2379"
ETCD_NAME="controller"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.100.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.100.10:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.100.10:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
(2) Start the service

systemctl start etcd

systemctl enable etcd

2 Installing the Keystone Identity Service
#Controller
2.1 Installing the keystone Service with a Script
The identity service steps in sections 2.2-2.10 have been packaged into a shell script for one-step installation, as follows:

Controller node

Run the script iaas-install-keystone.sh to install.

2.2 Install the keystone service packages
yum install -y openstack-keystone httpd mod_wsgi
2.3 Create the Keystone database

mysql -u root -p    (the password is the database root password set when installing MySQL)
mysql> CREATE DATABASE keystone;
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
mysql> exit

2.4 Configure the database connection

crudini --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:$KEYSTONE_DBPASS@$HOST_NAME/keystone

2.5 Create the database tables for the keystone service

su -s /bin/sh -c "keystone-manage db_sync" keystone

2.6 Create the token
#ADMIN_TOKEN=$(openssl rand -hex 10)

crudini --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN

crudini --set /etc/keystone/keystone.conf token provider fernet

2.7 Create the signing keys and certificates
#keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
#keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
In the /etc/httpd/conf/httpd.conf configuration file, replace ServerName www.example.com:80 with ServerName controller.
Create the /etc/httpd/conf.d/wsgi-keystone.conf file with the following content:
Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    LimitRequestBody 114688
    <IfVersion >= 2.4>
        ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone.log
    CustomLog /var/log/httpd/keystone_access.log combined

    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    LimitRequestBody 114688
    <IfVersion >= 2.4>
        ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone.log
    CustomLog /var/log/httpd/keystone_access.log combined

    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>

Alias /identity /usr/bin/keystone-wsgi-public
<Location /identity>
    SetHandler wsgi-script
    Options +ExecCGI

    WSGIProcessGroup keystone-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
</Location>

Alias /identity_admin /usr/bin/keystone-wsgi-admin
<Location /identity_admin>
    SetHandler wsgi-script
    Options +ExecCGI

    WSGIProcessGroup keystone-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
</Location>

#systemctl enable httpd.service
#systemctl start httpd.service

2.8 Define users, projects, and roles
(1) Set the environment variables

export OS_TOKEN=$ADMIN_TOKEN
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3

(2) Create the keystone service entry and endpoints

openstack service create --name keystone --description "OpenStack Identity" identity

openstack endpoint create --region RegionOne identity public http://$HOST_NAME:5000/v3

openstack endpoint create --region RegionOne identity internal http://$HOST_NAME:5000/v3

openstack endpoint create --region RegionOne identity admin http://$HOST_NAME:35357/v3

openstack domain create --description "Default Domain" $DOMAIN_NAME
openstack project create --domain $DOMAIN_NAME --description "Admin Project" admin
openstack user create --domain $DOMAIN_NAME --password $ADMIN_PASS admin

openstack role create admin
openstack role add --project admin --user admin admin

openstack project create --domain $DOMAIN_NAME --description "Service Project" service
openstack project create --domain $DOMAIN_NAME --description "Demo Project" demo

openstack user create --domain $DOMAIN_NAME --password $DEMO_PASS demo
openstack role create user
openstack role add --project demo --user demo user

(3) Clear the environment variables
#unset OS_TOKEN OS_URL

2.9 Create demo-openrc.sh
Create the demo environment file demo-openrc.sh:
export OS_PROJECT_DOMAIN_NAME=$DOMAIN_NAME
export OS_USER_DOMAIN_NAME=$DOMAIN_NAME
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=$DEMO_PASS
export OS_AUTH_URL=http://$HOST_NAME:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

2.10 Create admin-openrc.sh
Create the admin environment file admin-openrc.sh:
export OS_PROJECT_DOMAIN_NAME=$DOMAIN_NAME
export OS_USER_DOMAIN_NAME=$DOMAIN_NAME
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=$ADMIN_PASS
export OS_AUTH_URL=http://$HOST_NAME:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Load the environment variables
#source admin-openrc.sh
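With the admin credentials loaded, an optional sanity check of the identity service is to request a token and list the users created above:

openstack token issue
openstack user list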
3 Installing the Glance Image Service
#Controller
3.1 Installing the glance Service with a Script
The image service steps in sections 3.2-3.9 have been packaged into a shell script for one-step installation, as follows:

Controller node

Run the script iaas-install-glance.sh to install
3.2 Install the Glance image service packages

yum install -y openstack-glance

3.3 Create the Glance database
#mysql -u root -p
mysql> CREATE DATABASE glance;
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
3.4 Configure the database connection

crudini --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:$GLANCE_DBPASS@$HOST_NAME/glance

crudini --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:$GLANCE_DBPASS@$HOST_NAME/glance

3.5 Create the database tables for the image service

su -s /bin/sh -c "glance-manage db_sync" glance

3.6 Create the user

openstack user create --domain $DOMAIN_NAME --password $GLANCE_PASS glance

openstack role add --project service --user glance admin

3.7 Configure the image service

crudini --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:$GLANCE_DBPASS@$HOST_NAME/glance

crudini --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://$HOST_NAME:5000

crudini --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://$HOST_NAME:5000

crudini --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers $HOST_NAME:11211

crudini --set /etc/glance/glance-api.conf keystone_authtoken auth_type password

crudini --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name $DOMAIN_NAME

crudini --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name $DOMAIN_NAME

crudini --set /etc/glance/glance-api.conf keystone_authtoken project_name service

crudini --set /etc/glance/glance-api.conf keystone_authtoken username glance

crudini --set /etc/glance/glance-api.conf keystone_authtoken password $GLANCE_PASS

crudini --set /etc/glance/glance-api.conf paste_deploy flavor keystone

crudini --set /etc/glance/glance-api.conf glance_store stores file,http

crudini --set /etc/glance/glance-api.conf glance_store default_store file

crudini --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/

crudini --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:$GLANCE_DBPASS@$HOST_NAME/glance

crudini --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://$HOST_NAME:5000

crudini --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://$HOST_NAME:5000

crudini --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers $HOST_NAME:11211

crudini --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password

crudini --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name $DOMAIN_NAME

crudini --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name $DOMAIN_NAME

crudini --set /etc/glance/glance-registry.conf keystone_authtoken project_name service

crudini --set /etc/glance/glance-registry.conf keystone_authtoken username glance

crudini --set /etc/glance/glance-registry.conf keystone_authtoken password $GLANCE_PASS

crudini --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

3.8 Create the Endpoint and API endpoints

openstack service create --name glance --description "OpenStack Image" image

openstack endpoint create --region RegionOne image public http://$HOST_NAME:9292

openstack endpoint create --region RegionOne image internal http://$HOST_NAME:9292

openstack endpoint create --region RegionOne image admin http://$HOST_NAME:9292

3.9 Start the services
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
3.10 Upload an image
First download the provided system image to the local machine (with wget); this example uploads the CentOS_7.5_x86_64 image.
You can install wget and download the image from the FTP server to the local machine.

source admin-openrc.sh

glance image-create --name "CentOS7.5" --disk-format qcow2 --container-format bare --progress < /opt/images/CentOS_7.5_x86_64_XD.qcow2
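When the upload completes, the image should be listed with status "active" (optional check):

glance image-list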

4 Installing the Nova Compute Service
#Controller
4.1 Installing the nova Service with a Script
The compute service steps in sections 4.2-4.15 have been packaged into a shell script for one-step installation, as follows:
#Controller node
Run the script iaas-install-nova-controller.sh to install
#Compute node
Run the script iaas-install-nova-compute.sh to install

4.2 Install the Nova compute service packages

yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api -y

4.3 Create the Nova databases

mysql -u root -p

mysql> CREATE DATABASE nova;
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
mysql> create database IF NOT EXISTS nova_api;
mysql> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
mysql> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
mysql> create database IF NOT EXISTS nova_cell0;
mysql> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
mysql> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Modify the database connection

crudini --set /etc/nova/nova.conf database connection mysql+pymysql://nova:$NOVA_DBPASS@$HOST_NAME/nova

crudini --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:$NOVA_DBPASS@$HOST_NAME/nova_api

4.4 Create the database tables for the compute service

su -s /bin/sh -c "nova-manage api_db sync" nova

su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

su -s /bin/sh -c "nova-manage db sync" nova

4.5 Create the user

openstack user create --domain $DOMAIN_NAME --password $NOVA_PASS nova

openstack role add --project service --user nova admin

4.6 Configure the compute service

crudini --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata

crudini --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:$NOVA_DBPASS@$HOST_NAME

crudini --set /etc/nova/nova.conf DEFAULT my_ip $HOST_IP

crudini --set /etc/nova/nova.conf DEFAULT use_neutron True

crudini --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver

crudini --set /etc/nova/nova.conf api auth_strategy keystone

crudini --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:$NOVA_DBPASS@$HOST_NAME/nova_api

crudini --set /etc/nova/nova.conf database connection mysql+pymysql://nova:$NOVA_DBPASS@$HOST_NAME/nova

crudini --set /etc/nova/nova.conf keystone_authtoken auth_url http://$HOST_NAME:5000/v3

crudini --set /etc/nova/nova.conf keystone_authtoken memcached_servers $HOST_NAME:11211

crudini --set /etc/nova/nova.conf keystone_authtoken auth_type password

crudini --set /etc/nova/nova.conf keystone_authtoken project_domain_name $DOMAIN_NAME

crudini --set /etc/nova/nova.conf keystone_authtoken user_domain_name $DOMAIN_NAME

crudini --set /etc/nova/nova.conf keystone_authtoken project_name service

crudini --set /etc/nova/nova.conf keystone_authtoken username nova

crudini --set /etc/nova/nova.conf keystone_authtoken password $NOVA_PASS

crudini --set /etc/nova/nova.conf vnc enabled true

crudini --set /etc/nova/nova.conf vnc server_listen $HOST_IP

crudini --set /etc/nova/nova.conf vnc server_proxyclient_address $HOST_IP

crudini --set /etc/nova/nova.conf glance api_servers http://$HOST_NAME:9292

crudini --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp

crudini --set /etc/nova/nova.conf placement os_region_name RegionOne

crudini --set /etc/nova/nova.conf placement project_domain_name $DOMAIN_NAME

crudini --set /etc/nova/nova.conf placement project_name service

crudini --set /etc/nova/nova.conf placement auth_type password

crudini --set /etc/nova/nova.conf placement user_domain_name $DOMAIN_NAME

crudini --set /etc/nova/nova.conf placement auth_url http://$HOST_NAME:5000/v3

crudini --set /etc/nova/nova.conf placement username placement

crudini --set /etc/nova/nova.conf placement password $NOVA_PASS

4.7 Create the Endpoint and API endpoints

openstack service create --name nova --description "OpenStack Compute" compute

openstack endpoint create --region RegionOne compute public http://$HOST_NAME:8774/v2.1

openstack endpoint create --region RegionOne compute internal http://$HOST_NAME:8774/v2.1

openstack endpoint create --region RegionOne compute admin http://$HOST_NAME:8774/v2.1

openstack user create --domain $DOMAIN_NAME --password $NOVA_PASS placement

openstack role add --project service --user placement admin

openstack service create --name placement --description "Placement API" placement

openstack endpoint create --region RegionOne placement public http://$HOST_NAME:8778

openstack endpoint create --region RegionOne placement internal http://$HOST_NAME:8778

openstack endpoint create --region RegionOne placement admin http://$HOST_NAME:8778

4.8 Add configuration
Add the following to the /etc/httpd/conf.d/00-nova-placement-api.conf file
<Directory /usr/bin>
    <IfVersion >= 2.4>
        Require all granted
    </IfVersion>
    <IfVersion < 2.4>
        Order allow,deny
        Allow from all
    </IfVersion>
</Directory>

4.9 Start the services

systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl restart httpd memcached

4.10 Verify that the Nova databases were created successfully

nova-manage cell_v2 list_cells
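A broader check (optional) is to list the compute services on the controller; nova-consoleauth, nova-scheduler and nova-conductor should all report state "up":

openstack compute service list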

#Compute
4.11 Install the Nova compute service packages

yum install openstack-nova-compute -y

4.12 Configure the Nova service

crudini --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata

crudini --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:$NOVA_DBPASS@$HOST_NAME

crudini --set /etc/nova/nova.conf DEFAULT my_ip $HOST_IP_NODE

crudini --set /etc/nova/nova.conf DEFAULT use_neutron True

crudini --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver

crudini --set /etc/nova/nova.conf api auth_strategy keystone

crudini --set /etc/nova/nova.conf keystone_authtoken auth_url http://$HOST_NAME:5000/v3

crudini --set /etc/nova/nova.conf keystone_authtoken memcached_servers $HOST_NAME:11211

crudini --set /etc/nova/nova.conf keystone_authtoken auth_type password

crudini --set /etc/nova/nova.conf keystone_authtoken project_domain_name $DOMAIN_NAME

crudini --set /etc/nova/nova.conf keystone_authtoken user_domain_name $DOMAIN_NAME

crudini --set /etc/nova/nova.conf keystone_authtoken project_name service

crudini --set /etc/nova/nova.conf keystone_authtoken username nova

crudini --set /etc/nova/nova.conf keystone_authtoken password $NOVA_PASS

crudini --set /etc/nova/nova.conf vnc enabled True

crudini --set /etc/nova/nova.conf vnc server_listen 0.0.0.0

crudini --set /etc/nova/nova.conf vnc server_proxyclient_address $HOST_IP_NODE

crudini --set /etc/nova/nova.conf vnc novncproxy_base_url http://$HOST_IP:6080/vnc_auto.html

crudini --set /etc/nova/nova.conf glance api_servers http://$HOST_NAME:9292

crudini --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp

crudini --set /etc/nova/nova.conf placement os_region_name RegionOne

crudini --set /etc/nova/nova.conf placement project_domain_name $DOMAIN_NAME

crudini --set /etc/nova/nova.conf placement project_name service

crudini --set /etc/nova/nova.conf placement auth_type password

crudini --set /etc/nova/nova.conf placement user_domain_name $DOMAIN_NAME

crudini --set /etc/nova/nova.conf placement auth_url http://$HOST_NAME:5000/v3

crudini --set /etc/nova/nova.conf placement username placement

crudini --set /etc/nova/nova.conf placement password $NOVA_PASS

4.13 Check whether the processor supports hardware acceleration for virtual machines
Run the command
#egrep -c '(vmx|svm)' /proc/cpuinfo
(1) If the command returns 1 or greater, the system supports hardware acceleration and normally needs no extra configuration.
(2) If the command returns 0, the system does not support hardware acceleration; configure libvirt to use QEMU instead of KVM:

crudini --set /etc/nova/nova.conf libvirt virt_type qemu

4.14 Start the services
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
4.15 Add the compute node
#controller

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
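If discovery succeeded, the compute node should now show up as a hypervisor (optional check):

openstack hypervisor list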

5 Installing the Neutron Network Service
#Controller node
5.1 Installing the neutron Service with a Script
The network service steps in sections 5.2-5.11 have been packaged into a shell script for one-step installation, as follows:
#Controller node
Run the script iaas-install-neutron-controller.sh to install
#Compute node
Run the script iaas-install-neutron-compute.sh to install

5.2 Create the Neutron database
#mysql -u root -p
mysql> CREATE DATABASE neutron;
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
5.3 Create the user

openstack user create --domain $DOMAIN_NAME --password $NEUTRON_PASS neutron

openstack role add --project service --user neutron admin

5.4 Create the Endpoint and API endpoints

openstack service create --name neutron --description "OpenStack Networking" network

openstack endpoint create --region RegionOne network public http://$HOST_NAME:9696

openstack endpoint create --region RegionOne network internal http://$HOST_NAME:9696

openstack endpoint create --region RegionOne network admin http://$HOST_NAME:9696

5.5 Install the neutron network service packages

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

5.6 Configure the Neutron service

crudini --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2

crudini --set /etc/neutron/neutron.conf DEFAULT service_plugins router

crudini --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips true

crudini --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:$NEUTRON_DBPASS@$HOST_NAME

crudini --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone

crudini --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes true

crudini --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes true

crudini --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:$NEUTRON_DBPASS@$HOST_NAME/neutron

crudini --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://$HOST_NAME:5000

crudini --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://$HOST_NAME:35357

crudini --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers $HOST_NAME:11211

crudini --set /etc/neutron/neutron.conf keystone_authtoken auth_type password

crudini --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name $DOMAIN_NAME

crudini --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name $DOMAIN_NAME

crudini --set /etc/neutron/neutron.conf keystone_authtoken project_name service

crudini --set /etc/neutron/neutron.conf keystone_authtoken username neutron

crudini --set /etc/neutron/neutron.conf keystone_authtoken password $NEUTRON_PASS

crudini --set /etc/neutron/neutron.conf nova auth_url http://$HOST_NAME:35357

crudini --set /etc/neutron/neutron.conf nova auth_type password

crudini --set /etc/neutron/neutron.conf nova project_domain_name $DOMAIN_NAME

crudini --set /etc/neutron/neutron.conf nova user_domain_name $DOMAIN_NAME

crudini --set /etc/neutron/neutron.conf nova region_name RegionOne

crudini --set /etc/neutron/neutron.conf nova project_name service

crudini --set /etc/neutron/neutron.conf nova username nova

crudini --set /etc/neutron/neutron.conf nova password $NOVA_PASS

crudini --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan

crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan

crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population

crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security

crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks $Physical_NAME

crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vlan network_vlan_ranges $Physical_NAME:$minvlan:$maxvlan

crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges $minvlan:$maxvlan

crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset true

crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings $Physical_NAME:$INTERFACE_NAME

crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true

crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip $INTERFACE_IP

crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true

crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true

crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

crudini --set /etc/neutron/l3_agent.ini DEFAULT interface_driver linuxbridge

crudini --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge

crudini --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq

crudini --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true

#/etc/neutron/metadata_agent.ini

crudini --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host $HOST_NAME

crudini --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret $METADATA_SECRET

crudini --set /etc/nova/nova.conf neutron url http://$HOST_NAME:9696

crudini --set /etc/nova/nova.conf neutron auth_url http://$HOST_NAME:35357

crudini --set /etc/nova/nova.conf neutron auth_type password

crudini --set /etc/nova/nova.conf neutron project_domain_name $DOMAIN_NAME

crudini --set /etc/nova/nova.conf neutron user_domain_name $DOMAIN_NAME

crudini --set /etc/nova/nova.conf neutron region_name RegionOne

crudini --set /etc/nova/nova.conf neutron project_name service

crudini --set /etc/nova/nova.conf neutron username neutron

crudini --set /etc/nova/nova.conf neutron password $NEUTRON_PASS

crudini --set /etc/nova/nova.conf neutron service_metadata_proxy true

crudini --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret $METADATA_SECRET

5.7 Create the database tables

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

5.8 Start the services and create the bridges
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

#Compute node
5.9 Install the packages

yum install openstack-neutron-linuxbridge ebtables ipset net-tools -y

5.10 Configure the Neutron service

crudini --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:$NEUTRON_DBPASS@$HOST_NAME

crudini --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone

crudini --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://$HOST_NAME:5000

crudini --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://$HOST_NAME:35357

crudini --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers $HOST_NAME:11211

crudini --set /etc/neutron/neutron.conf keystone_authtoken auth_type password

crudini --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name $DOMAIN_NAME

crudini --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name $DOMAIN_NAME

crudini --set /etc/neutron/neutron.conf keystone_authtoken project_name service

crudini --set /etc/neutron/neutron.conf keystone_authtoken username neutron

crudini --set /etc/neutron/neutron.conf keystone_authtoken password $NEUTRON_PASS

crudini --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:$INTERFACE_NAME

crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true

crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip $INTERFACE_IP

crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true

crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true

crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

crudini --set /etc/nova/nova.conf neutron url http://$HOST_NAME:9696

crudini --set /etc/nova/nova.conf neutron auth_url http://$HOST_NAME:35357

crudini --set /etc/nova/nova.conf neutron auth_type password

crudini --set /etc/nova/nova.conf neutron project_domain_name $DOMAIN_NAME

crudini --set /etc/nova/nova.conf neutron user_domain_name $DOMAIN_NAME

crudini --set /etc/nova/nova.conf neutron region_name RegionOne

crudini --set /etc/nova/nova.conf neutron project_name service

crudini --set /etc/nova/nova.conf neutron username neutron

crudini --set /etc/nova/nova.conf neutron password $NEUTRON_PASS

5.11 Start the services, which creates the bridges

systemctl restart openstack-nova-compute.service

systemctl start neutron-linuxbridge-agent.service

systemctl enable neutron-linuxbridge-agent.service
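Back on the controller, the agents on both nodes should now be registered and alive (optional check):

openstack network agent list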

6 Installing the Dashboard Service
6.1 Installing the dashboard Service with a Script
The dashboard steps in sections 6.2-6.4 have been packaged into a shell script for one-step installation, as follows:
#Controller
Run the script iaas-install-dashboard.sh to install

6.2 Install the Dashboard service packages

yum install openstack-dashboard -y

6.3 Configure the dashboard
Modify /etc/openstack-dashboard/local_settings so that its contents are as follows:
import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard.settings import HORIZON_CONFIG
DEBUG = False
WEBROOT = '/dashboard/'
ALLOWED_HOSTS = ['*', 'two.example.com']
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
LOCAL_PATH = '/tmp'
SECRET_KEY='31880d3983dd796f54c8'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
},
}
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = “user”
OPENSTACK_KEYSTONE_BACKEND = {
‘name’: ‘native’,
‘can_edit_user’: True,
‘can_edit_group’: True,
‘can_edit_project’: True,
‘can_edit_domain’: True,
‘can_edit_role’: True,
}
OPENSTACK_HYPERVISOR_FEATURES = {
‘can_set_mount_point’: False,
‘can_set_password’: False,
‘requires_keypair’: False,
‘enable_quotas’: True
}
OPENSTACK_CINDER_FEATURES = {
‘enable_backup’: False,
}
OPENSTACK_NEUTRON_NETWORK = {
‘enable_router’: True,
‘enable_quotas’: True,
‘enable_ipv6’: True,
‘enable_distributed_router’: False,
‘enable_ha_router’: False,
‘enable_fip_topology_check’: True,
'supported_vnic_types': ['*'],
‘physical_networks’: [],
}
OPENSTACK_HEAT_STACK = {
‘enable_user_pass’: True,
}
IMAGE_CUSTOM_PROPERTY_TITLES = {
“architecture”: _(“Architecture”),
“kernel_id”: _(“Kernel ID”),
“ramdisk_id”: _(“Ramdisk ID”),
“image_state”: _(“Euca2ools state”),
“project_id”: _(“Project ID”),
“image_type”: _(“Image Type”),
}
IMAGE_RESERVED_CUSTOM_PROPERTIES = []
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20
SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024
INSTANCE_LOG_LENGTH = 35
DROPDOWN_MAX_ITEMS = 30
TIME_ZONE = “UTC”
POLICY_FILES_PATH = ‘/etc/openstack-dashboard’
LOGGING = {
‘version’: 1,
‘disable_existing_loggers’: False,
‘formatters’: {
‘console’: {
‘format’: ‘%(levelname)s %(name)s %(message)s’
},
‘operation’: {
‘format’: ‘%(message)s’
},
},
‘handlers’: {
‘null’: {
‘level’: ‘DEBUG’,
‘class’: ‘logging.NullHandler’,
},
‘console’: {
‘level’: ‘INFO’,
‘class’: ‘logging.StreamHandler’,
‘formatter’: ‘console’,
},
‘operation’: {
‘level’: ‘INFO’,
‘class’: ‘logging.StreamHandler’,
‘formatter’: ‘operation’,
},
},
‘loggers’: {
‘horizon’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘horizon.operation_log’: {
‘handlers’: [‘operation’],
‘level’: ‘INFO’,
‘propagate’: False,
},
‘openstack_dashboard’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘novaclient’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘cinderclient’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘keystoneauth’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘keystoneclient’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘glanceclient’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘neutronclient’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘swiftclient’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘oslo_policy’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘openstack_auth’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘nose.plugins.manager’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘django’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘django.db.backends’: {
‘handlers’: [‘null’],
‘propagate’: False,
},
‘requests’: {
‘handlers’: [‘null’],
‘propagate’: False,
},
‘urllib3’: {
‘handlers’: [‘null’],
‘propagate’: False,
},
‘chardet.charsetprober’: {
‘handlers’: [‘null’],
‘propagate’: False,
},
‘iso8601’: {
‘handlers’: [‘null’],
‘propagate’: False,
},
‘scss’: {
‘handlers’: [‘null’],
‘propagate’: False,
},
},
}
SECURITY_GROUP_RULES = {
‘all_tcp’: {
‘name’: _(‘All TCP’),
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘1’,
‘to_port’: ‘65535’,
},
‘all_udp’: {
‘name’: _(‘All UDP’),
‘ip_protocol’: ‘udp’,
‘from_port’: ‘1’,
‘to_port’: ‘65535’,
},
‘all_icmp’: {
‘name’: _(‘All ICMP’),
‘ip_protocol’: ‘icmp’,
‘from_port’: ‘-1’,
‘to_port’: ‘-1’,
},
‘ssh’: {
‘name’: ‘SSH’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘22’,
‘to_port’: ‘22’,
},
‘smtp’: {
‘name’: ‘SMTP’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘25’,
‘to_port’: ‘25’,
},
‘dns’: {
‘name’: ‘DNS’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘53’,
‘to_port’: ‘53’,
},
‘http’: {
‘name’: ‘HTTP’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘80’,
‘to_port’: ‘80’,
},
‘pop3’: {
‘name’: ‘POP3’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘110’,
‘to_port’: ‘110’,
},
‘imap’: {
‘name’: ‘IMAP’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘143’,
‘to_port’: ‘143’,
},
‘ldap’: {
‘name’: ‘LDAP’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘389’,
‘to_port’: ‘389’,
},
‘https’: {
‘name’: ‘HTTPS’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘443’,
‘to_port’: ‘443’,
},
‘smtps’: {
‘name’: ‘SMTPS’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘465’,
‘to_port’: ‘465’,
},
‘imaps’: {
‘name’: ‘IMAPS’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘993’,
‘to_port’: ‘993’,
},
‘pop3s’: {
‘name’: ‘POP3S’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘995’,
‘to_port’: ‘995’,
},
‘ms_sql’: {
‘name’: ‘MS SQL’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘1433’,
‘to_port’: ‘1433’,
},
‘mysql’: {
‘name’: ‘MYSQL’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘3306’,
‘to_port’: ‘3306’,
},
‘rdp’: {
‘name’: ‘RDP’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘3389’,
‘to_port’: ‘3389’,
},
}
REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES',
'LAUNCH_INSTANCE_DEFAULTS',
'OPENSTACK_IMAGE_FORMATS',
'OPENSTACK_KEYSTONE_DEFAULT_DOMAIN',
'CREATE_IMAGE_DEFAULTS',
'ENFORCE_PASSWORD_CHECK']
ALLOWED_PRIVATE_SUBNET_CIDR = {'ipv4': [], 'ipv6': []}
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
6.4 Start the services

systemctl restart httpd.service memcached.service

6.5 Access the dashboard
Open a browser and go to the Dashboard at
http://controller/dashboard (or use the node's internal IP in place of controller)
Note: check the firewall rules and make sure the HTTP ports are allowed through, or disable the firewall.

6.6 Create a cloud instance
(1) Admin -> Resource Management -> Instance Types -> Create Instance Type (flavor)

(2) Admin -> Network -> Networks -> Create Network

(3) Project -> Network -> Security Groups -> Manage Rules -> Add Rule (ICMP, TCP, UDP)

(4) Project -> Resource Management -> Instances -> Launch Instance
The same workflow can also be driven from the command line, as sketched below.
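A roughly equivalent CLI sequence is sketched here; the flavor size, the network/subnet names and range, the security group name, and the instance name are illustrative assumptions, while the image name matches the CentOS7.5 image uploaded in section 3.10:

source admin-openrc.sh
openstack flavor create --vcpus 1 --ram 1024 --disk 10 m1.small
openstack network create net1
openstack subnet create --network net1 --subnet-range 10.0.0.0/24 subnet1
openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default
openstack server create --flavor m1.small --image CentOS7.5 --network net1 vm1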
7 Installing the Cinder Block Storage Service
7.1 Installing the Cinder Service with a Script
The block storage service steps in sections 7.2-7.12 have been packaged into a shell script for one-step installation, as follows:
#Controller
Run the script iaas-install-cinder-controller.sh to install
#Compute node
Run the script iaas-install-cinder-compute.sh to install

7.2 Install the Cinder block storage service packages

yum install openstack-cinder

7.3 Create the database

mysql -u root -p

mysql> CREATE DATABASE cinder;
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
7.4 Create the user

openstack user create --domain $DOMAIN_NAME --password $CINDER_PASS cinder

openstack role add --project service --user cinder admin

7.5 Create the Endpoints and API endpoints

openstack service create --name cinder --description "OpenStack Block Store" volume

openstack service create --name cinderv2 --description "OpenStack Block Store" volumev2

openstack service create --name cinderv3 --description "OpenStack Block Store" volumev3

openstack endpoint create --region RegionOne volume public http://$HOST_NAME:8776/v1/%(tenant_id)s

openstack endpoint create --region RegionOne volume internal http://$HOST_NAME:8776/v1/%(tenant_id)s

openstack endpoint create --region RegionOne volume admin http://$HOST_NAME:8776/v1/%(tenant_id)s

openstack endpoint create --region RegionOne volumev2 public http://$HOST_NAME:8776/v2/%(tenant_id)s

openstack endpoint create --region RegionOne volumev2 internal http://$HOST_NAME:8776/v2/%(tenant_id)s

openstack endpoint create --region RegionOne volumev2 admin http://$HOST_NAME:8776/v2/%(tenant_id)s

openstack endpoint create --region RegionOne volumev3 public http://$HOST_NAME:8776/v3/%(tenant_id)s

#openstack endpoint create --region RegionOne volumev3 internal http://$HOST_NAME:8776/v3/%(tenant_id)s

openstack endpoint create --region RegionOne volumev3 admin http://$HOST_NAME:8776/v3/%(tenant_id)s

7.6 Configure the Cinder service

crudini --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:$CINDER_DBPASS@$HOST_NAME/cinder

crudini --set /etc/cinder/cinder.conf DEFAULT rpc_backend rabbit

crudini --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_host $HOST_NAME

crudini --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_userid $RABBIT_USER

crudini --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_password $RABBIT_PASS

crudini --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone

crudini --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://$HOST_NAME:5000

crudini --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://$HOST_NAME:35357

crudini --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers $HOST_NAME:11211

crudini --set /etc/cinder/cinder.conf keystone_authtoken auth_type password

crudini --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name $DOMAIN_NAME

crudini --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name $DOMAIN_NAME

crudini --set /etc/cinder/cinder.conf keystone_authtoken project_name service

crudini --set /etc/cinder/cinder.conf keystone_authtoken username cinder

crudini --set /etc/cinder/cinder.conf keystone_authtoken password $CINDER_PASS

crudini --set /etc/cinder/cinder.conf DEFAULT my_ip $HOST_IP

crudini --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp

crudini --set /etc/nova/nova.conf cinder os_region_name RegionOne

7.7 Create the database tables

su -s /bin/sh -c "cinder-manage db sync" cinder

7.8 Start the services

systemctl restart openstack-nova-api.service

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service

7.9 Install the block storage packages
#compute

yum install lvm2 device-mapper-persistent-data openstack-cinder targetcli python-keystone -y

systemctl enable lvm2-lvmetad.service

systemctl restart lvm2-lvmetad.service

7.10 Create the LVM physical volume and volume group
Using the disk /dev/sda as an example

pvcreate -f /dev/sda

vgcreate cinder-volumes /dev/sda

7.11 Modify the Cinder configuration file

crudini --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:$CINDER_DBPASS@$HOST_NAME/cinder

crudini --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

crudini --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone

crudini --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm

crudini --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://$HOST_NAME:5000

crudini --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://$HOST_NAME:35357

crudini --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers $HOST_NAME:11211

crudini --set /etc/cinder/cinder.conf keystone_authtoken auth_type password

crudini --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name $DOMAIN_NAME

crudini --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name $DOMAIN_NAME

crudini --set /etc/cinder/cinder.conf keystone_authtoken project_name service

crudini --set /etc/cinder/cinder.conf keystone_authtoken username cinder

crudini --set /etc/cinder/cinder.conf keystone_authtoken password $CINDER_PASS

crudini --set /etc/cinder/cinder.conf DEFAULT my_ip $HOST_IP_NODE

crudini --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver

crudini --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes

crudini --set /etc/cinder/cinder.conf lvm iscsi_protocol iscsi

crudini --set /etc/cinder/cinder.conf lvm iscsi_helper lioadm

crudini --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://$HOST_NAME:9292

crudini --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp

7.12 Restart the services

systemctl enable openstack-cinder-volume.service target.service

systemctl restart openstack-cinder-volume.service target.service

7.13 Verification
#Controller
Create a new volume with cinder create

cinder create --display-name myVolume 1

Use cinder list to check that it was created correctly

cinder list

8 Installing the Swift Object Storage Service
8.1 Installing the Swift Service with a Script
The object storage service steps in sections 8.2-8.12 have been packaged into a shell script for one-step installation, as follows:
#Controller
Run the script iaas-install-swift-controller.sh to install
#Compute node
Run the script iaas-install-swift-compute.sh to install
8.2 Install the Swift object storage service packages

yum install openstack-swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware memcached -y

8.2 Create the user

openstack user create --domain $DOMAIN_NAME --password $SWIFT_PASS swift

openstack role add --project service --user swift admin

8.3 Create the Endpoint and API endpoints

openstack service create --name swift --description "OpenStack Object Storage" object-store

openstack endpoint create --region RegionOne object-store public http://$HOST_NAME:8080/v1/AUTH_%(tenant_id)s

openstack endpoint create --region RegionOne object-store internal http://$HOST_NAME:8080/v1/AUTH_%(tenant_id)s

openstack endpoint create --region RegionOne object-store admin http://$HOST_NAME:8080/v1

8.4 Edit /etc/swift/proxy-server.conf
Edit the configuration file as follows
[DEFAULT]
bind_port = 8080
swift_dir = /etc/swift
user = swift
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
[app:proxy-server]
use = egg:swift#proxy
account_autocreate = True
[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user_test5_tester5 = testing5 service
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://$HOST_NAME:5000
auth_url = http://$HOST_NAME:35357
memcached_servers = $HOST_NAME:11211
auth_type = password
project_domain_name = $DOMAIN_NAME
user_domain_name = $DOMAIN_NAME
project_name = service
username = swift
password = $SWIFT_PASS
delay_auth_decision = True
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin,user
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:cache]
memcache_servers = $HOST_NAME:11211
use = egg:swift#memcache
[filter:ratelimit]
use = egg:swift#ratelimit
[filter:domain_remap]
use = egg:swift#domain_remap
[filter:catch_errors]
use = egg:swift#catch_errors
[filter:cname_lookup]
use = egg:swift#cname_lookup
[filter:staticweb]
use = egg:swift#staticweb
[filter:tempurl]
use = egg:swift#tempurl
[filter:formpost]
use = egg:swift#formpost
[filter:name_check]
use = egg:swift#name_check
[filter:list-endpoints]
use = egg:swift#list_endpoints
[filter:proxy-logging]
use = egg:swift#proxy_logging
[filter:bulk]
use = egg:swift#bulk
[filter:slo]
use = egg:swift#slo
[filter:dlo]
use = egg:swift#dlo
[filter:container-quotas]
use = egg:swift#container_quotas
[filter:account-quotas]
use = egg:swift#account_quotas
[filter:gatekeeper]
use = egg:swift#gatekeeper
[filter:container_sync]
use = egg:swift#container_sync
[filter:xprofile]
use = egg:swift#xprofile
[filter:versioned_writes]
use = egg:swift#versioned_writes
8.5 Create the account, container, and object rings
The storage disk on the storage node is sdb in this example
swift-ring-builder account.builder create 18 1 1
swift-ring-builder account.builder add --region 1 --zone 1 --ip $STORAGE_LOCAL_NET_IP --port 6002 --device $OBJECT_DISK --weight 100
swift-ring-builder account.builder
swift-ring-builder account.builder rebalance

swift-ring-builder container.builder create 10 1 1
swift-ring-builder container.builder add --region 1 --zone 1 --ip $STORAGE_LOCAL_NET_IP --port 6001 --device $OBJECT_DISK --weight 100
swift-ring-builder container.builder
swift-ring-builder container.builder rebalance

swift-ring-builder object.builder create 10 1 1
swift-ring-builder object.builder add --region 1 --zone 1 --ip $STORAGE_LOCAL_NET_IP --port 6000 --device $OBJECT_DISK --weight 100
swift-ring-builder object.builder
swift-ring-builder object.builder rebalance
8.6 Edit the /etc/swift/swift.conf file
Edit it as follows
[swift-hash]
swift_hash_path_suffix = changeme
swift_hash_path_prefix = changeme
[storage-policy:0]
name = Policy-0
default = yes
aliases = yellow, orange
[swift-constraints]
8.7 Start the services and set permissions
chown -R root:swift /etc/swift
systemctl enable openstack-swift-proxy.service memcached.service
systemctl restart openstack-swift-proxy.service memcached.service
8.8 Install the packages
#Compute node
The storage disk on the storage node is sdb in this example

yum install xfsprogs rsync openstack-swift-account openstack-swift-container openstack-swift-object -y

mkfs.xfs -i size=1024 -f /dev/sdb

echo "/dev/sdb /swift/node xfs loop,noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab

mkdir -p /swift/node/sdb

mount /dev/sdb /swift/node/sdb

scp controller:/etc/swift/*.ring.gz /etc/swift/

8.9 Configure rsync
(1) Edit the /etc/rsyncd.conf file as follows
pid file = /var/run/rsyncd.pid
log file = /var/log/rsyncd.log
uid = swift
gid = swift
address = 127.0.0.1
[account]
path = /swift/node
read only = false
write only = no
list = yes
incoming chmod = 0644
outgoing chmod = 0644
max connections = 25
lock file = /var/lock/account.lock
[container]
path = /swift/node
read only = false
write only = no
list = yes
incoming chmod = 0644
outgoing chmod = 0644
max connections = 25
lock file = /var/lock/container.lock
[object]
path = /swift/node
read only = false
write only = no
list = yes
incoming chmod = 0644
outgoing chmod = 0644
max connections = 25
lock file = /var/lock/object.lock
[swift_server]
path = /etc/swift
read only = true
write only = no
list = yes
incoming chmod = 0644
outgoing chmod = 0644
max connections = 5
lock file = /var/lock/swift_server.lock
(2) Start the service

systemctl enable rsyncd.service

#systemctl restart rsyncd.service
8.10 Configure the account, container, and object servers
(1) Modify the /etc/swift/account-server.conf configuration file
[DEFAULT]
bind_port = 6002
user = swift
swift_dir = /etc/swift
devices = /swift/node
mount_check = false
[pipeline:main]
pipeline = healthcheck recon account-server
[app:account-server]
use = egg:swift#account
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
[account-replicator]
[account-auditor]
[account-reaper]
[filter:xprofile]
use = egg:swift#xprofile
(2) Modify the /etc/swift/container-server.conf configuration file
[DEFAULT]
bind_port = 6001
user = swift
swift_dir = /etc/swift
devices = /swift/node
mount_check = false
[pipeline:main]
pipeline = healthcheck recon container-server
[app:container-server]
use = egg:swift#container
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
[container-replicator]
[container-updater]
[container-auditor]
[container-sync]
[filter:xprofile]
use = egg:swift#xprofile
(3) Modify the /etc/swift/object-server.conf configuration file
[DEFAULT]
bind_port = 6000
user = swift
swift_dir = /etc/swift
devices = /swift/node
mount_check = false
[pipeline:main]
pipeline = healthcheck recon object-server
[app:object-server]
use = egg:swift#object
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock
[object-replicator]
[object-reconstructor]
[object-updater]
[object-auditor]
[filter:xprofile]
use = egg:swift#xprofile
8.11 Modify the Swift configuration file
Modify /etc/swift/swift.conf
[swift-hash]
swift_hash_path_suffix = changeme
swift_hash_path_prefix = changeme
[storage-policy:0]
name = Policy-0
default = yes
aliases = yellow, orange
[swift-constraints]
8.12 Restart the services and set permissions

chown -R swift:swift /swift/node

mkdir -p /var/cache/swift

chown -R root:swift /var/cache/swift

chmod -R 775 /var/cache/swift

chown -R root:swift /etc/swift

systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service

systemctl restart openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service

systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service

systemctl restart openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service

systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service

systemctl restart openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
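Once all services are running, the object store can be exercised from the controller (optional check; the container name test is just an example):

source admin-openrc.sh
swift stat
swift upload test /etc/hosts
swift list test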

9 Installing the Heat Orchestration Service

Controller node

9.1 Installing the heat Service with a Script
The orchestration service steps in sections 9.2-9.8 have been packaged into a shell script for one-step installation, as follows:
#Controller node
Run the script iaas-install-heat.sh to install

9.2 Install the heat orchestration service packages

yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine openstack-heat-ui -y

9.3 Create the database

mysql -u root -p

mysql> CREATE DATABASE heat;
mysql> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
mysql> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
9.4 Create the users

openstack user create --domain $DOMAIN_NAME --password $HEAT_PASS heat

openstack role add --project service --user heat admin

openstack domain create --description "Stack projects and users" heat

openstack user create --domain heat --password $HEAT_PASS heat_domain_admin

openstack role add --domain heat --user-domain heat --user heat_domain_admin admin

openstack role create heat_stack_owner

openstack role add --project demo --user demo heat_stack_owner

openstack role create heat_stack_user

9.5 Create the Endpoints and API endpoints

openstack service create --name heat --description "Orchestration" orchestration

openstack service create --name heat-cfn --description "Orchestration" cloudformation

openstack endpoint create --region RegionOne orchestration public http://$HOST_NAME:8004/v1/%(tenant_id)s

openstack endpoint create --region RegionOne orchestration internal http://$HOST_NAME:8004/v1/%(tenant_id)s

openstack endpoint create --region RegionOne orchestration admin http://$HOST_NAME:8004/v1/%(tenant_id)s

openstack endpoint create --region RegionOne cloudformation public http://$HOST_NAME:8000/v1

openstack endpoint create --region RegionOne cloudformation internal http://$HOST_NAME:8000/v1

openstack endpoint create --region RegionOne cloudformation admin http://$HOST_NAME:8000/v1

9.6 Configure the Heat service

crudini --set /etc/heat/heat.conf database connection mysql+pymysql://heat:$HEAT_DBPASS@$HOST_NAME/heat

crudini --set /etc/heat/heat.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

crudini --set /etc/heat/heat.conf keystone_authtoken auth_uri http://$HOST_NAME:5000

crudini --set /etc/heat/heat.conf keystone_authtoken auth_url http://$HOST_NAME:35357

crudini --set /etc/heat/heat.conf keystone_authtoken memcached_servers $HOST_NAME:11211

crudini --set /etc/heat/heat.conf keystone_authtoken auth_type password

crudini --set /etc/heat/heat.conf keystone_authtoken project_domain_name $DOMAIN_NAME

crudini --set /etc/heat/heat.conf keystone_authtoken user_domain_name $DOMAIN_NAME

crudini --set /etc/heat/heat.conf keystone_authtoken project_name service

crudini --set /etc/heat/heat.conf keystone_authtoken username heat

crudini --set /etc/heat/heat.conf keystone_authtoken password $HEAT_PASS

crudini --set /etc/heat/heat.conf trustee auth_plugin password

crudini --set /etc/heat/heat.conf trustee auth_url http://$HOST_NAME:35357

crudini --set /etc/heat/heat.conf trustee username heat

crudini --set /etc/heat/heat.conf trustee password $HEAT_PASS

crudini --set /etc/heat/heat.conf trustee user_domain_name $DOMAIN_NAME

crudini --set /etc/heat/heat.conf clients_keystone auth_uri http://$HOST_NAME:35357

crudini --set /etc/heat/heat.conf DEFAULT heat_metadata_server_url http://$HOST_NAME:8000

crudini --set /etc/heat/heat.conf DEFAULT heat_waitcondition_server_url http://$HOST_NAME:8000/v1/waitcondition

crudini --set /etc/heat/heat.conf DEFAULT stack_domain_admin heat_domain_admin

crudini --set /etc/heat/heat.conf DEFAULT stack_domain_admin_password $HEAT_PASS

crudini --set /etc/heat/heat.conf DEFAULT stack_user_domain_name heat

9.7创建数据库

su -s /bin/sh -c "heat-manage db_sync" heat

9.8启动服务

systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service

systemctl restart openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
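
As an optional smoke test, a tiny stack can be created from a minimal HOT template; the template below only uses the built-in OS::Heat::RandomString resource, so it needs no images or networks. The file name test-stack.yaml and the stack name teststack are arbitrary examples (a sketch, not part of the XianDian scripts):

# write a minimal HOT template that creates one random string
cat > /root/test-stack.yaml <<'EOF'
heat_template_version: 2014-10-16
description: minimal template used only to verify the Heat engine
resources:
  random_str:
    type: OS::Heat::RandomString
    properties:
      length: 8
outputs:
  random_value:
    value: { get_attr: [random_str, value] }
EOF

source /etc/keystone/admin-openrc.sh

# heat-engine workers should be listed as up
openstack orchestration service list

openstack stack create -t /root/test-stack.yaml teststack

openstack stack output show teststack random_value

openstack stack delete --yes teststack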

10 安装Zun服务
10.1通过脚本安装Zun服务
10.2-10.12zun服务的操作命令已经编写成shell脚本,通过脚本进行一键安装。如下:
#Controller节点
执行脚本iaas-install-zun-controller.sh进行安装
#Compute节点
执行脚本iaas-install-zun-compute.sh进行安装
10.2 安装zun服务软件包
#Controller节点

yum install python-pip git openstack-zun openstack-zun-ui -y

10.3 创建数据库

mysql -u root -p

mysql> CREATE DATABASE zun;
mysql> GRANT ALL PRIVILEGES ON zun.* TO zun@'localhost' IDENTIFIED BY '$ZUN_DBPASS';
mysql> GRANT ALL PRIVILEGES ON zun.* TO zun@'%' IDENTIFIED BY '$ZUN_DBPASS';
10.4 创建用户

openstack user create --domain $DOMAIN_NAME --password $ZUN_PASS zun

openstack role add --project service --user zun admin

openstack user create --domain $DOMAIN_NAME --password $KURYR_PASS kuryr

openstack role add --project service --user kuryr admin

10.5 创建Endpoint和API端点

openstack service create --name zun --description "Container Service" container

openstack endpoint create --region RegionOne container public http://$HOST_NAME:9517/v1

openstack endpoint create --region RegionOne container internal http://$HOST_NAME:9517/v1

openstack endpoint create --region RegionOne container admin http://$HOST_NAME:9517/v1

10.6 配置zun服务

crudini --set /etc/zun/zun.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

crudini --set /etc/zun/zun.conf DEFAULT log_file /var/log/zun

crudini --set /etc/zun/zun.conf api host_ip $HOST_IP

crudini --set /etc/zun/zun.conf api port 9517

crudini --set /etc/zun/zun.conf database connection mysql+pymysql://zun:$ZUN_DBPASS@$HOST_NAME/zun

crudini --set /etc/zun/zun.conf keystone_auth memcached_servers $HOST_NAME:11211

crudini --set /etc/zun/zun.conf keystone_auth auth_uri http://$HOST_NAME:5000

crudini --set /etc/zun/zun.conf keystone_auth project_domain_name $DOMAIN_NAME

crudini --set /etc/zun/zun.conf keystone_auth project_name service

crudini --set /etc/zun/zun.conf keystone_auth user_domain_name $DOMAIN_NAME

crudini --set /etc/zun/zun.conf keystone_auth password $ZUN_PASS

crudini --set /etc/zun/zun.conf keystone_auth username zun

crudini --set /etc/zun/zun.conf keystone_auth auth_url http://$HOST_NAME:5000

crudini --set /etc/zun/zun.conf keystone_auth auth_type password

crudini --set /etc/zun/zun.conf keystone_auth auth_version v3

crudini --set /etc/zun/zun.conf keystone_auth auth_protocol http

crudini --set /etc/zun/zun.conf keystone_auth service_token_roles_required True

crudini --set /etc/zun/zun.conf keystone_auth endpoint_type internalURL

crudini --set /etc/zun/zun.conf keystone_authtoken memcached_servers $HOST_NAME:11211

crudini --set /etc/zun/zun.conf keystone_authtoken auth_uri http://$HOST_NAME:5000

crudini --set /etc/zun/zun.conf keystone_authtoken project_domain_name $DOMAIN_NAME

crudini --set /etc/zun/zun.conf keystone_authtoken project_name service

crudini --set /etc/zun/zun.conf keystone_authtoken user_domain_name $DOMAIN_NAME

crudini --set /etc/zun/zun.conf keystone_authtoken password $ZUN_PASS

crudini --set /etc/zun/zun.conf keystone_authtoken username zun

crudini --set /etc/zun/zun.conf keystone_authtoken auth_url http://$HOST_NAME:5000

crudini --set /etc/zun/zun.conf keystone_authtoken auth_type password

crudini --set /etc/zun/zun.conf keystone_authtoken auth_version v3

crudini --set /etc/zun/zun.conf keystone_authtoken auth_protocol http

crudini --set /etc/zun/zun.conf keystone_authtoken service_token_roles_required True

crudini --set /etc/zun/zun.conf keystone_authtoken endpoint_type internalURL

crudini --set /etc/zun/zun.conf oslo_concurrency lock_path /var/lib/zun/tmp

crudini --set /etc/zun/zun.conf oslo_messaging_notifications driver messaging

crudini --set /etc/zun/zun.conf websocket_proxy wsproxy_host $HOST_IP

crudini --set /etc/zun/zun.conf websocket_proxy wsproxy_port 6784

10.7 创建数据库

su -s /bin/sh -c "zun-db-manage upgrade" zun

10.8 启动服务

systemctl enable zun-api zun-wsproxy

systemctl restart zun-api zun-wsproxy

systemctl restart httpd memcached
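
A quick check that the Zun API is answering (verification only; the service list stays empty until zun-compute is configured on the compute node in sections 10.9-10.12 below):

source /etc/keystone/admin-openrc.sh

openstack appcontainer service list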

10.9 安装软件包
#compute节点

yum install -y yum-utils device-mapper-persistent-data lvm2

yum install docker-ce python-pip git kuryr-libnetwork openstack-zun-compute -y

10.10 配置服务

crudini --set /etc/kuryr/kuryr.conf DEFAULT bindir /usr/libexec/kuryr

crudini --set /etc/kuryr/kuryr.conf neutron auth_uri http://$HOST_NAME:5000

crudini --set /etc/kuryr/kuryr.conf neutron auth_url http://$HOST_NAME:35357

crudini --set /etc/kuryr/kuryr.conf neutron username kuryr

crudini --set /etc/kuryr/kuryr.conf neutron user_domain_name $DOMAIN_NAME

crudini --set /etc/kuryr/kuryr.conf neutron password $KURYR_PASS

crudini --set /etc/kuryr/kuryr.conf neutron project_name service

crudini --set /etc/kuryr/kuryr.conf neutron project_domain_name $DOMAIN_NAME

crudini --set /etc/kuryr/kuryr.conf neutron auth_type password

crudini --set /etc/zun/zun.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

crudini --set /etc/zun/zun.conf DEFAULT state_path /var/lib/zun

crudini --set /etc/zun/zun.conf DEFAULT log_file /var/log/zun

crudini --set /etc/zun/zun.conf database connection mysql+pymysql://zun:$ZUN_DBPASS@$HOST_NAME/zun

crudini --set /etc/zun/zun.conf keystone_auth memcached_servers $HOST_NAME:11211

crudini --set /etc/zun/zun.conf keystone_auth auth_uri http://$HOST_NAME:5000

crudini --set /etc/zun/zun.conf keystone_auth project_domain_name $DOMAIN_NAME

crudini --set /etc/zun/zun.conf keystone_auth project_name service

crudini --set /etc/zun/zun.conf keystone_auth user_domain_name $DOMAIN_NAME

crudini --set /etc/zun/zun.conf keystone_auth password $ZUN_PASS

crudini --set /etc/zun/zun.conf keystone_auth username zun

crudini --set /etc/zun/zun.conf keystone_auth auth_url http://$HOST_NAME:5000

crudini --set /etc/zun/zun.conf keystone_auth auth_type password

crudini --set /etc/zun/zun.conf keystone_auth auth_version v3

crudini --set /etc/zun/zun.conf keystone_auth auth_protocol http

crudini --set /etc/zun/zun.conf keystone_auth service_token_roles_required True

crudini --set /etc/zun/zun.conf keystone_auth endpoint_type internalURL

crudini --set /etc/zun/zun.conf keystone_authtoken memcached_servers $HOST_NAME:11211

crudini --set /etc/zun/zun.conf keystone_authtoken auth_uri http://$HOST_NAME:5000

crudini --set /etc/zun/zun.conf keystone_authtoken project_domain_name $DOMAIN_NAME

crudini --set /etc/zun/zun.conf keystone_authtoken project_name service

crudini --set /etc/zun/zun.conf keystone_authtoken user_domain_name $DOMAIN_NAME

crudini --set /etc/zun/zun.conf keystone_authtoken password $ZUN_PASS

crudini --set /etc/zun/zun.conf keystone_authtoken username zun

crudini --set /etc/zun/zun.conf keystone_authtoken auth_url http://$HOST_NAME:5000

crudini --set /etc/zun/zun.conf keystone_authtoken auth_type password

crudini --set /etc/zun/zun.conf websocket_proxy base_url ws://$HOST_NAME:6784/

crudini --set /etc/zun/zun.conf oslo_concurrency lock_path /var/lib/zun/tmp

crudini --set /etc/kuryr/kuryr.conf DEFAULT capability_scope global

10.11 修改内核参数
修改/etc/sysctl.conf文件,添加以下内容:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
生效配置

sysctl -p
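
If sysctl -p complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet; loading it first and re-applying is a reasonable workaround (an assumption about this particular kernel, not a step from the original scripts):

modprobe br_netfilter

sysctl -p

# confirm the values took effect
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward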

10.12 启动服务

mkdir -p /etc/systemd/system/docker.service.d

Create a drop-in file (for example docker.conf; the exact file name is not fixed, any *.conf in this directory is read by systemd) under /etc/systemd/system/docker.service.d/ and add the following content:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --group zun -H tcp://$HOST_NAME_NODE:2375 -H unix:///var/run/docker.sock --cluster-store etcd://$HOST_NAME:2379

systemctl daemon-reload

systemctl restart docker

systemctl enable docker

systemctl enable kuryr-libnetwork

systemctl restart kuryr-libnetwork

systemctl enable zun-compute

systemctl restart zun-compute
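
To confirm the compute side came up cleanly, the following checks can be run (verification only, not part of the scripts). The first two commands run on the compute node, the last two on the controller:

systemctl is-active docker kuryr-libnetwork zun-compute

docker ps

source /etc/keystone/admin-openrc.sh

# the compute node's zun-compute should now appear in the service list
openstack appcontainer service list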

10.13 上传镜像
以CentOS7_1804.tar镜像为例,CentOS7_1804.tar镜像包存放在XianDian-IaaS-v2.4.iso镜像包中。将docker镜像上传到glance中,通过openstack使用镜像启动容器。

source /etc/keystone/admin-openrc.sh

openstack image create centos7.5 --public --container-format docker --disk-format raw < CentOS7_1804.tar
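
The upload can be verified before starting a container; openstack image show should report disk_format raw and container_format docker for the new image:

openstack image show centos7.5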

10.14 启动容器
通过glance存储镜像启动容器

zun run --image-driver glance centos7.5

zun list

+--------------------------------------+-----------------+-----------+---------+------------+-------------+-------+
| uuid                                 | name            | image     | status  | task_state | addresses   | ports |
+--------------------------------------+-----------------+-----------+---------+------------+-------------+-------+
| c01d89b6-b927-4a5e-9889-356f572e184d | psi-9-container | centos7.5 | Running | None       | 172.30.15.9 | [22]  |
+--------------------------------------+-----------------+-----------+---------+------------+-------------+-------+
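
Once the container reaches the Running state it can be inspected and cleaned up with the zun client. The name psi-9-container comes from the zun list output above; zun generates it automatically, so substitute the name shown in your own environment:

zun show psi-9-container

zun stop psi-9-container

zun delete psi-9-container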
11 安装Ceilometer监控服务
11.1通过脚本安装Ceilometer服务
11.2-11.14ceilometer监控服务的操作命令已经编写成shell脚本,通过脚本进行一键安装。如下:
#Controller节点
执行脚本iaas-install-ceilometer-controller.sh进行安装
#Compute节点
执行脚本iaas-install-ceilometer-compute.sh进行安装

11.2 安装Ceilometer监控服务软件包
#Controller节点

yum install openstack-gnocchi-api openstack-gnocchi-metricd python2-gnocchiclient openstack-ceilometer-notification openstack-ceilometer-central python2-ceilometerclient python-ceilometermiddleware -y

11.3 创建数据库

mysql -u root -p

mysql> CREATE DATABASE gnocchi;
mysql> GRANT ALL PRIVILEGES ON gnocchi.* TO gnocchi@'localhost' IDENTIFIED BY '$CEILOMETER_DBPASS';
mysql> GRANT ALL PRIVILEGES ON gnocchi.* TO gnocchi@'%' IDENTIFIED BY '$CEILOMETER_DBPASS';
11.4 创建用户

openstack user create --domain $DOMAIN_NAME --password $CEILOMETER_PASS ceilometer

openstack role add --project service --user ceilometer admin

openstack user create --domain $DOMAIN_NAME --password $CEILOMETER_PASS gnocchi

openstack role add --project service --user gnocchi admin

openstack role create ResellerAdmin

openstack role add --project service --user ceilometer ResellerAdmin

11.5 创建Endpoint和API端点

openstack service create --name ceilometer --description "OpenStack Telemetry Service" metering

openstack service create --name gnocchi --description "Metric Service" metric

openstack endpoint create --region RegionOne metric public http://$HOST_NAME:8041

openstack endpoint create --region RegionOne metric internal http://$HOST_NAME:8041

openstack endpoint create --region RegionOne metric admin http://$HOST_NAME:8041

11.6 配置Ceilometer

crudini --set /etc/gnocchi/gnocchi.conf DEFAULT log_dir /var/log/gnocchi

crudini --set /etc/gnocchi/gnocchi.conf api auth_mode keystone

crudini --set /etc/gnocchi/gnocchi.conf database backend sqlalchemy

crudini --set /etc/gnocchi/gnocchi.conf keystone_authtoken auth_type password

crudini --set /etc/gnocchi/gnocchi.conf keystone_authtoken www_authenticate_uri http://$HOST_NAME:5000

crudini --set /etc/gnocchi/gnocchi.conf keystone_authtoken auth_url http://$HOST_NAME:5000

crudini --set /etc/gnocchi/gnocchi.conf keystone_authtoken memcached_servers $HOST_NAME:11211

crudini --set /etc/gnocchi/gnocchi.conf keystone_authtoken project_domain_name $DOMAIN_NAME

crudini --set /etc/gnocchi/gnocchi.conf keystone_authtoken user_domain_name $DOMAIN_NAME

crudini --set /etc/gnocchi/gnocchi.conf keystone_authtoken project_name service

crudini --set /etc/gnocchi/gnocchi.conf keystone_authtoken username gnocchi

crudini --set /etc/gnocchi/gnocchi.conf keystone_authtoken password $CEILOMETER_PASS

crudini --set /etc/gnocchi/gnocchi.conf keystone_authtoken service_token_roles_required true

crudini --set /etc/gnocchi/gnocchi.conf indexer url mysql+pymysql://gnocchi:$CEILOMETER_DBPASS@$HOST_NAME/gnocchi

crudini --set /etc/gnocchi/gnocchi.conf storage file_basepath /var/lib/gnocchi

crudini --set /etc/gnocchi/gnocchi.conf storage driver file

crudini --set /etc/ceilometer/ceilometer.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

crudini --set /etc/ceilometer/ceilometer.conf api auth_mode keystone

crudini --set /etc/ceilometer/ceilometer.conf dispatcher_gnocchi filter_service_activity False

crudini --set /etc/ceilometer/ceilometer.conf keystone_authtoken www_authenticate_uri http://$HOST_NAME:5000

crudini --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_url http://$HOST_NAME:5000

crudini --set /etc/ceilometer/ceilometer.conf keystone_authtoken memcached_servers $HOST_NAME:11211

crudini --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_type password

crudini --set /etc/ceilometer/ceilometer.conf keystone_authtoken project_domain_name $DOMAIN_NAME

crudini --set /etc/ceilometer/ceilometer.conf keystone_authtoken user_domain_name $DOMAIN_NAME

crudini --set /etc/ceilometer/ceilometer.conf keystone_authtoken project_name service

crudini --set /etc/ceilometer/ceilometer.conf keystone_authtoken username gnocchi

crudini --set /etc/ceilometer/ceilometer.conf keystone_authtoken password $CEILOMETER_PASS

crudini --set /etc/ceilometer/ceilometer.conf service_credentials auth_type password

crudini --set /etc/ceilometer/ceilometer.conf service_credentials auth_url http://$HOST_NAME:5000

crudini --set /etc/ceilometer/ceilometer.conf service_credentials memcached_servers $HOST_NAME:11211

crudini --set /etc/ceilometer/ceilometer.conf service_credentials project_domain_name $DOMAIN_NAME

crudini --set /etc/ceilometer/ceilometer.conf service_credentials user_domain_name $DOMAIN_NAME

crudini --set /etc/ceilometer/ceilometer.conf service_credentials project_name service

crudini --set /etc/ceilometer/ceilometer.conf service_credentials username ceilometer

crudini --set /etc/ceilometer/ceilometer.conf service_credentials password $CEILOMETER_PASS

11.7 创建监听端点
创建/etc/httpd/conf.d/10-gnocchi_wsgi.conf文件,添加以下内容:
Listen 8041
<VirtualHost *:8041>
DocumentRoot /var/www/cgi-bin/gnocchi

<Directory /var/www/cgi-bin/gnocchi>
AllowOverride None
Require all granted
</Directory>

CustomLog /var/log/httpd/gnocchi_wsgi_access.log combined
ErrorLog /var/log/httpd/gnocchi_wsgi_error.log
SetEnvIf X-Forwarded-Proto https HTTPS=1
WSGIApplicationGroup %{GLOBAL}
WSGIDaemonProcess gnocchi display-name=gnocchi_wsgi user=gnocchi group=gnocchi processes=6 threads=6
WSGIProcessGroup gnocchi
WSGIScriptAlias / /var/www/cgi-bin/gnocchi/app
</VirtualHost>

11.8 创建数据库

mkdir /var/www/cgi-bin/gnocchi

cp /usr/lib/python2.7/site-packages/gnocchi/rest/gnocchi-api /var/www/cgi-bin/gnocchi/app

chown -R gnocchi. /var/www/cgi-bin/gnocchi

su -s /bin/bash gnocchi -c "gnocchi-upgrade"

su -s /bin/bash ceilometer -c "ceilometer-upgrade --skip-metering-database"

11.9 启动服务

systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service

systemctl restart openstack-gnocchi-api.service openstack-gnocchi-metricd.service

systemctl restart httpd memcached

systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service

systemctl restart openstack-ceilometer-notification.service openstack-ceilometer-central.service

11.10 监控组件

crudini --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2

crudini --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2

systemctl restart openstack-glance-api openstack-glance-registry

crudini --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

crudini --set /etc/cinder/cinder.conf oslo_messaging_notifications driver messagingv2

systemctl restart openstack-cinder-api openstack-cinder-scheduler

crudini --set /etc/heat/heat.conf oslo_messaging_notifications driver messagingv2

crudini --set /etc/heat/heat.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

systemctl restart openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service

crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2

systemctl restart neutron-server.service

crudini --set /etc/swift/proxy-server.conf filter:keystoneauth operator_roles "admin, user, ResellerAdmin"

crudini --set /etc/swift/proxy-server.conf pipeline:main pipeline "catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging ceilometer proxy-server"

crudini --set /etc/swift/proxy-server.conf filter:ceilometer paste.filter_factory ceilometermiddleware.swift:filter_factory

crudini --set /etc/swift/proxy-server.conf filter:ceilometer url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME:5672/

crudini --set /etc/swift/proxy-server.conf filter:ceilometer driver messagingv2

crudini --set /etc/swift/proxy-server.conf filter:ceilometer topic notifications

crudini --set /etc/swift/proxy-server.conf filter:ceilometer log_level WARN

systemctl restart openstack-swift-proxy.service

11.11 添加变量参数

echo "export OS_AUTH_TYPE=password" >> /etc/keystone/admin-openrc.sh
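
With OS_AUTH_TYPE exported, the Gnocchi API can be exercised from the controller as a quick sanity check (verification only; both commands come from the python2-gnocchiclient package installed in 11.2):

source /etc/keystone/admin-openrc.sh

# reports the backlog of measures waiting to be processed
openstack metric status

openstack metric resource list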

11.12 安装软件包

#Compute节点

yum install openstack-ceilometer-compute -y

11.13 配置Ceilometer

crudini --set /etc/ceilometer/ceilometer.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

crudini --set /etc/ceilometer/ceilometer.conf service_credentials auth_url http://$HOST_NAME:5000

crudini --set /etc/ceilometer/ceilometer.conf service_credentials memcached_servers $HOST_NAME:11211

crudini --set /etc/ceilometer/ceilometer.conf service_credentials project_domain_name $DOMAIN_NAME

crudini --set /etc/ceilometer/ceilometer.conf service_credentials user_domain_name $DOMAIN_NAME

crudini --set /etc/ceilometer/ceilometer.conf service_credentials project_name service

crudini --set /etc/ceilometer/ceilometer.conf service_credentials auth_type password

crudini --set /etc/ceilometer/ceilometer.conf service_credentials username ceilometer

crudini --set /etc/ceilometer/ceilometer.conf service_credentials password $CEILOMETER_PASS

crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True

crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour

crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state

crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2

11.14 启动服务

systemctl enable openstack-ceilometer-compute.service

systemctl restart openstack-ceilometer-compute.service

systemctl restart openstack-nova-compute
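
After the compute agent restarts, instance metrics should start flowing into Gnocchi. A hedged check from the controller (instance resources only appear once at least one instance is running and the polling interval has elapsed):

source /etc/keystone/admin-openrc.sh

openstack metric resource list --type instance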

12 安装Aodh监控服务
12.1通过脚本安装Aodh服务
12.2-12.9 Alarm监控服务的操作命令已经编写成shell脚本,通过脚本进行一键安装。如下:
#Controller节点
执行脚本iaas-install-aodh.sh进行安装

12.2 创建数据库

mysql -u root -p

mysql> CREATE DATABASE aodh;
mysql> GRANT ALL PRIVILEGES ON aodh.* TO aodh@'localhost' IDENTIFIED BY '$AODH_DBPASS';
mysql> GRANT ALL PRIVILEGES ON aodh.* TO aodh@'%' IDENTIFIED BY '$AODH_DBPASS';
12.3 创建keystone用户

openstack user create --domain $DOMAIN_NAME --password $AODH_PASS aodh

openstack role add --project service --user aodh admin

12.4 创建Endpoint和API

openstack service create --name aodh --description "Telemetry Alarming" alarming

openstack endpoint create --region RegionOne alarming public http://$HOST_NAME:8042

openstack endpoint create --region RegionOne alarming internal http://$HOST_NAME:8042

openstack endpoint create --region RegionOne alarming admin http://$HOST_NAME:8042

12.5 安装软件包

yum -y install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python2-aodhclient

12.6 配置aodh

crudini --set /etc/aodh/aodh.conf DEFAULT log_dir /var/log/aodh

crudini --set /etc/aodh/aodh.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

crudini --set /etc/aodh/aodh.conf api auth_mode keystone

crudini --set /etc/aodh/aodh.conf api gnocchi_external_project_owner service

crudini --set /etc/aodh/aodh.conf database connection mysql+pymysql://aodh:$AODH_DBPASS@$HOST_NAME/aodh

crudini --set /etc/aodh/aodh.conf keystone_authtoken www_authenticate_uri http://$HOST_NAME:5000

crudini --set /etc/aodh/aodh.conf keystone_authtoken auth_url http://$HOST_NAME:5000

crudini --set /etc/aodh/aodh.conf keystone_authtoken memcached_servers $HOST_NAME:11211

crudini --set /etc/aodh/aodh.conf keystone_authtoken auth_type password

crudini --set /etc/aodh/aodh.conf keystone_authtoken project_domain_name $DOMAIN_NAME

crudini --set /etc/aodh/aodh.conf keystone_authtoken user_domain_name $DOMAIN_NAME

crudini --set /etc/aodh/aodh.conf keystone_authtoken project_name service

crudini --set /etc/aodh/aodh.conf keystone_authtoken username aodh

crudini --set /etc/aodh/aodh.conf keystone_authtoken password $AODH_PASS

crudini --set /etc/aodh/aodh.conf service_credentials auth_url http://$HOST_NAME:5000/v3

crudini --set /etc/aodh/aodh.conf service_credentials auth_type password

crudini --set /etc/aodh/aodh.conf service_credentials project_domain_name $DOMAIN_NAME

crudini --set /etc/aodh/aodh.conf service_credentials user_domain_name $DOMAIN_NAME

crudini --set /etc/aodh/aodh.conf service_credentials project_name service

crudini --set /etc/aodh/aodh.conf service_credentials username aodh

crudini --set /etc/aodh/aodh.conf service_credentials password $AODH_PASS

crudini --set /etc/aodh/aodh.conf service_credentials interface internalURL

12.7 创建监听端点
修改/etc/httpd/conf.d/20-aodh_wsgi.conf文件,添加以下内容:
Listen 8042
<VirtualHost *:8042>
DocumentRoot "/var/www/cgi-bin/aodh"
<Directory "/var/www/cgi-bin/aodh">
AllowOverride None
Require all granted
</Directory>

CustomLog "/var/log/httpd/aodh_wsgi_access.log" combined
ErrorLog "/var/log/httpd/aodh_wsgi_error.log"
SetEnvIf X-Forwarded-Proto https HTTPS=1
WSGIApplicationGroup %{GLOBAL}
WSGIDaemonProcess aodh display-name=aodh_wsgi user=aodh group=aodh processes=6 threads=3
WSGIProcessGroup aodh
WSGIScriptAlias / "/var/www/cgi-bin/aodh/app"
</VirtualHost>

12.8 Sync the database

mkdir /var/www/cgi-bin/aodh

cp /usr/lib/python2.7/site-packages/aodh/api/app.wsgi /var/www/cgi-bin/aodh/app

chown -R aodh. /var/www/cgi-bin/aodh

su -s /bin/bash aodh -c "aodh-dbsync"

12.9 Start the services

systemctl restart openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener

systemctl enable openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener

systemctl restart httpd memcached

13 Add the controller node's resources to the cloud platform

13.1 Modify openrc.sh
In /etc/xiandian/openrc.sh, change the compute node IP and hostname entries to the controller node's IP and hostname.

13.2 Run iaas-install-nova-compute.sh
Run iaas-install-nova-compute.sh on the controller node.
During execution you will be asked to confirm the SSH login to the controller node and to enter the controller node's root password.
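
Finally, it can be confirmed that both hosts now provide compute resources and that the telemetry and alarming APIs answer. These commands are verification only, run on the controller after sourcing the admin credentials:

source /etc/keystone/admin-openrc.sh

# both controller and compute should appear with state "up"
openstack compute service list --service nova-compute

openstack hypervisor list

# the alarm list will simply be empty on a fresh install
openstack alarm list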
