OpenStack Deployment

Environment Preparation

Host        Specs   IP addresses                                  NIC modes
controller  4C/4G   eth0: 192.168.200.10/24; eth1: 10.0.1.10/24   NAT / host-only
compute     4C/4G   eth0: 192.168.200.20/24; eth1: 10.0.1.20/24   NAT / host-only

Deploy two virtual machines (nodes); enable the virtualization engine on both.
Boot to the install screen, press Tab, and insert net.ifnames=0 biosdevname=0 before "quiet" so the NICs are named eth0 and eth1 (on both nodes).
systemctl stop firewalld          # stop the firewall
vim /etc/selinux/config           # edit the SELinux config file
SELINUX=disabled                  # keep SELinux disabled at boot
setenforce 0                      # turn SELinux off immediately

  1. Configure the controller node's addresses and hostname (the compute steps
     are the same, so they are omitted)
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0   # set eth0's IP address, netmask and DNS
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1   # set eth1's IP address and netmask
[root@localhost ~]# systemctl restart network                      # restart networking
NAT-mode NIC configuration (eth0):
BOOTPROTO="static"
ONBOOT="yes"
IPADDR=192.168.200.10
NETMASK=255.255.255.0
GATEWAY=192.168.200.2
DNS1=192.168.200.2
DNS2=144.144.144.144
Host-only NIC configuration (eth1):
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.0.1.10
NETMASK=255.255.255.0
[root@localhost ~]# hostnamectl set-hostname controller   # set the hostname
[root@localhost ~]# bash                                  # take effect immediately
[root@controller ~]# vi /etc/hosts                        # add both nodes; takes effect after reboot
Ping each node from the other to confirm that name resolution works.
[root@controller ~]# ping -c 4 docs.openstack.org
Test Internet access (same command on both the controller and compute nodes).
  • Network Time Protocol (NTP)
    1. Controller node
[root@controller /]# yum install chrony
[root@controller /]# vi /etc/chrony.conf   # comment out lines 3-6 and add the two lines below
server ntp1.aliyun.com iburst
allow 192.168.200.0/24
[root@controller ~]# systemctl restart chronyd.service
[root@controller ~]# systemctl enable chronyd.service
[root@controller ~]# chronyc sources

2. Compute node

[root@compute ~]# vi /etc/chrony.conf   # point "server" at the controller node's IP address
[root@compute ~]# systemctl restart chronyd.service
[root@compute ~]# systemctl enable chronyd.service
[root@compute ~]# chronyc sources
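In the `chronyc sources` output, the `^*` prefix marks the server chrony is currently synchronized to. A minimal sketch of checking for it, using a stubbed output line so it can run offline (on a real node, replace the stub with `sources=$(chronyc sources)`; the IP shown is made up):

```shell
# Stubbed chronyc sources line; "^*" means "currently synchronized to this server".
sources='^* 120.25.115.20    2   6   377    20   +12us[+30us] +/-  25ms'
if printf '%s\n' "$sources" | grep -q '^\^\*'; then
  status="time synchronized"
else
  status="NOT synchronized"
fi
echo "$status"
```

If no line starts with `^*`, check that the controller's `allow` range covers the compute node's subnet.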
  1. Install the OpenStack packages (same steps on both nodes)
[root@controller ~]# yum install centos-release-openstack-train   # install the Train yum repository
[root@controller ~]# yum upgrade                                  # upgrade the packages on all nodes
  2. Install the OpenStack client
[root@controller ~]# yum install python-openstackclient
[root@compute ~]# yum install python-openstackclient
  3. Install and configure the SQL database (controller node)
[root@controller ~]# yum install mariadb mariadb-server python2-PyMySQL
[root@controller ~]# vi /etc/my.cnf.d/openstack.cnf

## Add the following (under a [mysqld] section):

[mysqld]
bind-address = 192.168.200.10
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

## Restart the service

[root@controller ~]# systemctl restart mariadb.service
[root@controller ~]# systemctl enable mariadb.service
[root@controller ~]# mysql_secure_installation   # initialize the database and set the root password to 123456
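Before restarting MariaDB it is worth confirming that `bind-address` in openstack.cnf matches this node's management IP, since a stale address is a common cause of "can't connect" errors later. A sketch on a temp copy of the file (paths and IP are this guide's values):

```shell
# Write a throwaway copy of openstack.cnf, then extract bind-address with awk
# and compare it against the expected management IP.
cnf=$(mktemp)
printf '[mysqld]\nbind-address = 192.168.200.10\nmax_connections = 4096\n' > "$cnf"
MGMT_IP=192.168.200.10
bound=$(awk -F' *= *' '$1=="bind-address"{print $2}' "$cnf")
if [ "$bound" = "$MGMT_IP" ]; then msg="bind-address OK"; else msg="mismatch: $bound"; fi
echo "$msg"
```

On the real controller, point awk at /etc/my.cnf.d/openstack.cnf instead of the temp copy.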

Install and configure the message queue (controller node)

  1. Install and start the message queue
[root@controller ~]# yum install rabbitmq-server
[root@controller ~]# systemctl enable rabbitmq-server.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service
to /usr/lib/systemd/system/rabbitmq-server.service.
[root@controller ~]# systemctl start rabbitmq-server.service
[root@controller ~]# systemctl status rabbitmq-server.service
  2. Add the openstack user and grant it configure, write and read access:
[root@controller ~]# rabbitmqctl add_user openstack 123456
Creating user "openstack"
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/"
  3. Install Memcached for caching tokens (controller node)
[root@controller ~]# yum install memcached python-memcached
  4. Configure Memcached
[root@controller ~]# vi /etc/sysconfig/memcached
  • In the config file, append the controller's hostname to OPTIONS="-l 127.0.0.1,::1" (after ::1).
    Start Memcached and enable it at boot:
[root@controller ~]# systemctl enable memcached.service
Created symlink from /etc/systemd/system/multi-user.target.wants/memcached.service
to /usr/lib/systemd/system/memcached.service.
[root@controller ~]# systemctl start memcached.service
[root@controller ~]# systemctl status memcached.service   # check that it started successfully
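The OPTIONS edit above is easy to get wrong by hand; a sed one-liner makes it repeatable. A sketch on a temp copy of /etc/sysconfig/memcached (on the real node, run the same sed against the real file):

```shell
# Create a throwaway copy of the stock memcached sysconfig file ...
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1"
EOF
# ... then append ",controller" to the listen list in place.
sed -i 's/^OPTIONS="-l 127.0.0.1,::1"/OPTIONS="-l 127.0.0.1,::1,controller"/' "$cfg"
grep ^OPTIONS "$cfg"
```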

Install and configure Etcd (controller node)

[root@controller ~]# yum install etcd
[root@controller ~]# vi /etc/etcd/etcd.conf   # delete or comment out the shipped values (the IP addresses below are the controller node's)
Set the following (uncommenting lines where needed):
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.200.10:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.200.10:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.200.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.200.10:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.200.10:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@controller ~]# systemctl restart etcd
[root@controller ~]# systemctl enable etcd
[root@controller ~]# systemctl status etcd

Check that the service started successfully.
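The etcd.conf edits above repeat the same IP in five URLs. A sketch of templating them once with sed so the address cannot drift between lines (written to a temp file here; node IP and name are this guide's):

```shell
# Substitute the node's IP and name into an etcd.conf template in one pass.
NODE_IP=192.168.200.10
NODE_NAME=controller
conf=$(mktemp)
sed "s/@IP@/$NODE_IP/g; s/@NAME@/$NODE_NAME/g" > "$conf" <<'EOF'
ETCD_LISTEN_PEER_URLS="http://@IP@:2380"
ETCD_LISTEN_CLIENT_URLS="http://@IP@:2379"
ETCD_NAME="@NAME@"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://@IP@:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://@IP@:2379"
ETCD_INITIAL_CLUSTER="@NAME@=http://@IP@:2380"
EOF
grep ETCD_INITIAL_CLUSTER "$conf"
```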
Deploy the OpenStack Services

Install Keystone

  1. Create the database
[root@controller ~]# mysql -u root -p123456   # log in to the database
MariaDB [(none)]> CREATE DATABASE keystone;   # create the database
Query OK, 1 row affected (0.001 sec)
Grant appropriate access to the keystone database:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.003 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.000 sec)
  2. Install the packages and edit the configuration file
[root@controller ~]# yum install openstack-keystone httpd mod_wsgi   # install the packages
[root@controller ~]# vi /etc/keystone/keystone.conf
Set the database connection and the token provider (the standard Train settings, with this guide's password):
[database]
connection = mysql+pymysql://keystone:123456@controller/keystone
[token]
provider = fernet
[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone   # populate the Identity service database
[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage bootstrap --bootstrap-password 123456 \
> --bootstrap-admin-url http://controller:5000/v3/ \
> --bootstrap-internal-url http://controller:5000/v3/ \
> --bootstrap-public-url http://controller:5000/v3/ \
> --bootstrap-region-id RegionOne
[root@controller ~]# echo "ServerName controller" >> /etc/httpd/conf/httpd.conf
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Enable the service at boot, start it, and check its status:
[root@controller ~]# systemctl enable httpd.service
[root@controller ~]# systemctl start httpd.service
[root@controller ~]# systemctl status httpd.service
Add environment variables for the admin user:
[root@controller ~]# vi admin-openrc
#admin-openrc
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
  • Create a domain, projects, users and roles
[root@controller ~]# source ~/admin-openrc
1. ## How to create a new domain
[root@controller ~]# openstack domain create --description "An Example Domain" example
2. ## Create the service project
[root@controller ~]# openstack project create --domain default --description "Service Project" service
3. ## Create the myproject project
[root@controller ~]# openstack project create --domain default --description "Demo Project" myproject
4. ## Create the myuser user (set the new user's password)
[root@controller ~]# openstack user create --domain default --password 123456 myuser
5. ## Create the user role
[root@controller ~]# openstack role create user
[root@controller ~]# openstack role list   # list the roles
6. ## Add the user role to the myproject project and myuser user
[root@controller ~]# openstack role add --project myproject --user myuser user

7. ## Verify Keystone

[root@controller ~]# unset OS_AUTH_URL OS_PASSWORD
As the admin user, request an authentication token using the admin password:
[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
For the myuser user you created, request a token using the myuser password:
[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name myproject --os-username myuser token issue
  • Create the OpenStack client environment scripts
Create and edit the admin-openrc (done in the previous step) and demo-openrc files. demo-openrc holds the myuser/myproject credentials:
[root@controller ~]# vi demo-openrc
#demo-openrc
export OS_USERNAME=myuser
export OS_PASSWORD=123456
export OS_PROJECT_NAME=myproject
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Load admin-openrc to get the Identity service location and the admin project and user credentials:
[root@controller ~]# . admin-openrc
Request an authentication token:
[root@controller ~]# openstack token issue
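The two openrc files differ only in the user, project and password, so a small generator keeps the shared lines from drifting apart. A sketch using this guide's names and passwords:

```shell
# Emit an openrc file for a given user/project/password triple; every other
# line is identical between admin-openrc and demo-openrc.
make_openrc() {
  user=$1; project=$2; pass=$3
  cat <<EOF
export OS_USERNAME=$user
export OS_PASSWORD=$pass
export OS_PROJECT_NAME=$project
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
}
out=$(mktemp)
make_openrc myuser myproject 123456 > "$out"
grep OS_USERNAME "$out"
```

Usage: `make_openrc admin admin 123456 > admin-openrc` regenerates the admin file the same way.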

Deploy the Glance Service

  1. Create the database and grant privileges
[root@controller ~]# mysql -u root -p123456
MariaDB [(none)]> create database glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> flush privileges;
  2. Create the glance user
[root@controller ~]# source ~/admin-openrc
[root@controller ~]# openstack user create --domain default --password 123456 glance
Add the admin role to the glance user and service project:
[root@controller ~]# openstack role add --project service --user glance admin
  3. Create the glance service entity
[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image
  4. Create the Glance API endpoints. OpenStack uses three endpoint variants for each service: admin, internal and public.
[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
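The three endpoint variants differ only in the interface name, a pattern that repeats for every service in this guide, so a loop avoids copy-paste typos. Shown as a dry run that just prints the commands; drop the `echo` on a real controller:

```shell
# Print the three "openstack endpoint create" commands for the image service.
out=$(for iface in public internal admin; do
  echo "openstack endpoint create --region RegionOne image $iface http://controller:9292"
done)
printf '%s\n' "$out"
```

The same loop works for placement (port 8778), nova (8774/v2.1) and neutron (9696) by swapping the service name and URL.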
  5. Install the Glance package and edit the configuration file
[root@controller ~]# yum install openstack-glance -y
Edit /etc/glance/glance-api.conf:
[database]
connection = mysql+pymysql://glance:123456@controller/glance
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123456
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Populate the Image service database:
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
## Start the Image service and configure it to start at boot:
[root@controller ~]# systemctl enable openstack-glance-api.service
[root@controller ~]# systemctl start openstack-glance-api.service

Verify operation
[root@controller ~]# . admin-openrc

  • Download an image
[root@controller ~]# wget https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
## Upload the image to the Image service using the QCOW2 disk format:
[root@controller ~]# glance image-create --name "cirros" \
--file cirros-0.4.0-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--visibility public
## Confirm the upload and verify the image's attributes:
[root@controller ~]# glance image-list

Deploy Placement

  1. Create the placement database
MariaDB [(none)]> CREATE DATABASE placement;
Query OK, 1 row affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.011 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.010 sec)
  2. Create the placement user
[root@controller ~]# openstack user create --domain default --password 123456 placement
Add the placement user to the service project with the admin role:
[root@controller ~]# openstack role add --project service --user placement admin
Create the Placement API service entity:
[root@controller ~]# openstack service create --name placement --description "Placement API" placement
Create the Placement API service endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
  3. Install the package and edit the configuration file
[root@controller ~]# yum install openstack-placement-api -y
[root@controller ~]# vi /etc/placement/placement.conf
[placement_database]
connection = mysql+pymysql://placement:123456@controller/placement
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = 123456
## Populate the placement database:
[root@controller ~]# su -s /bin/sh -c "placement-manage db sync" placement
Restart httpd so the Placement API picks up the new configuration, then verify the installation:
[root@controller ~]# systemctl restart httpd
[root@controller ~]# . admin-openrc
[root@controller ~]# placement-status upgrade check
  • Install pip
[root@controller ~]# wget https://bootstrap.pypa.io/pip/2.7/get-pip.py
[root@controller ~]# python get-pip.py
## Install the osc-placement plugin:
[root@controller ~]# pip install osc-placement

## List the available resource classes and traits:

[root@controller ~]# openstack --os-placement-api-version 1.2 resource class list --sort-column name
[root@controller ~]# openstack --os-placement-api-version 1.6 trait list --sort-column name

Deploy the Nova Service (on both the controller and compute nodes)
1. Controller node deployment

[root@controller ~]# mysql -u root -p123456
# Create the nova_api, nova, and nova_cell0 databases:
MariaDB [(none)]> CREATE DATABASE nova_api;
Query OK, 1 row affected (0.002 sec)
MariaDB [(none)]> CREATE DATABASE nova;
Query OK, 1 row affected (0.001 sec)
MariaDB [(none)]> CREATE DATABASE nova_cell0;
Query OK, 1 row affected (0.001 sec)
Grant appropriate access to the databases:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.010 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.000 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.000 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.000 sec)
[root@controller ~]# . admin-openrc
  • Create the Compute service credentials:
  1. Create the nova user:
[root@controller ~]# openstack user create --domain default --password-prompt nova
  2. Add the admin role to the nova user:
[root@controller ~]# openstack role add --project service --user nova admin

## This command produces no output
  3. Create the nova service entity:

[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute
  4. Create the three Compute API service endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
  5. Install the packages:
# yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler
Edit the configuration file:
[root@controller ~]# vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller:5672/
my_ip = 192.168.200.10
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
## Configure database access:
[api_database]
connection = mysql+pymysql://nova:123456@controller/nova_api
[database]
connection = mysql+pymysql://nova:123456@controller/nova
## Configure Identity service access:
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123456
## Configure the VNC proxy to use the controller node's management interface IP address:
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
## Configure the location of the Image service API:
[glance]
api_servers = http://controller:9292
## Configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
## Configure access to the Placement service:
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456
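The transport_url above reappears verbatim in neutron.conf and in the compute node's nova.conf, so it is worth assembling it once from its parts instead of retyping it. A sketch using this guide's user, password and host:

```shell
# Build the RabbitMQ transport URL from its components so the same string
# can be reused across nova.conf, neutron.conf, etc.
RABBIT_USER=openstack
RABBIT_PASS=123456
RABBIT_HOST=controller
transport_url="rabbit://${RABBIT_USER}:${RABBIT_PASS}@${RABBIT_HOST}:5672/"
echo "$transport_url"
```

A mismatch between this URL and the `rabbitmqctl add_user` credentials is a common cause of services hanging at startup.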
  6. Populate the nova-api database (ignore any deprecation messages in the output):
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
## Register the cell0 database:
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
## Create the cell1 cell:
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
## Populate the nova database:
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
## Verify that cell0 and cell1 are registered correctly:
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
  7. Start the Compute services, configure them to start at boot, and check their status:
[root@controller ~]# systemctl enable \
> openstack-nova-api.service \
> openstack-nova-scheduler.service \
> openstack-nova-conductor.service \
> openstack-nova-novncproxy.service
[root@controller ~]# systemctl start \
> openstack-nova-api.service \
> openstack-nova-scheduler.service \
> openstack-nova-conductor.service \
> openstack-nova-novncproxy.service
[root@controller ~]# systemctl status \
> openstack-nova-api.service \
> openstack-nova-scheduler.service \
> openstack-nova-conductor.service \
> openstack-nova-novncproxy.service
All services should show as active.

2. Deploy Nova on the compute node

  1. Install the package and edit the configuration file
[root@compute ~]# yum install openstack-nova-compute -y
[root@compute ~]# vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.200.20
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
## Configure Identity service access:
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123456
## Enable and configure remote console access:
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
## Configure the location of the Image service API:
[glance]
api_servers = http://controller:9292
## Configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
## Configure the Placement API:
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456

  2. Determine whether the compute node supports hardware acceleration for virtual machines; a non-zero count means it does:
[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
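If the count is 0 the node has no VT-x/AMD-V support, and nova-compute must be told to use plain QEMU instead of KVM by setting `virt_type = qemu` in the `[libvirt]` section of /etc/nova/nova.conf. A sketch that derives the right value (note `grep -c` exits non-zero on zero matches, hence the `|| true`):

```shell
# Count vmx/svm flags; fall back to 0 if /proc/cpuinfo is unreadable.
count=$(grep -E -c 'vmx|svm' /proc/cpuinfo 2>/dev/null || true)
count=${count:-0}
# Pick the [libvirt] virt_type accordingly.
if [ "$count" -gt 0 ]; then virt_type=kvm; else virt_type=qemu; fi
echo "virt_type = $virt_type"
```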
  3. Start the Compute service (and libvirt) and configure them to start at boot:
[root@compute ~]# systemctl enable libvirtd.service openstack-nova-compute.service
[root@compute ~]# systemctl start libvirtd.service openstack-nova-compute.service
  4. Obtain admin credentials to enable the admin-only CLI commands, then confirm the compute host is in the database:

[root@controller ~]# source admin-openrc
[root@controller ~]# openstack compute service list --service nova-compute
  5. Discover the compute host:
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
  6. List the service components to verify that each process started and registered successfully:
[root@controller ~]# openstack compute service list
  7. List the API endpoints in the Identity service to verify connectivity to it:
[root@controller ~]# openstack catalog list
  8. List the images in the Image service to verify connectivity to it:
[root@controller ~]# openstack image list
  9. Check that the cells and the Placement API are working and that the other prerequisites are in place:
[root@controller ~]# nova-status upgrade check

Deploy the Networking Service (Neutron)

Prerequisites
1. Create the neutron database:

[root@controller ~]# mysql -u root -p123456
MariaDB [(none)]> create database neutron;
Query OK, 1 row affected (0.008 sec)
### Grant access to the neutron database:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.031 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.001 sec)
Obtain the admin credentials for the admin-only CLI commands:
[root@controller ~]# . admin-openrc
2. Create the neutron user:
[root@controller ~]# openstack user create --domain default --password-prompt neutron
3. Add the admin role to the neutron user:
[root@controller ~]# openstack role add --project service --user neutron admin
4. Create the neutron service entity:
[root@controller ~]# openstack service create --name neutron \
--description "OpenStack Networking" network

5. Create the Networking service API endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne \
network public http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne \
network internal http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne \
network admin http://controller:9696
Self-service networks
Controller node deployment

  1. Install the networking packages (Linux bridge)
[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
  2. Configure the server component
[root@controller ~]# vim /etc/neutron/neutron.conf
### Configure database access:
[database]
connection = mysql+pymysql://neutron:123456@controller/neutron
### Enable the Modular Layer 2 (ML2) plug-in, the router service, and overlapping IP addresses:
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
### Configure RabbitMQ message queue access:
transport_url = rabbit://openstack:123456@controller
### Configure Identity service access:
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
### Configure Networking to notify Compute of network topology changes:
[DEFAULT]
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456
### Configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
3. Configure the Modular Layer 2 (ML2) plug-in
[root@controller ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan                ## enable flat, VLAN and VXLAN networks
tenant_network_types = vxlan                  ## enable VXLAN self-service networks
mechanism_drivers = linuxbridge,l2population  ## enable the Linux bridge and layer-2 population mechanisms
extension_drivers = port_security             ## enable the port security extension driver
### Configure the provider virtual network as a flat network:
[ml2_type_flat]
flat_networks = provider
### Configure the VXLAN network identifier range for self-service networks:
[ml2_type_vxlan]
vni_ranges = 1:1000
### Enable ipset to improve the efficiency of security group rules:
[securitygroup]
enable_ipset = true
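With this many INI files to edit, it helps to be able to spot-check a single key from the command line. A sketch of pulling one value out of an INI section with awk, run here against a temp copy of the [ml2_type_vxlan] block:

```shell
# Write a small sample of ml2_conf.ini, then extract vni_ranges from the
# [ml2_type_vxlan] section: track the current section header in "s" and
# print the value only when both section and key match.
ini=$(mktemp)
printf '[ml2]\ntype_drivers = flat,vlan,vxlan\n[ml2_type_vxlan]\nvni_ranges = 1:1000\n' > "$ini"
vni=$(awk -F' *= *' '/^\[/{s=$0} s=="[ml2_type_vxlan]" && $1=="vni_ranges"{print $2}' "$ini")
echo "$vni"
```

On the real controller, point the same awk at /etc/neutron/plugins/ml2/ml2_conf.ini.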
4. Configure the Linux bridge agent
[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = true
local_ip = 192.168.200.10
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

# Set the Linux kernel bridge parameters to 1

[root@controller ~]# echo 'net.bridge.bridge-nf-call-iptables=1' >> /etc/sysctl.conf
[root@controller ~]# echo 'net.bridge.bridge-nf-call-ip6tables=1' >> /etc/sysctl.conf

# Enable network bridge support by loading the br_netfilter kernel module

[root@controller ~]# modprobe br_netfilter
[root@controller ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
  5. Configure the layer-3 (L3) agent, which provides routing and NAT for self-service virtual networks
[root@controller ~]# vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
  6. Configure the DHCP agent, which provides DHCP for virtual networks
[root@controller ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
  7. Configure the metadata agent
[root@controller ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 123456
  8. On the controller node, configure the Nova service to interact with the Networking service
[root@controller ~]# vim /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456

Finalize the installation

  1. Create a symbolic link pointing /etc/neutron/plugin.ini at the ML2 plug-in configuration
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
  2. Populate the database:
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
  3. Restart the Compute API service:
[root@controller ~]# systemctl restart openstack-nova-api.service
[root@controller ~]# systemctl status openstack-nova-api.service
  4. Start the Networking services and configure them to start at boot (enable, start, then check that they started):
[root@controller ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
[root@controller ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
[root@controller ~]# systemctl status neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
  5. Because the layer-3 (L3) service was configured, start it as well:
[root@controller ~]# systemctl enable neutron-l3-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-l3-agent.service to /usr/lib/systemd/system/neutron-l3-agent.service.
[root@controller ~]# systemctl restart neutron-l3-agent.service
[root@controller ~]# systemctl status neutron-l3-agent.service

Compute node deployment

  1. Install the Neutron networking components on the compute node
[root@compute ~]# yum install openstack-neutron-linuxbridge ebtables ipset
  2. Configure the common components: edit /etc/neutron/neutron.conf
[root@compute ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:123456@controller   ## configure message queue access
auth_strategy = keystone                               ## configure Identity service access
### Configure Identity service access:
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
### Configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
3. Configure the Linux bridge agent
[root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
## Map the provider virtual network to the provider physical network interface:
[linux_bridge]
physical_interface_mappings = provider:eth0
## Enable VXLAN overlay networks, configure the IP address of the physical interface that handles overlay traffic, and enable layer-2 population:
[vxlan]
enable_vxlan = true
local_ip = 192.168.200.20
l2_population = true
## Enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
  4. Set the Linux kernel bridge parameters to 1
[root@compute ~]# echo 'net.bridge.bridge-nf-call-iptables=1' >> /etc/sysctl.conf
[root@compute ~]# echo 'net.bridge.bridge-nf-call-ip6tables=1' >> /etc/sysctl.conf
[root@compute ~]# modprobe br_netfilter
[root@compute ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
  5. Configure the Nova service on the compute node to use the Networking service
[root@compute ~]# vi /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456

Finalize the installation

  1. Restart the Compute service:
[root@compute ~]# systemctl restart openstack-nova-compute.service
[root@compute ~]# systemctl status openstack-nova-compute.service
  2. Start the Neutron Linux bridge agent and enable it at boot:
[root@compute ~]# systemctl enable neutron-linuxbridge-agent.service
[root@compute ~]# systemctl restart neutron-linuxbridge-agent.service
[root@compute ~]# systemctl status neutron-linuxbridge-agent.service

Verify the installation

[root@controller ~]# openstack extension list --network   ## list the loaded extensions to verify the neutron-server process started successfully
[root@controller ~]# openstack network agent list         ## list the agents to verify that the Neutron agents launched successfully
Install and configure the dashboard (it can be installed on either the controller or the compute node)
There are two installation methods: package install, or source install (manual).
Because Horizon needs Apache, it is installed on the compute node here so as not to disturb the Apache instance that Keystone and the other services use on the controller node. Before installing, confirm that the previously installed services are running normally.

  1. Package installation
[root@compute ~]# yum install openstack-dashboard memcached python-memcached -y
  2. Start the services and enable them at boot
[root@compute ~]# systemctl restart memcached.service
[root@compute ~]# systemctl enable memcached.service
  3. Edit the configuration file /etc/openstack-dashboard/local_settings
[root@compute ~]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"   # line 118
ALLOWED_HOSTS = ['*']   # line 39; lets any host reach the dashboard -- insecure, do not use in production
# Configure the memcached session storage service
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    },
}
# Enable Identity API version 3
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
# Enable support for domains
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
# Configure the API versions
OPENSTACK_API_VERSIONS = { "identity": 3, "image": 2, "volume": 3, }
# Configure Default as the default domain for users created via the dashboard
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
# Configure user as the default role for users created via the dashboard
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
# With networking option 1, disable support for layer-3 networking services; with option 2 they can be enabled
OPENSTACK_NEUTRON_NETWORK = {
    'enable_auto_allocated_network': True,
    'enable_distributed_router': True,
    'enable_fip_topology_check': True,
    'enable_ha_router': True,
    'enable_ipv6': True,
    'enable_lb': True,
    'enable_firewall': True,
    'enable_vpn': True,
    'enable_quotas': True,
    'enable_rbac_policy': True,
    'enable_router': True,
}
## Set the time zone to Shanghai
TIME_ZONE = "Asia/Shanghai"

## Rebuild Apache's dashboard configuration file

[root@compute ~]# cd /usr/share/openstack-dashboard
[root@compute openstack-dashboard]# python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf

## Restart the Apache and memcached services on the compute node

[root@compute openstack-dashboard]# systemctl restart httpd.service memcached.service
[root@compute openstack-dashboard]# systemctl enable httpd.service memcached.service
[root@compute openstack-dashboard]# systemctl status httpd.service memcached.service

Verify access
Open the dashboard in a browser at http://192.168.200.20
Authenticate with the admin or myuser user and the default domain credentials.
Domain: default   Username: admin   Password: 123456

Create the Virtual Networks and Launch an Instance

  1. Create the provider network
[root@controller ~]# . admin-openrc
Create the network:
[root@controller ~]# openstack network create --share --external \
> --provider-physical-network provider \
> --provider-network-type flat provider
Create the subnet:
[root@controller ~]# openstack subnet create --network provider \
--allocation-pool start=192.168.200.101,end=192.168.200.250 \
--dns-nameserver 8.8.4.4 --gateway 192.168.200.251 \
--subnet-range 192.168.200.0/24 provider
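A quick sanity check on the allocation pool above: the inclusive range .101 to .250 hands out exactly 150 addresses, which must fit inside the /24 while avoiding the node IPs (.10, .20) and the gateway (.251):

```shell
# Inclusive address count of the allocation pool 192.168.200.101 - 192.168.200.250.
start=101
end=250
pool_size=$((end - start + 1))
echo "$pool_size addresses available"   # prints: 150 addresses available
```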
View the networks:
[root@controller ~]# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 60c6f620-a143-49c5-b4c8-7def8103f57c | provider | b8988625-476b-463e-ae96-e2be2becd898 |
+--------------------------------------+----------+--------------------------------------+
View the subnets:
[root@controller ~]# openstack subnet list
+--------------------------------------+----------+--------------------------------------+------------------+
| ID                                   | Name     | Network                              | Subnet           |
+--------------------------------------+----------+--------------------------------------+------------------+
| b8988625-476b-463e-ae96-e2be2becd898 | provider | 60c6f620-a143-49c5-b4c8-7def8103f57c | 192.168.200.0/24 |
+--------------------------------------+----------+--------------------------------------+------------------+

Create a self-service network

  1. Use the demo credentials
[root@controller ~]# . demo-openrc
2. Create the network
[root@controller ~]# openstack network create selfservice
3. Create the subnet
[root@controller ~]# openstack subnet create --network selfservice \
> --dns-nameserver 223.5.5.5 --gateway 172.18.1.1 \
> --subnet-range 172.18.1.0/24 selfservice
4. View the networks
[root@controller ~]# openstack network list
5. View the subnets
[root@controller ~]# openstack subnet list
6. Use the demo credentials
[root@controller ~]# . demo-openrc
7. Create a router
[root@controller ~]# openstack router create router01
8. View the routers
[root@controller ~]# openstack router list
9. Add the self-service network subnet as an interface on the router
[root@controller ~]# openstack router add subnet router01 selfservice
10. Set a gateway on the provider network on the router
[root@controller ~]# openstack router set router01 --external-gateway provider
  11. Use the ip netns command to find the virtual router, then ping the gateway
      on the real physical network from inside its namespace
[root@controller ~]# ip netns
qrouter-aae248c6-6252-4157-8e82-bae43e001bc9 (id: 2)
qdhcp-7ffe4ebb-22f3-448f-a823-1b9feba9ab6c (id: 1)
qdhcp-414130ae-3e61-446f-9083-16efdcb1f24a (id: 0)
[root@controller ~]# ip netns exec qrouter-aae248c6-6252-4157-8e82-bae43e001bc9 ping -c 4 <physical-gateway-IP>

Verify
[root@controller ~]# . admin-openrc
## List the ports on the router to determine the gateway IP address on the provider network:
[root@controller ~]# openstack port list --router router01
## Ping this IP address from the controller node or any host on the physical provider network:

[root@controller ~]# ping 192.168.200.239
PING 192.168.200.239 (192.168.200.239) 56(84) bytes of data.
64 bytes from 192.168.200.239: icmp_seq=1 ttl=64 time=4.47 ms
64 bytes from 192.168.200.239: icmp_seq=2 ttl=64 time=0.101 ms

创建虚拟网络

1. 创建m1
[root@controller ~]# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk
1 m1.nano
验证凭据
[root@controller ~]# . demo-openrc
2. 生成密钥对,
[root@controller ~]# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa):
3. 生成密钥对并添加公钥:
[root@controller ~]# openstack keypair create --public-key ~/.ssh/id_rsa.pub
mykey
4. 验证密钥对的添加:
[root@controller ~]# openstack keypair list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| mykey | 6a:27:52:85:f1:2a:e8:97:a9:15:c3:3e:3c:26:4e:a6 |
+-------+-------------------------------------------------+
Add security group rules
## Add rules to the default security group:
[root@controller ~]# openstack security group rule create --proto icmp default
## Permit secure shell (SSH) access:
[root@controller ~]# openstack security group rule create --proto tcp --dst-port 22 default
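Both rule-create calls follow the same shape, so they can be generated in a loop. A sketch that only prints the commands; swap `echo` for `eval` to run them against an authenticated CLI:

```shell
# Build the two `openstack security group rule create` calls from
# "proto:port" pairs; an empty port means no --dst-port flag (ICMP).
for proto_port in "icmp:" "tcp:22"; do
  proto=${proto_port%%:*}
  port=${proto_port#*:}
  cmd="openstack security group rule create --proto $proto"
  [ -n "$port" ] && cmd="$cmd --dst-port $port"
  cmd="$cmd default"
  echo "$cmd"   # replace with: eval "$cmd"
done
```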
Launch an instance on the self-service network
1. Determine instance options
## Source the demo credentials to gain access to user-only CLI commands:
[root@controller ~]# . demo-openrc
## List available flavors:
[root@controller ~]# openstack flavor list
+----+---------+-----+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+-----+------+-----------+-------+-----------+
| 0 | m1.nano | 64 | 1 | 0 | 1 | True |
+----+---------+-----+------+-----------+-------+-----------+
## List available images:
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 78248bea-4163-4d59-ba8b-bdbe60d7ddae | cirros | active |
+--------------------------------------+--------+--------+
## List available networks:
[root@controller ~]# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 60c6f620-a143-49c5-b4c8-7def8103f57c | provider | b8988625-476b-463e-ae96-e2be2becd898 |
+--------------------------------------+----------+--------------------------------------+
## List available security groups:
[root@controller ~]# openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+------+
| ID                                   | Name    | Description            | Project                          | Tags |
+--------------------------------------+---------+------------------------+----------------------------------+------+
| ed8a46f4-55ae-4197-b69f-279c8e646125 | default | Default security group | 19b619aa4421430194298a9d626ddb0d | []   |
+--------------------------------------+---------+------------------------+----------------------------------+------+
## Launch the instance:
[root@controller ~]# openstack server create --flavor m1.nano --image cirros \
> --nic net-id=60c6f620-a143-49c5-b4c8-7def8103f57c --security-group default \
> --key-name mykey provider-instance
## Check the status of the instance:
[root@controller ~]# openstack server list
+--------------------------------------+-------------------+--------+--------------------------+--------+---------+
| ID                                   | Name              | Status | Networks                 | Image  | Flavor  |
+--------------------------------------+-------------------+--------+--------------------------+--------+---------+
| 19c3d065-aacb-45af-bd5a-0621e43f552a | provider-instance | ACTIVE | provider=192.168.200.229 | cirros | m1.nano |
+--------------------------------------+-------------------+--------+--------------------------+--------+---------+
The status changes from BUILD to ACTIVE when the build process completes successfully.
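Waiting for BUILD to turn into ACTIVE can be automated with a small polling loop. A sketch with the status source stubbed out so the loop logic stands alone; in a real run the stub would be `openstack server show provider-instance -f value -c status`:

```shell
# Stub standing in for repeated `openstack server show ... -c status` calls:
# the first two polls report BUILD, later polls report ACTIVE.
get_status() {
  case $1 in
    0|1) echo BUILD ;;
    *)   echo ACTIVE ;;
  esac
}

i=0
status=$(get_status $i)
while [ "$status" != "ACTIVE" ] && [ $i -lt 30 ]; do
  i=$((i + 1))
  # real run: sleep 5
  status=$(get_status $i)
done
echo "final status: $status after $i polls"
```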
Access the instance using a virtual console (provider network)
1. Obtain a Virtual Network Computing (VNC) session URL for the instance and access it from a web browser:
[root@controller ~]# openstack console url show provider-instance
+-------+-------------------------------------------------------------------------------------------+
| Field | Value                                                                                     |
+-------+-------------------------------------------------------------------------------------------+
| type  | novnc                                                                                     |
| url   | http://controller:6080/vnc_auto.html?path=%3Ftoken%3D956ca13e-c903-4a47-b163-7f48a4fee6cf |
+-------+-------------------------------------------------------------------------------------------+

Check the IP address
Log in to the cirros instance and verify access to the public provider network gateway
Ping baidu.com
Note: if the instance cannot reach the external network, change the virtual router's default gateway to match the real VM's gateway:
[root@controller ~]# ip netns exec qrouter-aae248c6-6252-4157-8e82-bae43e001bc9 route add default gw 192.168.200.2
[root@controller ~]# ip netns exec qrouter-aae248c6-6252-4157-8e82-bae43e001bc9 route -n
