OpenStack:
IaaS cloud stack, a CloudOS
Private cloud (built and used in-house)
Public cloud (rented from a cloud provider)
Hybrid cloud (rented plus self-built)

Service models: IaaS (OpenStack, CloudStack), PaaS (Docker, OpenShift), SaaS; also DBaaS (Database as a Service) and FWaaS (Firewall as a Service). IaaS provides VMs on demand.

OpenStack components:
Dashboard: Horizon, the web GUI;
Compute: Nova, manages the entire lifecycle of VMs; its main duties are creating, scheduling, and launching VM instances;
Networking: Neutron, formerly called Quantum (it was nova-networking before being split out); provides network connectivity and services, with an API that lets users create network connections on demand; its plug-in design supports networking frameworks from additional network service providers, including Open vSwitch;
Object Storage: Swift, stores and retrieves unstructured data objects through a RESTful interface; a highly fault-tolerant architecture with data replication and scale-out, i.e. distributed storage;
Block Storage: Cinder, originally provided by Nova under the code name nova-volume; provides persistent block storage for VMs;
Identity service: Keystone, provides authentication and authorization for all OpenStack services, plus a catalog of their access endpoints;
Image service: Glance, stores and retrieves disk image files;
Telemetry: Ceilometer, implements monitoring and metering services;
Orchestration: Heat, coordinates multiple components;
Database service: Trove, implements DBaaS;
Data processing service: Sahara, manages Hadoop inside OpenStack;

OpenStack capabilities:
VMs on demand
provisioning
snapshotting
Volumes
Networks
Multi-tenancy
quotas for different users
user can be associated with multiple tenants
Object storage for VM images and arbitrary files

Cloud Computing

Cloud Service Model

Cloud Deployment Model

OpenStack core components

OpenStack software environment

OpenStack Projects:
OpenStack Compute (code-name Nova)
core project since the Austin release
OpenStack Networking (code-name Neutron)
core project since the Folsom release
OpenStack Object Storage (code-name Swift)
core project since the Austin release
OpenStack Block Storage (code-name Cinder)
core project since the Folsom release
OpenStack Identity (code-name Keystone)
core project since the Essex release
OpenStack Image Service (code-name Glance)
core project since the Bexar release
OpenStack Dashboard (code-name Horizon)
core project since the Essex release
OpenStack Metering (code-name Ceilometer)
core project since the Havana release
OpenStack Orchestration (code-name Heat)
core project since the Havana release

OpenStack conceptual architecture (Havana)

OpenStack Logical Architecture

OpenStack conceptual arch

Request Flow for Provisioning Instance

Two-node architecture with legacy networking (nova-network)

Three-node architecture with OpenStack Networking (neutron)

OpenStack installation and configuration:
KeyStone:
Identity: has two main functions
User management: authentication and authorization
There are two authentication methods (a sketch follows the term list below):
token;
username and password;
Service catalog: a registry of all available services, including their API endpoint paths;
Core terms:
User: a user can be associated with multiple tenants
Tenant: a tenant corresponds to a project, or to an organization
Role: a role
Token: a token, used for authentication, authorization, etc.
Service: a service
Endpoint: a service's access entry point
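As a concrete sketch of the two authentication methods (the endpoint URL and credentials here are the ones created later in this walkthrough):
Method 1, token:
# export OS_SERVICE_TOKEN=010a2a38a44d76e269ed
# export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
# keystone user-list
Method 2, username and password:
# keystone --os-username=admin --os-password=admin --os-tenant-name=admin --os-auth-url=http://controller:35357/v2.0 token-get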

epel6 Icehouse installation repo: https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6/

Management tool: keystone-manage
Client program: keystone

OpenStack services and clients

Logical Architecture - keystone

Backend
Token driver: kvs/memcached/sql
Catalog driver: kvs/sql/template
Identity driver: kvs/sql/ldap/pam
Policy driver: rules
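A sketch of how these back ends are selected in /etc/keystone/keystone.conf (Icehouse-era driver class paths; verify against the installed version):
[token]
driver = keystone.token.backends.sql.Token
[catalog]
driver = keystone.catalog.backends.sql.Catalog
[identity]
driver = keystone.identity.backends.sql.Identity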

Auth Token Usage

Image Service:
Code name: Glance; registers, discovers, and retrieves disk image files within OpenStack;
Where are VM image files stored?
On an ordinary filesystem, an object storage system (Swift), S3 storage, or an HTTP server;

Logical Architecture - glance

Glance access mechanism

Glance components:
glance-api
Glance's API service interface; accepts Image Service API requests to view, download, and store image files;
glance-registry
stores, processes, and retrieves image metadata, such as an image's size and type;
database
stores image metadata;
image store repository
supports multiple kinds of image storage back ends, including ordinary filesystems, object storage, RADOS block devices, HTTP, and Amazon S3;
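For example, the default filesystem store used later in this walkthrough corresponds roughly to the following in /etc/glance/glance-api.conf (a sketch; option names per the Icehouse release):
[DEFAULT]
default_store = file
filesystem_store_datadir = /var/lib/glance/images/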

Glance Architecture

Glance storage back ends

Disk image files:
(1) build them yourself: Oz (KVM), VMBuilder (KVM, Xen), VeeWee (KVM), imagefactory
(2) get pre-built templates: CirrOS, Ubuntu, Fedora, openSUSE, the Rackspace cloud image builder

Disk image files used in OpenStack must meet the following requirements:
(1) OpenStack can read their metadata and modify their data;
(2) the image size can be adjusted (see the qemu-img sketch below);
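Requirement (2) can be exercised directly on a qcow2 image with qemu-img, for example (using the CirrOS image downloaded later in this walkthrough; the +1G increment is arbitrary):
# qemu-img resize cirros-no_cloud-0.3.0-x86_64-disk.img +1G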

Compute Service
Code name: Nova

Supporting services:
AMQP (Advanced Message Queuing Protocol): Apache Qpid (5672/tcp), RabbitMQ, ZeroMQ
Database

Install qpid:
# yum install qpid-cpp-server
Edit the configuration file and set "auth=no"
# service qpidd start

Nova components:
API: nova-api, nova-api-metadata
Compute core: nova-compute, nova-scheduler, nova-conductor
Network for VMs: nova-network, nova-dhcpagent
Console interface: nova-consoleauth, nova-novncproxy, nova-xvpvncproxy, nova-cert
Command line and other interfaces: nova, nova-manage

Logical Architecture - nova-compute

Roles of the Compute service:
management
hypervisor

Note: when configuring a compute node, the following extra parameters must be set in the [DEFAULT] section;
vif_plugging_timeout=10
vif_plugging_is_fatal=false

Network Service:
Code name: Neutron, formerly called Quantum;

There are two configuration mechanisms: legacy networking and Neutron.
Neutron server: controller
Network node
Compute nodes: Computes
Functions: OVS, l3 (netns), dhcp agent, NAT, LBaaS, FWaaS, IPSec VPN
Networking API: network, subnet
Network: an isolated layer-2 network, similar to a VLAN;
Subnet: a layer-3 network with associated configuration state; that is, a network formed from an IPv4 or IPv6 address block;
Port: the attachment interface that connects a host to a device;
Plug-ins:
plug-in agents: neutron-*-agent
dhcp agent
l3 agent
l2 agent

Physical network architecture in OpenStack:
management network:
data network:
external network:
API network:

Tenant network: a network used internally by a tenant;
Flat: all VMs are on one network; VLANs and other network isolation mechanisms are not supported;
Local: all VMs live on the local compute node and do not communicate with external networks; isolated networking;
VLAN: uses VLAN IDs to create multiple provider or tenant networks;
VXLAN and GRE:
Provider network: not owned by any one tenant; a network that carries traffic on behalf of all tenants;
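As a sketch of how the two kinds of networks surface in the API once Neutron is running (names are illustrative; run with admin credentials):
# neutron net-create ext-net --shared --router:external=True    # provider/external network
# neutron net-create demo-net    # tenant network; type GRE under the ML2 configuration used below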

OpenStack Networking Architecture

Logical Architecture - neutron

Flat Network
Network bridge
'dnsmasq' DHCP server
Bridge as default gw
Limitations:
Single L2 domain and ARP space, no tenant isolation
Single IP pool

Flat Network Deployment

Flat Network Manager

VLAN Network
Network bridge
VLAN interface over physical interface
Limitations:
Scaling limit of 4k VLAN tags
VLANs are configured manually on the switch by an admin
May have issues with overlapping MACs

VLAN Network Deployment

Neutron
It provides "network connectivity as a service" between interface devices (e.g., vNICs) managed by other OpenStack services (e.g., Nova)
The service works by allowing users to create their own networks and then attach interfaces to them
Neutron has a pluggable architecture to support many popular networking vendors and technologies
neutron-server accepts API requests and routes them to the correct Neutron plugin
Plugins and agents perform the actual actions, like plugging/unplugging ports, creating networks and subnets, and IP addressing
It also has a message queue to route info between neutron-server and the various agents
It has a neutron database to store networking state for particular plugins
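A minimal sketch of the create-networks-then-attach-interfaces workflow via the neutron client (demo-net, demo-subnet, demo-router, and the 192.168.30.0/24 range are illustrative):
# neutron net-create demo-net
# neutron subnet-create demo-net 192.168.30.0/24 --name demo-subnet
# neutron router-create demo-router
# neutron router-interface-add demo-router demo-subnet
# neutron router-gateway-set demo-router ext-net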

Dashboard:
Python Django
web framework

Flow for launching a VM instance

Notes:
1. In Neutron's configuration files, replace auth-uri with identity_uri;
2. Change the group owner of each configuration file to the user the corresponding service runs as, otherwise the service will fail to start;

Block Storage Service
Code name: Cinder
Components:
cinder-api
cinder-volume
cinder-scheduler
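Once deployed, the typical flow looks like this (a sketch using Icehouse-era cinderclient syntax; the volume name is illustrative):
# cinder create --display-name test-vol 1    # request a 1 GB volume; cinder-scheduler picks a back end, cinder-volume builds it
# cinder list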

Deployment tools:
fuel: from Mirantis
devstack

Lab environment:
Controller:
OS: CentOS 6.6
Kernel: 2.6.32-754.12.1.el6.x86_64
NIC 1: vm0, 192.168.10.6
NIC 2: vm8, 192.168.243.138
Compute1:
OS: CentOS 6.6
Kernel: 2.6.32-754.12.1.el6.x86_64
NIC 1: vm0, 192.168.10.7
NIC 2: vm1
Network node:
OS: CentOS 6.6
Kernel: 2.6.32-754.12.1.el6.x86_64
NIC 1: vm0, 192.168.10.8
NIC 2: vm1
NIC 3: vm8
Storage node:
OS: CentOS 6.6
Kernel: 2.6.32-754.12.1.el6.x86_64
NIC 1: vm0, 192.168.10.9

Controller:

[root@controller ~]# hostname
controller.smoke.com
[root@controller ~]# vim /etc/hosts
192.168.10.6    controller.smoke.com controller
192.168.10.7    compute1.smoke.com compute1
192.168.10.8    network.smoke.com network
192.168.10.9    stor1.smoke.com stor1
Set NM_CONTROLLED=no in each NIC's config file so NetworkManager does not manage it
[root@controller ~]# chkconfig NetworkManager off
[root@controller ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:99:D9:9E
          inet addr:192.168.10.6  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe99:d99e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3657 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2985 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:291772 (284.9 KiB)  TX bytes:279044 (272.5 KiB)
eth1      Link encap:Ethernet  HWaddr 00:0C:29:99:D9:A8
          inet addr:192.168.243.138  Bcast:192.168.243.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe99:d9a8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1279 errors:0 dropped:0 overruns:0 frame:0
          TX packets:277 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:87222 (85.1 KiB)  TX bytes:19819 (19.3 KiB)
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:24 errors:0 dropped:0 overruns:0 frame:0
          TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1892 (1.8 KiB)  TX bytes:1892 (1.8 KiB)
[root@controller ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.243.0   0.0.0.0         255.255.255.0   U     0      0        0 eth1
192.168.10.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
0.0.0.0         192.168.243.2   0.0.0.0         UG    0      0        0 eth1
[root@controller ~]# crontab -l
#time sync by haojiang at 2019-01-10
*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com &> /dev/null

Compute1:

[root@compute1 ~]# hostname
compute1.smoke.com
[root@compute1 ~]# vim /etc/hosts
192.168.10.6    controller.smoke.com controller
192.168.10.7    compute1.smoke.com compute1
192.168.10.8    network.smoke.com network
192.168.10.9    stor1.smoke.com stor1
[root@compute1 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:90:D0:92
          inet addr:192.168.10.7  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe90:d092/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2593 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1187 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:259629 (253.5 KiB)  TX bytes:120828 (117.9 KiB)
eth1      Link encap:Ethernet  HWaddr 00:0C:29:90:D0:9C
          inet6 addr: fe80::20c:29ff:fe90:d09c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1458 errors:0 dropped:0 overruns:0 frame:0
          TX packets:27 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:216549 (211.4 KiB)  TX bytes:2110 (2.0 KiB)
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:168 (168.0 b)  TX bytes:168 (168.0 b)
[root@compute1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.10.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
0.0.0.0         192.168.10.6    0.0.0.0         UG    0      0        0 eth0
[root@compute1 ~]# crontab -l
#time sync by haojiang at 2019-01-10
*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com &> /dev/null

Controller:

[root@controller ~]# iptables -t nat -A POSTROUTING  -s 192.168.10.0/24 -j SNAT --to-source 192.168.243.138
[root@controller ~]# service iptables save
[root@controller ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
[root@controller ~]# sysctl -p
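To verify the SNAT rule, test outbound access from a node on the internal network, e.g. from compute1 (ntp1.aliyun.com is the NTP host the cron jobs already use; -q makes a query without setting the clock):
[root@compute1 ~]# ntpdate -q ntp1.aliyun.com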

Network:

[root@network ~]# hostname
network.smoke.com
[root@network ~]# vim /etc/hosts
192.168.10.6    controller.smoke.com controller
192.168.10.7    compute1.smoke.com compute1
192.168.10.8    network.smoke.com network
192.168.10.9    stor1.smoke.com stor1
[root@network ~]# chkconfig NetworkManager off
[root@network ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:D6:6A:92
          inet addr:192.168.10.8  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fed6:6a92/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:325 errors:0 dropped:0 overruns:0 frame:0
          TX packets:317 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:33099 (32.3 KiB)  TX bytes:36628 (35.7 KiB)
eth1      Link encap:Ethernet  HWaddr 00:0C:29:D6:6A:9C
          inet6 addr: fe80::20c:29ff:fed6:6a9c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:342 (342.0 b)  TX bytes:1404 (1.3 KiB)
eth2      Link encap:Ethernet  HWaddr 00:0C:29:D6:6A:A6
          inet addr:192.168.243.129  Bcast:192.168.243.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fed6:6aa6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:196 errors:0 dropped:0 overruns:0 frame:0
          TX packets:58 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:14099 (13.7 KiB)  TX bytes:4981 (4.8 KiB)
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:600 (600.0 b)  TX bytes:600 (600.0 b)
[root@network ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.243.0   0.0.0.0         255.255.255.0   U     0      0        0 eth2
192.168.10.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
169.254.0.0     0.0.0.0         255.255.0.0     U     1004   0        0 eth2
0.0.0.0         192.168.243.2   0.0.0.0         UG    0      0        0 eth2
[root@network ~]# crontab -l
#time sync by haojiang at 2019-01-10
*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com &> /dev/null

stor1:

[root@stor1 ~]# hostname
stor1.smoke.com
[root@stor1 ~]# vim /etc/hosts
192.168.10.6    controller.smoke.com controller
192.168.10.7    compute1.smoke.com compute1
192.168.10.8    network.smoke.com network
192.168.10.9    stor1.smoke.com stor1
[root@stor1 ~]# chkconfig NetworkManager off
[root@stor1 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:C7:68:35
          inet addr:192.168.10.9  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fec7:6835/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9710 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2721 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:900402 (879.2 KiB)  TX bytes:200470 (195.7 KiB)
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:757 errors:0 dropped:0 overruns:0 frame:0
          TX packets:757 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:266573 (260.3 KiB)  TX bytes:266573 (260.3 KiB)
[root@stor1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.10.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
0.0.0.0         192.168.10.6    0.0.0.0         UG    0      0        0 eth0
[root@stor1 ~]# crontab -l
#time sync by haojiang at 2019-01-10
*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com &> /dev/null

Install MariaDB:

[root@controller ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo
[openstack-icehouse]
name=OpenStack Icehouse Repository
baseurl=https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6
enabled=1
skip_if_unavailable=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Icehouse
priority=98
The openstack-icehouse yum repo fails with: https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6/repodata/repomd.xml: [Errno 14] problem making ssl connection
[root@controller ~]# mv /etc/yum.repos.d/openstack-Icehouse.repo /etc/yum.repos.d/openstack-Icehouse.repo.bak
[root@controller ~]# yum -y update    # workaround 1
[root@controller ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo    # workaround 2
enabled=0
[root@controller ~]# mv /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/certs/ca-bundle.crt.bak
[root@controller ~]# yum -y install ca-certificates
[root@controller ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo
enabled=1
[root@controller ~]# yum -y update curl    # workaround 3
[root@controller ~]# yum -y install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
The epel yum repo fails with: Error: Cannot retrieve metalink for repository: epel. Please verify its path and try again
[root@controller ~]# vim /etc/yum.repos.d/epel.repo    # comment out mirrorlist and uncomment baseurl;
[root@controller ~]# vim /etc/yum.repos.d/epel-testing.repo
[root@controller ~]# yum -y install mariadb-galera-server
[root@controller ~]# service mysqld start
[root@controller ~]# chkconfig mysqld on

Install Keystone:

[root@controller ~]# yum -y install openstack-keystone python-keystoneclient openstack-utils
[root@controller ~]# mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 3
Server version: 5.5.44-MariaDB-log Source distribution
Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.04 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
Query OK, 0 rows affected (0.01 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller.smoke.com' IDENTIFIED BY 'keystone';
Query OK, 0 rows affected (0.04 sec)
MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> exit
Bye
[root@controller ~]# vim /etc/keystone/keystone.conf
[database]
connection=mysql://keystone:keystone@192.168.10.6/keystone
[root@controller ~]# su -s /bin/sh -c 'keystone-manage db_sync' keystone    # sync the database schema (the connection URL above must be set first)
[root@controller ~]# ADMIN_TOKEN=$(openssl rand -hex 10)
[root@controller ~]# echo $ADMIN_TOKEN
010a2a38a44d76e269ed
[root@controller ~]# echo $ADMIN_TOKEN > .admin_token.rc
[root@controller ~]# vim /etc/keystone/keystone.conf
[DEFAULT]
admin_token=010a2a38a44d76e269ed
[root@controller ~]# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# chown -R keystone:keystone /etc/keystone/ssl/
[root@controller ~]# chmod -R o-rwx /etc/keystone/ssl/
[root@controller ~]# service openstack-keystone start
[root@controller ~]# chkconfig openstack-keystone on
[root@controller ~]# export OS_SERVICE_TOKEN=$ADMIN_TOKEN
[root@controller ~]# export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
[root@controller ~]# keystone --os-token $ADMIN_TOKEN user-list
[root@controller ~]# keystone user-list
[root@controller gmp-6.1.0]# keystone user-create --name=admin --pass=admin --email=admin@smoke.com
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |         admin@smoke.com          |
| enabled  |               True               |
|    id    | ece7cbced7b84c9c917aac88ee7bd8a1 |
|   name   |              admin               |
| username |              admin               |
+----------+----------------------------------+
[root@controller ~]#  keystone user-list
+----------------------------------+-------+---------+-----------------+
|                id                |  name | enabled |      email      |
+----------------------------------+-------+---------+-----------------+
| ece7cbced7b84c9c917aac88ee7bd8a1 | admin |   True  | admin@smoke.com |
+----------------------------------+-------+---------+-----------------+
[root@controller ~]# keystone role-create --name=admin
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | 47debc9436274cd4b81375ab9948cf70 |
|   name   |              admin               |
+----------+----------------------------------+
[root@controller ~]#  keystone role-list
+----------------------------------+----------+
|                id                |   name   |
+----------------------------------+----------+
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ |
| 47debc9436274cd4b81375ab9948cf70 |  admin   |
+----------------------------------+----------+
[root@controller ~]# keystone tenant-create --name=admin --description="Admin Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |           Admin Tenant           |
|   enabled   |               True               |
|      id     | bbdc7fe3de4448b19e877902e8274736 |
|     name    |              admin               |
+-------------+----------------------------------+
[root@controller ~]# keystone user-role-add --user admin --role admin --tenant admin
[root@controller ~]# keystone user-role-add --user admin --role _member_ --tenant admin
[root@controller ~]#  keystone user-role-list --user admin --tenant admin
+----------------------------------+----------+----------------------------------+----------------------------------+
|                id                |   name   |             user_id              |            tenant_id             |
+----------------------------------+----------+----------------------------------+----------------------------------+
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | ece7cbced7b84c9c917aac88ee7bd8a1 | bbdc7fe3de4448b19e877902e8274736 |
| 47debc9436274cd4b81375ab9948cf70 |  admin   | ece7cbced7b84c9c917aac88ee7bd8a1 | bbdc7fe3de4448b19e877902e8274736 |
+----------------------------------+----------+----------------------------------+----------------------------------+
[root@controller ~]# keystone user-create --name=demo --pass=demo --email=demo@smoke.com
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |          demo@smoke.com          |
| enabled  |               True               |
|    id    | 0dba9dc5d4ff414ebfd1b5844d7b92a3 |
|   name   |               demo               |
| username |               demo               |
+----------+----------------------------------+
[root@controller ~]# keystone tenant-create --name=demo --description="Demo Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |           Demo Tenant            |
|   enabled   |               True               |
|      id     | 9a748acebd0741f5bb6dc3875772cf0a |
|     name    |               demo               |
+-------------+----------------------------------+
[root@controller ~]# keystone user-role-add --user=demo --role=_member_ --tenant=demo
[root@controller ~]# keystone user-role-list --tenant=demo --user=demo
+----------------------------------+----------+----------------------------------+----------------------------------+
|                id                |   name   |             user_id              |            tenant_id             |
+----------------------------------+----------+----------------------------------+----------------------------------+
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | 0dba9dc5d4ff414ebfd1b5844d7b92a3 | 9a748acebd0741f5bb6dc3875772cf0a |
+----------------------------------+----------+----------------------------------+----------------------------------+
[root@controller ~]# keystone tenant-create --name=service --description="Service Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |          Service Tenant          |
|   enabled   |               True               |
|      id     | f37916ade2bc44adae440ab13f31d9cf |
|     name    |             service              |
+-------------+----------------------------------+
[root@controller ~]# keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |        OpenStack Identity        |
|   enabled   |               True               |
|      id     | 4fea5e76c58c4a4c8d2e58ad8bf7a268 |
|     name    |             keystone             |
|     type    |             identity             |
+-------------+----------------------------------+
[root@controller ~]# keystone service-list
+----------------------------------+----------+----------+--------------------+
|                id                |   name   |   type   |    description     |
+----------------------------------+----------+----------+--------------------+
| 4fea5e76c58c4a4c8d2e58ad8bf7a268 | keystone | identity | OpenStack Identity |
+----------------------------------+----------+----------+--------------------+
[root@controller ~]# keystone endpoint-create --service-id=$(keystone service-list | awk '/ identity / {print $2}') --publicurl=http://controller:5000/v2.0 --internalurl=http://controller:5000/v2.0 --adminurl=http://controller:35357/v2.0
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |   http://controller:35357/v2.0   |
|      id     | f1e088348590453e9cfc2bfdbf0d1c96 |
| internalurl |   http://controller:5000/v2.0    |
|  publicurl  |   http://controller:5000/v2.0    |
|    region   |            regionOne             |
|  service_id | 4fea5e76c58c4a4c8d2e58ad8bf7a268 |
+-------------+----------------------------------+
[root@controller ~]#  keystone endpoint-list
+----------------------------------+-----------+-----------------------------+-----------------------------+------------------------------+----------------------------------+
|                id                |   region  |          publicurl          |         internalurl         |           adminurl           |            service_id            |
+----------------------------------+-----------+-----------------------------+-----------------------------+------------------------------+----------------------------------+
| f1e088348590453e9cfc2bfdbf0d1c96 | regionOne | http://controller:5000/v2.0 | http://controller:5000/v2.0 | http://controller:35357/v2.0 | 4fea5e76c58c4a4c8d2e58ad8bf7a268 |
+----------------------------------+-----------+-----------------------------+-----------------------------+------------------------------+----------------------------------+
[root@controller ~]# unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
[root@controller ~]# keystone --os-username=admin --os-password=admin --os-auth-url=http://controller:35357/v2.0 token-get
[root@controller ~]# vim .admin-openrc.sh
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
[root@controller ~]# . .admin-openrc.sh
[root@controller ~]# keystone user-list
+----------------------------------+-------+---------+-----------------+
|                id                |  name | enabled |      email      |
+----------------------------------+-------+---------+-----------------+
| ece7cbced7b84c9c917aac88ee7bd8a1 | admin |   True  | admin@smoke.com |
| 0dba9dc5d4ff414ebfd1b5844d7b92a3 |  demo |   True  |  demo@smoke.com |
+----------------------------------+-------+---------+-----------------+
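A matching credential file for the demo user can be kept alongside the admin one (a sketch mirroring .admin-openrc.sh; regular users authenticate against the public endpoint on port 5000):
[root@controller ~]# vim .demo-openrc.sh
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://controller:5000/v2.0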

Install Glance:

[root@controller ~]# yum -y install  openstack-glance python-glanceclient
[root@controller ~]# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 16
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> CREATE DATABASE glance CHARACTER SET utf8;
Query OK, 1 row affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON glance.* TO glance@'%' IDENTIFIED BY 'glance';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON glance.* TO glance@'localhost' IDENTIFIED BY 'glance';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON glance.* TO glance@'controller.smoke.com' IDENTIFIED BY 'glance';
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
mysql> exit
Bye
[root@controller ~]# vim /etc/glance/glance-api.conf
[database]
connection=mysql://glance:glance@192.168.10.6/glance
[root@controller ~]# vim /etc/glance/glance-registry.conf
[database]
connection=mysql://glance:glance@192.168.10.6/glance
Syncing the database fails with ImportError: No module named Crypto.Random, because the pycrypto module is missing; install it with pip install pycrypto (if the pip command is not available, install pip with yum first);
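The fix as commands (a sketch; assumes the EPEL python-pip package provides pip):
[root@controller ~]# yum -y install python-pip
[root@controller ~]# pip install pycrypto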
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
[root@controller ~]# keystone user-create --name=glance --pass=glance --email=glance@smoke.com
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |         glance@smoke.com         |
| enabled  |               True               |
|    id    | 28f86c14cadb4fcbba64aabdc2f642e2 |
|   name   |              glance              |
| username |              glance              |
+----------+----------------------------------+
[root@controller ~]# keystone user-role-add --user=glance --tenant=service --role=admin
[root@controller ~]#  keystone user-role-list --user=glance --tenant=service
+----------------------------------+-------+----------------------------------+----------------------------------+
|                id                |  name |             user_id              |            tenant_id             |
+----------------------------------+-------+----------------------------------+----------------------------------+
| 47debc9436274cd4b81375ab9948cf70 | admin | 28f86c14cadb4fcbba64aabdc2f642e2 | f37916ade2bc44adae440ab13f31d9cf |
+----------------------------------+-------+----------------------------------+----------------------------------+
[root@controller ~]# vim /etc/glance/glance-api.conf
[keystone_authtoken]
auth_host=controller
auth_uri=http://controller:5000
auth_port=35357
auth_protocol=http
admin_tenant_name=service
admin_user=glance
admin_password=glance
[paste_deploy]
flavor=keystone
[root@controller ~]# vim /etc/glance/glance-registry.conf
[keystone_authtoken]
auth_host=controller
auth_uri=http://controller:5000
auth_port=35357
auth_protocol=http
admin_tenant_name=service
admin_user=glance
admin_password=glance
[paste_deploy]
flavor=keystone
[root@controller ~]# keystone service-create --name=glance --type=image --description="OpenStack Image Service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Image Service      |
|   enabled   |               True               |
|      id     | dc7ece631f9143a784de300b4aab5ba0 |
|     name    |              glance              |
|     type    |              image               |
+-------------+----------------------------------+
[root@controller ~]# keystone endpoint-create --service-id=$(keystone service-list | awk '/ image / {print $2}') --publicurl=http://controller:9292 --internalurl=http://controller:9292 --adminurl=http://controller:9292
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |      http://controller:9292      |
|      id     | 4763c25b0bbf4a46bdf713ab4d4d7d04 |
| internalurl |      http://controller:9292      |
|  publicurl  |      http://controller:9292      |
|    region   |            regionOne             |
|  service_id | dc7ece631f9143a784de300b4aab5ba0 |
+-------------+----------------------------------+
[root@controller ~]# for svc in api registry; do service openstack-glance-$svc start; chkconfig openstack-glance-$svc on; done
[root@controller ~]# yum -y install qemu-img
[root@controller ~]# ll
total 23712
-rw-------. 1 root root     1285 Jan 10 05:44 anaconda-ks.cfg
-rw-r--r--  1 root root 11010048 Apr  6 21:59 cirros-no_cloud-0.3.0-i386-disk.img
-rw-r--r--  1 root root 11468800 Apr  6 21:59 cirros-no_cloud-0.3.0-x86_64-disk.img
-rw-r--r--  1 root root  1760426 Apr  6 20:39 get-pip.py
-rw-r--r--. 1 root root    21867 Jan 10 05:44 install.log
-rw-r--r--. 1 root root     5820 Jan 10 05:42 install.log.syslog
[root@controller ~]# qemu-img info cirros-no_cloud-0.3.0-i386-disk.img
image: cirros-no_cloud-0.3.0-i386-disk.img
file format: qcow2
virtual size: 39M (41126400 bytes)
disk size: 11M
cluster_size: 65536
[root@controller ~]# qemu-img info cirros-no_cloud-0.3.0-x86_64-disk.img
image: cirros-no_cloud-0.3.0-x86_64-disk.img
file format: qcow2
virtual size: 39M (41126400 bytes)
disk size: 11M
cluster_size: 65536
[root@controller ~]# glance image-create --name=cirros-0.3.0-i386 --disk-format=qcow2 --container-format=bare --is-public=true < /root/cirros-no_cloud-0.3.0-i386-disk.img
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ccdb7b71efb7cbae0ea4a437f55a5eb9     |
| container_format | bare                                 |
| created_at       | 2019-04-13T12:19:04                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 3ae56e48-9a1e-4efe-9535-3683359ab518 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros-0.3.0-i386                    |
| owner            | bbdc7fe3de4448b19e877902e8274736     |
| protected        | False                                |
| size             | 11010048                             |
| status           | active                               |
| updated_at       | 2019-04-13T12:19:05                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+
[root@controller ~]# glance image-create --name=cirros-0.3.0-x86_64 --disk-format=qcow2 --container-format=bare --is-public=true < /root/cirros-no_cloud-0.3.0-x86_64-disk.img
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 2b35be965df142f00026123a0fae4aa6     |
| container_format | bare                                 |
| created_at       | 2019-04-13T12:19:32                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 3eebb09a-4076-4504-87cb-608caf464aae |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros-0.3.0-x86_64                  |
| owner            | bbdc7fe3de4448b19e877902e8274736     |
| protected        | False                                |
| size             | 11468800                             |
| status           | active                               |
| updated_at       | 2019-04-13T12:19:32                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+
[root@controller ~]# glance image-list
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| ID                                   | Name                | Disk Format | Container Format | Size     | Status |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| 3ae56e48-9a1e-4efe-9535-3683359ab518 | cirros-0.3.0-i386   | qcow2       | bare             | 11010048 | active |
| 3eebb09a-4076-4504-87cb-608caf464aae | cirros-0.3.0-x86_64 | qcow2       | bare             | 11468800 | active |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
[root@controller ~]# ls /var/lib/glance/images/
3ae56e48-9a1e-4efe-9535-3683359ab518  3eebb09a-4076-4504-87cb-608caf464aae
[root@controller ~]# glance image-show cirros-0.3.0-i386
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ccdb7b71efb7cbae0ea4a437f55a5eb9     |
| container_format | bare                                 |
| created_at       | 2019-04-13T12:19:04                  |
| deleted          | False                                |
| disk_format      | qcow2                                |
| id               | 3ae56e48-9a1e-4efe-9535-3683359ab518 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros-0.3.0-i386                    |
| owner            | bbdc7fe3de4448b19e877902e8274736     |
| protected        | False                                |
| size             | 11010048                             |
| status           | active                               |
| updated_at       | 2019-04-13T12:19:05                  |
+------------------+--------------------------------------+
[root@controller ~]# glance image-download --file=/tmp/cirros-0.3.0-i386.img --progress cirros-0.3.0-i386
[root@controller ~]# ls /tmp/
cirros-0.3.0-i386.img  keystone-signing-IKuXj3  keystone-signing-MaBElz  yum_save_tx-2019-04-12-21-58zwvqk8.yumtx
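The downloaded copy can be checked against the checksum Glance reported (the checksum field shown by glance image-show is an MD5 digest):
[root@controller ~]# md5sum /tmp/cirros-0.3.0-i386.img    # expect ccdb7b71efb7cbae0ea4a437f55a5eb9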

Install the qpid message queue service:

[root@controller ~]# yum -y install qpid-cpp-server
[root@controller ~]# vim /etc/qpidd.conf
auth=no
[root@controller ~]# service qpidd start
[root@controller ~]# chkconfig qpidd on
[root@controller ~]# ss -tnlp | grep qpid
LISTEN     0      10                       :::5672                    :::*      users:(("qpidd",46435,13))
LISTEN     0      10                        *:5672                     *:*      users:(("qpidd",46435,12))

Install Nova:
controller:

[root@controller ~]# yum -y install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
[root@controller ~]# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 28
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> CREATE DATABASE nova CHARACTER SET 'utf8';
Query OK, 1 row affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller.smoke.com' IDENTIFIED BY 'nova';
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
mysql> exit
Bye
[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
qpid_hostname=192.168.10.6
qpid_port=5672
rpc_backend=qpid
my_ip=192.168.10.6
vncserver_listen=192.168.10.6
vncserver_proxyclient_address=192.168.10.6
[database]
connection=mysql://nova:nova@192.168.10.6/nova
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
[root@controller ~]#  keystone user-create --name=nova --pass=nova --email=nova@smoke.com
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |          nova@smoke.com          |
| enabled  |               True               |
|    id    | 2426c4cb03f643cc8a896aa2420c1644 |
|   name   |               nova               |
| username |               nova               |
+----------+----------------------------------+
[root@controller ~]# keystone user-role-add --user=nova --role=admin --tenant=service
[root@controller ~]# keystone user-role-list --tenant=service --user=nova
+----------------------------------+-------+----------------------------------+----------------------------------+
|                id                |  name |             user_id              |            tenant_id             |
+----------------------------------+-------+----------------------------------+----------------------------------+
| 47debc9436274cd4b81375ab9948cf70 | admin | 2426c4cb03f643cc8a896aa2420c1644 | f37916ade2bc44adae440ab13f31d9cf |
+----------------------------------+-------+----------------------------------+----------------------------------+
[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
auth_strategy=keystone
[keystone_authtoken]
auth_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
admin_user=nova
admin_tenant_name=service
admin_password=nova
[root@controller ~]#  keystone service-create --name=nova --type=compute --description="OpenStack Compute"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |        OpenStack Compute         |
|   enabled   |               True               |
|      id     | 67b26ddb60f34303b8f708e62916fbdc |
|     name    |               nova               |
|     type    |             compute              |
+-------------+----------------------------------+
[root@controller ~]# keystone endpoint-create --service-id=$(keystone service-list | awk '/ compute / {print $2}') --publicurl=http://controller:8774/v2/%\(tenant_id\)s --internalurl=http://controller:8774/v2/%\(tenant_id\)s --adminurl=http://controller:8774/v2/%\(tenant_id\)s
+-------------+-----------------------------------------+
|   Property  |                  Value                  |
+-------------+-----------------------------------------+
|   adminurl  | http://controller:8774/v2/%(tenant_id)s |
|      id     |     20c9ca37ff6e414fb6cf636c1669e29d    |
| internalurl | http://controller:8774/v2/%(tenant_id)s |
|  publicurl  | http://controller:8774/v2/%(tenant_id)s |
|    region   |                regionOne                |
|  service_id |     67b26ddb60f34303b8f708e62916fbdc    |
+-------------+-----------------------------------------+
[root@controller ~]# for svc in api cert consoleauth scheduler conductor novncproxy; do service openstack-nova-$svc start; chkconfig openstack-nova-$svc on; done
Starting openstack-nova-novncproxy fails: Apr  9 22:23:19 CentOS6 abrt: detected unhandled Python exception in '/usr/bin/nova-novncproxy'. The installed websockify version is too new, which breaks novnc startup; reference: https://www.unixhot.com/article/27
How to upgrade to Python 2.7 and pip 2.7: https://www.jb51.net/article/107475.htm
Fix:
[root@controller ~]# yum install python-pip
[root@controller ~]# pip uninstall websockify
[root@controller ~]# pip2.6 install websockify==0.5.1
[root@controller ~]# service openstack-nova-novncproxy start
[root@controller ~]# nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| 3ae56e48-9a1e-4efe-9535-3683359ab518 | cirros-0.3.0-i386   | ACTIVE |        |
| 3eebb09a-4076-4504-87cb-608caf464aae | cirros-0.3.0-x86_64 | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+

compute1:

[root@compute1 ~]# grep -E -i --color=auto "(svm|vmx)" /proc/cpuinfo
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon xtopology tsc_reliable nonstop_tsc unfair_spinlock pni pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm arat tpr_shadow vnmi ept vpid fsgsbase smep
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon xtopology tsc_reliable nonstop_tsc unfair_spinlock pni pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm arat tpr_shadow vnmi ept vpid fsgsbase smep
[root@compute1 ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo
[openstack-icehouse]
name=OpenStack Icehouse Repository
baseurl=https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6
enabled=1
skip_if_unavailable=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Icehouse
priority=98
The openstack-icehouse yum repo fails with: https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6/repodata/repomd.xml: [Errno 14] problem making ssl connection
[root@compute1 ~]# mv /etc/yum.repos.d/openstack-Icehouse.repo /etc/yum.repos.d/openstack-Icehouse.repo.bak
[root@compute1 ~]# yum -y update    # workaround 1
[root@compute1 ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo    # workaround 2
enabled=0
[root@compute1 ~]# mv /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/certs/ca-bundle.crt.bak
[root@compute1 ~]# yum -y install ca-certificates
[root@compute1 ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo
enabled=1
[root@compute1 ~]# yum -y update curl    # workaround 3
[root@compute1 ~]# yum -y install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
The epel yum repo fails with: Error: Cannot retrieve metalink for repository: epel. Please verify its path and try again
[root@compute1 ~]# vim /etc/yum.repos.d/epel.repo    # comment out mirrorlist and uncomment baseurl;
[root@compute1 ~]# vim /etc/yum.repos.d/epel-testing.repo
[root@compute1 ~]# yum -y install openstack-nova-compute
[root@compute1 ~]# vim /etc/nova/nova.conf
[DEFAULT]
qpid_hostname=192.168.10.6
rpc_backend=qpid
auth_strategy=keystone
glance_host=controller
my_ip=192.168.10.7
novncproxy_base_url=http://controller:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.10.7
vnc_enabled=true
vif_plugging_is_fatal=false
vif_plugging_timeout=10
[database]
connection=mysql://nova:nova@192.168.10.6/nova
[keystone_authtoken]
auth_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
admin_user=nova
admin_tenant_name=service
admin_password=nova
[libvirt]
virt_type=kvm
[root@compute1 ~]# service libvirtd start
[root@compute1 ~]# lsmod | grep kvm
kvm_intel              55496  0
kvm                   337772  1 kvm_intel
[root@compute1 ~]# service messagebus start
[root@compute1 ~]# service openstack-nova-compute start
[root@compute1 ~]# chkconfig libvirtd on
[root@compute1 ~]# chkconfig messagebus on
[root@compute1 ~]# chkconfig openstack-nova-compute on

controller:
[root@controller ~]# nova hypervisor-list
+----+---------------------+
| ID | Hypervisor hostname |
+----+---------------------+
| 1  | compute1.smoke.com  |
+----+---------------------+
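The control-plane services can be checked the same way (a sketch; run with the admin credentials sourced):
[root@controller ~]# nova service-list    # nova-cert, nova-consoleauth, nova-scheduler, nova-conductor, and nova-compute should all report state "up"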

Install Neutron:

controller:
[root@controller ~]# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 299
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> CREATE DATABASE neutron CHARACTER SET 'utf8';
Query OK, 1 row affected (0.01 sec)
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller.smoke.com' IDENTIFIED BY 'neutron';
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
mysql> exit
Bye
[root@controller ~]# keystone user-create --name=neutron --pass=neutron --email=neutron@smoke.com
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |        neutron@smoke.com         |
| enabled  |               True               |
|    id    | 42e66f0114c442c4af9d75595baca9c0 |
|   name   |             neutron              |
| username |             neutron              |
+----------+----------------------------------+
[root@controller ~]# keystone user-role-add --user=neutron --tenant=service --role=admin
[root@controller ~]# keystone user-role-list --user=neutron --tenant=service
+----------------------------------+-------+----------------------------------+----------------------------------+
|                id                |  name |             user_id              |            tenant_id             |
+----------------------------------+-------+----------------------------------+----------------------------------+
| 47debc9436274cd4b81375ab9948cf70 | admin | 42e66f0114c442c4af9d75595baca9c0 | f37916ade2bc44adae440ab13f31d9cf |
+----------------------------------+-------+----------------------------------+----------------------------------+
[root@controller ~]# keystone service-create --name neutron --type network --description "OpenStack Networking"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |       OpenStack Networking       |
|   enabled   |               True               |
|      id     | c6cd9124bf3f4b8b89728ca5aa1b92b7 |
|     name    |             neutron              |
|     type    |             network              |
+-------------+----------------------------------+
[root@controller ~]# keystone endpoint-create --service-id $(keystone service-list | awk '/ network / {print $2}') --publicurl http://controller:9696 --adminurl http://controller:9696 --internalurl http://controller:9696
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |      http://controller:9696      |
|      id     | 55523c3b86ec4cc28e6bc6055ac79229 |
| internalurl |      http://controller:9696      |
|  publicurl  |      http://controller:9696      |
|    region   |            regionOne             |
|  service_id | c6cd9124bf3f4b8b89728ca5aa1b92b7 |
+-------------+----------------------------------+
[root@controller ~]# yum -y install openstack-neutron openstack-neutron-ml2 python-neutronclient
[root@controller ~]# keystone tenant-list
+----------------------------------+---------+---------+
|                id                |   name  | enabled |
+----------------------------------+---------+---------+
| bbdc7fe3de4448b19e877902e8274736 |  admin  |   True  |
| 9a748acebd0741f5bb6dc3875772cf0a |   demo  |   True  |
| f37916ade2bc44adae440ab13f31d9cf | service |   True  |
+----------------------------------+---------+---------+
[root@controller ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
verbose = True
auth_strategy = keystone
rpc_backend=neutron.openstack.common.rpc.impl_qpid
qpid_hostname = 192.168.10.6
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
nova_admin_username = nova
nova_admin_tenant_id = f37916ade2bc44adae440ab13f31d9cf
nova_admin_password = nova
nova_admin_auth_url = http://controller:35357/v2.0
core_plugin = ml2
service_plugins = router
[keystone_authtoken]
identity_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
admin_user=neutron
admin_tenant_name=service
admin_password=neutron
[database]
connection = mysql://neutron:neutron@192.168.10.6:3306/neutron
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://controller:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=neutron
neutron_admin_auth_url=http://controller:35357/v2.0
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron
[root@controller ~]# ln -sv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@controller ~]# service openstack-nova-api restart
[root@controller ~]# service openstack-nova-scheduler restart
[root@controller ~]# service openstack-nova-conductor restart
[root@controller ~]# service neutron-server start
[root@controller ~]# chkconfig neutron-server on
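To confirm neutron-server answers with the ML2 plugin loaded, query the API extensions from the controller (a sketch; run with the admin credentials sourced):
[root@controller ~]# neutron ext-list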

network:

[root@network ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo
[openstack-icehouse]
name=OpenStack Icehouse Repository
baseurl=https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6
enabled=1
skip_if_unavailable=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Icehouse
priority=98
The openstack-icehouse yum repo fails with: https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6/repodata/repomd.xml: [Errno 14] problem making ssl connection
[root@network ~]# mv /etc/yum.repos.d/openstack-Icehouse.repo /etc/yum.repos.d/openstack-Icehouse.repo.bak
[root@network ~]# yum -y update    # workaround 1
[root@network ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo    # workaround 2
enabled=0
[root@network ~]# mv /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/certs/ca-bundle.crt.bak
[root@network ~]# yum -y install ca-certificates
[root@network ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo
enabled=1
[root@network ~]# yum -y update curl    # workaround 3
[root@network ~]# yum -y install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
The epel yum repo fails with: Error: Cannot retrieve metalink for repository: epel. Please verify its path and try again
[root@network ~]# vim /etc/yum.repos.d/epel.repo    # comment out mirrorlist and uncomment baseurl;
[root@network ~]# vim /etc/yum.repos.d/epel-testing.repo
[root@network ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@network ~]# sysctl -p
[root@network ~]# yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
[root@network ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
rpc_backend=neutron.openstack.common.rpc.impl_qpid
qpid_hostname = 192.168.10.6
core_plugin = ml2
service_plugins = router
[keystone_authtoken]
identity_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
admin_user=neutron
admin_tenant_name=service
admin_password=neutron
[root@network ~]# vim /etc/neutron/l3_agent.ini
[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
[root@network ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
[root@network ~]# vim /etc/neutron/dnsmasq-neutron.conf
dhcp-option-force=26,1454
[root@network ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
verbose = True
auth_url = http://controller:5000/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET

controller:

[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
service_neutron_metadata_proxy=true
neutron_metadata_proxy_shared_secret=METADATA_SECRET
[root@controller ~]# service openstack-nova-api restart

network:

[root@network ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
IPADDR=192.168.20.254
NETMASK=255.255.255.0
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth1"
[root@network ~]# ifdown eth1 && ifup eth1
[root@network ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ovs]
local_ip = 192.168.20.254
tunnel_type = gre
enable_tunneling = True
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[root@network ~]# ln -sv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@network ~]# service openvswitch start
[root@network ~]# chkconfig openvswitch on
[root@network ~]# ovs-vsctl add-br br-in
[root@network ~]# ovs-vsctl add-br br-ex
[root@network ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth2"
[root@network ~]# service network restart
[root@network ~]# ovs-vsctl add-port br-ex eth2
[root@network ~]# ovs-vsctl show
0d8784ce-e5b5-4416-8212-738bc6094d82
    Bridge br-in
        Port br-in
            Interface br-in
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth2"
            Interface "eth2"
    ovs_version: "2.1.3"
[root@network ~]# ovs-vsctl br-set-external-id br-ex bridge-id br-ex
[root@network ~]# ethtool -K eth2 gro off    # disable GRO on the external NIC; leaving it on degrades forwarding performance
[root@network ~]# ifconfig br-ex 192.168.243.129 netmask 255.255.255.0
[root@network ~]# route add -net 0.0.0.0 netmask 0.0.0.0 gw 192.168.243.2
[root@network ~]# cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig    # the packaged init script points at the wrong plugin file (known bug); patch it below
[root@network ~]# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
[root@network ~]# for svc in openvswitch l3 dhcp metadata; do service neutron-${svc}-agent start; chkconfig neutron-${svc}-agent on; done
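Once the four agents are running they should register with neutron-server on the controller; a quick sanity check from there (a hedged suggestion, assuming the admin credentials file used earlier):

[root@controller ~]# . .admin-openrc.sh
[root@controller ~]# neutron agent-list    # expect Open vSwitch, L3, DHCP and Metadata agents with alive = :-)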

compute1:

[root@compute1 ~]# vim /etc/sysctl.conf
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@compute1 ~]# sysctl -p
[root@compute1 ~]# yum -y install openstack-neutron-ml2 openstack-neutron-openvswitch    # repeat this configuration on every compute node
[root@compute1 ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
rpc_backend=neutron.openstack.common.rpc.impl_qpid
qpid_hostname = 192.168.10.6
core_plugin = ml2
service_plugins = router
[keystone_authtoken]
identity_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
admin_user=neutron
admin_tenant_name=service
admin_password=neutron
[root@compute1 ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ovs]
local_ip = 192.168.20.1
tunnel_type = gre
enable_tunneling = True
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[root@compute1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPADDR=192.168.20.1
NETMASK=255.255.255.0
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth1"
[root@compute1 ~]# ifdown eth1 && ifup eth1
[root@compute1 ~]# service openvswitch start
[root@compute1 ~]# chkconfig openvswitch on
[root@compute1 ~]# ovs-vsctl add-br br-in
[root@compute1 ~]# vim /etc/nova/nova.conf
[DEFAULT]
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://controller:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=neutron
neutron_admin_auth_url=http://controller:5000/v2.0
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron
[root@compute1 ~]# ln -sv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@compute1 ~]# cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
[root@compute1 ~]# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
[root@compute1 ~]# service openstack-nova-compute restart
[root@compute1 ~]# service neutron-openvswitch-agent start
[root@compute1 ~]# chkconfig neutron-openvswitch-agent on
Problem:
After starting neutron-openvswitch-agent, its status shows "dead but pid file exists". The cause is that /etc/neutron/plugins/ml2/ml2_conf.ini is group-owned by root; change the group to neutron:
[root@compute1 ~]# chown root:neutron /etc/neutron/plugins/ml2/ml2_conf.ini
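A quick way to confirm the fix before moving on (a hedged check, not from the original run):

[root@compute1 ~]# ls -l /etc/neutron/plugins/ml2/ml2_conf.ini    # should now show root:neutron
[root@compute1 ~]# service neutron-openvswitch-agent restart
[root@compute1 ~]# service neutron-openvswitch-agent status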

controller:

[root@controller ~]# . .admin-openrc.sh
[root@controller ~]# neutron net-list
[root@controller ~]# neutron subnet-list
[root@controller ~]# neutron net-create ext-net --shared --router:external=True    # create the external network
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 38503f84-9675-4813-b9a4-7548d9ebf0b6 |
| name                      | ext-net                              |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 1                                    |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | bbdc7fe3de4448b19e877902e8274736     |
+---------------------------+--------------------------------------+
[root@controller ~]# neutron subnet-create ext-net --name ext-subnet --allocation-pool start=192.168.243.151,end=192.168.243.170 --disable-dhcp --gateway 192.168.243.2 192.168.243.0/24
Created a new subnet:
+------------------+--------------------------------------------------------+
| Field            | Value                                                  |
+------------------+--------------------------------------------------------+
| allocation_pools | {"start": "192.168.243.151", "end": "192.168.243.170"} |
| cidr             | 192.168.243.0/24                                       |
| dns_nameservers  |                                                        |
| enable_dhcp      | False                                                  |
| gateway_ip       | 192.168.243.2                                          |
| host_routes      |                                                        |
| id               | ed204ed4-5752-4b75-8459-c0a913e92bc0                   |
| ip_version       | 4                                                      |
| name             | ext-subnet                                             |
| network_id       | 38503f84-9675-4813-b9a4-7548d9ebf0b6                   |
| tenant_id        | bbdc7fe3de4448b19e877902e8274736                       |
+------------------+--------------------------------------------------------+
[root@controller ~]# keystone tenant-list
+----------------------------------+---------+---------+
|                id                |   name  | enabled |
+----------------------------------+---------+---------+
| bbdc7fe3de4448b19e877902e8274736 |  admin  |   True  |
| 9a748acebd0741f5bb6dc3875772cf0a |   demo  |   True  |
| f37916ade2bc44adae440ab13f31d9cf | service |   True  |
+----------------------------------+---------+---------+
[root@controller ~]# vim .demo-os.sh
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://controller:35357/v2.0
[root@controller ~]# keystone user-list
+----------------------------------+---------+---------+-------------------+
|                id                |   name  | enabled |       email       |
+----------------------------------+---------+---------+-------------------+
| ece7cbced7b84c9c917aac88ee7bd8a1 |  admin  |   True  |  admin@smoke.com  |
| 0dba9dc5d4ff414ebfd1b5844d7b92a3 |   demo  |   True  |   demo@smoke.com  |
| 28f86c14cadb4fcbba64aabdc2f642e2 |  glance |   True  |  glance@smoke.com |
| 42e66f0114c442c4af9d75595baca9c0 | neutron |   True  | neutron@smoke.com |
| 2426c4cb03f643cc8a896aa2420c1644 |   nova  |   True  |   nova@smoke.com  |
+----------------------------------+---------+---------+-------------------+
[root@controller ~]# . .demo-os.sh
[root@controller ~]# neutron net-list
+--------------------------------------+---------+-------------------------------------------------------+
| id                                   | name    | subnets                                               |
+--------------------------------------+---------+-------------------------------------------------------+
| 38503f84-9675-4813-b9a4-7548d9ebf0b6 | ext-net | ed204ed4-5752-4b75-8459-c0a913e92bc0 192.168.243.0/24 |
+--------------------------------------+---------+-------------------------------------------------------+
[root@controller ~]# neutron subnet-list
+--------------------------------------+------------+------------------+--------------------------------------------------------+
| id                                   | name       | cidr             | allocation_pools                                       |
+--------------------------------------+------------+------------------+--------------------------------------------------------+
| ed204ed4-5752-4b75-8459-c0a913e92bc0 | ext-subnet | 192.168.243.0/24 | {"start": "192.168.243.151", "end": "192.168.243.170"} |
+--------------------------------------+------------+------------------+--------------------------------------------------------+
[root@controller ~]# neutron net-create demo-net
Created a new network:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| admin_state_up | True                                 |
| id             | 6f92d6ca-fa6e-4d47-b34b-d5c4c72552f2 |
| name           | demo-net                             |
| shared         | False                                |
| status         | ACTIVE                               |
| subnets        |                                      |
| tenant_id      | 9a748acebd0741f5bb6dc3875772cf0a     |
+----------------+--------------------------------------+
[root@controller ~]# neutron subnet-create demo-net --name demo-subnet --gateway 192.168.30.254 192.168.30.0/24
Created a new subnet:
+------------------+----------------------------------------------------+
| Field            | Value                                              |
+------------------+----------------------------------------------------+
| allocation_pools | {"start": "192.168.30.1", "end": "192.168.30.253"} |
| cidr             | 192.168.30.0/24                                    |
| dns_nameservers  |                                                    |
| enable_dhcp      | True                                               |
| gateway_ip       | 192.168.30.254                                     |
| host_routes      |                                                    |
| id               | 120c1dcd-dd7f-4222-a42a-b247bf4b7bad               |
| ip_version       | 4                                                  |
| name             | demo-subnet                                        |
| network_id       | 6f92d6ca-fa6e-4d47-b34b-d5c4c72552f2               |
| tenant_id        | 9a748acebd0741f5bb6dc3875772cf0a                   |
+------------------+----------------------------------------------------+

network:

[root@network ~]# yum -y update iproute    # if ip netns is unavailable, upgrade iproute to a netns-capable build
[root@network ~]# ip netns list

controller:

[root@controller ~]# neutron router-create demo-router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 3d58f63c-55e9-42a7-ba9d-82cd5b8c0d94 |
| name                  | demo-router                          |
| status                | ACTIVE                               |
| tenant_id             | 9a748acebd0741f5bb6dc3875772cf0a     |
+-----------------------+--------------------------------------+
[root@controller ~]# neutron router-gateway-set demo-router ext-net    # attach the external network to the router as its gateway
[root@controller ~]# . .admin-openrc.sh
[root@controller ~]# neutron router-port-list demo-router
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                              |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| add5d94c-214f-4caa-967b-5311969c253a |      | fa:16:3e:34:37:7a | {"subnet_id": "ed204ed4-5752-4b75-8459-c0a913e92bc0", "ip_address": "192.168.243.151"} |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
[root@controller ~]# . .demo-os.sh
[root@controller ~]# neutron router-interface-add demo-router demo-subnet
[root@controller ~]# . .admin-openrc.sh
[root@controller ~]# neutron router-port-list demo-router
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                              |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| 77e009c7-8806-4440-90db-f328465bc35c |      | fa:16:3e:54:4e:3b | {"subnet_id": "120c1dcd-dd7f-4222-a42a-b247bf4b7bad", "ip_address": "192.168.30.254"}  |
| add5d94c-214f-4caa-967b-5311969c253a |      | fa:16:3e:34:37:7a | {"subnet_id": "ed204ed4-5752-4b75-8459-c0a913e92bc0", "ip_address": "192.168.243.151"} |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+

network:

[root@network ~]# ip netns list
qrouter-3d58f63c-55e9-42a7-ba9d-82cd5b8c0d94
[root@network ~]# ip netns exec qrouter-3d58f63c-55e9-42a7-ba9d-82cd5b8c0d94 ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

qg-add5d94c-21 Link encap:Ethernet  HWaddr FA:16:3E:34:37:7A
          inet addr:192.168.243.151  Bcast:192.168.243.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe34:377a/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:56 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:6243 (6.0 KiB)  TX bytes:636 (636.0 b)

qr-77e009c7-88 Link encap:Ethernet  HWaddr FA:16:3E:54:4E:3B
          inet addr:192.168.30.254  Bcast:192.168.30.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe54:4e3b/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:558 (558.0 b)
[root@network ~]# ip netns exec qrouter-3d58f63c-55e9-42a7-ba9d-82cd5b8c0d94 iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
neutron-l3-agent-INPUT  all  --  0.0.0.0/0            0.0.0.0/0

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
neutron-filter-top  all  --  0.0.0.0/0            0.0.0.0/0
neutron-l3-agent-FORWARD  all  --  0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
neutron-filter-top  all  --  0.0.0.0/0            0.0.0.0/0
neutron-l3-agent-OUTPUT  all  --  0.0.0.0/0            0.0.0.0/0

Chain neutron-filter-top (2 references)
target     prot opt source               destination
neutron-l3-agent-local  all  --  0.0.0.0/0            0.0.0.0/0

Chain neutron-l3-agent-FORWARD (1 references)
target     prot opt source               destination

Chain neutron-l3-agent-INPUT (1 references)
target     prot opt source               destination
ACCEPT     tcp  --  0.0.0.0/0            127.0.0.1           tcp dpt:9697

Chain neutron-l3-agent-OUTPUT (1 references)
target     prot opt source               destination

Chain neutron-l3-agent-local (1 references)
target     prot opt source               destination

Install the dashboard:

controller:
[root@controller ~]# yum -y install memcached python-memcached mod_wsgi openstack-dashboard
Problem:
The install fails with "Requires: Django14"; install the Django14 package locally:
[root@controller ~]# yum -y localinstall Django14-1.4.20-1.el6.noarch.rpm
[root@controller ~]# service memcached start
[root@controller ~]# chkconfig memcached on
[root@controller ~]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"
ALLOWED_HOSTS = ['*']
CACHES = {
    'default': {
        'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION' : '192.168.10.6:11211',
    }
}
#CACHES = {
#    'default': {
#        'BACKEND' : 'django.core.cache.backends.locmem.LocMemCache'
#    }
#}
TIME_ZONE = "Asia/Shanghai"
[root@controller ~]# service httpd start
[root@controller ~]# chkconfig httpd on

Browse to http://192.168.10.6 and log in with username admin, password admin.

Problem: clicking the Instances menu in the dashboard shows the error "Unable to connect to Neutron". On Icehouse the same error appears when creating an instance, yet the instance is still created successfully; this is a known bug that can be worked around by patching the dashboard source.

[root@controller ~]# vim /usr/share/openstack-dashboard/openstack_dashboard/api/neutron.py
    def is_simple_associate_supported(self):
        # NOTE: There are two reasons that simple association support
        # needs more considerations. (1) Neutron does not support the
        # default floating IP pool at the moment. It can be avoided
        # in case where only one floating IP pool exists.
        # (2) Neutron floating IP is associated with each VIF and
        # we need to check whether such VIF is only one for an instance
        # to enable simple association support.
        return False

    def is_supported(self):
        network_config = getattr(settings, 'OPENSTACK_NEUTRON_NETWORK', {})
        return network_config.get('enable_router', True)
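Because the dashboard runs under mod_wsgi, the patched module is generally not picked up until Apache restarts; a hedged follow-up step:

[root@controller ~]# service httpd restart    # reload the patched dashboard code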

Launch an instance:
controller:

[root@controller ~]# ssh-keygen -t rsa -P ''
[root@controller ~]# . .demo-os.sh
[root@controller ~]# nova keypair-list
+------+-------------+
| Name | Fingerprint |
+------+-------------+
+------+-------------+
[root@controller ~]# nova keypair-add --pub-key ~/.ssh/id_rsa.pub demokey
[root@controller ~]# nova keypair-list
+---------+-------------------------------------------------+
| Name    | Fingerprint                                     |
+---------+-------------------------------------------------+
| demokey | 34:8e:e0:a5:59:fa:92:ff:0e:8b:c2:12:fa:f6:5a:4e |
+---------+-------------------------------------------------+
[root@controller ~]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
[root@controller ~]# . .admin-openrc.sh
[root@controller ~]# nova flavor-create --is-public true m1.cirros 6 128 1 1
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 6  | m1.cirros | 128       | 1    | 0         |      | 1     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
[root@controller ~]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
| 6  | m1.cirros | 128       | 1    | 0         |      | 1     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
[root@controller ~]# . .demo-os.sh
[root@controller ~]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
| 6  | m1.cirros | 128       | 1    | 0         |      | 1     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
[root@controller ~]# nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| 3ae56e48-9a1e-4efe-9535-3683359ab518 | cirros-0.3.0-i386   | ACTIVE |        |
| 3eebb09a-4076-4504-87cb-608caf464aae | cirros-0.3.0-x86_64 | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+
[root@controller ~]# nova net-list
+--------------------------------------+----------+------+
| ID                                   | Label    | CIDR |
+--------------------------------------+----------+------+
| 38503f84-9675-4813-b9a4-7548d9ebf0b6 | ext-net  | -    |
| 6f92d6ca-fa6e-4d47-b34b-d5c4c72552f2 | demo-net | -    |
+--------------------------------------+----------+------+
[root@controller ~]# nova secgroup-list
+--------------------------------------+---------+-------------+
| Id                                   | Name    | Description |
+--------------------------------------+---------+-------------+
| 6c7ff7a2-6731-49fd-8b44-b29e0be61b8d | default | default     |
+--------------------------------------+---------+-------------+
[root@controller ~]# nova secgroup-list-rules default
+-------------+-----------+---------+----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+----------+--------------+
|             |           |         |          | default      |
|             |           |         |          | default      |
+-------------+-----------+---------+----------+--------------+
[root@controller ~]# neutron net-list
+--------------------------------------+----------+-------------------------------------------------------+
| id                                   | name     | subnets                                               |
+--------------------------------------+----------+-------------------------------------------------------+
| 38503f84-9675-4813-b9a4-7548d9ebf0b6 | ext-net  | ed204ed4-5752-4b75-8459-c0a913e92bc0 192.168.243.0/24 |
| 6f92d6ca-fa6e-4d47-b34b-d5c4c72552f2 | demo-net | 120c1dcd-dd7f-4222-a42a-b247bf4b7bad 192.168.30.0/24  |
+--------------------------------------+----------+-------------------------------------------------------+
[root@controller ~]# nova boot --flavor m1.cirros --image cirros-0.3.0-i386 --nic net-id=6f92d6ca-fa6e-4d47-b34b-d5c4c72552f2 --security-group default --key-name demokey demo-0001
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-STS:power_state               | 0                                                        |
| OS-EXT-STS:task_state                | scheduling                                               |
| OS-EXT-STS:vm_state                  | building                                                 |
| OS-SRV-USG:launched_at               | -                                                        |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| adminPass                            | q5ryd4xHPi3z                                             |
| config_drive                         |                                                          |
| created                              | 2019-04-27T12:35:51Z                                     |
| flavor                               | m1.cirros (6)                                            |
| hostId                               |                                                          |
| id                                   | 395a24e4-d91d-46b7-a28b-8b81bf6e6fa4                     |
| image                                | cirros-0.3.0-i386 (3ae56e48-9a1e-4efe-9535-3683359ab518) |
| key_name                             | demokey                                                  |
| metadata                             | {}                                                       |
| name                                 | demo-0001                                                |
| os-extended-volumes:volumes_attached | []                                                       |
| progress                             | 0                                                        |
| security_groups                      | default                                                  |
| status                               | BUILD                                                    |
| tenant_id                            | 9a748acebd0741f5bb6dc3875772cf0a                         |
| updated                              | 2019-04-27T12:35:51Z                                     |
| user_id                              | 0dba9dc5d4ff414ebfd1b5844d7b92a3                         |
+--------------------------------------+----------------------------------------------------------+
[root@controller ~]# nova list
+--------------------------------------+-----------+--------+------------+-------------+-----------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks              |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------+
| 395a24e4-d91d-46b7-a28b-8b81bf6e6fa4 | demo-0001 | ACTIVE | -          | Running     | demo-net=192.168.30.6 |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------+
[root@controller ~]# iptables -t nat -A POSTROUTING  -s 192.168.10.0/24 -j SNAT --to-source 192.168.243.138
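That SNAT rule exists only in the running kernel; on CentOS 6 it can be persisted with the stock iptables init script (a hedged aside, assuming the default /etc/sysconfig/iptables workflow):

[root@controller ~]# service iptables save    # writes the current rules to /etc/sysconfig/iptables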

network:

Problem: the DHCP agent logs "WARNING neutron.agent.linux.dhcp [req-8e9c51e2-b0c7-47c2-b0eb-20b008139c9d None] FAILED VERSION REQUIREMENT FOR DNSMASQ. DHCP AGENT MAY NOT RUN CORRECTLY! Please ensure that its version is 2.59 or above!" and instances fail to obtain an IP address; upgrade dnsmasq:
[root@network ~]# rpm -Uvh dnsmasq-2.65-1.el6.rfx.x86_64.rpm
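After the upgrade it is worth confirming the version requirement is met and bouncing the DHCP agent (a hedged verification step, not in the original transcript):

[root@network ~]# dnsmasq --version | head -1    # should report version 2.65
[root@network ~]# service neutron-dhcp-agent restart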
compute1:
[root@compute1 ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 5     instance-0000000d              running
[root@compute1 ~]# yum -y install tigervnc
[root@compute1 ~]# vncviewer :5900

The instance obtained the IP address 192.168.30.6;

network:

[root@network ~]# ip netns list
qrouter-3d58f63c-55e9-42a7-ba9d-82cd5b8c0d94
qdhcp-6f92d6ca-fa6e-4d47-b34b-d5c4c72552f2
[root@network ~]# ip netns exec qrouter-3d58f63c-55e9-42a7-ba9d-82cd5b8c0d94 ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

qg-add5d94c-21 Link encap:Ethernet  HWaddr FA:16:3E:34:37:7A
          inet addr:192.168.243.151  Bcast:192.168.243.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe34:377a/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:21894 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2443652 (2.3 MiB)  TX bytes:636 (636.0 b)

qr-77e009c7-88 Link encap:Ethernet  HWaddr FA:16:3E:54:4E:3B
          inet addr:192.168.30.254  Bcast:192.168.30.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe54:4e3b/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:30 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3204 (3.1 KiB)  TX bytes:740 (740.0 b)
[root@network ~]# ip netns exec qrouter-3d58f63c-55e9-42a7-ba9d-82cd5b8c0d94 ping 192.168.30.2    # unreachable because the default security group does not yet allow ICMP
PING 192.168.30.2 (192.168.30.2) 56(84) bytes of data.
From 192.168.30.254 icmp_seq=2 Destination Host Unreachable
From 192.168.30.254 icmp_seq=3 Destination Host Unreachable
From 192.168.30.254 icmp_seq=4 Destination Host Unreachable
^C
--- 192.168.30.2 ping statistics ---
4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3589ms

controller:

[root@controller ~]# nova get-vnc-console 7745576f-9cd0-48ec-948d-1082485996ad novnc
+-------+---------------------------------------------------------------------------------+
| Type  | Url                                                                             |
+-------+---------------------------------------------------------------------------------+
| novnc | http://controller:6080/vnc_auto.html?token=d54b0928-d846-4945-a69c-ffa4687ff0ca |
+-------+---------------------------------------------------------------------------------+
[root@controller ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
[root@controller ~]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
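With ICMP and SSH now open in the default security group, the ping that failed earlier from the router namespace should succeed; a quick re-test on the network node (a hedged suggestion, reusing the namespace ID and the instance IP shown above):

[root@network ~]# ip netns exec qrouter-3d58f63c-55e9-42a7-ba9d-82cd5b8c0d94 ping -c 3 192.168.30.6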

Accessing the external network from the instance;

network:

[root@network ~]# ovs-vsctl show
0d8784ce-e5b5-4416-8212-738bc6094d82
    Bridge br-int
        fail_mode: secure
        Port "tap7835305c-ed"
            tag: 1
            Interface "tap7835305c-ed"
                type: internal
        Port "qr-77e009c7-88"
            tag: 1
            Interface "qr-77e009c7-88"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-in
        Port br-in
            Interface br-in
                type: internal
    Bridge br-tun
        Port "gre-c0a81401"
            Interface "gre-c0a81401"
                type: gre
                options: {in_key=flow, local_ip="192.168.20.254", out_key=flow, remote_ip="192.168.20.1"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-ex
        Port "qg-add5d94c-21"
            Interface "qg-add5d94c-21"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth2"
            Interface "eth2"
    ovs_version: "2.1.3"
[root@network ~]# ip netns list
qrouter-3d58f63c-55e9-42a7-ba9d-82cd5b8c0d94
qdhcp-6f92d6ca-fa6e-4d47-b34b-d5c4c72552f2
[root@network ~]# ip netns exec qrouter-3d58f63c-55e9-42a7-ba9d-82cd5b8c0d94 iptables -L -n -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
neutron-l3-agent-PREROUTING  all  --  0.0.0.0/0            0.0.0.0/0

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
neutron-l3-agent-POSTROUTING  all  --  0.0.0.0/0            0.0.0.0/0
neutron-postrouting-bottom  all  --  0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
neutron-l3-agent-OUTPUT  all  --  0.0.0.0/0            0.0.0.0/0

Chain neutron-l3-agent-OUTPUT (1 references)
target     prot opt source               destination

Chain neutron-l3-agent-POSTROUTING (1 references)
target     prot opt source               destination
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           ! ctstate DNAT

Chain neutron-l3-agent-PREROUTING (1 references)
target     prot opt source               destination
REDIRECT   tcp  --  0.0.0.0/0            169.254.169.254     tcp dpt:80 redir ports 9697

Chain neutron-l3-agent-float-snat (1 references)
target     prot opt source               destination

Chain neutron-l3-agent-snat (1 references)
target     prot opt source               destination
neutron-l3-agent-float-snat  all  --  0.0.0.0/0            0.0.0.0/0
SNAT       all  --  192.168.30.0/24      0.0.0.0/0           to:192.168.243.151

Chain neutron-postrouting-bottom (1 references)
target     prot opt source               destination
neutron-l3-agent-snat  all  --  0.0.0.0/0            0.0.0.0/0

Create a floating IP so that external hosts can reach the instance directly:
controller:

[root@controller ~]# . .demo-os.sh
[root@controller ~]# neutron floatingip-create ext-net
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.243.153                      |
| floating_network_id | 38503f84-9675-4813-b9a4-7548d9ebf0b6 |
| id                  | e9392e6d-faf7-402d-8eeb-62172b4bf11c |
| port_id             |                                      |
| router_id           |                                      |
| status              | DOWN                                 |
| tenant_id           | 9a748acebd0741f5bb6dc3875772cf0a     |
+---------------------+--------------------------------------+
[root@controller ~]# nova floating-ip-associate demo-0001 192.168.243.153
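To confirm the association took effect, two quick checks (a hedged suggestion; both commands exist in the Icehouse-era clients):

[root@controller ~]# nova list                  # demo-0001 should now list 192.168.30.6 and 192.168.243.153
[root@controller ~]# neutron floatingip-list    # the floating IP should now show a fixed_ip_address and port_id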

network:

[root@network ~]# ip netns exec qrouter-3d58f63c-55e9-42a7-ba9d-82cd5b8c0d94 iptables -L -n -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
neutron-l3-agent-PREROUTING  all  --  0.0.0.0/0            0.0.0.0/0

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
neutron-l3-agent-POSTROUTING  all  --  0.0.0.0/0            0.0.0.0/0
neutron-postrouting-bottom  all  --  0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
neutron-l3-agent-OUTPUT  all  --  0.0.0.0/0            0.0.0.0/0

Chain neutron-l3-agent-OUTPUT (1 references)
target     prot opt source               destination
DNAT       all  --  0.0.0.0/0            192.168.243.153     to:192.168.30.6

Chain neutron-l3-agent-POSTROUTING (1 references)
target     prot opt source               destination
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           ! ctstate DNAT

Chain neutron-l3-agent-PREROUTING (1 references)
target     prot opt source               destination
REDIRECT   tcp  --  0.0.0.0/0            169.254.169.254     tcp dpt:80 redir ports 9697
DNAT       all  --  0.0.0.0/0            192.168.243.153     to:192.168.30.6

Chain neutron-l3-agent-float-snat (1 references)
target     prot opt source               destination
SNAT       all  --  192.168.30.6         0.0.0.0/0           to:192.168.243.153

Chain neutron-l3-agent-snat (1 references)
target     prot opt source               destination
neutron-l3-agent-float-snat  all  --  0.0.0.0/0            0.0.0.0/0
SNAT       all  --  192.168.30.0/24      0.0.0.0/0           to:192.168.243.151

Chain neutron-postrouting-bottom (1 references)
target     prot opt source               destination
neutron-l3-agent-snat  all  --  0.0.0.0/0            0.0.0.0/0

Ping the instance's floating IP from an external Windows host;

[Smoke.Smoke-PC] ➤ ping 192.168.243.153

Pinging 192.168.243.153 with 32 bytes of data:
Reply from 192.168.243.153: bytes=32 time=1ms TTL=63
Reply from 192.168.243.153: bytes=32 time=1ms TTL=63
Reply from 192.168.243.153: bytes=32 time=1ms TTL=63
Reply from 192.168.243.153: bytes=32 time=1ms TTL=63

Ping statistics for 192.168.243.153:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 1ms, Maximum = 1ms, Average = 1ms

compute1:

[root@compute1 ~]# tcpdump -i tap6cbd64f0-94 -nne icmp
tcpdump: WARNING: tap6cbd64f0-94: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap6cbd64f0-94, link-type EN10MB (Ethernet), capture size 65535 bytes
21:25:33.770506 fa:16:3e:f9:17:63 > fa:16:3e:54:4e:3b, ethertype IPv4 (0x0800), length 98: 192.168.30.6 > 192.168.243.2: ICMP echo request, id 47104, seq 684, length 64
21:25:33.770891 fa:16:3e:54:4e:3b > fa:16:3e:f9:17:63, ethertype IPv4 (0x0800), length 98: 192.168.243.2 > 192.168.30.6: ICMP echo reply, id 47104, seq 684, length 64
21:25:34.305662 fa:16:3e:54:4e:3b > fa:16:3e:f9:17:63, ethertype IPv4 (0x0800), length 74: 192.168.243.1 > 192.168.30.6: ICMP echo request, id 1, seq 309, length 40
21:25:34.306095 fa:16:3e:f9:17:63 > fa:16:3e:54:4e:3b, ethertype IPv4 (0x0800), length 74: 192.168.30.6 > 192.168.243.1: ICMP echo reply, id 1, seq 309, length 40
^C
4 packets captured
4 packets received by filter
0 packets dropped by kernel

stor1:

[root@stor1 ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo
[openstack-icehouse]
name=OpenStack Icehouse Repository
baseurl=https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6
enabled=1
skip_if_unavailable=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Icehouse
priority=98
The openstack-icehouse repo again fails with: https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6/repodata/repomd.xml: [Errno 14] problem making ssl connection
[root@stor1 ~]# mv /etc/yum.repos.d/openstack-Icehouse.repo /etc/yum.repos.d/openstack-Icehouse.repo.bak
[root@stor1 ~]# yum -y update    # workaround 1: update the whole system
[root@stor1 ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo    # workaround 2: temporarily disable the repo and refresh the CA bundle
enabled=0
[root@stor1 ~]# mv /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/certs/ca-bundle.crt.bak
[root@stor1 ~]# yum -y install ca-certificates
[root@stor1 ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo
enabled=1
[root@stor1 ~]# yum -y update curl    # workaround 3: update curl only
[root@stor1 ~]# yum -y install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
The epel repo then fails with: Error: Cannot retrieve metalink for repository: epel. Please verify its path and try again
[root@stor1 ~]# vim /etc/yum.repos.d/epel.repo    # comment out mirrorlist and uncomment baseurl
[root@stor1 ~]# vim /etc/yum.repos.d/epel-testing.repo    # same edit here

Install cinder:
controller:

[root@controller ~]# yum -y install openstack-cinder
[root@controller ~]# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2129
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE DATABASE cinder CHARACTER SET 'utf8';
Query OK, 1 row affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'controller.smoke.com' IDENTIFIED BY 'cinder';
Query OK, 0 rows affected (0.00 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

mysql> \q
Bye
[root@controller ~]# vim /etc/cinder/cinder.conf
[database]
connection=mysql://cinder:cinder@192.168.10.6/cinder
[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
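To verify that the schema was created, a quick look at the new database (a hedged check, using the cinder credentials granted above):

[root@controller ~]# mysql -ucinder -pcinder -h192.168.10.6 -e 'SHOW TABLES FROM cinder;' | head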
[root@controller ~]# . .admin-openrc.sh
[root@controller ~]# keystone user-create --name=cinder --pass=cinder --email=cinder@smoke.com
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |         cinder@smoke.com         |
| enabled  |               True               |
|    id    | 8de27d48c09347378aa1c2baf4ba9e8e |
|   name   |              cinder              |
| username |              cinder              |
+----------+----------------------------------+
[root@controller ~]# keystone user-role-add --user=cinder --tenant=service --role=admin
[root@controller ~]# keystone user-role-list --user=cinder --tenant=service
+----------------------------------+-------+----------------------------------+----------------------------------+
|                id                |  name |             user_id              |            tenant_id             |
+----------------------------------+-------+----------------------------------+----------------------------------+
| 47debc9436274cd4b81375ab9948cf70 | admin | 8de27d48c09347378aa1c2baf4ba9e8e | f37916ade2bc44adae440ab13f31d9cf |
+----------------------------------+-------+----------------------------------+----------------------------------+
[root@controller ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
auth_strategy=keystone
rpc_backend=qpid
qpid_hostname=192.168.10.6
[keystone_authtoken]
identity_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
admin_user=cinder
admin_tenant_name=service
admin_password=cinder
[root@controller ~]# keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Block Storage      |
|   enabled   |               True               |
|      id     | e00af6e9ebbe436aa8d9466786328af4 |
|     name    |              cinder              |
|     type    |              volume              |
+-------------+----------------------------------+
[root@controller ~]# keystone endpoint-create --service-id=$(keystone service-list | awk '/ volume / {print $2}') --publicurl=http://controller:8776/v1/%\(tenant_id\)s --internalurl=http://controller:8776/v1/%\(tenant_id\)s --adminurl=http://controller:8776/v1/%\(tenant_id\)s
+-------------+-----------------------------------------+
|   Property  |                  Value                  |
+-------------+-----------------------------------------+
|   adminurl  | http://controller:8776/v1/%(tenant_id)s |
|      id     |     ba5720dff2ee4e7ea09f2093521eace7    |
| internalurl | http://controller:8776/v1/%(tenant_id)s |
|  publicurl  | http://controller:8776/v1/%(tenant_id)s |
|    region   |                regionOne                |
|  service_id |     e00af6e9ebbe436aa8d9466786328af4    |
+-------------+-----------------------------------------+
[root@controller ~]# keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |    OpenStack Block Storage v2    |
|   enabled   |               True               |
|      id     | c13f7c26b77e440286e7fd8cc280f3ab |
|     name    |             cinderv2             |
|     type    |             volumev2             |
+-------------+----------------------------------+
[root@controller ~]# keystone endpoint-create --service-id=$(keystone service-list | awk '/ volumev2 / {print $2}') --publicurl=http://controller:8776/v2/%\(tenant_id\)s --internalurl=http://controller:8776/v2/%\(tenant_id\)s --adminurl=http://controller:8776/v2/%\(tenant_id\)s
+-------------+-----------------------------------------+
|   Property  |                  Value                  |
+-------------+-----------------------------------------+
|   adminurl  | http://controller:8776/v2/%(tenant_id)s |
|      id     |     ea0cb565aa7f477fa3f062096f4ab8f0    |
| internalurl | http://controller:8776/v2/%(tenant_id)s |
|  publicurl  | http://controller:8776/v2/%(tenant_id)s |
|    region   |                regionOne                |
|  service_id |     c13f7c26b77e440286e7fd8cc280f3ab    |
+-------------+-----------------------------------------+
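A quick sanity check that both volume services and their endpoints registered (standard keystone CLI calls):

[root@controller ~]# keystone service-list | grep volume
[root@controller ~]# keystone endpoint-list | grep 8776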
[root@controller ~]# service openstack-cinder-api start
[root@controller ~]# service openstack-cinder-scheduler start
[root@controller ~]# chkconfig openstack-cinder-api on
[root@controller ~]# chkconfig openstack-cinder-scheduler on

stor1:

[root@stor1 ~]# pvcreate /dev/sdb
[root@stor1 ~]# vgcreate cinder-volumes /dev/sdb
[root@stor1 ~]# vim /etc/lvm/lvm.conf
devices {
# accept sda1 and sdb, reject everything else so LVM does not scan tenant volumes
filter = [ "a/sda1/", "a/sdb/", "r/.*/" ]
}
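The filter is evaluated in order: the "a" entries accept a device and the final "r/.*/" rejects everything else, so LVM only scans sda1 and sdb and will not descend into logical volumes that hold tenant file systems. To confirm LVM still sees what it should (a hedged check):

[root@stor1 ~]# pvs    # /dev/sdb should still be listed
[root@stor1 ~]# vgs    # cinder-volumes should report its free space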
[root@stor1 ~]# yum -y install openstack-cinder scsi-target-utils
[root@stor1 ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
auth_strategy=keystone
rpc_backend=qpid
qpid_hostname=192.168.10.6
my_ip=192.168.10.9
glance_host=controller
iscsi_helper=tgtadm
volumes_dir=/etc/cinder/volumes
[keystone_authtoken]
identity_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
admin_user=cinder
admin_tenant_name=service
admin_password=cinder
[database]
connection=mysql://cinder:cinder@192.168.10.6/cinder
[root@stor1 ~]# vim /etc/tgt/targets.conf
include /etc/cinder/volumes/*
[root@stor1 ~]# vim /etc/init.d/openstack-cinder-volume    # the shipped init script is broken; edit start() to drop the distconfig config file reference
start() {
    [ -x $exec ] || exit 5
    [ -f $config ] || exit 6
    echo -n $"Starting $prog: "
    daemon --user cinder --pidfile $pidfile "$exec --config-file $config --logfile $logfile &>/dev/null & echo \$! > $pidfile"
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}
[root@stor1 ~]# service openstack-cinder-volume start
[root@stor1 ~]# service tgtd start
[root@stor1 ~]# chkconfig openstack-cinder-volume on
[root@stor1 ~]# chkconfig tgtd on
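Once cinder-volume is up it should check in with the scheduler; a hedged verification from the controller (cinder service-list should be available in the Icehouse client):

[root@controller ~]# . .admin-openrc.sh
[root@controller ~]# cinder service-list    # cinder-volume on stor1 should show State up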

controller:

[root@controller ~]# . .demo-os.sh
[root@controller ~]#  cinder create --display-name testVolume 2
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2019-04-30T13:58:00.551667      |
| display_description |                 None                 |
|     display_name    |              testVolume              |
|      encrypted      |                False                 |
|          id         | a21403a0-7891-4d4a-b27d-daa7070be4d7 |
|       metadata      |                  {}                  |
|         size        |                  2                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
[root@controller ~]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| a21403a0-7891-4d4a-b27d-daa7070be4d7 | available |  testVolume  |  2   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

stor1:

[root@stor1 ~]# lvs
  LV                                          VG             Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  volume-a21403a0-7891-4d4a-b27d-daa7070be4d7 cinder-volumes -wi-a----- 2.00g

controller:

[root@controller ~]# . .demo-os.sh
[root@controller ~]# nova list
+--------------------------------------+-----------+--------+------------+-------------+----------------------------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks                               |
+--------------------------------------+-----------+--------+------------+-------------+----------------------------------------+
| 395a24e4-d91d-46b7-a28b-8b81bf6e6fa4 | demo-0001 | ACTIVE | -          | Running     | demo-net=192.168.30.6, 192.168.243.153 |
+--------------------------------------+-----------+--------+------------+-------------+----------------------------------------+
[root@controller ~]# nova volume-attach demo-0001 a21403a0-7891-4d4a-b27d-daa7070be4d7
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | a21403a0-7891-4d4a-b27d-daa7070be4d7 |
| serverId | 395a24e4-d91d-46b7-a28b-8b81bf6e6fa4 |
| volumeId | a21403a0-7891-4d4a-b27d-daa7070be4d7 |
+----------+--------------------------------------+

Check the newly attached disk inside the demo-0001 instance;
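A minimal guest-side check that works even on CirrOS (a hedged suggestion; run it from the VNC console):

$ cat /proc/partitions    # a ~2 GB vdb entry should be listed
$ dmesg | grep -i vdb     # kernel messages for the hot-added virtio disk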

controller:

[root@controller ~]# cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| a21403a0-7891-4d4a-b27d-daa7070be4d7 | in-use |  testVolume  |  2   |     None    |  false   | 395a24e4-d91d-46b7-a28b-8b81bf6e6fa4 |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
[root@controller ~]# nova volume-detach demo-0001 a21403a0-7891-4d4a-b27d-daa7070be4d7
[root@controller ~]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| a21403a0-7891-4d4a-b27d-daa7070be4d7 | available |  testVolume  |  2   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

Reposted from: https://blog.51cto.com/smoke520/2300323
