Goal: an OpenStack deployment with a single controller node, on 3 physical servers
Author: BCEC大马哥
Fuel master IPMI login:
http://10.255.2.101
admin/12345678
Fuel master example:
10.254.1.1
root/sybk2015
Notes/Rules:
- These Inspur servers have 4 NICs each: eth0/eth1 are fiber ports, eth2/eth3 are copper ports
- The VLANs assigned by IT are all business VLANs, and all business VLANs are carried on the fiber ports, so eth2/eth3 cannot use those VLANs; at the same time, eth0/eth1 cannot serve as the PXE ports, because a DHCP boot request carries no VLAN tag and therefore never reaches the master's eth0/eth1
- IPMI and PXE addresses correspond one-to-one. For example: if the IPMI address is 10.255.2.x, the PXE address is 10.254.2.x, and IT requires the x in both addresses to be identical.
- When the Fuel master acts as the DHCP server, MACs must be specified explicitly, with each IP bound to a MAC
- The addresses in "DHCP pool for node discovery" must be configured in dnsmasq inside cobbler
- The addresses in "Static pool for installed nodes" can be set by editing PostgreSQL or via the REST API
- Fuel's services run inside containers; arch: https://wiki.openstack.org/wiki/Fuel
- Fuel docs: https://docs.mirantis.com/openstack/fuel/fuel-6.0/operations.html (includes docker examples)
https://docs.mirantis.com/openstack/fuel/fuel-6.0/user-guide.html (includes CLI examples)
https://docs.mirantis.com/openstack/fuel/fuel-6.0/reference-architecture.html#network-architecture
- In the server's IPMI management UI, eth0 must not have "LAN settings" set to "Enabled"
- Fuel configuration file: /etc/fuel/astute.yaml
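The IPMI-to-PXE mapping above is mechanical (swap the second octet, keep the last octet x the same), so it can be sketched as a tiny shell helper; the subnets are the ones from these notes, the function name is made up:

```shell
# Derive a node's PXE address from its IPMI address: 10.255.2.x -> 10.254.2.x
# (IT requires the last octet x to stay identical).
ipmi_to_pxe() {
  echo "$1" | sed 's/^10\.255\./10.254./'
}
ipmi_to_pxe 10.255.2.104   # prints 10.254.2.104
```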
Fuel configuration:
disable eth0
enable eth2
The static pool and the discovery pool can be identical:
After quitting the menu, run fuelmenu to open it again:
After installation completes:
http://10.254.2.101:8000/ (admin/admin)
ssh root@10.254.2.101 (password: r00tme)
Before resetting a slave node, bind its MAC and IP in dnsmasq first; see Issue-3 for the configuration.
After configuring the OpenStack environment in the Fuel UI, and before deploying, edit the database (or use the REST API) to bind the static PXE IPs to the nodes:
- First run this on the master (otherwise some IPs won't show up in the next step):
[root@fuel ~]# fuel --env 1 deployment default
Default deployment info was downloaded to /root/deployment_1
[root@fuel ~]# fuel node list
id | status | name | cluster | ip | mac | roles | pending_roles | online
---|----------|------------------|---------|--------------|-------------------|-------|-------------------|-------
2 | discover | Untitled (56:6d) | 1 | 10.254.2.104 | 6c:92:bf:0d:56:6d | | cinder, compute | True
3 | discover | Untitled (30:2d) | 1 | 10.254.2.103 | 6c:92:bf:0d:30:2d | | cinder, compute | True
1 | discover | Untitled (5c:8d) | 1 | 10.254.2.105 | 6c:92:bf:0d:5c:8d | | controller, mongo | True
- Connect to the Nailgun database with Navicat, edit the ip_addr table, and save!
Incidentally, this table appears to record the MAC-to-IP bindings. Several of the issues below require restarting cobbler/dnsmasq; after a restart, /etc/dnsmasq.conf is rewritten from the contents of /etc/cobbler/dnsmasq.template. That template contains the line "$insert_cobbler_system_definitions", which presumably reads this table and inserts its contents into /etc/dnsmasq.conf.
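A minimal sketch of that template expansion (the dhcp-host line stands in for whatever $insert_cobbler_system_definitions pulls from the database; cobbler does this internally when it regenerates /etc/dnsmasq.conf):

```shell
# Simulate cobbler expanding the placeholder in dnsmasq.template into dnsmasq.conf.
template='dhcp-range=internal,10.254.2.104,10.254.2.105,255.255.255.0,120m
$insert_cobbler_system_definitions'
systems='dhcp-host=net:x86_64,6c:92:bf:0d:30:2d,10.254.2.103'
echo "$template" | sed "s/\$insert_cobbler_system_definitions/$systems/"
```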
Issue-1:
http://10.254.2.101:8000/
403 Forbidden
docker ps
dockerctl list
nginx logs:
[root@fuel nginx]# pwd
/var/log/docker-logs/nginx
[root@fuel nginx]# ll
total 20
-rw-r--r-- 1 root root    0 Jun 11 17:22 access.log
-rw-r--r-- 1 root root 6332 Jun 15 13:53 access_nailgun.log
-rw-r--r-- 1 root root    0 Jun 11 17:22 access_repo.log
-rw-r--r-- 1 root root    0 Jun 11 17:22 error.log
-rw-r--r-- 1 root root 8590 Jun 15 13:53 error_nailgun.log
-rw-r--r-- 1 root root    0 Jun 11 17:22 error_repo.log
==> error_nailgun.log <==
2015/06/15 15:08:21 [error] 464#0: *34 "/usr/share/nailgun/static/index.html" is forbidden (13: Permission denied), client: 172.17.42.1, server: localhost, request: "GET / HTTP/1.1", host: "10.254.2.101:8000"
==> access_nailgun.log <==
172.17.42.1 - - [15/Jun/2015:15:08:21 +0100] "GET / HTTP/1.1" 403 564 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.152 Safari/537.36"
[root@fuel nginx]# ll /usr/share/nailgun
ls: cannot access /usr/share/nailgun: No such file or directory
[root@fuel nginx]#
gaojin:
Should look inside the docker container, not on the host.
Entering the nailgun container confirms that /usr/share/nailgun/static/index.html is indeed missing.
Try starting a nailgun container by hand:
[root@fuel nailgun]# docker run -it -v /var/www/nailgun:/var/www/nailgun -v /etc/fuel/:/etc/fuel/ fuel/nailgun_5.1.1 /bin/bash
docker inspect <container-id> shows the container's configuration. HostIp is wrong: it should be eth2's IP, 10.254.2.101, not the default IP 10.20.0.2 on eth0:
"NetworkSettings": {
"IPAddress": "172.17.0.3",
"IPPrefixLen": 16,
"Gateway": "172.17.42.1",
"Bridge": "docker0",
"PortMapping": null,
"Ports": {
"8001/tcp": [
{
"HostIp": "10.20.0.2",
"HostPort": "8001"
},
{
"HostIp": "127.0.0.1",
"HostPort": "8001"
}
]
}
},
Tried fuelmenu: disabled eth0, then rebuilt the docker containers, but it still used 10.20.0.2.
------------------------
[root@fuel nailgun]# docker inspect 9c64bbf8a8f1
[{
"ID": "9c64bbf8a8f1f19a293280f65c3cc205806130c6f7a2b669f0f6ca7016c4e35d",
"Created": "2015-06-11T17:22:46.524364557Z",
"Path": "/bin/sh",
"Args": [
"-c",
"/usr/local/bin/start.sh"
],
"Config": {
"Hostname": "9c64bbf8a8f1",
"Domainname": "",
"User": "",
"Memory": 0,
"MemorySwap": 0,
"CpuShares": 0,
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"PortSpecs": null,
"ExposedPorts": {
"8001/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"/bin/sh",
"-c",
"/usr/local/bin/start.sh"
],
"Image": "fuel/nailgun_5.1.1",
"Volumes": {
"/etc/fuel": {},
"/etc/nailgun": {},
"/etc/yum.repos.d": {},
"/root/.ssh": {},
"/usr/share/nailgun/static": {},
"/var/log": {},
"/var/www/nailgun": {}
},
"WorkingDir": "/root",
"Entrypoint": null,
"NetworkDisabled": false,
"OnBuild": null
},
"State": {
"Running": true,
"Pid": 11655,
"ExitCode": 0,
"StartedAt": "2015-06-15T11:37:34.064883412Z",
"FinishedAt": "2015-06-15T11:37:33.624798124Z",
"Ghost": false
},
"Image": "531bb9a11ac38bd0ff2263e562aebca0519f1d86ef5b3a1b1b2a82e7b4d5e7ae",
"NetworkSettings": {
"IPAddress": "172.17.0.3",
"IPPrefixLen": 16,
"Gateway": "172.17.42.1",
"Bridge": "docker0",
"PortMapping": null,
"Ports": {
"8001/tcp": [
{
"HostIp": "10.20.0.2",
"HostPort": "8001"
},
{
"HostIp": "127.0.0.1",
"HostPort": "8001"
}
]
}
},
"ResolvConfPath": "/etc/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/9c64bbf8a8f1f19a293280f65c3cc205806130c6f7a2b669f0f6ca7016c4e35d/hostname",
"HostsPath": "/var/lib/docker/containers/9c64bbf8a8f1f19a293280f65c3cc205806130c6f7a2b669f0f6ca7016c4e35d/hosts",
"Name": "/fuel-core-5.1.1-nailgun",
"Driver": "devicemapper",
"ExecDriver": "lxc-0.9.0",
"Volumes": {
"/etc/fuel": "/etc/fuel",
"/etc/nailgun": "/var/lib/docker/vfs/dir/249ad99fd67e72b36e6c66e22ba87f0bd7a16298886fe59a54f39ab62e6b8185",
"/etc/yum.repos.d": "/etc/yum.repos.d",
"/root/.ssh": "/root/.ssh",
"/usr/share/nailgun/static": "/var/lib/docker/vfs/dir/183fd1742237cc99931ffb6e56f176bf0c0712953c201cd028115dc82e8d55e2",
"/var/log": "/var/log/docker-logs",
"/var/www/nailgun": "/var/www/nailgun"
},
"VolumesRW": {
"/etc/fuel": false,
"/etc/nailgun": true,
"/etc/yum.repos.d": true,
"/root/.ssh": false,
"/usr/share/nailgun/static": true,
"/var/log": true,
"/var/www/nailgun": true
},
"HostConfig": {
"Binds": [
"/etc/fuel:/etc/fuel:ro",
"/var/log/docker-logs:/var/log",
"/root/.ssh:/root/.ssh:ro",
"/var/www/nailgun:/var/www/nailgun:rw",
"/etc/yum.repos.d:/etc/yum.repos.d:rw"
],
"ContainerIDFile": "",
"LxcConf": [],
"Privileged": false,
"PortBindings": {
"8001/tcp": [
{
"HostIp": "10.20.0.2",
"HostPort": "8001"
},
{
"HostIp": "127.0.0.1",
"HostPort": "8001"
}
]
},
"Links": null,
"PublishAllPorts": false,
"Dns": null,
"DnsSearch": null,
"VolumesFrom": null
}
}]
------------------------
A startup script inside the docker container: /usr/local/bin/start.sh
Enter the nailgun container:
[root@fuel ~]# docker ps | grep nail
6aa0792b5009 fuel/nailgun_5.1.1:latest /bin/sh -c /usr/loca 50 minutes ago Up 28 minutes 10.254.2.101:8001->8001/tcp, 127.0.0.1:8001->8001/tcp fuel-core-5.1.1-nailgun
[root@fuel ~]# dockerctl shell 6aa0792b5009
After switching to another Fuel ISO it was still 403 Forbidden; ran /usr/local/bin/start.sh:
Notice: /Stage[main]/Nailgun::Venv/Exec[nailgun_syncdb]/returns: executed successfully
Notice: /Stage[main]/Nailgun::Venv/Exec[nailgun_upload_fixtures]/returns: executed successfully
Notice: /Stage[main]/Nailgun::Supervisor/Service[supervisord]/enable: enable changed 'false' to 'true'
Notice: Finished catalog run in 2.64 seconds
/etc/sysconfig/supervisord: line 15: ulimit: open files: cannot modify limit: Operation not permitted
Stopping supervisord: Shut down
Waiting roughly 60 seconds for /var/run/supervisord.pid to be removed after child processes exit
[root@a446914b82cf static]# namei -mo /usr/share/nailgun/static/
f: /usr/share/nailgun/static/
drwxr-xr-x root root /
drwxr-xr-x root root usr
drwxr-xr-x root root share
drwxr-xr-x root root nailgun
drwx------ root root static
[root@a446914b82cf static]#
In the end, the problem disappeared after using the correct ISO.
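The namei output above shows the actual cause of the 403: the static directory was mode 700 (drwx------), so nginx could not traverse into it. A throwaway reproduction of that permission state, in a temp dir (a sketch; with a good ISO the directory ships world-traversable):

```shell
# Reproduce the permission problem from the namei output: a 700 directory
# blocks traversal for every user except its owner (nginx's worker here).
d=$(mktemp -d)
mkdir -p "$d/static"
chmod 700 "$d/static"
stat -c '%a' "$d/static"   # prints 700, the broken state from namei
chmod 755 "$d/static"
stat -c '%a' "$d/static"   # prints 755, a traversable directory
rm -rf "$d"
```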
Issue-2: slave nodes cannot get an IP address after reboot
Fuel's DHCP and DNS are both implemented with dnsmasq, running in the cobbler container.
Check the dnsmasq log:
[root@fuel ~]# docker ps | grep cob
6cad38e2894e fuel/cobbler_5.1.1:latest /bin/sh -c /usr/loca 4 hours ago Up 24 minutes 53/tcp, 127.0.0.1:53->53/udp, 10.254.2.101:53->53/udp, 67/tcp, 67/udp, 127.0.0.1:69->69/udp, 10.254.2.101:69->69/udp, 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp fuel-core-5.1.1-cobbler
[root@fuel ~]# dockerctl shell 6cad38e2894e
[root@6cad38e2894e ~]# tail -f /var/log/dnsmasq.log
Jun 16 15:08:03 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 6c:92:bf:0c:e6:5c no address available
Jun 16 15:08:03 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 10.254.2.172 6c:92:bf:0c:b2:90 no address available
Jun 16 15:08:04 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 10.254.2.179 6c:92:bf:0d:5b:86 no address available
Jun 16 15:08:04 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 10.254.2.104 6c:92:bf:0c:e4:ee no address available
Jun 16 15:08:04 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 10.254.2.102 6c:92:bf:0d:5a:82 no address available
Jun 16 15:08:05 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 10.254.2.103 6c:92:bf:0d:30:26 no address available
Jun 16 15:08:05 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 10.254.2.170 6c:92:bf:0c:e6:08 no address available
Jun 16 15:08:05 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 10.254.2.102 6c:92:bf:0d:5a:e6 no address available
Jun 16 15:08:06 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 6c:92:bf:0c:c8:6a no address available
Jun 16 15:08:06 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 6c:92:bf:0d:5c:8e no address available
Jun 16 15:08:06 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 10.254.2.153 6c:92:bf:0d:5c:92 no address available
Jun 16 15:08:06 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 6c:92:bf:0c:e6:5c no address available
Jun 16 15:08:06 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 10.254.2.175 6c:92:bf:0d:5a:6a no address available
Jun 16 15:08:07 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 10.254.2.179 6c:92:bf:0d:5b:86 no address available
The static IP pool configured in the Fuel menu is 10.254.2.102-10.254.2.103 and the discovery pool is 10.254.2.104-10.254.2.105,
but those IPs were already taken, hence "no address available".
At the same time, dnsmasq's configuration should be changed to restrict which MAC addresses can obtain which IPs via DHCP:
- Edit the dnsmasq config file /etc/cobbler/dnsmasq.template:
….
dhcp-option=6,10.254.2.101
dhcp-range=internal,10.254.2.104,10.254.2.105,255.255.255.0,120m
dhcp-option=net:internal,option:router,10.254.2.101
pxe-service=net:#gpxe,x86PC,"Install",pxelinux,10.254.2.101
dhcp-boot=net:internal,pxelinux.0,boothost,10.254.2.101
# added
dhcp-ignore=tag:!known
dhcp-host=net:x86_64,6c:92:bf:0d:30:25,10.254.2.102 # node-5
dhcp-host=net:x86_64,6c:92:bf:0d:30:2d,10.254.2.103
dhcp-host=net:x86_64,6c:92:bf:0d:56:6d,10.254.2.104
dhcp-host=net:x86_64,6c:92:bf:0d:5c:8d,10.254.2.105
dhcp-host=net:x86_64,6c:92:bf:0c:e5:5d,10.254.2.118 # node-4 (102)
- Clear dnsmasq's lease file:
[root@a9be15498990 cobbler]# > /var/lib/dnsmasq/dnsmasq.leases
- cobbler sync (this rewrites /etc/dnsmasq.conf)
- Restart dnsmasq:
[root@6cad38e2894e ~]# service dnsmasq restart
Shutting down dnsmasq: [ OK ]
Starting dnsmasq: [ OK ]
Note: restarting just the container is not enough; the changes would be lost (they were never committed).
PS:
After receiving the assigned VLAN, changed the PXE/admin subnet via fuelmenu (to 10.132.40.0/24, VLAN 1240).
The Fuel master became 10.132.40.1, but it could not ping the gateway 10.132.40.254 (the gateway is reachable from outside).
10.132.40.0/24 (VLAN 1240) is a business network carried on the fiber ports, so the Fuel master can only use eth0/eth1, and the VLAN tag has to be applied manually on the command line; the slaves' eth0/eth1, however, cannot tag frames during boot, so they never get an IP.
(10.254.2.x, by contrast, is on the copper ports eth2/eth3. When a slave sends a DHCP request out of eth2, the switch adds the VLAN tag for it; on the business networks the switch does not add tags, OpenStack does.)
Capture DHCP packets with tcpdump:
tcpdump -i any -vvv -s 1500 '((port 67 or port 68) and (udp[8:1] = 0x1))'
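In that filter, udp[8:1] is the first byte of the UDP payload, i.e. the BOOTP op field, so only client-to-server BOOTREQUEST packets (op = 1) are captured. A tiny sketch of that decoding, with made-up sample bytes (a real DHCPDISCOVER payload does start op=01 htype=01 hlen=06 hops=00):

```shell
# BOOTP op field (first payload byte): 1 = BOOTREQUEST, 2 = BOOTREPLY.
decode_bootp_op() {
  case "$1" in
    01) echo BOOTREQUEST ;;
    02) echo BOOTREPLY ;;
    *)  echo UNKNOWN ;;
  esac
}
payload='01 01 06 00'             # hypothetical start of a DHCPDISCOVER payload
decode_bootp_op "${payload%% *}"  # prints BOOTREQUEST
```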
Issue-3:
Initially the MAC-to-IP bindings were not specified in the dnsmasq config file, so when 10.255.2.105 booted, its eth2 got 10.254.2.104, while 10.255.2.104 got 10.254.2.105 (and then lost that IP after boot; running ifdown eth2; ifup eth2 got it back).
I wanted these two IPs swapped, so:
- Specify the bindings in /etc/cobbler/dnsmasq.template:
dhcp-ignore=tag:!known
dhcp-host=net:x86_64,6c:92:bf:0d:56:81
dhcp-host=net:x86_64,6c:92:bf:0d:30:25,10.254.2.102
dhcp-host=net:x86_64,6c:92:bf:0d:30:2d,10.254.2.103
dhcp-host=net:x86_64,6c:92:bf:0d:56:6d,10.254.2.104
dhcp-host=net:x86_64,6c:92:bf:0d:5c:8d,10.254.2.105
dhcp-host=net:x86_64,6c:92:bf:0c:e5:5d,10.254.2.118
- Sync cobbler:
cobbler sync
- Restart dnsmasq:
service dnsmasq restart
- On both hosts: ifdown eth2; ifup eth2
One host successfully got 10.254.2.104; the other failed:
[root@a9be15498990 etc]# tail -f /var/log/dnsmasq.log
Jun 17 14:29:33 dnsmasq-dhcp[1996]: not using configured address 10.254.2.104 because it was previously declined
Jun 17 14:29:33 dnsmasq-dhcp[1996]: DHCPDISCOVER(eth0) 10.254.2.105 6c:92:bf:0d:56:6d no address available
The reason: the client had previously accepted .104 but later sent a DECLINE to the server, so the server marked the address as declined and temporarily stopped offering it.
Update 2015/6/30: the DECLINE happens because, when the machine reboots into bootstrap, it sends another DHCP request; after obtaining the IP it announces its MAC/IP via ARP, finds the IP already in use, and the OS then sends a DECLINE to the DHCP server to give the address up.
Fix:
http://lists.thekelleys.org.uk/pipermail/dnsmasq-discuss/2008q3/002285.html
Wait a while (ten-odd minutes; dnsmasq keeps declined addresses on hold for a cool-down period), then run ifdown eth2; ifup eth2 again.
Screenshots of the OpenStack configuration:
controller disk:
compute-1
disk (the default)
Note:
1. The networks for public/mgmt/storage are all provider networks; their VLAN tags can be seen in OVS:
Controller node (many neutron services run here too):
[root@node-1 ~]# ovs-vsctl show | grep tag
tag: 1240
tag: 1241
tag: 1242
tag: 2
[root@node-1 ~]# service --status-all | grep neutron
22 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport ports 9696 /* 110 neutron */
neutron-dhcp-agent (pid 32108) is running...
neutron-l3-agent (pid 32124) is running...
neutron-lbaas-agent is stopped
neutron-metadata-agent (pid 32207) is running...
neutron-openvswitch-agent (pid 32307) is running...
neutron (pid 32512) is running...
Compute node:
[root@node-2 ~]# ovs-vsctl show | grep tag
tag: 1242
tag: 1241
tag: 10
Combined with the network assignment below (eth0/eth1 bonded, with all networks on the fiber ports), you can see that Fuel creates multiple OVS bridges:
[root@node-1 ~]# ovs-vsctl list-br
br-eth2
br-eth3
br-ex
br-fw-admin
br-int
br-mgmt
br-ovs-bond0
br-prv
br-storage
br-ovs-bond0/br-int take care of the VLAN tag translation:
Bridge "br-ovs-bond0"
Port "br-ovs-bond0--br-prv"
Interface "br-ovs-bond0--br-prv"
type: patch
options: {peer="br-prv--br-ovs-bond0"}
Port "br-ovs-bond0"
Interface "br-ovs-bond0"
type: internal
Port "ovs-bond0"
Interface "eth1"
Interface "eth0"
Port "br-ovs-bond0--br-ex"
tag: 1240
Interface "br-ovs-bond0--br-ex"
type: patch
options: {peer="br-ex--br-ovs-bond0"}
Port "br-ovs-bond0--br-mgmt"
tag: 1241
Interface "br-ovs-bond0--br-mgmt"
type: patch
options: {peer="br-mgmt--br-ovs-bond0"}
Port "br-ovs-bond0--br-storage"
tag: 1242
Interface "br-ovs-bond0--br-storage"
type: patch
options: {peer="br-storage--br-ovs-bond0"}
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port "tapefa09ef2-e3"
tag: 2
Interface "tapefa09ef2-e3"
type: internal
Port int-br-prv
Interface int-br-prv
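A quick way to list which patch port carries which VLAN tag, shown here against a trimmed sample of the ovs-vsctl show output above (on a real node you would pipe ovs-vsctl show in and loosen the Port pattern for the indentation):

```shell
# Pair each "Port ..." line with the "tag: ..." line that follows it.
sample='Port "br-ovs-bond0--br-ex"
    tag: 1240
Port "br-ovs-bond0--br-mgmt"
    tag: 1241
Port "br-ovs-bond0--br-storage"
    tag: 1242'
echo "$sample" | awk '/^Port/{port=$2} /tag:/{print port, $2}'
```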
2. The Neutron L2 VLAN range, which gets written into the neutron config file:
[ml2_type_vlan]
# (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
# specifying physical_network names usable for VLAN provider and
# tenant networks, as well as ranges of VLAN tags on each
# physical_network available for allocation as tenant networks.
#
# network_vlan_ranges =
network_vlan_ranges =physnet2:1243:1244
# Example: network_vlan_ranges = physnet1:1000:2999,physnet2
[ovs]
enable_tunneling=False
integration_bridge=br-int
bridge_mappings=physnet2:br-prv
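As the comment in the config says, each network_vlan_ranges entry is a <physical_network>[:<vlan_min>:<vlan_max>] tuple; a throwaway parse of the value above makes the pieces explicit:

```shell
# Split "physnet2:1243:1244" into physical network name and VLAN bounds.
ranges='physnet2:1243:1244'
IFS=: read physnet vmin vmax <<EOF
$ranges
EOF
echo "tenant VLANs on $physnet: $vmin-$vmax"   # prints: tenant VLANs on physnet2: 1243-1244
```

So tenant networks on this environment get tags 1243-1244, while 1240-1242 stay reserved for the public/mgmt/storage provider networks seen in the OVS output.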
3. Network topology
compute:
To stress it once more: the copper ports must not carry the business VLANs!
After the change:
Issue-4: after deploy, installing OpenStack on the node fails with all kinds of errors:
In fact this node was never reinstalled by Fuel; what you see is still the old configuration.
Fix: check dnsmasq before deploying, and amend it if necessary:
- dockerctl shell cobbler
Check whether /etc/dnsmasq.conf contains the MAC-to-IP bindings.
- Try service dnsmasq restart
When I ran it, it failed: there were duplicate records in the config file.
The cause is in /etc/cobbler/dnsmasq.template:
dhcp-option=6,10.254.2.101
dhcp-range=internal,10.254.2.103,10.254.2.105,255.255.255.0,120m
dhcp-option=net:internal,option:router,10.254.2.101
pxe-service=net:#gpxe,x86PC,"Install",pxelinux,10.254.2.101
dhcp-boot=net:internal,pxelinux.0,boothost,10.254.2.101
$insert_cobbler_system_definitions # the records pulled in from the database duplicate the entries below, so delete the entries below and rerun the commands from Issue-2
dhcp-ignore=tag:!known
dhcp-host=net:x86_64,6c:92:bf:0d:56:81
dhcp-host=net:x86_64,6c:92:bf:0d:30:25,10.254.2.102
dhcp-host=net:x86_64,6c:92:bf:0d:30:2d,10.254.2.103
dhcp-host=net:x86_64,6c:92:bf:0d:56:6d,10.254.2.104
dhcp-host=net:x86_64,6c:92:bf:0d:5c:8d,10.254.2.105
dhcp-host=net:x86_64,6c:92:bf:0c:e5:5d,10.254.2.118
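A quick sanity check for that duplicate-record failure (a sketch run on an inline sample; inside the cobbler container you would feed /etc/dnsmasq.conf instead): list any MAC that appears in more than one dhcp-host line.

```shell
conf='dhcp-host=net:x86_64,6c:92:bf:0d:30:2d,10.254.2.103
dhcp-host=net:x86_64,6c:92:bf:0d:56:6d,10.254.2.104
dhcp-host=net:x86_64,6c:92:bf:0d:30:2d,10.254.2.103'
# Field 2 of each dhcp-host line is the MAC; print the MACs that repeat.
echo "$conf" | grep '^dhcp-host=' | cut -d, -f2 | sort | uniq -d
```

An empty result means dnsmasq should not complain about duplicates on restart.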
Issue-5:
2015/06/29: added a compute node; the network verification run before deploy failed:
This 10.132.43.2 is the IP of a neutron port:
[root@node-1 ~]# neutron port-list | grep 10.132.43.2
| efa09ef2-e378-4041-97fb-1fa851c1cce4 | | fa:16:3e:e0:e4:0f | {"subnet_id": "c8d4f664-e776-4375-88a2-61e7328a8d47", "ip_address": "10.132.43.2"} |
[root@node-1 ~]# neutron port-show efa09ef2-e378-4041-97fb-1fa851c1cce4
+-----------------------+------------------------------------------------------------------------------------+
| Field | Value |
+-----------------------+------------------------------------------------------------------------------------+
| admin_state_up | False |
| allowed_address_pairs | |
| binding:host_id | node-1.domain.tld |
| binding:profile | {} |
| binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": true} |
| binding:vif_type | ovs |
| binding:vnic_type | normal |
| device_id | dhcp9b3b6618-0449-5cf4-ba1a-1bd2727132bc-8b8db3da-b685-4ec5-b35d-afa1f6c4d791 |
| device_owner | network:dhcp |
| extra_dhcp_opts | |
| fixed_ips | {"subnet_id": "c8d4f664-e776-4375-88a2-61e7328a8d47", "ip_address": "10.132.43.2"} |
| id | efa09ef2-e378-4041-97fb-1fa851c1cce4 |
| mac_address | fa:16:3e:e0:e4:0f |
| name | |
| network_id | 8b8db3da-b685-4ec5-b35d-afa1f6c4d791 |
| security_groups | |
| status | DOWN |
| tenant_id | 9ec290d997f246b09e3d2c60fb159e6f |
+-----------------------+------------------------------------------------------------------------------------+
Workaround: disable this port from the dashboard.
Open question: neutron's DHCP should not affect the installation. Tried ifdown eth1; ifup eth1 and could not get an IP (though this machine's eth1 link seems to have a problem anyway). So why does Fuel report an error?
Network details:
https://docs.mirantis.com/openstack/fuel/fuel-6.0/reference-architecture.html#network-architecture
Logical Networks
For better network performance and manageability, Fuel places different types of traffic into separate logical networks. This section describes how to distribute the network traffic in an OpenStack environment.
Admin (PXE) Network ("Fuel network")
The Fuel Master Node uses this network to provision and orchestrate the OpenStack environment. It is used during installation to provide DNS, DHCP, and gateway services to a node before that node is provisioned. Nodes retrieve their network configuration from the Fuel Master node using DHCP, which is why this network must be isolated from the rest of your network and must not have a DHCP server other than the Fuel Master running on it.
Public Network
The word "Public" means that these addresses can be used to communicate with the cluster and its VMs from outside of the cluster (the Internet, corporate network, end users).
The public network provides connectivity to the globally routed address space for VMs. The IP address from the public network that has been assigned to a compute node is used as the source for the Source NAT performed for traffic going from VM instances on the compute node to the Internet.
The public network also provides Virtual IPs for public endpoints, which are used to connect to OpenStack services APIs.
Finally, the public network provides a contiguous address range for the floating IPs that are assigned to individual VM instances by the project administrator. Nova Network or Neutron services can then configure this address on the public network interface of the Network controller node. Environments based on Nova Network use iptables to create a Destination NAT from this address to the private IP of the corresponding VM instance through the appropriate virtual bridge interface on the Network controller node.
For security reasons, the public network is usually isolated from other networks in the cluster.
If you use tagged networks for your configuration and combine multiple networks onto one NIC, you should leave the Public network untagged on that NIC. This is not a requirement, but it simplifies external access to OpenStack Dashboard and public OpenStack API endpoints.
Storage Network
Part of a cluster's internal network. It is used to separate storage traffic (Swift, Ceph, iSCSI, etc.) from other types of internal communications in the cluster. The Storage network is usually on a separate VLAN or interface, isolated from all other communication.
Management network
Also part of a cluster's internal network. It serves all other internal communications, including database queries, AMQP messaging, and high availability services.
Private Network (Fixed network)
The private network facilitates communication between each tenant's VMs. Private network address spaces are not a part of the enterprise network address space; fixed IPs of virtual instances cannot be accessed directly from the rest of the Enterprise network.
Just like the public network, the private network should be isolated from other networks in the cluster for security reasons.
Internal Network
The internal network connects all OpenStack nodes in the environment. All components of an OpenStack environment communicate with each other using this network. This network must be isolated from both the private and public networks for security reasons.
The internal network can also be used for serving iSCSI protocol exchanges between Compute and Storage nodes.
Note
If you want to combine another network with the Admin network on the same network interface, you must leave the Admin network untagged. This is the default configuration and cannot be changed in the Fuel UI although you could modify it by manually editing configuration files.
Some notes:
- 2015/8/2 - Gao Jin:
1. OpenStack services are installed on every node. When choosing roles, e.g. Cinder, a slice of that node's disk is carved out for cinder-volume.
2. Cinder's backend storage: configurable; local storage by default. If Ceph is selected (and the cinder role is not), Ceph is used.
3. Glance's backend storage: local storage with a single controller; with multiple controllers, Swift by default, configurable to use Ceph.
4. sda is an SSD.
5. "virtual storage" holds the nova instance directories: it needs plenty of space with local storage, little with shared storage.
Integration:
- Because the project team needs to access the admin endpoints directly from the user zone, asked IT to open the following ports on the management (fiber) address 10.132.41.2:
(service names below are the usual OpenStack defaults for these ports)
8000 (heat-api-cfn)
8776 (cinder)
8773 (nova EC2 API)
9696 (neutron)
5000 (keystone)
9292 (glance)
6780 (radosgw)
8004 (heat)
8774 (nova)
8777 (ceilometer)