Goal: build a single-controller-node OpenStack environment out of 3 physical machines
Author: BCEC 大马哥
Fuel master IPMI login:
http://10.255.2.101
admin/12345678
Fuel master example:
10.254.1.1
root/sybk2015
Notes/Rules:
  • These Inspur servers each have 4 NICs: eth0/eth1 are optical (fiber) ports, eth2/eth3 are copper ports
  • The VLANs assigned by IT are all business VLANs, and all business VLANs run over the optical ports, so eth2/eth3 must not use them. Meanwhile, eth0/eth1 cannot serve as the PXE port: DHCP boot requests sent from them carry no VLAN tag and therefore never reach the master's eth0/eth1
  • IPMI and PXE addresses map one-to-one. For example, if the IPMI address is 10.255.2.x, the PXE address is 10.254.2.x; IT requires the x to be identical in both.
  • When the Fuel master acts as the DHCP server, MACs must be specified, with each IP bound to a MAC
    • Addresses in the "DHCP pool for node discovery" must be configured in dnsmasq inside cobbler
    • Addresses in the "Static pool for installed nodes" can be set by editing PostgreSQL or via the REST API
  • Fuel's services run in Docker containers; arch: https://wiki.openstack.org/wiki/Fuel
  • Fuel docs: https://docs.mirantis.com/openstack/fuel/fuel-6.0/operations.html (includes Docker examples)
    https://docs.mirantis.com/openstack/fuel/fuel-6.0/user-guide.html (includes CLI examples)
    https://docs.mirantis.com/openstack/fuel/fuel-6.0/reference-architecture.html#network-architecture
  • In the server's management (IPMI) UI, eth0's "LAN settings" must not be set to "enabled"
  • Fuel config file: /etc/fuel/astute.yaml
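The IPMI-to-PXE mapping above (10.255.2.x to 10.254.2.x, same x) is mechanical enough to script. A minimal sketch; the helper name is made up:

```shell
# Hypothetical helper: derive the PXE address from an IPMI address by
# swapping the 10.255. prefix for 10.254. and keeping the rest.
ipmi_to_pxe() {
  echo "$1" | sed 's/^10\.255\./10.254./'
}
ipmi_to_pxe 10.255.2.104   # -> 10.254.2.104
```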
Fuel configuration:
disable eth0
enable eth2
The static pool and the discovery pool can be identical:
After exiting the menu, run fuelmenu to reopen it:
After installation completes:
http://10.254.2.101:8000/ (admin/admin)
ssh root@10.254.2.101 (password: r00tme)
Before resetting a slave node, bind its MAC and IP in dnsmasq. See Issue-3 for the configuration.
After configuring the OpenStack environment in the Fuel UI, and before deploying, modify the database (or use the REST API) to bind the PXE static IPs to nodes:
  1. Run the following on the master (otherwise some IPs will not show up in the next step):
    [root@fuel ~]# fuel --env 1 deployment default
    Default deployment info was downloaded to /root/deployment_1
    [root@fuel ~]# fuel node list
    id | status   | name             | cluster | ip           | mac               | roles | pending_roles     | online
    ---|----------|------------------|---------|--------------|-------------------|-------|-------------------|-------
    2  | discover | Untitled (56:6d) | 1       | 10.254.2.104 | 6c:92:bf:0d:56:6d |       | cinder, compute   | True
    3  | discover | Untitled (30:2d) | 1       | 10.254.2.103 | 6c:92:bf:0d:30:2d |       | cinder, compute   | True
    1  | discover | Untitled (5c:8d) | 1       | 10.254.2.105 | 6c:92:bf:0d:5c:8d |       | controller, mongo | True
  2. Connect to the Nailgun database with navicat, modify the ip_addr table, and save!
Also, this table appears to record the MAC-to-IP bindings. Several issues below require restarting cobbler/dnsmasq; on restart, /etc/dnsmasq.conf is regenerated from /etc/cobbler/dnsmasq.template. The template contains the line "$insert_cobbler_system_definitions", which presumably reads this table and inserts its contents into /etc/dnsmasq.conf.
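The rewrite just described can be illustrated with a sketch of what cobbler sync presumably does with the template token (simulated here with sed; the real cobbler templating is more involved):

```shell
# Simulate the $insert_cobbler_system_definitions expansion: the token in
# /etc/cobbler/dnsmasq.template is replaced with one dhcp-host line per
# cobbler system record, and the result becomes /etc/dnsmasq.conf.
cat > dnsmasq.template.demo <<'EOF'
dhcp-option=6,10.254.2.101
$insert_cobbler_system_definitions
EOF
defs='dhcp-host=net:x86_64,6c:92:bf:0d:30:2d,10.254.2.103'
sed "s/\$insert_cobbler_system_definitions/$defs/" dnsmasq.template.demo
```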
Issue-1:
http://10.254.2.101:8000/
403 Forbidden
docker ps
dockerctl list
nginx logs:
[root@fuel nginx]# pwd
/var/log/docker-logs/nginx
[root@fuel nginx]# ll
total 20
-rw-r--r-- 1 root root    0 Jun 11 17:22 access.log
-rw-r--r-- 1 root root 6332 Jun 15 13:53 access_nailgun.log
-rw-r--r-- 1 root root    0 Jun 11 17:22 access_repo.log
-rw-r--r-- 1 root root    0 Jun 11 17:22 error.log
-rw-r--r-- 1 root root 8590 Jun 15 13:53 error_nailgun.log
-rw-r--r-- 1 root root    0 Jun 11 17:22 error_repo.log
==> error_nailgun.log <==
2015/06/15 15:08:21 [error] 464#0: *34 "/usr/share/nailgun/static/index.html" is forbidden (13: Permission denied), client: 172.17.42.1, server: localhost, request: "GET / HTTP/1.1", host: "10.254.2.101:8000"
==> access_nailgun.log <==
172.17.42.1 - - [15/Jun/2015:15:08:21 +0100] "GET / HTTP/1.1" 403 564 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.152 Safari/537.36"
[root@fuel nginx]# ll /usr/share/nailgun
ls: cannot access /usr/share/nailgun: No such file or directory
[root@fuel nginx]#
gaojin:
Should look inside the Docker container, not on the host.
Inside the nailgun container, /usr/share/nailgun/static/index.html is indeed missing.
Try starting a nailgun container manually:
[root@fuel nailgun]# docker run -it  -v /var/www/nailgun:/var/www/nailgun -v /etc/fuel/:/etc/fuel/ fuel/nailgun_5.1.1 /bin/bash
docker inspect container-id shows the container's configuration. The HostIp is wrong: it should be eth2's IP, 10.254.2.101, not the default IP 10.20.0.2 on eth0:
"NetworkSettings": {
"IPAddress": "172.17.0.3",
"IPPrefixLen": 16,
"Gateway": "172.17.42.1",
"Bridge": "docker0",
"PortMapping": null,
"Ports": {
"8001/tcp": [
{
"HostIp": "10.20.0.2",
"HostPort": "8001"
},
{
"HostIp": "127.0.0.1",
"HostPort": "8001"
}
]
}
},
Tried fuelmenu to disable eth0, then docker build, but it still used 10.20.0.2
------------------------
[root@fuel nailgun]# docker inspect 9c64bbf8a8f1
[{
"ID": "9c64bbf8a8f1f19a293280f65c3cc205806130c6f7a2b669f0f6ca7016c4e35d",
"Created": "2015-06-11T17:22:46.524364557Z",
"Path": "/bin/sh",
"Args": [
"-c",
"/usr/local/bin/start.sh"
],
"Config": {
"Hostname": "9c64bbf8a8f1",
"Domainname": "",
"User": "",
"Memory": 0,
"MemorySwap": 0,
"CpuShares": 0,
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"PortSpecs": null,
"ExposedPorts": {
"8001/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"/bin/sh",
"-c",
"/usr/local/bin/start.sh"
],
"Image": "fuel/nailgun_5.1.1",
"Volumes": {
"/etc/fuel": {},
"/etc/nailgun": {},
"/etc/yum.repos.d": {},
"/root/.ssh": {},
"/usr/share/nailgun/static": {},
"/var/log": {},
"/var/www/nailgun": {}
},
"WorkingDir": "/root",
"Entrypoint": null,
"NetworkDisabled": false,
"OnBuild": null
},
"State": {
"Running": true,
"Pid": 11655,
"ExitCode": 0,
"StartedAt": "2015-06-15T11:37:34.064883412Z",
"FinishedAt": "2015-06-15T11:37:33.624798124Z",
"Ghost": false
},
"Image": "531bb9a11ac38bd0ff2263e562aebca0519f1d86ef5b3a1b1b2a82e7b4d5e7ae",
"NetworkSettings": {
"IPAddress": "172.17.0.3",
"IPPrefixLen": 16,
"Gateway": "172.17.42.1",
"Bridge": "docker0",
"PortMapping": null,
"Ports": {
"8001/tcp": [
{
"HostIp": "10.20.0.2",
"HostPort": "8001"
},
{
"HostIp": "127.0.0.1",
"HostPort": "8001"
}
]
}
},
"ResolvConfPath": "/etc/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/9c64bbf8a8f1f19a293280f65c3cc205806130c6f7a2b669f0f6ca7016c4e35d/hostname",
"HostsPath": "/var/lib/docker/containers/9c64bbf8a8f1f19a293280f65c3cc205806130c6f7a2b669f0f6ca7016c4e35d/hosts",
"Name": "/fuel-core-5.1.1-nailgun",
"Driver": "devicemapper",
"ExecDriver": "lxc-0.9.0",
"Volumes": {
"/etc/fuel": "/etc/fuel",
"/etc/nailgun": "/var/lib/docker/vfs/dir/249ad99fd67e72b36e6c66e22ba87f0bd7a16298886fe59a54f39ab62e6b8185",
"/etc/yum.repos.d": "/etc/yum.repos.d",
"/root/.ssh": "/root/.ssh",
"/usr/share/nailgun/static": "/var/lib/docker/vfs/dir/183fd1742237cc99931ffb6e56f176bf0c0712953c201cd028115dc82e8d55e2",
"/var/log": "/var/log/docker-logs",
"/var/www/nailgun": "/var/www/nailgun"
},
"VolumesRW": {
"/etc/fuel": false,
"/etc/nailgun": true,
"/etc/yum.repos.d": true,
"/root/.ssh": false,
"/usr/share/nailgun/static": true,
"/var/log": true,
"/var/www/nailgun": true
},
"HostConfig": {
"Binds": [
"/etc/fuel:/etc/fuel:ro",
"/var/log/docker-logs:/var/log",
"/root/.ssh:/root/.ssh:ro",
"/var/www/nailgun:/var/www/nailgun:rw",
"/etc/yum.repos.d:/etc/yum.repos.d:rw"
],
"ContainerIDFile": "",
"LxcConf": [],
"Privileged": false,
"PortBindings": {
"8001/tcp": [
{
"HostIp": "10.20.0.2",
"HostPort": "8001"
},
{
"HostIp": "127.0.0.1",
"HostPort": "8001"
}
]
},
"Links": null,
"PublishAllPorts": false,
"Dns": null,
"DnsSearch": null,
"VolumesFrom": null
}
}]
------------------------
A startup script inside the Docker container: /usr/local/bin/start.sh
Enter the nailgun container:
[root@fuel ~]# docker ps | grep nail
6aa0792b5009        fuel/nailgun_5.1.1:latest       /bin/sh -c /usr/loca   50 minutes ago      Up 28 minutes       10.254.2.101:8001->8001/tcp, 127.0.0.1:8001->8001/tcp                                                                                                                                                                                fuel-core-5.1.1-nailgun
[root@fuel ~]# dockerctl shell 6aa0792b5009
After switching Fuel ISOs it was still 403 Forbidden; ran /usr/local/bin/start.sh:
Notice: /Stage[main]/Nailgun::Venv/Exec[nailgun_syncdb]/returns: executed successfully
Notice: /Stage[main]/Nailgun::Venv/Exec[nailgun_upload_fixtures]/returns: executed successfully
Notice: /Stage[main]/Nailgun::Supervisor/Service[supervisord]/enable: enable changed 'false' to 'true'
Notice: Finished catalog run in 2.64 seconds
/etc/sysconfig/supervisord: line 15: ulimit: open files: cannot modify limit: Operation not permitted
Stopping supervisord: Shut down
Waiting roughly 60 seconds for /var/run/supervisord.pid to be removed after child processes exit
[root@a446914b82cf static]# namei -mo /usr/share/nailgun/static/
f: /usr/share/nailgun/static/
drwxr-xr-x root root /
drwxr-xr-x root root usr
drwxr-xr-x root root share
drwxr-xr-x root root nailgun
drwx------ root root static
[root@a446914b82cf static]#
The problem finally went away after using the correct ISO.
Issue-2: slave nodes fail to get an IP address after reboot
Fuel's DHCP/DNS are both implemented with dnsmasq, running in the cobbler container
Check the dnsmasq log:
[root@fuel ~]# docker ps | grep cob
6cad38e2894e        fuel/cobbler_5.1.1:latest       /bin/sh -c /usr/loca   4 hours ago         Up 24 minutes       53/tcp, 127.0.0.1:53->53/udp, 10.254.2.101:53->53/udp, 67/tcp, 67/udp, 127.0.0.1:69->69/udp, 10.254.2.101:69->69/udp, 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp                                                                       fuel-core-5.1.1-cobbler
[root@fuel ~]# dockerctl shell 6cad38e2894e
[root@6cad38e2894e ~]# tail -f /var/log/dnsmasq.log
Jun 16 15:08:03 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 6c:92:bf:0c:e6:5c no address available
Jun 16 15:08:03 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 10.254.2.172 6c:92:bf:0c:b2:90 no address available
Jun 16 15:08:04 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 10.254.2.179 6c:92:bf:0d:5b:86 no address available
Jun 16 15:08:04 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 10.254.2.104 6c:92:bf:0c:e4:ee no address available
Jun 16 15:08:04 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 10.254.2.102 6c:92:bf:0d:5a:82 no address available
Jun 16 15:08:05 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 10.254.2.103 6c:92:bf:0d:30:26 no address available
Jun 16 15:08:05 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 10.254.2.170 6c:92:bf:0c:e6:08 no address available
Jun 16 15:08:05 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 10.254.2.102 6c:92:bf:0d:5a:e6 no address available
Jun 16 15:08:06 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 6c:92:bf:0c:c8:6a no address available
Jun 16 15:08:06 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 6c:92:bf:0d:5c:8e no address available
Jun 16 15:08:06 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 10.254.2.153 6c:92:bf:0d:5c:92 no address available
Jun 16 15:08:06 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 6c:92:bf:0c:e6:5c no address available
Jun 16 15:08:06 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 10.254.2.175 6c:92:bf:0d:5a:6a no address available
Jun 16 15:08:07 dnsmasq-dhcp[973]: DHCPDISCOVER(eth0) 10.254.2.179 6c:92:bf:0d:5b:86 no address available
The static IP pool configured in the Fuel menu is 10.254.2.102-10.254.2.103 and the discovery pool is 10.254.2.104-10.254.2.105,
but those IPs were already taken, hence "no address available".
At the same time, the dnsmasq configuration should be changed to restrict which MAC addresses can obtain which IPs over DHCP:
  1. Edit the dnsmasq config file /etc/cobbler/dnsmasq.template:
    ….
    dhcp-option=6,10.254.2.101

    dhcp-range=internal,10.254.2.104,10.254.2.105,255.255.255.0,120m
    dhcp-option=net:internal,option:router,10.254.2.101
    pxe-service=net:#gpxe,x86PC,"Install",pxelinux,10.254.2.101
    dhcp-boot=net:internal,pxelinux.0,boothost,10.254.2.101

    # added
    dhcp-ignore=tag:!known 
    dhcp-host=net:x86_64,6c:92:bf:0d:30:25,10.254.2.102             # node-5
    dhcp-host=net:x86_64,6c:92:bf:0d:30:2d,10.254.2.103
    dhcp-host=net:x86_64,6c:92:bf:0d:56:6d,10.254.2.104
    dhcp-host=net:x86_64,6c:92:bf:0d:5c:8d,10.254.2.105
    dhcp-host=net:x86_64,6c:92:bf:0c:e5:5d,10.254.2.118             # node-4 (102)

  2. Empty dnsmasq's lease file:
    [root@a9be15498990 cobbler]# > /var/lib/dnsmasq/dnsmasq.leases
  3. cobbler sync (this writes /etc/dnsmasq.conf)
  4. Restart dnsmasq
    [root@6cad38e2894e ~]# service dnsmasq restart
    Shutting down dnsmasq:                                     [  OK  ]
    Starting dnsmasq:                                          [  OK  ]
    Note: do not just restart the container, or the changes are lost (they were never committed)
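The dhcp-host lines in step 1 can be generated from the `fuel node list` table shown earlier instead of typing them by hand. A sketch (the helper name is made up; it is fed from a here-doc here, but on the master you would pipe `fuel node list` into it):

```shell
# Turn `fuel node list` output into dnsmasq dhcp-host bindings with the
# same net:x86_64 tag used in the template above. Field 5 is the ip
# column, field 6 the mac column; lines whose mac field has no colon
# (header, separator) are skipped.
gen_dhcp_hosts() {
  awk -F'|' '$6 ~ /:/ { gsub(/ /, "", $5); gsub(/ /, "", $6);
                        printf "dhcp-host=net:x86_64,%s,%s\n", $6, $5 }'
}
gen_dhcp_hosts <<'EOF'
id | status   | name             | cluster | ip           | mac               | roles
---|----------|------------------|---------|--------------|-------------------|-------
2  | discover | Untitled (56:6d) | 1       | 10.254.2.104 | 6c:92:bf:0d:56:6d |
EOF
```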
PS:
After the assigned VLAN came through, the PXE-admin subnet was changed via fuelmenu (to 10.132.40.0/24, VLAN 1240)
The Fuel master got 10.132.40.1 but could not ping the gateway 10.132.40.254 (which is pingable from outside).
10.132.40.0/24 / VLAN 1240 is a business network carried on the optical ports, so the Fuel master could only use eth0/eth1 and the VLAN tag had to be applied manually on the command line; but a slave's eth0/eth1 cannot tag VLANs at boot time, so it never gets an IP.
(10.254.2.x, by contrast, lives on the copper ports, i.e. eth2/eth3. When a slave sends a DHCP request out of eth2, the switch tags those packets with the VLAN; on the business network the switch does not add tags - OpenStack does that.)
Capture DHCP packets with tcpdump (the filter matches BOOTREQUEST packets: udp[8:1] is the first byte of the BOOTP payload, the op field, and 0x1 means request):
tcpdump -i any -vvv -s 1500 '((port 67 or port 68) and (udp[8:1] = 0x1))'
Issue-3:
At first no MAC-to-IP bindings were specified in the dnsmasq config file, so when 10.255.2.105 booted, its eth2 got 10.254.2.104, and 10.255.2.104 got 10.254.2.105 (it then lost that IP after boot; an ifdown eth2; ifup eth2 got it back).
I wanted the two IPs swapped, so:
  1. Specify the bindings in /etc/cobbler/dnsmasq.template:
    dhcp-ignore=tag:!known
    dhcp-host=net:x86_64,6c:92:bf:0d:56:81
    dhcp-host=net:x86_64,6c:92:bf:0d:30:25,10.254.2.102
    dhcp-host=net:x86_64,6c:92:bf:0d:30:2d,10.254.2.103
    dhcp-host=net:x86_64,6c:92:bf:0d:56:6d,10.254.2.104
    dhcp-host=net:x86_64,6c:92:bf:0d:5c:8d,10.254.2.105
    dhcp-host=net:x86_64,6c:92:bf:0c:e5:5d,10.254.2.118
  2. sync cobbler
    cobbler sync
  3. Restart dnsmasq
    service dnsmasq restart
  4. On both hosts: ifdown eth2; ifup eth2
One host successfully got 10.254.2.104; the other failed:
[root@a9be15498990 etc]# tail -f /var/log/dnsmasq.log
Jun 17 14:29:33 dnsmasq-dhcp[1996]: not using configured address 10.254.2.104 because it was previously declined
Jun 17 14:29:33 dnsmasq-dhcp[1996]: DHCPDISCOVER(eth0) 10.254.2.105 6c:92:bf:0d:56:6d no address available
The reason: the client had previously accepted .104 and then sent a DECLINE to the server, so the server marked the IP as declined and temporarily stopped handing it out.
Update 2015/6/30: the DECLINE happens because when the machine reboots into bootstrap it sends another DHCP request; after obtaining an IP it announces its MAC/IP via ARP broadcast, discovers the IP is already in use, and the OS then sends a DECLINE to the DHCP server to give the IP up.
Solution:
http://lists.thekelleys.org.uk/pipermail/dnsmasq-discuss/2008q3/002285.html
Wait a while (ten-odd minutes), then run ifdown eth2; ifup eth2 again
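If waiting is undesirable, a service restart should also work: as far as I know the declined-address marker lives only in dnsmasq's memory (an assumption; verify against dnsmasq's behavior), so restarting dnsmasq inside the cobbler container discards it:

```shell
# Inside the cobbler container (dockerctl shell cobbler): restarting
# dnsmasq drops the in-memory "declined" marker so the address can be
# offered again. Restart the service, not the container, or the
# uncommitted template edits are lost (see Issue-2).
service dnsmasq restart
```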
Screenshots from configuring OpenStack:
controller disk:
compute-1
disk (default):
Note:
1. The networks for public/mgmt/storage are all provider networks; their VLAN tags can be seen in OVS:
Controller node (many neutron services run here too):
[root@node-1 ~]# ovs-vsctl show | grep tag
tag: 1240
tag: 1241
tag: 1242
tag: 2
[root@node-1 ~]# service --status-all | grep neutron
22   ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           multiport ports 9696 /* 110 neutron  */
neutron-dhcp-agent (pid  32108) is running...
neutron-l3-agent (pid  32124) is running...
neutron-lbaas-agent is stopped
neutron-metadata-agent (pid  32207) is running...
neutron-openvswitch-agent (pid  32307) is running...
neutron (pid  32512) is running...
Compute node:
[root@node-2 ~]# ovs-vsctl show | grep tag
tag: 1242
tag: 1241
tag: 10
Combined with the network assignment below (eth0/eth1 bonded, all networks on the optical ports), you can see that Fuel creates multiple OVS bridges:
[root@node-1 ~]# ovs-vsctl list-br
br-eth2
br-eth3
br-ex
br-fw-admin
br-int
br-mgmt
br-ovs-bond0
br-prv
br-storage
br-ovs-bond0 and br-int handle the VLAN tag translation:
Bridge "br-ovs-bond0"
Port "br-ovs-bond0--br-prv"
Interface "br-ovs-bond0--br-prv"
type: patch
options: {peer="br-prv--br-ovs-bond0"}
Port "br-ovs-bond0"
Interface "br-ovs-bond0"
type: internal
Port "ovs-bond0"
Interface "eth1"
Interface "eth0"
Port "br-ovs-bond0--br-ex"
tag: 1240
Interface "br-ovs-bond0--br-ex"
type: patch
options: {peer="br-ex--br-ovs-bond0"}
Port "br-ovs-bond0--br-mgmt"
tag: 1241
Interface "br-ovs-bond0--br-mgmt"
type: patch
options: {peer="br-mgmt--br-ovs-bond0"}
Port "br-ovs-bond0--br-storage"
tag: 1242
Interface "br-ovs-bond0--br-storage"
type: patch
options: {peer="br-storage--br-ovs-bond0"}
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port "tapefa09ef2-e3"
tag: 2
Interface "tapefa09ef2-e3"
type: internal
Port int-br-prv
Interface int-br-prv
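The tagged patch ports above are how br-ovs-bond0 maps each VLAN to its per-network bridge. The same pattern can be reproduced by hand (bridge and port names below are examples, not the ones Fuel creates):

```shell
# Two bridges joined by a patch-port pair; the tag on the bond-side
# patch port makes it an access port: frames entering br-demo-bond
# through it are assigned VLAN 1241 internally, and frames leaving
# through it toward br-demo-mgmt have the tag stripped.
ovs-vsctl add-br br-demo-bond
ovs-vsctl add-br br-demo-mgmt
ovs-vsctl add-port br-demo-bond bond--mgmt \
  -- set interface bond--mgmt type=patch options:peer=mgmt--bond
ovs-vsctl add-port br-demo-mgmt mgmt--bond \
  -- set interface mgmt--bond type=patch options:peer=bond--mgmt
ovs-vsctl set port bond--mgmt tag=1241
```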
2. The Neutron L2 VLAN range, written into the Neutron config file:
[ml2_type_vlan]
# (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
# specifying physical_network names usable for VLAN provider and
# tenant networks, as well as ranges of VLAN tags on each
# physical_network available for allocation as tenant networks.
#
# network_vlan_ranges =
network_vlan_ranges =physnet2:1243:1244

# Example: network_vlan_ranges = physnet1:1000:2999,physnet2
[ovs]
enable_tunneling=False
integration_bridge=br-int
bridge_mappings=physnet2:br-prv
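Given network_vlan_ranges=physnet2:1243:1244 above, Neutron hands tenant networks VLAN IDs from 1243-1244 on physnet2 (mapped to br-prv by bridge_mappings). A provider network with an explicit segment can also be created from the CLI of that era; the network name here is made up:

```shell
# Create a VLAN provider network pinned to segmentation ID 1243 on
# physnet2; a tenant network created without provider arguments would
# draw its VLAN ID from the same 1243-1244 range automatically.
neutron net-create demo-vlan-net \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 1243
```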
3. Network topology
compute:
Once more: the copper ports must not carry business VLANs!
After the change:
Issue-4: after deploy, OpenStack installation on a node fails in all sorts of ways:
The node was actually never reinstalled by Fuel; what you see is still its old configuration.
Fix - before deploying, check dnsmasq and modify it if necessary:
  1. dockerctl shell cobbler
    Check whether /etc/dnsmasq.conf contains the MAC/IP bindings.
  2. Try service dnsmasq restart
    It failed when I ran it: duplicate records in the config file.
    The cause is this directive in /etc/cobbler/dnsmasq.template:
    dhcp-option=6,10.254.2.101
    dhcp-range=internal,10.254.2.103,10.254.2.105,255.255.255.0,120m
    dhcp-option=net:internal,option:router,10.254.2.101
    pxe-service=net:#gpxe,x86PC,"Install",pxelinux,10.254.2.101
    dhcp-boot=net:internal,pxelinux.0,boothost,10.254.2.101

    $insert_cobbler_system_definitions           # the records pulled from the database duplicate the entries below, so delete the entries below and rerun the commands from Issue-2

    dhcp-ignore=tag:!known

dhcp-host=net:x86_64,6c:92:bf:0d:56:81
dhcp-host=net:x86_64,6c:92:bf:0d:30:25,10.254.2.102
dhcp-host=net:x86_64,6c:92:bf:0d:30:2d,10.254.2.103
dhcp-host=net:x86_64,6c:92:bf:0d:56:6d,10.254.2.104
dhcp-host=net:x86_64,6c:92:bf:0d:5c:8d,10.254.2.105
dhcp-host=net:x86_64,6c:92:bf:0c:e5:5d,10.254.2.118
Issue-5:
2015/06/29: added a compute node; the pre-deploy network verification failed:
The 10.132.43.2 involved is the IP of a Neutron port:
[root@node-1 ~]# neutron port-list | grep 10.132.43.2
| efa09ef2-e378-4041-97fb-1fa851c1cce4 |      | fa:16:3e:e0:e4:0f | {"subnet_id": "c8d4f664-e776-4375-88a2-61e7328a8d47", "ip_address": "10.132.43.2"}   |
[root@node-1 ~]# neutron port-show efa09ef2-e378-4041-97fb-1fa851c1cce4
+-----------------------+------------------------------------------------------------------------------------+
| Field                 | Value                                                                              |
+-----------------------+------------------------------------------------------------------------------------+
| admin_state_up        | False                                                                              |
| allowed_address_pairs |                                                                                    |
| binding:host_id       | node-1.domain.tld                                                                  |
| binding:profile       | {}                                                                                 |
| binding:vif_details   | {"port_filter": true, "ovs_hybrid_plug": true}                                     |
| binding:vif_type      | ovs                                                                                |
| binding:vnic_type     | normal                                                                             |
| device_id             | dhcp9b3b6618-0449-5cf4-ba1a-1bd2727132bc-8b8db3da-b685-4ec5-b35d-afa1f6c4d791      |
| device_owner          | network:dhcp                                                                       |
| extra_dhcp_opts       |                                                                                    |
| fixed_ips             | {"subnet_id": "c8d4f664-e776-4375-88a2-61e7328a8d47", "ip_address": "10.132.43.2"} |
| id                    | efa09ef2-e378-4041-97fb-1fa851c1cce4                                               |
| mac_address           | fa:16:3e:e0:e4:0f                                                                  |
| name                  |                                                                                    |
| network_id            | 8b8db3da-b685-4ec5-b35d-afa1f6c4d791                                               |
| security_groups       |                                                                                    |
| status                | DOWN                                                                               |
| tenant_id             | 9ec290d997f246b09e3d2c60fb159e6f                                                   |
+-----------------------+------------------------------------------------------------------------------------+
workaround: disable this port from the dashboard
Open question: Neutron's DHCP port should not affect installation. Tried ifdown eth1; ifup eth1 - no IP obtained (though this machine's eth1 link itself seems faulty). So why does Fuel report an error?
Network details:
https://docs.mirantis.com/openstack/fuel/fuel-6.0/reference-architecture.html#network-architecture

Logical Networks

For better network performance and manageability, Fuel places different types of traffic into separate logical networks. This section describes how to distribute the network traffic in an OpenStack environment.

Admin (PXE) Network ("Fuel network")

The Fuel Master Node uses this network to provision and orchestrate the OpenStack environment. It is used during installation to provide DNS, DHCP, and gateway services to a node before that node is provisioned. Nodes retrieve their network configuration from the Fuel Master node using DHCP, which is why this network must be isolated from the rest of your network and must not have a DHCP server other than the Fuel Master running on it.

Public Network

The word "Public" means that these addresses can be used to communicate with the cluster and its VMs from outside of the cluster (the Internet, corporate network, end users).

The public network provides connectivity to the globally routed address space for VMs. The IP address from the public network that has been assigned to a compute node is used as the source for the Source NAT performed for traffic going from VM instances on the compute node to the Internet.

The public network also provides Virtual IPs for public endpoints, which are used to connect to OpenStack services APIs.

Finally, the public network provides a contiguous address range for the floating IPs that are assigned to individual VM instances by the project administrator. Nova Network or Neutron services can then configure this address on the public network interface of the Network controller node. Environments based on Nova Network use iptables to create a Destination NAT from this address to the private IP of the corresponding VM instance through the appropriate virtual bridge interface on the Network controller node.

For security reasons, the public network is usually isolated from other networks in the cluster.

If you use tagged networks for your configuration and combine multiple networks onto one NIC, you should leave the Public network untagged on that NIC. This is not a requirement, but it simplifies external access to OpenStack Dashboard and public OpenStack API endpoints.

Storage Network

Part of a cluster's internal network. It is used to separate storage traffic (Swift, Ceph, iSCSI, etc.) from other types of internal communications in the cluster. The Storage network is usually on a separate VLAN or interface, isolated from all other communication.

Management network

Also part of a cluster's internal network. It serves all other internal communications, including database queries, AMQP messaging, and high availability services.

Private Network (Fixed network)

The private network facilitates communication between each tenant's VMs. Private network address spaces are not a part of the enterprise network address space; fixed IPs of virtual instances cannot be accessed directly from the rest of the Enterprise network.

Just like the public network, the private network should be isolated from other networks in the cluster for security reasons.

Internal Network

The internal network connects all OpenStack nodes in the environment. All components of an OpenStack environment communicate with each other using this network. This network must be isolated from both the private and public networks for security reasons.

The internal network can also be used for serving iSCSI protocol exchanges between Compute and Storage nodes.

Note

If you want to combine another network with the Admin network on the same network interface, you must leave the Admin network untagged. This is the default configuration and cannot be changed in the Fuel UI although you could modify it by manually editing configuration files.

Some remarks:
  • 2015/8/2 - 高晋:
    1. OpenStack services are installed on every node. When a role such as Cinder is selected, a slice of that node's disk is carved out for cinder-volume
    2. Cinder backend storage: configurable, defaults to local storage; if Ceph is selected (and the cinder role is not), Ceph is used
    3. Glance backend storage: local storage with a single controller; with multiple controllers the default is Swift, configurable to use Ceph
    4. sda is the SSD
    5. "virtual storage" holds the nova instance directories; it needs plenty of space with local storage, little with shared storage
Integration:
  • Because the project side needs to access the admin endpoints directly from the user zone, we asked IT to open the following ports on the management interface (optical port) 10.132.41.2:
    8000
    8776
    8773
    9696
    5000
    9292
    6780
    8004
    8774
    8777
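Once IT opens them, the ports can be checked from the user zone with a quick loop (a sketch; assumes nc is available on the client machine):

```shell
# Probe each requested endpoint port on the admin interface; -z only
# tests the TCP handshake, -w2 gives each attempt a 2-second timeout.
for p in 8000 8776 8773 9696 5000 9292 6780 8004 8774 8777; do
  nc -z -w2 10.132.41.2 "$p" && echo "port $p open" || echo "port $p closed"
done
```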
