A few diagrams to get an intuitive feel for Octavia



The problem

When implementing HTTPS health monitors with Neutron LBaaS v2, the resulting configuration looks like this (see the attachment - Neutron LBaaS v2 for the setup steps):

backend 52112201-05ce-4f4d-b5a8-9e67de2a895a
mode tcp
balance leastconn
timeout check 10s
option httpchk GET /
http-check expect rstatus 200
option ssl-hello-chk
server 37a1f5a8-ec7e-4208-9c96-27d2783a594f 192.168.21.13:443 weight 1 check inter 5s fall 2
server 8e722b4b-08b8-4089-bba5-8fa5dd26a87f 192.168.21.8:443 weight 1 check inter 5s fall 2

This configuration has a problem: everything works when a self-signed certificate is used, but it fails once a CA-issued certificate is used.
1. For the SSL check, the stricter option is check-ssl. However, HAProxy has no certificate and cannot pass strict client authentication, so the parameters "check check-ssl verify none" need to be added to skip that verification. LBaaS v2 is too old to support this (it predates HAProxy 1.7, so it could not support it), whereas Octavia does support the SSL check (https://github.com/openstack/octavia/blob/master/octavia/common/jinja/haproxy/templates/macros.j2):

        {% if pool.health_monitor.type == constants.HEALTH_MONITOR_HTTPS %}
            {% set monitor_ssl_opt = " check-ssl verify none" %}

The following configuration works:

backend 79024d4d-4de4-492c-a3e2-21730b096a37
mode tcp
balance roundrobin
timeout check 10s
option httpchk GET /
http-check expect rstatus 200
fullconn 1000000
option allbackups
timeout connect 5000
timeout server 50000
server 317e4dea-5a62-4df0-a2f1-ea7bad4a9c5d 192.168.21.7:443 weight 1 check check-ssl verify none inter 5s fall 3 rise 4

However, the Octavia client seems to have a problem; what it ends up configuring is:

option httpchk None None
http-check expect rstatus None
server 317e4dea-5a62-4df0-a2f1-ea7bad4a9c5d 192.168.21.7:443 weight 1 check check-ssl verify none inter 5s fall 3 rise 4

ubuntu@zhhuabj-bastion:~/ca3$ openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type HTTPS --name https-monitor --http-method GET --url-path / --expected-codes 200 pool1
http_method is not a valid option for health monitors of type HTTPS (HTTP 400) (Request-ID: req-2d81bafa-1240-4f73-8e2e-cb0dd7691fdb)

2. If the SSL backend side enforces strict client authentication, you need to switch to a TLS-HELLO check. The Octavia basic cookbook (https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html) already explains this:
HTTPS health monitors operate exactly like HTTP health monitors, but with ssl back-end servers. Unfortunately, this causes problems if the servers are performing client certificate validation, as HAProxy won’t have a valid cert. In this case, using TLS-HELLO type monitoring is an alternative.
TLS-HELLO health monitors simply ensure the back-end server responds to SSLv3 client hello messages. It will not check any other health metrics, like status code or body contents.
https://review.openstack.org/#/c/475944/

In fact the customer was not using client authentication, so that is not the cause here; it should be caused by SNI. The backend performs SNI validation, so HAProxy would need to send a hostname, but HAProxy cannot pass one, hence the error. The octavia-worker, however, should be able to pass SNI to the backend.

ubuntu@zhhuabj-bastion:~/ca3$ curl --cacert ca.crt https://10.5.150.5
curl: (51) SSL: certificate subject name (www.server1.com) does not match target host name '10.5.150.5'
ubuntu@zhhuabj-bastion:~/ca3$ curl --resolve www.server1.com:443:10.5.150.5 -k https://www.server1.com
test1

What is SNI? An SSL server may generate a different certificate per hostname and enable SNI; in that case an SSL client should also send the hostname when connecting to the server. (Note: SNIProxy is a transparent-proxy-like reverse proxy for HTTPS and HTTP. It forwards traffic directly at the TCP layer without unwrapping it, so it can reverse-proxy HTTPS sites without configuring any certificate on the proxy server.)
Testing with 'curl -k' does not exercise SNI well; it is better to test with 'openssl s_client':

#Ideal Test to connect with SSLv3/No SNI
openssl s_client -ssl3 -connect 103.245.215.4:443
#openssl s_client -connect 192.168.254.214:9443 | openssl x509 -noout -text | grep DNS
#We can also send SNI using -servername:
openssl s_client -ssl3 -servername CERT_HOSTNAME -connect 103.245.215.4:443
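
If openssl is not available on the test host, the same kind of SNI probe can be sketched with Python's ssl module (an illustrative sketch only; HOST, PORT and SNI_NAME below are placeholders for the backend under test, and modern Python builds no longer expose SSLv3 at all):

#!/usr/bin/env python3
# Rough SNI probe: fetch the server certificate once without SNI and once with SNI,
# so you can see whether the returned cert differs.
import socket
import ssl

HOST, PORT, SNI_NAME = '192.168.21.13', 443, 'www.server1.com'

def peer_cert_pem(server_hostname=None):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # we only want to see which cert comes back
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=server_hostname) as tls:
            der = tls.getpeercert(binary_form=True)
    # pipe the output into "openssl x509 -noout -subject" to inspect the subject
    return ssl.DER_cert_to_PEM_cert(der)

print('=== without SNI ===\n%s' % peer_cert_pem())
print('=== with SNI %s ===\n%s' % (SNI_NAME, peer_cert_pem(SNI_NAME)))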

This document (https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#option%20ssl-hello-chk) points out that ssl-hello-chk only checks SSLv3 and does not check HTTP; in fact the HTTPS health check does not check HTTP either, it just adds an SSL negotiation on top. The check-ssl check can apparently do more.
The LBaaS v2 template currently only supports "httpchk" and "ssl-hello-chk", i.e. only an SSL check and no HTTP check, so the problem is most likely in the SSLv3 hello (without SNI). Let's run an SNI-related experiment to verify this.
First, the charm passes the CA to Octavia; Octavia then uses that CA to create the SNI certificate and pushes it to the backend. The relevant octavia-worker code is:

        LOG.debug("request url %s", path)
        _request = getattr(self.session, method.lower())
        _url = self._base_url(amp.lb_network_ip) + path
        LOG.debug("request url %s", _url)
        reqargs = {
            'verify': CONF.haproxy_amphora.server_ca,
            'url': _url,
            'timeout': (req_conn_timeout, req_read_timeout), }
        reqargs.update(kwargs)
        headers = reqargs.setdefault('headers', {})
        headers['User-Agent'] = OCTAVIA_API_CLIENT
        self.ssl_adapter.uuid = amp.id
        exception = None
        # Keep retrying

    def get_create_amphora_flow(self):
        """Creates a flow to create an amphora.

        :returns: The flow for creating the amphora
        """
        create_amphora_flow = linear_flow.Flow(constants.CREATE_AMPHORA_FLOW)
        create_amphora_flow.add(database_tasks.CreateAmphoraInDB(
            provides=constants.AMPHORA_ID))
        create_amphora_flow.add(lifecycle_tasks.AmphoraIDToErrorOnRevertTask(
            requires=constants.AMPHORA_ID))
        if self.REST_AMPHORA_DRIVER:
            create_amphora_flow.add(cert_task.GenerateServerPEMTask(
                provides=constants.SERVER_PEM))
            create_amphora_flow.add(database_tasks.UpdateAmphoraDBCertExpiration(
                requires=(constants.AMPHORA_ID, constants.SERVER_PEM)))
            create_amphora_flow.add(compute_tasks.CertComputeCreate(
                requires=(constants.AMPHORA_ID, constants.SERVER_PEM,
                          constants.BUILD_TYPE_PRIORITY, constants.FLAVOR),
                provides=constants.COMPUTE_ID))

1. Two certificates (creation steps omitted). lb_tls_secret_1's hostname is www.server1.com, lb_tls_secret_2's hostname is www.server2.com.

secret1_id=$(openstack secret list |grep lb_tls_secret_1 |awk '{print $2}')
secret2_id=$(openstack secret list |grep lb_tls_secret_2 |awk '{print $2}')

2. When creating the listener, the SNI certificates for both domains are added with ( --sni-container-refs $secret1_id $secret2_id ):

IP=192.168.21.7
secret1_id=$(openstack secret list |grep lb_tls_secret_1 |awk '{print $2}')
secret2_id=$(openstack secret list |grep lb_tls_secret_2 |awk '{print $2}')
subnetid=$(openstack subnet show private_subnet -f value -c id); echo $subnetid
lb_id=$(openstack loadbalancer show lb3 -f value -c id)
#lb_id=$(openstack loadbalancer create --name test_tls_termination --vip-subnet-id $subnetid -f value -c id); echo $lb_id
listener_id=$(openstack loadbalancer listener create $lb_id --name https_listener --protocol-port 443 --protocol TERMINATED_HTTPS --default-tls-container=$secret1_id --sni-container-refs $secret1_id $secret2_id -f value -c id); echo $listener_id
pool_id=$(openstack loadbalancer pool create --protocol HTTP --listener $listener_id --lb-algorithm ROUND_ROBIN -f value -c id); echo $pool_id
openstack loadbalancer member create --address ${IP} --subnet-id $subnetid --protocol-port 80 $pool_id

public_network=$(openstack network show ext_net -f value -c id)
fip=$(openstack floating ip create $public_network -f value -c floating_ip_address)
vip=$(openstack loadbalancer show $lb_id -c vip_address -f value)
vip_port=$(openstack port list --fixed-ip ip-address=$vip -c ID -f value)
openstack floating ip set $fip --fixed-ip-address $vip --port $vip_port

3. Test: when the client uses either of the two domains www.server1.com or www.server2.com, the server responds normally, but with a third domain www.server3.com it reports 'does not match target host name 'www.server3.com'':

ubuntu@zhhuabj-bastion:~/ca3$ curl --resolve www.server1.com:443:10.5.150.5 --cacert ca.crt https://www.server1.com
Hello World!
ubuntu@zhhuabj-bastion:~/ca3$ curl --resolve www.server2.com:443:10.5.150.5 --cacert ca.crt https://www.server2.com
Hello World!
ubuntu@zhhuabj-bastion:~/ca3$ curl --resolve www.server3.com:443:10.5.150.5 --cacert ca.crt https://www.server3.com
curl: (51) SSL: certificate subject name (www.server1.com) does not match target host name 'www.server3.com'

Why does this happen?

The SSL check in LBaaS v2 adds the following configuration to haproxy; in practice, when ssl-hello-chk is present, httpchk is overridden (ignored by haproxy). HAProxy 1.7 introduced the more advanced check-ssl (xenial uses haproxy 1.6, which does not support it), which is presumably why the early LBaaS used ssl-hello-chk for its SSL check.

mode tcp
option httpchk GET /
http-check expect rstatus 303
option ssl-hello-chk

haproxy (http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4-option%20ssl-hello-chk) forges an SSLv3 packet for ssl-hello-chk:

When some SSL-based protocols are relayed in TCP mode through HAProxy, it is
possible to test that the server correctly talks SSL instead of just testing
that it accepts the TCP connection. When "option ssl-hello-chk" is set, pure
SSLv3 client hello messages are sent once the connection is established to
the server, and the response is analyzed to find an SSL server hello message.
The server is considered valid only when the response contains this server
hello message.
All servers tested till there correctly reply to SSLv3 client hello messages,
and most servers tested do not even log the requests containing only hello
messages, which is appreciable.
Note that this check works even when SSL support was not built into haproxy
because it forges the SSL message. When SSL support is available, it is best
to use native SSL health checks instead of this one.

This is the relevant haproxy source. It does not use an SSL library: it first sends a hard-coded SSLv3 hello message, then looks for 0x15 (SSL3_RT_ALERT) or 0x16 (SSL3_RT_HANDSHAKE) in the response, and returns HCHK_STATUS_L6RSP (Layer6 invalid response) if neither is found - https://github.com/haproxy/haproxy/blob/master/src/checks.c#L915

case PR_O2_SSL3_CHK:
    if (!done && b_data(&check->bi) < 5)
        goto wait_more_data;

    /* Check for SSLv3 alert or handshake */
    if ((b_data(&check->bi) >= 5) && (*b_head(&check->bi) == 0x15 || *b_head(&check->bi) == 0x16))
        set_server_check_status(check, HCHK_STATUS_L6OK, NULL);
    else
        set_server_check_status(check, HCHK_STATUS_L6RSP, NULL);
    break;
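
To make that logic concrete, here is a rough Python re-implementation of the same check (a sketch only: the ClientHello below is hand-built in SSLv3 record format for illustration, not the exact packet HAProxy forges, and HOST/PORT are placeholders):

#!/usr/bin/env python3
# Send an SSLv3-format ClientHello and classify the first reply byte,
# the same way haproxy's ssl-hello-chk does (0x15 alert / 0x16 handshake => L6OK).
import os
import socket

HOST, PORT = '192.168.21.13', 443

def sslv3_client_hello():
    body = (b'\x03\x00'            # client_version: SSLv3
            + os.urandom(32)       # random
            + b'\x00'              # session_id length: 0
            + b'\x00\x04'          # cipher_suites length
            + b'\x00\x2f\x00\x35'  # two RSA/AES cipher suites
            + b'\x01\x00')         # one compression method: null
    handshake = b'\x01' + len(body).to_bytes(3, 'big') + body      # ClientHello
    return b'\x16\x03\x00' + len(handshake).to_bytes(2, 'big') + handshake

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    sock.sendall(sslv3_client_hello())
    reply = b''
    while len(reply) < 5:
        chunk = sock.recv(5 - len(reply))
        if not chunk:
            break
        reply += chunk

if len(reply) >= 5 and reply[0] in (0x15, 0x16):
    print('L6OK: got SSL alert/handshake back (first byte 0x%02x)' % reply[0])
else:
    print('L6RSP: unexpected or empty reply %r' % reply)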

The error 'Layer6 invalid response' is exactly what we see in the customer's logs:

Oct 25 04:50:34 neut002 haproxy[54990]: Server aaa0a533-073b-4b0f-8b81-777b6a8f3900/f2dc685f-58f7-4201-8060-3409d2d73a0d is DOWN, reason: Layer6 invalid response, check duration: 4ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.

So the backend SSL server is expected to return 0x15. We do not know exactly which SSL backend the customer runs behind haproxy; assume it is apache2. In a test environment where apache2 uses the default TLS 1.2 while haproxy still sends the old SSLv3 hello, the following SSL negotiation error is seen:

ubuntu@zhhuabj-bastion:~/ca3$ openssl s_client -ssl3 -connect www.server1.com:443
140306875094680:error:140A90C4:SSL routines:SSL_CTX_new:null ssl method passed:ssl_lib.c:1878:
ubuntu@zhhuabj-bastion:~/ca3$ openssl s_client -ssl3 -servername www.server1.com -connect www.server1.com:443
139626113296024:error:140A90C4:SSL routines:SSL_CTX_new:null ssl method passed:ssl_lib.c:1878:
ubuntu@zhhuabj-bastion:~/ca3$ openssl s_client -ssl3 -servername www.server2.com -connect www.server1.com:443
140564176807576:error:140A90C4:SSL routines:SSL_CTX_new:null ssl method passed:ssl_lib.c:1878:

Why doesn't the SSL backend return 0x15 or 0x16? In theory there are a few possible reasons:
a. SSLv3 is deprecated and mainstream HTTP servers have disabled it, so when apache2 receives the SSLv3 hello from haproxy, its SSL implementation may respond with something other than 0x15/0x16.
b. The SSLv3 hello from haproxy carries no SNI, so an SNI-enabled backend (such as apache2) fails the SSL negotiation and therefore never returns 0x15/0x16.
c. Something else.
The exact cause still needs to be confirmed with a packet capture on the backend (tcpdump -eni ens3 -w ssl-test.pcap -s 0 port 443 or port 8443).

Update: the cause has been found.
octavia/0 creates the amphora service VM; /usr/lib/python3/dist-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py on octavia/0 uses the python requests module to connect to port 9443 on the service VM. That code is effectively equivalent to the following curl, which does not work:

curl --cacert /etc/octavia/certs/issuing_ca.pem 192.168.254.31:9443

All of the following variants work, where 26835e25-7c4f-4776-940f-209eb9a9e826 is the SNI name (the loadbalancer id):

curl -k 192.168.254.31:9443
openssl s_client -connect 192.168.254.31:9443 -key /etc/octavia/certs/issuing_ca_key.pem
curl --cacert /etc/octavia/certs/issuing_ca.pem https://26835e25-7c4f-4776-940f-209eb9a9e826:9443 --resolve 26835e25-7c4f-4776-940f-209eb9a9e826:9443:192.168.254.31

#ipv6
#tlsv13 alert certificate required, it shows ssl client verification is required
curl -6 -k https://[fc00:ea96:ae23:2d4f:f816:3eff:fe76:d87a]:9443/
#no alternative certificate subject name matches target host name, it shows it's about sni
curl -6 --cacert /etc/octavia/certs/issuing_ca.pem https://[fc00:ea96:ae23:2d4f:f816:3eff:fe76:d87a]:9443/
curl -6 --cacert /etc/octavia/certs/issuing_ca.pem --resolve backend1.domain:9443:[fc00:ea96:ae23:2d4f:f816:3eff:fe76:d87a] https://backend1.domain:9443 -v

Test on octavia/0:
openssl s_client -connect 192.168.254.31:9443 -key /etc/octavia/certs/issuing_ca_key.pem
The final cause: CN=$DOMAIN1 was not specified when creating the certificate:

SUBJECT="/C=CN/ST=BJ/L=BJ/O=STS/OU=Joshua/CN=$DOMAIN1"
openssl req -new -nodes -subj $SUBJECT -key $DOMAIN1.key -out $DOMAIN1.csr
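
A quick way to confirm whether the CN really ended up in the CSR and the issued certificate is the python cryptography library (assuming it is installed; DOMAIN below is a placeholder for $DOMAIN1):

#!/usr/bin/env python3
# Print the subject of the CSR and certificate generated above.
from cryptography import x509

DOMAIN = 'www.server1.com'   # placeholder for $DOMAIN1

with open(DOMAIN + '.csr', 'rb') as f:
    csr = x509.load_pem_x509_csr(f.read())
print('CSR subject :', csr.subject.rfc4514_string())    # CN=... should appear here

with open(DOMAIN + '.crt', 'rb') as f:
    crt = x509.load_pem_x509_certificate(f.read())
print('Cert subject:', crt.subject.rfc4514_string())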

As we know, assert_hostname in the python requests module is used to pass the expected hostname (SNI) for the backend, see https://medium.com/@yzlin/python-requests-ssl-ip-binding-6df25a9a8f6a.
The following code (https://github.com/openstack/octavia/blob/master/octavia/amphorae/drivers/haproxy/rest_api_driver.py#L542) passes self.uuid (self.ssl_adapter.uuid = amp.id) to conn.assert_hostname.

class CustomHostNameCheckingAdapter(requests.adapters.HTTPAdapter):
    def cert_verify(self, conn, url, verify, cert):
        conn.assert_hostname = self.uuid
        return super(CustomHostNameCheckingAdapter,
                     self).cert_verify(conn, url, verify, cert)

See the design document: https://docs.openstack.org/octavia/ocata/specs/version0.5/tls-data-security.html
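
For reference, a minimal standalone sketch of that adapter pattern is shown below (URL, CA file and the expected name are placeholders; it mirrors how octavia-worker verifies the amphora agent's certificate against an id rather than the IP in the URL, and assumes a requests/urllib3 combination that still honours assert_hostname, as Octavia did at the time):

#!/usr/bin/env python3
# Minimal sketch of the assert_hostname adapter pattern used by octavia-worker.
# AMP_ID, AMP_IP and CA_FILE are placeholders.
import requests

AMP_ID = '26835e25-7c4f-4776-940f-209eb9a9e826'   # name expected in the agent's cert
AMP_IP = '192.168.254.31'
CA_FILE = '/etc/octavia/certs/issuing_ca.pem'

class CustomHostNameCheckingAdapter(requests.adapters.HTTPAdapter):
    def __init__(self, expected_hostname, **kwargs):
        self.expected_hostname = expected_hostname
        super().__init__(**kwargs)

    def cert_verify(self, conn, url, verify, cert):
        # urllib3 will match the server cert's CN/SAN against this value
        # instead of the IP literal in the URL.
        conn.assert_hostname = self.expected_hostname
        return super().cert_verify(conn, url, verify, cert)

session = requests.Session()
session.mount('https://', CustomHostNameCheckingAdapter(AMP_ID))
resp = session.get('https://%s:9443/' % AMP_IP, verify=CA_FILE, timeout=10)
print(resp.status_code)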

Installing Octavia

./generate-bundle.sh --name octavia --create-model --run --octavia -r stein --dvr-snat --num-compute 2   #can also use br-data:ens7 here
juju config neutron-openvswitch data-port="br-data:ens7"
./bin/add-data-ports.sh                      #it will add another NIC ens7 for every nova-compute nodes

Or:

# https://blog.ubuntu.com/2019/01/28/taking-octavia-for-a-ride-with-kubernetes-on-openstack
sudo snap install --classic charm
charm pull cs:openstack-base
cd openstack-base/
curl https://raw.githubusercontent.com/openstack-charmers/openstack-bundles/master/stable/overlays/loadbalancer-octavia.yaml -o loadbalancer-octavia.yaml
juju deploy ./bundle.yaml --overlay loadbalancer-octavia.yaml

Or:

sudo bash -c 'cat >overlays/octavia.yaml' <<EOF
debug:                      &debug                     True
openstack_origin:           &openstack_origin          cloud:bionic-rocky
applications:
  octavia:
    #series: bionic
    charm: cs:octavia
    num_units: 1
    constraints: mem=2G
    options:
      debug: *debug
      openstack-origin: *openstack_origin
  octavia-dashboard:
    charm: cs:octavia-dashboard
relations:
- - mysql:shared-db
  - octavia:shared-db
- - keystone:identity-service
  - octavia:identity-service
- - rabbitmq-server:amqp
  - octavia:amqp
- - neutron-api:neutron-load-balancer
  - octavia:neutron-api
- - neutron-openvswitch:neutron-plugin
  - octavia:neutron-openvswitch
- - openstack-dashboard:dashboard-plugin
  - octavia-dashboard:dashboard
EOF
sudo bash -c 'cat >overlays/octavia.yaml' <<EOF
debug:                      &debug                     True
verbose:                    &verbose                   True
openstack_origin:           &openstack_origin
applications:
  barbican:
    charm: cs:~openstack-charmers-next/barbican
    num_units: 1
    constraints: mem=1G
    options:
      debug: *debug
      openstack-origin: *openstack_origin
relations:
- [ barbican, rabbitmq-server ]
- [ barbican, mysql ]
- [ barbican, keystone ]
EOF
./generate-bundle.sh --series bionic --release rocky --barbican
juju deploy ./b/openstack.yaml --overlay ./b/o/barbican.yaml --overlay overlays/octavia.yaml

Or:

juju add-model bionic-barbican-octavia
./generate-bundle.sh --series bionic --barbican
#./generate-bundle.sh --series bionic --release rocky --barbican
juju deploy ./b/openstack.yaml --overlay ./b/o/barbican.yaml
#https://github.com/openstack-charmers/openstack-bundles/blob/master/stable/overlays/loadbalancer-octavia.yaml
#NOTE: need to comment out the to:lxd related lines in loadbalancer-octavia.yaml, and change the nova-compute num to 3
juju deploy ./b/openstack.yaml --overlay ./overlays/loadbalancer-octavia.yaml

# Or we can:
# 2018-12-25 03:30:39 DEBUG update-status fatal error: runtime: out of memory
juju deploy octavia --config openstack-origin=cloud:bionic:queens --constraints mem=4G
juju deploy octavia-dashboard
juju add-relation octavia-dashboard openstack-dashboard
juju add-relation octavia rabbitmq-server
juju add-relation octavia mysql
juju add-relation octavia keystone
juju add-relation octavia neutron-openvswitch
juju add-relation octavia neutron-api

# Initialize and unseal vault
# https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-vault.html
# https://lingxiankong.github.io/2018-07-16-barbican-introduction.html
# /snap/vault/1315/bin/vault server -config /var/snap/vault/common/vault.hcl
sudo snap install vault
export VAULT_ADDR="http://$(juju run --unit vault/0 unit-get private-address):8200"

ubuntu@zhhuabj-bastion:~$ vault operator init -key-shares=5 -key-threshold=3
Unseal Key 1: UB7XDri5FRcMLirKBIysdUb2PN7Ia5EVMP0Z9wD9Hyll
Unseal Key 2: mD8Gnr3hdB2LjjNB4ugxvvsvb8+EQQ/0AXm2p+c2qYFT
Unseal Key 3: vymYLAdou3qky24IEKDufYsZXAIPLWtErAKy/RkfgghS
Unseal Key 4: xOwDbqgNLLipsZbp+FAmVhBc3ZxA8CI3DchRc4AClRyQ
Unseal Key 5: nRlZ8WX6CS9nOw2ct5U9o0Za5jlUAtjN/6XLxjf62CnR
Initial Root Token: s.VJKGhNvIFCTgHVbQ6WvL0OLe

vault operator unseal UB7XDri5FRcMLirKBIysdUb2PN7Ia5EVMP0Z9wD9Hyll
vault operator unseal mD8Gnr3hdB2LjjNB4ugxvvsvb8+EQQ/0AXm2p+c2qYFT
vault operator unseal vymYLAdou3qky24IEKDufYsZXAIPLWtErAKy/RkfgghS
export VAULT_TOKEN=s.VJKGhNvIFCTgHVbQ6WvL0OLe
vault token create -ttl=10m
$ vault token create -ttl=10m
Key                  Value
---                  -----
token                s.7ToXh9HqE6FiiJZybFhevL9v
token_accessor       6dPkFpsPmx4D7g8yNJXvEpKN
token_duration       10m
token_renewable      true
token_policies       ["root"]
identity_policies    []
policies             ["root"]

# Authorize the vault charm to use a root token so it can create secrets storage back-ends and roles that allow other apps to access vault
juju run-action vault/0 authorize-charm token=s.7ToXh9HqE6FiiJZybFhevL9v

# upload Amphora image
source ~/stsstack-bundles/openstack/novarc
http_proxy=http://squid.internal:3128 wget http://tarballs.openstack.org/octavia/test-images/test-only-amphora-x64-haproxy-ubuntu-xenial.qcow2
#openstack image create --tag octavia-amphora --disk-format=qcow2 --container-format=bare --private amphora-haproxy-xenial --file ./test-only-amphora-x64-haproxy-ubuntu-xenial.qcow2
glance image-create --tag octavia-amphora --disk-format qcow2 --name amphora-haproxy-xenial --file ./test-only-amphora-x64-haproxy-ubuntu-xenial.qcow2 --visibility public --container-format bare --progress

cd stsstack-bundles/openstack/
./configure
./tools/sec_groups.sh
./tools/instance_launch.sh 2 xenial
neutron floatingip-create ext_net
neutron floatingip-associate $(neutron floatingip-list |grep 10.5.150.4 |awk '{print $2}') $(neutron port-list |grep '192.168.21.3' |awk '{print $2}')
or
fix_ip=192.168.21.3
public_network=$(openstack network show ext_net -f value -c id)
fip=$(openstack floating ip create $public_network -f value -c floating_ip_address)
openstack floating ip set $fip --fixed-ip-address $fix_ip --port $(openstack port list --fixed-ip ip-address=$fix_ip -c ID -f value)

cd ~/ca  #https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-octavia.html
juju config octavia \
    lb-mgmt-issuing-cacert="$(base64 controller_ca.pem)" \
    lb-mgmt-issuing-ca-private-key="$(base64 controller_ca_key.pem)" \
    lb-mgmt-issuing-ca-key-passphrase=foobar \
    lb-mgmt-controller-cacert="$(base64 controller_ca.pem)" \
    lb-mgmt-controller-cert="$(base64 controller_cert_bundle.pem)"

Note: I also hit a problem here. The command above did not update the certs inside the service VM; after ssh-ing into the service VM, the certs in /etc/octavia/certs/ were different. With the wrong certs, the amphora-agent inside the service VM cannot come up on port 9443. The certs are written into the service VM via cloud-init, so at which step did it go wrong?

Configuring resources:

# the code search 'configure_resources'
juju config octavia create-mgmt-network
juju run-action --wait octavia/0 configure-resources

# some debug ways:
openstack security group rule create $(openstack security group show lb-mgmt-sec-grp -f value -c id) --protocol udp --dst-port 546 --ethertype IPv6
openstack security group rule create $(openstack security group show lb-mgmt-sec-grp -f value -c id) --protocol icmp --ethertype IPv6
neutron security-group-rule-create --protocol icmpv6 --direction egress --ethertype IPv6 lb-mgmt-sec-grp
neutron security-group-rule-create --protocol icmpv6 --direction ingress --ethertype IPv6 lb-mgmt-sec-grp

neutron port-show octavia-health-manager-octavia-0-listen-port -f value -c status
neutron port-update --admin-state-up True octavia-health-manager-octavia-0-listen-port
AGENT=$(neutron l3-agent-list-hosting-router lb-mgmt -f value -c id)
neutron l3-agent-router-remove $AGENT lb-mgmt
neutron l3-agent-router-add $AGENT lb-mgmt

The configure-resources action above (juju run-action --wait octavia/0 configure-resources) automatically configures the IPv6 management network and creates a port named octavia-health-manager-octavia-0-listen-port whose binding:host is the octavia/0 node.

ubuntu@zhhuabj-bastion:~$ neutron router-list |grep mgmt
| 0a839377-6b19-419b-9868-616def4d749f | lb-mgmt         | null | False       | False |

ubuntu@zhhuabj-bastion:~$ neutron net-list |grep mgmt
| ae580dc8-31d6-4ec3-9d44-4a9c7b9e80b6 | lb-mgmt-net | ea9c7d5c-d224-4dd3-b40c-3acae9690657 fc00:4a9c:7b9e:80b6::/64 |

ubuntu@zhhuabj-bastion:~$ neutron subnet-list |grep mgmt
| ea9c7d5c-d224-4dd3-b40c-3acae9690657 | lb-mgmt-subnetv6 | fc00:4a9c:7b9e:80b6::/64 | {"start": "fc00:4a9c:7b9e:80b6::2", "end": "fc00:4a9c:7b9e:80b6:ffff:ffff:ffff:ffff"} |

ubuntu@zhhuabj-bastion:~$ neutron port-list |grep fc00
| 5cb6e3f3-ebe5-4284-9c05-ea272e8e599b |                                                      | fa:16:3e:9e:82:6a | {"subnet_id": "ea9c7d5c-d224-4dd3-b40c-3acae9690657", "ip_address": "fc00:4a9c:7b9e:80b6::1"}                  |
| 983c56d2-46dd-416c-abc8-5096d76f75e2 | octavia-health-manager-octavia-0-listen-port         | fa:16:3e:99:8c:ab | {"subnet_id": "ea9c7d5c-d224-4dd3-b40c-3acae9690657", "ip_address": "fc00:4a9c:7b9e:80b6:f816:3eff:fe99:8cab"} |
| af38a60d-a370-4ddb-80ac-517fda175535 |                                                      | fa:16:3e:5f:cd:ae | {"subnet_id": "ea9c7d5c-d224-4dd3-b40c-3acae9690657", "ip_address": "fc00:4a9c:7b9e:80b6:f816:3eff:fe5f:cdae"} |
| b65f90d1-2e1f-4994-a0e9-2bb13ead4cab |                                                      | fa:16:3e:10:34:84 | {"subnet_id": "ea9c7d5c-d224-4dd3-b40c-3acae9690657", "ip_address": "fc00:4a9c:7b9e:80b6:f816:3eff:fe10:3484"} |

An interface named o-hm0 is also created on octavia/0; its IP address is the same as that of the octavia-health-manager-octavia-0-listen-port port.

ubuntu@zhhuabj-bastion:~$ juju ssh octavia/0 -- ip addr show o-hm0 |grep global
Connection to 10.5.0.110 closed.
inet6 fc00:4a9c:7b9e:80b6:f816:3eff:fe99:8cab/64 scope global dynamic mngtmpaddr noprefixroute
ubuntu@zhhuabj-bastion:~$ juju ssh octavia/0 -- sudo ovs-vsctl show
490bbb36-1c7d-412d-8b44-31e6f796306a
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "gre-0a05006b"
            Interface "gre-0a05006b"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="10.5.0.110", out_key=flow, remote_ip="10.5.0.107"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-0a050016"
            Interface "gre-0a050016"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="10.5.0.110", out_key=flow, remote_ip="10.5.0.22"}
    Bridge br-data
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port phy-br-data
            Interface phy-br-data
                type: patch
                options: {peer=int-br-data}
        Port br-data
            Interface br-data
                type: internal
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "o-hm0"
            tag: 1
            Interface "o-hm0"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port int-br-data
            Interface int-br-data
                type: patch
                options: {peer=phy-br-data}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.10.0"

ubuntu@zhhuabj-bastion:~$ juju ssh neutron-gateway/0 -- sudo ovs-vsctl show
ec3e2cb6-5261-4c22-8afd-5bacb0e8ce85
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "tap62c03d3b-b1"
            tag: 2
            Interface "tap62c03d3b-b1"
        Port "tapb65f90d1-2e"
            tag: 3
            Interface "tapb65f90d1-2e"
        Port int-br-data
            Interface int-br-data
                type: patch
                options: {peer=phy-br-data}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap6f1478be-b1"
            tag: 1
            Interface "tap6f1478be-b1"
        Port "tap01efd82b-53"
            tag: 2
            Interface "tap01efd82b-53"
        Port "tap5cb6e3f3-eb"
            tag: 3
            Interface "tap5cb6e3f3-eb"
        Port br-int
            Interface br-int
                type: internal
    Bridge br-data
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "ens7"
            Interface "ens7"
        Port br-data
            Interface br-data
                type: internal
        Port phy-br-data
            Interface phy-br-data
                type: patch
                options: {peer=int-br-data}
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-0a05007a"
            Interface "gre-0a05007a"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="10.5.0.22", out_key=flow, remote_ip="10.5.0.122"}
        Port "gre-0a05006b"
            Interface "gre-0a05006b"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="10.5.0.22", out_key=flow, remote_ip="10.5.0.107"}
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-0a050079"
            Interface "gre-0a050079"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="10.5.0.22", out_key=flow, remote_ip="10.5.0.121"}
        Port "gre-0a05006e"
            Interface "gre-0a05006e"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="10.5.0.22", out_key=flow, remote_ip="10.5.0.110"}
    ovs_version: "2.10.0"

ubuntu@zhhuabj-bastion:~⟫ juju ssh neutron-gateway/0 -- cat /var/lib/neutron/ra/0a839377-6b19-419b-9868-616def4d749f.radvd.conf
interface qr-5cb6e3f3-eb
{
   AdvSendAdvert on;
   MinRtrAdvInterval 30;
   MaxRtrAdvInterval 100;
   AdvLinkMTU 1458;
   prefix fc00:4a9c:7b9e:80b6::/64
   {
        AdvOnLink on;
        AdvAutonomous on;
   };
};

ubuntu@zhhuabj-bastion:~$ openstack security group rule list lb-health-mgr-sec-grp
+--------------------------------------+-------------+----------+------------+-----------------------+
| ID                                   | IP Protocol | IP Range | Port Range | Remote Security Group |
+--------------------------------------+-------------+----------+------------+-----------------------+
| 09a92cb2-9942-44d4-8a96-9449a6758967 | None        | None     |            | None                  |
| 20daa06c-9de6-4c91-8a1e-59645f23953a | udp         | None     | 5555:5555  | None                  |
| 8f7b9966-c255-4727-a172-60f22f0710f9 | None        | None     |            | None                  |
| 90f86b27-12f8-4a9a-9924-37b31d26cbd8 | icmpv6      | None     |            | None                  |
+--------------------------------------+-------------+----------+------------+-----------------------+
ubuntu@zhhuabj-bastion:~$ openstack security group rule list lb-mgmt-sec-grp
+--------------------------------------+-------------+----------+------------+-----------------------+
| ID                                   | IP Protocol | IP Range | Port Range | Remote Security Group |
+--------------------------------------+-------------+----------+------------+-----------------------+
| 54f79f92-a6c5-411d-a309-a02b39cc384b | icmpv6      | None     |            | None                  |
| 574f595e-3d96-460e-a3f2-329818186492 | None        | None     |            | None                  |
| 5ecb0f58-f5dd-4d52-bdfa-04fd56968bd8 | tcp         | None     | 22:22      | None                  |
| 7ead3a3a-bc45-4434-b7a2-e2a6c0dc3ce9 | None        | None     |            | None                  |
| cf82d108-e0f8-4916-95d4-0c816b6eb156 | tcp         | None     | 9443:9443  | None                  |
+--------------------------------------+-------------+----------+------------+-----------------------+

ubuntu@zhhuabj-bastion:~$ source ~/novarc
ubuntu@zhhuabj-bastion:~$ openstack security group rule list default
+--------------------------------------+-------------+-----------+------------+--------------------------------------+
| ID                                   | IP Protocol | IP Range  | Port Range | Remote Security Group                |
+--------------------------------------+-------------+-----------+------------+--------------------------------------+
| 15b56abd-c2af-4c0a-8585-af68a8f09e3c | icmpv6      | None      |            | None                                 |
| 2ad77fa3-32c7-4a20-a572-417bea782eff | icmp        | 0.0.0.0/0 |            | None                                 |
| 2c2aec15-e4ad-4069-abd2-0191fe80f9bb | None        | None      |            | None                                 |
| 3b775807-3c61-45a3-9677-aaf9631db677 | udp         | 0.0.0.0/0 | 3389:3389  | None                                 |
| 3e9a6e7f-b9a2-47c9-97ca-042b22fbf308 | icmpv6      | None      |            | None                                 |
| 42a3c09e-91c8-471d-b4a8-c1fe87dab066 | None        | None      |            | None                                 |
| 47f9cec2-4bc0-4d71-9a02-3a27d46b59f8 | icmp        | None      |            | None                                 |
| 94297175-9439-4df2-8c93-c5576e52e138 | udp         | None      | 546:546    | None                                 |
| 9c6ac9d2-3b9e-4bab-a55a-04a1679b66be | None        | None      |            | c48a1bf5-7b7e-4337-afdf-8057ae8025af |
| b6e95f76-1b64-4135-8b62-b058ec989f7e | None        | None      |            | c48a1bf5-7b7e-4337-afdf-8057ae8025af |
| de5132a5-72e2-4f03-8b6a-dcbc2b7811c3 | tcp         | 0.0.0.0/0 | 3389:3389  | None                                 |
| e72bea9f-84ce-4e3a-8597-c86d40b9b5ef | tcp         | 0.0.0.0/0 | 22:22      | None                                 |
| ecf1415c-c6e9-4cf6-872c-4dac1353c014 | tcp         | 0.0.0.0/0 |            | None                                 |
+--------------------------------------+-------------+-----------+------------+--------------------------------------+

On the underlying OpenStack environment (OpenStack over OpenStack) you also need to do the following (see: https://blog.csdn.net/quqi99/article/details/78437988):

openstack security group rule create $secgroup --protocol udp --dst-port 546 --ethertype IPv6

The most common problem is that the octavia-health-manager-octavia-0-listen-port port is DOWN, so o-hm0 has no connectivity and cannot get an IP from the DHCP server. When the network is unreachable, it is usually a problem with the flow rules on br-int. I hit this several times, but after rebuilding the environment it somehow went away.

root@juju-50fb86-bionic-rocky-barbican-octavia-11:~# ovs-ofctl dump-flows br-int
 cookie=0x5dc634635bd398eb, duration=424018.932s, table=0, n_packets=978, n_bytes=76284, priority=10,icmp6,in_port="o-hm0",icmp_type=136 actions=resubmit(,24)
 cookie=0x5dc634635bd398eb, duration=424018.930s, table=0, n_packets=0, n_bytes=0, priority=10,arp,in_port="o-hm0" actions=resubmit(,24)
 cookie=0x5dc634635bd398eb, duration=425788.219s, table=0, n_packets=0, n_bytes=0, priority=2,in_port="int-br-data" actions=drop
 cookie=0x5dc634635bd398eb, duration=424018.943s, table=0, n_packets=10939, n_bytes=2958167, priority=9,in_port="o-hm0" actions=resubmit(,25)
 cookie=0x5dc634635bd398eb, duration=425788.898s, table=0, n_packets=10032, n_bytes=1608826, priority=0 actions=resubmit(,60)
 cookie=0x5dc634635bd398eb, duration=425788.903s, table=23, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0x5dc634635bd398eb, duration=424018.940s, table=24, n_packets=675, n_bytes=52650, priority=2,icmp6,in_port="o-hm0",icmp_type=136,nd_target=fc00:4a9c:7b9e:80b6:f816:3eff:fe99:8cab actions=resubmit(,60)
 cookie=0x5dc634635bd398eb, duration=424018.938s, table=24, n_packets=0, n_bytes=0, priority=2,icmp6,in_port="o-hm0",icmp_type=136,nd_target=fe80::f816:3eff:fe99:8cab actions=resubmit(,60)
 cookie=0x5dc634635bd398eb, duration=425788.879s, table=24, n_packets=303, n_bytes=23634, priority=0 actions=drop
 cookie=0x5dc634635bd398eb, duration=424018.951s, table=25, n_packets=10939, n_bytes=2958167, priority=2,in_port="o-hm0",dl_src=fa:16:3e:99:8c:ab actions=resubmit(,60)
 cookie=0x5dc634635bd398eb, duration=425788.896s, table=60, n_packets=21647, n_bytes=4620009, priority=3 actions=NORMAL

root@juju-50fb86-bionic-rocky-barbican-octavia-11:~# ovs-ofctl dump-flows br-data
 cookie=0xb41c0c7781ded568, duration=426779.130s, table=0, n_packets=16816, n_bytes=3580386, priority=2,in_port="phy-br-data" actions=drop
 cookie=0xb41c0c7781ded568, duration=426779.201s, table=0, n_packets=0, n_bytes=0, priority=0 actions=NORMAL

If o-hm0 never manages to get an IP, we can also manually configure an IPv4 management network instead:

neutron router-gateway-clear lb-mgmt
neutron router-interface-delete lb-mgmt lb-mgmt-subnetv6
neutron subnet-delete lb-mgmt-subnetv6
neutron port-list |grep fc00
#neutron port-delete 464e6d47-9830-4966-a2b7-e188c19c407a
openstack subnet create --subnet-range 192.168.0.0/24 --allocation-pool start=192.168.0.2,end=192.168.0.200 --network lb-mgmt-net lb-mgmt-subnet
neutron router-interface-add lb-mgmt lb-mgmt-subnet
#neutron router-gateway-set lb-mgmt ext_net
neutron port-list |grep 192.168.0.1

#openstack security group create lb-mgmt-sec-grp --project $(openstack security group show lb-mgmt-sec-grp -f value -c project_id)
openstack security group rule create --protocol udp --dst-port 5555 lb-mgmt-sec-grp
openstack security group rule create --protocol tcp --dst-port 22 lb-mgmt-sec-grp
openstack security group rule create --protocol tcp --dst-port 9443 lb-mgmt-sec-grp
openstack security group rule create --protocol icmp lb-mgmt-sec-grp
openstack security group show lb-mgmt-sec-grp
openstack security group rule create --protocol udp --dst-port 5555 lb-health-mgr-sec-grp
openstack security group rule create --protocol icmp lb-health-mgr-sec-grp

# create a management port o-hm0 on the octavia/0 node: first use neutron to allocate a port, then call ovs-vsctl to add-port
LB_HOST=$(juju ssh octavia/0 -- hostname)
juju ssh octavia/0 -- sudo ovs-vsctl del-port br-int o-hm0
# Use LB_HOST to replace juju-70ea4e-bionic-barbican-octavia-11, don't know why it said 'bind failed' when using $LB_HOST directly
neutron port-create --name mgmt-port --security-group $(openstack security group show lb-health-mgr-sec-grp -f value -c id) --device-owner Octavia:health-mgr --binding:host_id=juju-acadb9-bionic-rocky-barbican-octavia-without-vault-9 lb-mgmt-net --tenant-id $(openstack security group show lb-health-mgr-sec-grp -f value -c project_id)

juju ssh octavia/0 -- sudo ovs-vsctl --may-exist add-port br-int o-hm0 -- set Interface o-hm0 type=internal -- set Interface o-hm0 external-ids:iface-status=active -- set Interface o-hm0 external-ids:attached-mac=$(neutron port-show mgmt-port -f value -c mac_address) -- set Interface o-hm0 external-ids:iface-id=$(neutron port-show mgmt-port -f value -c id)
juju ssh octavia/0 -- sudo ip link set dev o-hm0 address $(neutron port-show mgmt-port -f value -c mac_address)
ping 192.168.0.2

Installing an HTTPS test service in the test VMs

# Prepare CA and ssl pairs for lb server
openssl genrsa -passout pass:password -out ca.key
openssl req -x509 -passin pass:password -new -nodes -key ca.key -days 3650 -out ca.crt -subj "/C=CN/ST=BJ/O=STS"
openssl genrsa -passout pass:password -out lb.key
openssl req -new -key lb.key -out lb.csr -subj "/C=CN/ST=BJ/O=STS/CN=www.quqi.com"
openssl x509 -req -in lb.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out lb.crt -days 3650
cat lb.crt lb.key > lb.pem
#openssl pkcs12 -export -inkey lb.key -in lb.crt -certfile ca.crt -passout pass:password -out lb.p12

# Create two test servers and run
sudo apt install python-minimal -y
sudo bash -c 'cat >simple-https-server.py' <<EOF
#!/usr/bin/env python
# coding=utf-8
import BaseHTTPServer, SimpleHTTPServer
import ssl
httpd = BaseHTTPServer.HTTPServer(('0.0.0.0', 443), SimpleHTTPServer.SimpleHTTPRequestHandler)
httpd.socket = ssl.wrap_socket (httpd.socket, certfile='./lb.pem', server_side=True)
httpd.serve_forever()
EOF
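
If the test VM only ships Python 3 (the script above is Python 2), a rough equivalent using http.server and ssl.SSLContext would be:

#!/usr/bin/env python3
# Hypothetical Python 3 equivalent of simple-https-server.py above
import http.server
import ssl

httpd = http.server.HTTPServer(('0.0.0.0', 443), http.server.SimpleHTTPRequestHandler)
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain('./lb.pem')                 # cert+key bundle created above
httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()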
sudo bash -c 'cat >index.html' <<EOF
test1
EOF
nohup sudo python simple-https-server.py &
ubuntu@zhhuabj-bastion:~$ curl -k https://10.5.150.4
test1
ubuntu@zhhuabj-bastion:~$ curl -k https://10.5.150.5
test2
ubuntu@zhhuabj-bastion:~$ curl --cacert ~/ca/ca.crt https://10.5.150.4
curl: (51) SSL: certificate subject name (www.quqi.com) does not match target host name '10.5.150.4'
ubuntu@zhhuabj-bastion:~$ curl --resolve www.quqi.com:443:10.5.150.4 --cacert ~/ca/ca.crt https://www.quqi.com
test1
ubuntu@zhhuabj-bastion:~$ curl --resolve www.quqi.com:443:10.5.150.4 -k https://www.quqi.com
test1

Update 2019-11-09: the method above does not work when using --cacert; the following works instead:

openssl req -new -x509 -keyout lb.pem -out lb.pem -days 365 -nodes -subj "/C=CN/ST=BJ/O=STS/CN=www.quqi.com"
$ curl --resolve www.quqi.com:443:192.168.99.135 --cacert ./lb.pem https://www.quqi.com
test1

After further debugging, it turned out that the command "curl --resolve www.quqi.com:443:10.5.150.4 --cacert ~/ca/ca.crt https://www.quqi.com" should have been "curl --resolve www.quqi.com:443:10.5.150.4 --cacert ~/ca/lb.pem https://www.quqi.com".

Or install SSL via apache2:

vim /etc/apache2/sites-available/default-ssl.conf
    ServerName server1.com
    ServerAlias www.server1.com
    SSLCertificateFile      /home/ubuntu/www.server1.com.crt
    SSLCertificateKeyFile   /home/ubuntu/www.server1.com.key

vim /etc/apache2/sites-available/000-default.conf
    ServerName server1.com
    ServerAlias www.server1.com

sudo apachectl configtest
sudo a2enmod ssl
sudo a2ensite default-ssl
sudo systemctl restart apache2.service

Installing an HTTP test service in the test VMs

The HTTP test service below is actually problematic; it causes haproxy to report the following when checking the backend:

ae847e94-5aeb-4da6-9b66-07e1a385465b is UP, reason: Layer7 check passed, code: 200, info: "HTTP status check returned code <3C>200<3E>", check duration: 7ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
1, deploy http server in backend
MYIP=$(ifconfig ens2|grep 'inet addr'|awk -F: '{print $2}'| awk '{print $1}')
while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP" | sudo nc -l -p 80 ; done
2, test it
sudo ip netns exec qrouter-2ea1fc45-69e5-4c77-b6c9-d4fabc57145b curl http://192.168.21.7:80
3, add it into haproxy
openstack loadbalancer member create --subnet-id private_subnet --address 192.168.21.7 --protocol-port 80 pool1

It also causes a Bad Gateway error when curl-testing against the haproxy VIP.
In the end the problem was solved by running "sudo python -m SimpleHTTPServer 80" on the backend instead.
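
On images that only have Python 3, the equivalent of "python -m SimpleHTTPServer 80" is "python3 -m http.server 80", or as a tiny script:

#!/usr/bin/env python3
# Same effect as "python3 -m http.server 80": serve the current directory over plain HTTP.
import http.server
import socketserver

with socketserver.TCPServer(('0.0.0.0', 80), http.server.SimpleHTTPRequestHandler) as httpd:
    httpd.serve_forever()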

How to ssh into amphora service vm

# NOTE: (base64 -w 0 $HOME/.ssh/id_amphora.pub)
sudo mkdir -p /etc/octavia/.ssh && sudo chown -R $(id -u):$(id -g) /etc/octavia/.ssh
ssh-keygen -b 2048 -t rsa -N "" -f /etc/octavia/.ssh/octavia_ssh_key
openstack user list --domain service_domain
# NOTE: we must add '--user' option to avoid the error 'Invalid key_name provided'
nova keypair-add --pub-key=/etc/octavia/.ssh/octavia_ssh_key.pub octavia_ssh_key --user $(openstack user show octavia --domain service_domain -f value -c id)
# openstack cli doesn't support to list user scope keypairs
nova keypair-list --user $(openstack user show octavia --domain service_domain -f value -c id)

vim /etc/octavia/octavia.conf
vim /var/lib/juju/agents/unit-octavia-0/charm/templates/rocky/octavia.conf
vim /usr/lib/python3/dist-packages/octavia/compute/drivers/nova_driver.py
vim /usr/lib/python3/dist-packages/octavia/controller/worker/tasks/compute_tasks.py  #import pdb;pdb.set_trace()
[controller_worker]
amp_ssh_key_name = octavia_ssh_key   #for sts, name is called amphora-backdoor
sudo ip netns exec qrouter-0a839377-6b19-419b-9868-616def4d749f ssh -6 -i ~/octavia_ssh_key ubuntu@fc00:4a9c:7b9e:80b6:f816:3eff:fe5f:cdae

NOTE:
we can't ssh by: ssh -i /etc/octavia/octavia_ssh_key ubuntu@10.5.150.15
but we can ssh by:
sudo ip netns exec qrouter-0a839377-6b19-419b-9868-616def4d749f ssh -i ~/octavia_ssh_key ubuntu@192.168.0.12 -v
ubuntu@zhhuabj-bastion:~$ nova list --all
+--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+-------------------------------------------------------------+
| ID                                   | Name                                         | Tenant ID                        | Status | Task State | Power State | Networks                                                    |
+--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+-------------------------------------------------------------+
| 1f50fa16-bbbe-47a7-b66b-86de416d0c5e | amphora-2fae038f-08b5-4bce-a2b7-f7bd543c0314 | 5165bc7f79304f67a135fcde3cd78ae1 | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.12; private=192.168.21.6, 10.5.150.15 |

openstack security group rule create lb-300e5ee7-2793-4df3-b901-17ce76da0b09 --protocol icmp --remote-ip 0.0.0.0/0
openstack security group rule create lb-300e5ee7-2793-4df3-b901-17ce76da0b09 --protocol tcp --dst-port 22

Deploy a non-terminated HTTPS load balancer

sudo apt install python-octaviaclient python3-octaviaclient
openstack complete |sudo tee /etc/bash_completion.d/openstack
source <(openstack complete)
#No module named 'oslo_log'
#openstack loadbalancer delete --cascade --wait lb1
openstack loadbalancer create --name lb1 --vip-subnet-id private_subnet
#lb_vip_port_id=$(openstack loadbalancer create -f value -c vip_port_id --name lb1 --vip-subnet-id private_subnet)
# Re-run the following until lb1 shows ACTIVE and ONLINE statuses:
openstack loadbalancer show lb1
nova list --all
openstack loadbalancer listener create --name listener1 --protocol HTTPS --protocol-port 443 lb1
openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTPS
#openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type HTTPS --url-path / pool1
openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TLS-HELLO pool1
openstack loadbalancer member create --subnet-id private_subnet --address 192.168.21.10 --protocol-port 443 pool1
openstack loadbalancer member create --subnet-id private_subnet --address 192.168.21.12 --protocol-port 443 pool1
openstack loadbalancer member list pool1

root@amphora-a9cf0b97-7f30-4b9c-b16a-7bc54526e0d0:~# ps -ef |grep haproxy
root      1459     1  0 04:34 ?        00:00:00 /usr/sbin/haproxy-systemd-wrapper -f /var/lib/octavia/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0/haproxy.cfg -f /var/lib/octavia/haproxy-default-user-group.conf -p /var/lib/octavia/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0.pid -L UlKGE8M_cxJTcktjV8M-eKJkh-g
nobody    1677  1459  0 04:35 ?        00:00:00 /usr/sbin/haproxy -f /var/lib/octavia/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0/haproxy.cfg -f /var/lib/octavia/haproxy-default-user-group.conf -p /var/lib/octavia/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0.pid -L UlKGE8M_cxJTcktjV8M-eKJkh-g -Ds -sf 1625
nobody    1679  1677  0 04:35 ?        00:00:00 /usr/sbin/haproxy -f /var/lib/octavia/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0/haproxy.cfg -f /var/lib/octavia/haproxy-default-user-group.conf -p /var/lib/octavia/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0.pid -L UlKGE8M_cxJTcktjV8M-eKJkh-g -Ds -sf 1625
root      1701  1685  0 04:36 pts/0    00:00:00 grep --color=auto haproxy
root@amphora-a9cf0b97-7f30-4b9c-b16a-7bc54526e0d0:~#
root@amphora-a9cf0b97-7f30-4b9c-b16a-7bc54526e0d0:~# cat /var/lib/octavia/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0/haproxy.cfg
# Configuration for loadbalancer eda3efa5-dd91-437c-81d9-b73d28b5312f
global
    daemon
    user nobody
    log /dev/log local0
    log /dev/log local1 notice
    stats socket /var/lib/octavia/b9d5a192-1a6a-4df7-83d4-fe96ac9574c0.sock mode 0666 level user
    maxconn 1000000

defaults
    log global
    retries 3
    option redispatch

frontend b9d5a192-1a6a-4df7-83d4-fe96ac9574c0
    option tcplog
    maxconn 1000000
    bind 192.168.21.16:443
    mode tcp
    default_backend 502b6689-40ad-4201-b704-f221e0fddd58
    timeout client 50000

backend 502b6689-40ad-4201-b704-f221e0fddd58
    mode tcp
    balance roundrobin
    timeout check 10s
    option ssl-hello-chk
    fullconn 1000000
    option allbackups
    timeout connect 5000
    timeout server 50000
    server 49f16402-69f4-49bb-8dc0-5ec13a0f1791 192.168.21.10:443 weight 1 check inter 5s fall 3 rise 4
    server 1ab624e1-9cd8-49f3-9297-4fa031a3ca58 192.168.21.12:443 weight 1 check inter 5s fall 3 rise 4

Sometimes the service VM has been created successfully, but octavia-worker exits for the following reason, so after running "openstack loadbalancer create --name lb1 --vip-subnet-id private_subnet" the status never becomes correct:

2019-01-04 06:30:45.574 8173 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service ConsumerService(0) [8173]
2019-01-04 06:30:45.573 7983 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
2019-01-04 06:30:45.581 8173 INFO octavia.controller.queue.consumer [-] Stopping consumer...
2019-01-04 06:30:45.593 8173 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service ConsumerService(0) [8173]

Deploy a TLS-terminated HTTPS load balancer

# The key really must not have a passphrase: https://opendev.org/openstack/octavia/commit/a501714a76e04b33dfb24c4ead9956ed4696d1df
openssl genrsa -passout pass:password -out ca.key
openssl req -x509 -passin pass:password -new -nodes -key ca.key -days 3650 -out ca.crt -subj "/C=CN/ST=BJ/O=STS"
openssl genrsa -passout pass:password -out lb.key
openssl req -new -key lb.key -out lb.csr -subj "/C=CN/ST=BJ/O=STS/CN=www.quqi.com"
openssl x509 -req -in lb.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out lb.crt -days 3650
cat lb.crt lb.key > lb.pem
openssl pkcs12 -export -inkey lb.key -in lb.crt -certfile ca.crt -passout pass:password -out lb.p12

sudo apt install python-barbicanclient
#openstack secret delete $(openstack secret list | awk '/ tls_lb_secret / {print $2}')
openstack secret store --name='tls_lb_secret' -t 'application/octet-stream' -e 'base64' --payload="$(base64 < lb.p12)"
openstack acl user add -u $(openstack user show octavia --domain service_domain -f value -c id) $(openstack secret list | awk '/ tls_lb_secret / {print $2}')
openstack secret list
openstack loadbalancer create --name lb1 --vip-subnet-id private_subnet
# Re-run the following until lb1 shows ACTIVE and ONLINE statuses:
openstack loadbalancer show lb1

openstack loadbalancer member list pool1
openstack loadbalancer member delete pool1 <member>
openstack loadbalancer pool delete pool1
openstack loadbalancer listener delete listener1
openstack loadbalancer listener create --protocol-port 443 --protocol TERMINATED_HTTPS --name listener1 --default-tls-container=$(openstack secret list | awk '/ tls_lb_secret / {print $2}') lb1
openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1
openstack loadbalancer member create --subnet-id private_subnet --address 192.168.21.10 --protocol-port 80 pool1
openstack loadbalancer member create --subnet-id private_subnet --address 192.168.21.12 --protocol-port 80 pool1

But it failed with:

ubuntu@zhhuabj-bastion:~/ca⟫ openstack loadbalancer listener create --protocol-port 443 --protocol TERMINATED_HTTPS --name listener1 --default-tls-container=$(openstack secret list | awk '/ tls_lb_secret / {print $2}') lb1
Could not retrieve certificate: ['http://10.5.0.25:9312/v1/secrets/7c706fb2-4319-46fc-b78d-81f34393f581'] (HTTP 400) (Request-ID: req-c0c0e4d5-f395-424c-9aab-5c4c4e72fb3d)

The cause of the error was found: no passphrase may be used when creating the keys:

openssl genrsa -out ca.key
openssl req -x509 -new -nodes -key ca.key -days 3650 -out ca.crt -subj "/C=CN/ST=BJ/O=STS"
openssl genrsa -out lb.key
openssl req -new -key lb.key -out lb.csr -subj "/C=CN/ST=BJ/O=STS/CN=www.quqi.com"
openssl x509 -req -in lb.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out lb.crt -days 3650
cat lb.crt lb.key > lb.pem
openssl pkcs12 -export -inkey lb.key -in lb.crt -certfile ca.crt -passout pass: -out lb.p12
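
For reference, the same unencrypted PKCS12 bundle can also be produced with the python cryptography library if openssl is not handy (a sketch; the file names match the commands above and a reasonably recent cryptography release with the pkcs12 module is assumed):

#!/usr/bin/env python3
# Build an unencrypted PKCS12 bundle (lb.p12) from lb.key/lb.crt/ca.crt, mirroring
# the "openssl pkcs12 -export ... -passout pass:" command above.
from cryptography import x509
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.serialization import pkcs12

key = serialization.load_pem_private_key(open('lb.key', 'rb').read(), password=None)
cert = x509.load_pem_x509_certificate(open('lb.crt', 'rb').read())
ca = x509.load_pem_x509_certificate(open('ca.crt', 'rb').read())

p12 = pkcs12.serialize_key_and_certificates(
    name=b'lb', key=key, cert=cert, cas=[ca],
    encryption_algorithm=serialization.NoEncryption())   # Barbican needs no passphrase

with open('lb.p12', 'wb') as f:
    f.write(p12)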

But it still did not work. The cause has now been identified: it has nothing to do with the key; the step (octavia_user_id=$(openstack user show octavia --domain service_domain -f value -c id); openstack acl user add -u $octavia_user_id $secret_id) simply had not been run beforehand. A complete script follows:

#https://wiki.openstack.org/wiki/Network/LBaaS/docs/how-to-create-tls-loadbalancer#Update_neutron_config
#https://lingxiankong.github.io/2018-04-29-octavia-tls-termination-test.html
DOMAIN1=www.server1.com
DOMAIN2=www.server2.com

echo "Create CA cert(self-signed) and key..."
CA_SUBJECT="/C=CN/ST=BJ/L=BJ/O=STS/OU=Joshua/CN=CA"
openssl req -new -x509 -nodes -days 3650 -newkey rsa:2048 -keyout ca.key -out ca.crt -subj $CA_SUBJECT

openssl genrsa -des3 -out ${DOMAIN1}_encrypted.key 1024
openssl rsa -in ${DOMAIN1}_encrypted.key -out $DOMAIN1.key
openssl genrsa -des3 -out ${DOMAIN2}_encrypted.key 1024
openssl rsa -in ${DOMAIN2}_encrypted.key -out $DOMAIN2.key

echo "Create server certificate signing request..."
SUBJECT="/C=CN/ST=BJ/L=BJ/O=STS/OU=Joshua/CN=$DOMAIN1"
openssl req -new -nodes -subj $SUBJECT -key $DOMAIN1.key -out $DOMAIN1.csr
SUBJECT="/C=CN/ST=BJ/L=BJ/O=STS/OU=Joshua/CN=$DOMAIN2"
openssl req -new -nodes -subj $SUBJECT -key $DOMAIN2.key -out $DOMAIN2.csr

echo "Sign SSL certificate..."
openssl x509 -req -days 3650 -in $DOMAIN1.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out $DOMAIN1.crt
openssl x509 -req -days 3650 -in $DOMAIN2.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out $DOMAIN2.crt

# NOTE: the p12 must be exported without a password when using barbican to store it
openssl pkcs12 -export -inkey www.server1.com.key -in www.server1.com.crt -certfile ca.crt -passout pass: -out www.server1.com.p12
openssl pkcs12 -export -inkey www.server2.com.key -in www.server2.com.crt -certfile ca.crt -passout pass: -out www.server2.com.p12

secret1_id=$(openstack secret store --name='lb_tls_secret_1' -t 'application/octet-stream' -e 'base64' --payload="$(base64 < www.server1.com.p12)" -f value -c "Secret href")
secret2_id=$(openstack secret store --name='lb_tls_secret_2' -t 'application/octet-stream' -e 'base64' --payload="$(base64 < www.server2.com.p12)" -f value -c "Secret href")

# allow the octavia service user to access the cert saved in barbican by the user in the novarc
octavia_user_id=$(openstack user show octavia --domain service_domain -f value -c id); echo $octavia_user_id;
openstack acl user add -u $octavia_user_id $secret1_id
openstack acl user add -u $octavia_user_id $secret2_id

IP=192.168.21.7
subnetid=$(openstack subnet show private_subnet -f value -c id); echo $subnetid
#lb_id=$(openstack loadbalancer create --name lb3 --vip-subnet-id $subnetid -f value -c id); echo $lb_id
lb_id=22ce64e5-585d-43bd-80eb-5c3ff22abacd
listener_id=$(openstack loadbalancer listener create $lb_id --name https_listener --protocol-port 443 --protocol TERMINATED_HTTPS --default-tls-container=$secret1_id --sni-container-refs $secret1_id $secret2_id -f value -c id); echo $listener_id
pool_id=$(openstack loadbalancer pool create --protocol HTTP --listener $listener_id --lb-algorithm ROUND_ROBIN -f value -c id); echo $pool_id
openstack loadbalancer member create --address ${IP} --subnet-id $subnetid --protocol-port 80 $pool_id

public_network=$(openstack network show ext_net -f value -c id)
fip=$(openstack floating ip create $public_network -f value -c floating_ip_address)
vip=$(openstack loadbalancer show $lb_id -c vip_address -f value)
vip_port=$(openstack port list --fixed-ip ip-address=$vip -c ID -f value)
#openstack floating ip set $fip --fixed-ip-address $vip --port $vip_port
neutron floatingip-associate $fip $vip_port

curl -k https://$fip
curl --resolve www.server1.com:443:$fip --cacert ~/ca3/ca.crt https://www.server1.com
curl --resolve www.server2.com:443:$fip --cacert ~/ca3/ca.crt https://www.server2.com

nobody    2202  2200  0 07:23 ?        00:00:00 /usr/sbin/haproxy -f /var/lib/octavia/dfa44538-2c12-411b-b3b3-c709bc139523/haproxy.cfg -f /var/lib/octavia/haproxy-default-user-group.conf -p /var/lib/octavia/dfa44538-2c12-411b-b3b3-c709bc139523/dfa44538-2c12-411b-b3b3-c709bc139523.pid -L 1_a8OAWpvKuB7hMNzt8UwaJ2M00 -Ds -sf 2148

root@amphora-2fae038f-08b5-4bce-a2b7-f7bd543c0314:~# cat /var/lib/octavia/dfa44538-2c12-411b-b3b3-c709bc139523/haproxy.cfg
# Configuration for loadbalancer 22ce64e5-585d-43bd-80eb-5c3ff22abacd
global
    daemon
    user nobody
    log /dev/log local0
    log /dev/log local1 notice
    stats socket /var/lib/octavia/dfa44538-2c12-411b-b3b3-c709bc139523.sock mode 0666 level user
    maxconn 1000000

defaults
    log global
    retries 3
    option redispatch

frontend dfa44538-2c12-411b-b3b3-c709bc139523
    option httplog
    maxconn 1000000
    redirect scheme https if !{ ssl_fc }
    bind 192.168.21.16:443 ssl crt /var/lib/octavia/certs/dfa44538-2c12-411b-b3b3-c709bc139523/b16771bdb053d138575d60e3035d77fa0598ef5c.pem crt /var/lib/octavia/certs/dfa44538-2c12-411b-b3b3-c709bc139523
    mode http
    default_backend e6c4444a-a06e-4c1c-80d4-389516059d46
    timeout client 50000

backend e6c4444a-a06e-4c1c-80d4-389516059d46
    mode http
    balance roundrobin
    fullconn 1000000
    option allbackups
    timeout connect 5000
    timeout server 50000
    server 234ff0d3-5196-4536-bd49-dfbab94732d4 192.168.21.7:80 weight 1

The remaining problem is that the service VM cannot reach the backend 192.168.21.7. In principle it should work like this:
Octavia's _plug_amphora_vip method adds a VIP port (octavia-lb-vrrp-7e56de03-298e-43dd-a78f-33aa8d4af735); it should attach another port to the amphora VM and then add allowed_address_pairs for that VIP. But we did not find this newly added VIP NIC inside the amphora VM, and re-running the 'nova interface-attach' below did not help either:
nova list --all
nova interface-attach --port-id $(neutron port-show octavia-lb-vrrp-f63f0c5b-a541-442a-929c-b8ed7f7b3604 -f value -c id) 044f42c9-d205-4a11-aa8f-6b9aea896861
The following approach did not help either:

lb_id=$(openstack loadbalancer show lb3 -f value -c id)
old_vip=$(openstack loadbalancer show lb3 -f value -c vip_address)
private_subnet_id=$(neutron subnet-show private_subnet -f value -c id)
# delete old vip port (named 'octavia-lb-$lb_id')
neutron port-delete octavia-lb-$lb_id
# create new vip port with the same name and vip and binding:host_id is amphora service vm's host
# nova show $(nova list --all |grep amphora |awk '{print $2}') |grep OS-EXT-SRV-ATTR:host
neutron port-create --name octavia-lb-$lb_id --device-owner Octavia --binding:host_id=juju-50fb86-bionic-rocky-barbican-octavia-8 --fixed-ip subnet_id=${private_subnet_id},ip_address=${old_vip} private
mac=$(neutron port-show octavia-lb-$lb_id -f value -c mac_address)
nova interface-attach --port-id $(neutron port-show octavia-lb-$lb_id -f value -c id) $(nova list --all |grep amphora |awk '{print $2}')

Next I found admin-state-up was False, but even after enabling it (neutron port-update --admin-state-up True 6dc5b0bd-d0b7-4b29-9fce-b8f7b23c7914) the status was still DOWN. Continuing to inspect the settings:
root@amphora-3d906381-f49f-4efa-bfa7-b5f84eb3b1af:~# cat /etc/netns/amphora-haproxy/network/interfaces
auto lo
iface lo inet loopback
source /etc/netns/amphora-haproxy/network/interfaces.d/*.cfg

root@amphora-3d906381-f49f-4efa-bfa7-b5f84eb3b1af:~# cat /etc/netns/amphora-haproxy/network/interfaces.d/eth1.cfg
# Generated by Octavia agent
auto eth1 eth1:0
iface eth1 inet static
address 192.168.21.34
broadcast 192.168.21.255
netmask 255.255.255.0
gateway 192.168.21.1
mtu 1458
iface eth1:0 inet static
address 192.168.21.5
broadcast 192.168.21.255
netmask 255.255.255.0
# Add a source routing table to allow members to access the VIP
post-up /sbin/ip route add default via 192.168.21.1 dev eth1 onlink table 1
post-down /sbin/ip route del default via 192.168.21.1 dev eth1 onlink table 1
post-up /sbin/ip route add 192.168.21.0/24 dev eth1 src 192.168.21.5 scope link table 1
post-down /sbin/ip route del 192.168.21.0/24 dev eth1 src 192.168.21.5 scope link table 1
post-up /sbin/ip rule add from 192.168.21.5/32 table 1 priority 100
post-down /sbin/ip rule del from 192.168.21.5/32 table 1 priority 100
post-up /sbin/iptables -t nat -A POSTROUTING -p udp -o eth1 -j MASQUERADE
post-down /sbin/iptables -t nat -D POSTROUTING -p udp -o eth1 -j MASQUERADE

root@amphora-3d906381-f49f-4efa-bfa7-b5f84eb3b1af:~# cat /var/lib/octavia/plugged_interfaces
fa:16:3e:e2:3a:7f eth1

ip netns exec amphora-haproxy ifdown eth1
ip netns exec amphora-haproxy ifup eth1
ip netns exec amphora-haproxy ifup eth1.0
ip netns exec amphora-haproxy ip addr show

But eth1.0 could not be brought up with ifup:
root@amphora-3d906381-f49f-4efa-bfa7-b5f84eb3b1af:~# ip netns exec amphora-haproxy ifup eth1.0
Unknown interface eth1.0

So do it manually instead:
ip netns exec amphora-haproxy ifdown eth1
ip netns exec amphora-haproxy ip addr add 192.168.21.34/24 dev eth1
ip netns exec amphora-haproxy ip addr add 192.168.21.5/24 dev eth1
ip netns exec amphora-haproxy ifconfig eth1 up
ip netns exec amphora-haproxy ip route add default via 192.168.21.1 dev eth1 onlink table 1
ip netns exec amphora-haproxy ip route add 192.168.21.0/24 dev eth1 src 192.168.21.5 scope link table 1
ip netns exec amphora-haproxy ip rule add from 192.168.21.5/32 table 1 priority 100
ip netns exec amphora-haproxy iptables -t nat -A POSTROUTING -p udp -o eth1 -j MASQUERADE

Note: to rebuild the image, see https://github.com/openstack/octavia/tree/master/diskimage-create

At this point the backend VM 192.168.21.7 can be pinged from the amphora-haproxy namespace:
root@amphora-3d906381-f49f-4efa-bfa7-b5f84eb3b1af:~# ip netns exec amphora-haproxy ping -c 1 192.168.21.7
PING 192.168.21.7 (192.168.21.7) 56(84) bytes of data.
64 bytes from 192.168.21.7: icmp_seq=1 ttl=64 time=3.83 ms

But the VIP 192.168.21.5 still cannot be pinged from the neutron-gateway node:
root@juju-50fb86-bionic-rocky-barbican-octavia-6:~# ip netns exec qrouter-2ea1fc45-69e5-4c77-b6c9-d4fabc57145b ping 192.168.21.5
PING 192.168.21.5 (192.168.21.5) 56(84) bytes of data.

Next I added allowed-address-pairs to this HA port, still without success:
neutron port-update 6dc5b0bd-d0b7-4b29-9fce-b8f7b23c7914 --allowed-address-pairs type=dict list=true mac_address=fa:16:3e:e2:3a:7f,ip_address=192.168.21.5
root@juju-50fb86-bionic-rocky-barbican-octavia-9:~# iptables-save |grep 'IP/MAC pairs'
-A neutron-openvswi-s650aa1af-5 -s 192.168.21.5/32 -m mac --mac-source FA:16:3E:E2:3A:7F -m comment --comment "Allow traffic from defined IP/MAC pairs." -j RETURN
-A neutron-openvswi-s650aa1af-5 -s 192.168.21.34/32 -m mac --mac-source FA:16:3E:E2:3A:7F -m comment --comment "Allow traffic from defined IP/MAC pairs." -j RETURN
-A neutron-openvswi-sfad3d0f9-3 -s 192.168.0.16/32 -m mac --mac-source FA:16:3E:EA:54:4F -m comment --comment "Allow traffic from defined IP/MAC pairs." -j RETURN

root@juju-50fb86-bionic-rocky-barbican-octavia-9:~# ovs-appctl bridge/dump-flows br-int |grep 192.168.21
table_id=24, duration=513s, n_packets=6, n_bytes=252, priority=2,arp,in_port=4,arp_spa=192.168.21.5,actions=goto_table:25
table_id=24, duration=513s, n_packets=1, n_bytes=42, priority=2,arp,in_port=4,arp_spa=192.168.21.34,actions=goto_table:25
root@juju-50fb86-bionic-rocky-barbican-octavia-9:~# ovs-appctl bridge/dump-flows br-int |grep 25,
table_id=25, duration=48s, n_packets=0, n_bytes=0, priority=2,in_port=4,dl_src=fa:16:3e:e2:3a:7f,actions=goto_table:60
table_id=25, duration=48s, n_packets=6, n_bytes=1396, priority=2,in_port=3,dl_src=fa:16:3e:ea:54:4f,actions=goto_table:60
root@juju-50fb86-bionic-rocky-barbican-octavia-9:~# ovs-appctl bridge/dump-flows br-int |grep 60,
table_id=60, duration=76s, n_packets=20, n_bytes=3880, priority=3,actions=NORMAL

Digging further: since the service VM can ping the backend, the data path itself is fine; it is only the gateway that cannot reach the service VM, which points back at the firewall. 'neutron port-show' on the VIP port shows it is associated with a new security group; after adding the rules below the problem was solved:

openstack security group rule create --protocol tcp --dst-port 22 lb-7f2985f0-c0d5-47ab-b805-7f5dafe20d3e
openstack security group rule create --protocol icmp lb-7f2985f0-c0d5-47ab-b805-7f5dafe20d3e
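For reference, the check mentioned above was along these lines (the VIP port ID is a placeholder):

openstack port show <vip-port-id> -c security_group_ids
# or, with the older client: neutron port-show <vip-port-id> -c security_groups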

The next error was the one below. It turned out the way the HTTP service was simulated on the backend earlier was at fault; after switching to "sudo python -m SimpleHTTPServer 80" the problem went away.
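(On backends that only ship Python 3, the equivalent would be "sudo python3 -m http.server 80".)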

root@juju-50fb86-bionic-rocky-barbican-octavia-6:~# ip netns exec qrouter-2ea1fc45-69e5-4c77-b6c9-d4fabc57145b curl -k https://192.168.21.5
<html><body><h1>502 Bad Gateway</h1>
Jan  6 05:14:38 amphora-3d906381-f49f-4efa-bfa7-b5f84eb3b1af haproxy[2688]: 192.168.21.1:41372 [06/Jan/2019:05:14:38.038] a5b442f3-7d40-4849-8b88-7f02697bfd5b~ e25b432a-ea45-4191-9448-c364661326dc/ae847e94-5aeb-4da6-9b66-07e1a385465b 28/0/9/-1/40 502 250 - - PH-- 0/0/0/0/0 0/0 "GET / HTTP/1.1

The full result of the experiment is here - https://paste.ubuntu.com/p/PPHv9Zfdf6/

ubuntu@zhhuabj-bastion:~⟫ curl -k https://10.5.150.5
Hello World!
ubuntu@zhhuabj-bastion:~⟫ curl --resolve www.quqi.com:443:10.5.150.5 --cacert ~/ca2_without_pass/ca.crt https://www.quqi.com
Hello World!
ubuntu@zhhuabj-bastion:~⟫ curl --resolve www.quqi.com:443:10.5.150.5 -k https://www.quqi.com
Hello World!
root@juju-50fb86-bionic-rocky-barbican-octavia-6:~# ip netns exec qrouter-2ea1fc45-69e5-4c77-b6c9-d4fabc57145b curl -k https://192.168.21.5
Hello World!
root@amphora-3d906381-f49f-4efa-bfa7-b5f84eb3b1af:~# ip netns exec amphora-haproxy curl 192.168.21.7
Hello World!

root@amphora-3d906381-f49f-4efa-bfa7-b5f84eb3b1af:~# cat /var/lib/octavia/a5b442f3-7d40-4849-8b88-7f02697bfd5b/haproxy.cfg
# Configuration for loadbalancer 7f2985f0-c0d5-47ab-b805-7f5dafe20d3e
global
    daemon
    user nobody
    log /dev/log local0
    log /dev/log local1 notice
    stats socket /var/lib/octavia/a5b442f3-7d40-4849-8b88-7f02697bfd5b.sock mode 0666 level user
    maxconn 1000000

defaults
    log global
    retries 3
    option redispatch

frontend a5b442f3-7d40-4849-8b88-7f02697bfd5b
    option httplog
    maxconn 1000000
    redirect scheme https if !{ ssl_fc }
    bind 192.168.21.5:443 ssl crt /var/lib/octavia/certs/a5b442f3-7d40-4849-8b88-7f02697bfd5b/4aa85f186d19a766c29109577d88734a8fca6385.pem crt /var/lib/octavia/certs/a5b442f3-7d40-4849-8b88-7f02697bfd5b
    mode http
    default_backend e25b432a-ea45-4191-9448-c364661326dc
    timeout client 50000

backend e25b432a-ea45-4191-9448-c364661326dc
    mode http
    balance roundrobin
    timeout check 10s
    option httpchk GET /
    http-check expect rstatus 200
    fullconn 1000000
    option allbackups
    timeout connect 5000
    timeout server 50000
    server ae847e94-5aeb-4da6-9b66-07e1a385465b 192.168.21.7:80 weight 1 check inter 5s fall 3 rise 4

Appendix - Neutron LBaaS v2

https://docs.openstack.org/mitaka/networking-guide/config-lbaas.html
neutron lbaas-loadbalancer-create --name test-lb private_subnet
neutron lbaas-listener-create --name test-lb-https --loadbalancer test-lb --protocol HTTPS --protocol-port 443
neutron lbaas-pool-create --name test-lb-pool-https --lb-algorithm LEAST_CONNECTIONS --listener test-lb-https --protocol HTTPS
neutron lbaas-member-create --subnet private_subnet --address 192.168.21.13 --protocol-port 443 test-lb-pool-https
neutron lbaas-member-create --subnet private_subnet --address 192.168.21.8 --protocol-port 443 test-lb-pool-https
neutron lbaas-healthmonitor-create --delay 5 --max-retries 2 --timeout 10 --type HTTPS --pool test-lb-pool-https --name monitor1
root@juju-c9d701-xenail-queens-184313-vrrp-5:~# ip netns exec qlbaas-84fd9a6c-24a2-4c0f-912b-eedc254ac1f4 curl -k  https://192.168.21.14
test1
root@juju-c9d701-xenail-queens-184313-vrrp-5:~# ip netns exec qlbaas-84fd9a6c-24a2-4c0f-912b-eedc254ac1f4 curl -k  https://192.168.21.14
test2
root@juju-c9d701-xenail-queens-184313-vrrp-5:~# ip netns exec qlbaas-84fd9a6c-24a2-4c0f-912b-eedc254ac1f4 curl --cacert /home/ubuntu/lb.pem  https://192.168.21.14
curl: (51) SSL: certificate subject name (www.quqi.com) does not match target host name '192.168.21.14'
root@juju-c9d701-xenail-queens-184313-vrrp-5:~# ip netns exec qlbaas-84fd9a6c-24a2-4c0f-912b-eedc254ac1f4 curl --cacert /home/ubuntu/lb.pem  --resolve www.quqi.com:443:192.168.21.14 https://www.quqi.com
test1

root@juju-c9d701-xenail-queens-184313-vrrp-5:~# echo 'show stat;show table' | socat stdio /var/lib/neutron/lbaas/v2/84fd9a6c-24a2-4c0f-912b-eedc254ac1f4/haproxy_stats.sock
# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,
c2a42906-e160-44dd-8590-968af2077b4a,FRONTEND,,,0,0,2000,0,0,0,0,0,0,,,,,OPEN,,,,,,,,,1,2,0,,,,0,0,0,0,,,,,,,,,,,0,0,0,,,0,0,0,0,,,,,,,,
52112201-05ce-4f4d-b5a8-9e67de2a895a,37a1f5a8-ec7e-4208-9c96-27d2783a594f,0,0,0,0,,0,0,0,,0,,0,0,0,0,no check,1,1,0,,,,,,1,3,1,,0,,2,0,,0,,,,,,,,,,0,,,,0,0,,,,,-1,,,0,0,0,0,
52112201-05ce-4f4d-b5a8-9e67de2a895a,8e722b4b-08b8-4089-bba5-8fa5dd26a87f,0,0,0,0,,0,0,0,,0,,0,0,0,0,no check,1,1,0,,,,,,1,3,2,,0,,2,0,,0,,,,,,,,,,0,,,,0,0,,,,,-1,,,0,0,0,0,
52112201-05ce-4f4d-b5a8-9e67de2a895a,BACKEND,0,0,0,0,200,0,0,0,0,0,,0,0,0,0,UP,2,2,0,,0,117,0,,1,3,0,,0,,1,0,,0,,,,,,,,,,,,,,0,0,0,0,0,0,-1,,,0,0,0,0,

root@juju-c9d701-xenail-queens-184313-vrrp-5:~# cat /var/lib/neutron/lbaas/v2/84fd9a6c-24a2-4c0f-912b-eedc254ac1f4/haproxy.conf
# Configuration for test-lb
global
    daemon
    user nobody
    group nogroup
    log /dev/log local0
    log /dev/log local1 notice
    maxconn 2000
    stats socket /var/lib/neutron/lbaas/v2/84fd9a6c-24a2-4c0f-912b-eedc254ac1f4/haproxy_stats.sock mode 0666 level user

defaults
    log global
    retries 3
    option redispatch
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend c2a42906-e160-44dd-8590-968af2077b4a
    option tcplog
    bind 192.168.21.14:443
    mode tcp
    default_backend 52112201-05ce-4f4d-b5a8-9e67de2a895a

backend 52112201-05ce-4f4d-b5a8-9e67de2a895a
    mode tcp
    balance leastconn
    timeout check 10s
    option httpchk GET /
    http-check expect rstatus 200
    option ssl-hello-chk
    server 37a1f5a8-ec7e-4208-9c96-27d2783a594f 192.168.21.13:443 weight 1 check inter 5s fall 2
    server 8e722b4b-08b8-4089-bba5-8fa5dd26a87f 192.168.21.8:443 weight 1 check inter 5s fall 2

# TCP monitor
neutron lbaas-healthmonitor-delete monitor1
neutron lbaas-healthmonitor-create --delay 5 --max-retries 2 --timeout 10 --type TCP --pool test-lb-pool-https --name monitor1 --url-path /

backend 52112201-05ce-4f4d-b5a8-9e67de2a895a
    mode tcp
    balance leastconn
    timeout check 10s
    server 37a1f5a8-ec7e-4208-9c96-27d2783a594f 192.168.21.13:443 weight 1 check inter 5s fall 2
    server 8e722b4b-08b8-4089-bba5-8fa5dd26a87f 192.168.21.8:443 weight 1 check inter 5s fall 2

Appendix - How a Kubernetes cluster on top of OpenStack consumes the underlying LBaaS resources

If you deploy Kubernetes on top of an OpenStack cloud, creating a LoadBalancer service in k8s with the commands below will stay in Pending state forever.

kubectl run test-nginx --image=nginx --replicas=2 --port=80 --expose --service-overrides='{ "spec": { "type": "LoadBalancer" } }'
kubectl get svc test-nginx

That is because k8s needs to call the underlying OpenStack LBaaS service to create the VIP resource and then add every backend's NodeIP:NodePort as a member. So how do we give k8s the ability to access the OpenStack LBaaS resources? Like this:

# https://blog.ubuntu.com/2019/01/28/taking-octavia-for-a-ride-with-kubernetes-on-openstack
# When k8s is deployed on OpenStack, 'juju trust openstack-integrator' grants openstack-integrator access to the OpenStack credential used at bootstrap time;
# because CDK implements the interface-openstack-integration interface, CDK k8s can then use that credential to consume OpenStack resources such as LBaaS directly.
juju deploy cs:~containers/openstack-integrator
juju add-relation openstack-integrator kubernetes-master
juju add-relation openstack-integrator kubernetes-worker
juju config openstack-integrator subnet-id=<UUID of subnet>
juju config openstack-integrator floating-network-id=<UUID of ext_net>
# 'juju trust' grants openstack-integrator access to the credential used in the bootstrap command; the charm acts as a proxy for it
juju trust openstack-integrator

# Test YAML; sometimes loadbalancer.openstack.org/floating-network-id must be provided, see: https://github.com/kubernetes/cloud-provider-openstack/tree/master/examples/loadbalancers
kind: Service
apiVersion: v1
metadata:
  name: external-http-nginx-service
  annotations:
    service.beta.kubernetes.io/openstack-internal-load-balancer: "false"
    loadbalancer.openstack.org/floating-network-id: "6f05a9de-4fc9-41f5-9c51-d5f43cd244b9"
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80

Also make sure the security group quota in the service tenant is not exceeded. The symptom of a blown quota is intermittent failures, e.g. FIP creation sometimes works and sometimes does not.
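A quick way to eyeball the relevant quotas (the project placeholder is an assumption; point it at whichever project owns the LBaaS resources):

openstack quota show <project-id> | grep -Ei 'secgroup|floating|port'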

The rest of the k8s setup is described at - https://ubuntu.com/blog/taking-octavia-for-a-ride-with-kubernetes-on-openstack

# deploy underlying openstack model
./generate-bundle.sh --name o7k:stsstack --create-model --octavia -r stein --dvr-snat --num-compute 8
sed -i "s/mem=8G/mem=8G cores=4/g" ./b/o7k/openstack.yaml
./generate-bundle.sh --name o7k:stsstack --replay --run
./bin/add-data-ports.sh
juju config neutron-openvswitch data-port="br-data:ens7"

# refer to this page to generate certs - https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-octavia.html
juju config octavia \
    lb-mgmt-issuing-cacert="$(base64 controller_ca.pem)" \
    lb-mgmt-issuing-ca-private-key="$(base64 controller_ca_key.pem)" \
    lb-mgmt-issuing-ca-key-passphrase=foobar \
    lb-mgmt-controller-cacert="$(base64 controller_ca.pem)" \
    lb-mgmt-controller-cert="$(base64 controller_cert_bundle.pem)"
juju run-action --wait octavia/0 configure-resources
source ~/stsstack-bundles/openstack/novarc
./configure
#juju config octavia loadbalancer-topology=ACTIVE_STANDBY
juju run-action octavia-diskimage-retrofit/0 --wait retrofit-image source-image=$(openstack image list |grep bionic |awk '{print $2}')
openstack image list

# enable all ingress traffic
PROJECT_ID=$(openstack project show --domain admin_domain admin -f value -c id)
SECGRP_ID=$(openstack security group list --project ${PROJECT_ID} | awk '/default/ {print $2}')
openstack security group rule create ${SECGRP_ID} --protocol any --ethertype IPv6 --ingress
openstack security group rule create ${SECGRP_ID} --protocol any --ethertype IPv4 --ingress

# disable quotas for project, neutron and nova
openstack quota set --instances -1 ${PROJECT_ID}
openstack quota set --floating-ips -1 ${PROJECT_ID}
openstack quota set --cores -1 ${PROJECT_ID}
openstack quota set --ram -1 ${PROJECT_ID}
openstack quota set --gigabytes -1 ${PROJECT_ID}
openstack quota set --volumes -1 ${PROJECT_ID}
openstack quota set --secgroups -1 ${PROJECT_ID}
openstack quota set --secgroup-rules -1 ${PROJECT_ID}
neutron quota-update --network -1
neutron quota-update --floatingip -1
neutron quota-update --port -1
neutron quota-update --router -1
neutron quota-update --security-group -1
neutron quota-update --security-group-rule -1
neutron quota-update --subnet -1
neutron quota-show

# set up router to allow bastion to access juju controller
GATEWAY_IP=$(openstack router show provider-router -f value -c external_gateway_info \
    |awk '/ip_address/ { for (i=1;i<NF;i++) if ($i~"ip_address") print $(i+1)}' |cut -f2 -d\')
CIDR=$(openstack subnet show private_subnet -f value -c cidr)
sudo ip route add ${CIDR} via ${GATEWAY_IP}

# define juju cloud
sudo bash -c 'cat > mystack.yaml' << EOF
clouds:
  mystack:
    type: openstack
    auth-types: [ userpass ]
    regions:
      RegionOne:
        endpoint: $OS_AUTH_URL
EOF
juju remove-cloud --local mystack
juju add-cloud --local mystack mystack.yaml
juju show-cloud mystack --local
sudo bash -c 'cat > mystack_credentials.txt' << EOF
credentials:
  mystack:
    admin:
      auth-type: userpass
      password: openstack
      tenant-name: admin
      domain-name: "" # ensure we don't get a domain-scoped token
      project-domain-name: admin_domain
      user-domain-name: admin_domain
      username: admin
      version: "3"
EOF
juju remove-credential --local mystack admin
juju add-credential --local mystack -f ./mystack_credentials.txt
juju show-credential --local mystack admin

# deploy juju controller
mkdir -p ~/simplestreams/images && rm -rf ~/simplestreams/images/*
source ~/stsstack-bundles/openstack/novarc
IMAGE_ID=$(openstack image list -f value |grep 'bionic active' |awk '{print $1}')
OS_SERIES=$(openstack image list -f value |grep 'bionic active' |awk '{print $2}')
juju metadata generate-image -d ~/simplestreams -i $IMAGE_ID -s $OS_SERIES -r $OS_REGION_NAME -u $OS_AUTH_URL
ls ~/simplestreams/*/streams/*
NETWORK_ID=$(openstack network show private -f value -c id)
# can remove juju controller 'mystack-regionone' from ~/.local/share/juju/controllers.yaml if it exists
juju bootstrap mystack --config network=${NETWORK_ID} --model-default network=${NETWORK_ID} --model-default use-default-secgroup=true --metadata-source ~/simplestreams

# deploy upper k8s model
juju switch mystack-regionone
juju destroy-model --destroy-storage --force upperk8s -y && juju add-model upperk8s
wget https://api.jujucharms.com/charmstore/v5/bundle/canonical-kubernetes-933/archive/bundle.yaml
sed -i "s/num_units: 2/num_units: 1/g" ./bundle.yaml
sed -i "s/num_units: 3/num_units: 2/g" ./bundle.yaml
sed -i "s/cores=4/cores=2/g" ./bundle.yaml
juju deploy ./bundle.yaml
juju deploy cs:~containers/openstack-integrator
juju add-relation openstack-integrator:clients kubernetes-master:openstack
juju add-relation openstack-integrator:clients kubernetes-worker:openstack
juju config openstack-integrator subnet-id=$(openstack subnet show private_subnet -c id -f value)
juju config openstack-integrator floating-network-id=$(openstack network show ext_net -c id -f value)
juju trust openstack-integrator
watch -c juju status --color

# we don't use the following way to deploy k8s because it can't be customized to use bionic
juju deploy charmed-kubernetes
# we don't use the following way either because it says no networks exist with label "zhhuabj_port_sec_enabled"
~/stsstack-bundles/kubernetes/generate-bundle.sh --name k8s:mystack --create-model -s bionic --run

# deploy test pod
mkdir -p ~/.kube
juju scp kubernetes-master/0:config ~/.kube/config
sudo snap install kubectl --classic
#Flag --replicas has been deprecated
#kubectl run hello-world --replicas=2 --labels="run=lb-test" --image=gcr.io/google-samples/node-hello:1.0 --port=8080
kubectl create deployment hello-world --image=gcr.io/google-samples/node-hello:1.0 -o yaml --dry-run=client > helloworld.yaml
sudo bash -c 'cat > helloworld.yaml' << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello-world
  name: hello-world
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - image: gcr.io/google-samples/node-hello:1.0
        name: node-hello
        ports:
        - containerPort: 8080
EOF
kubectl create -f ./helloworld.yaml
kubectl get deployments hello-world -o wide

# deploy LoadBalancer service
# remember to replace the following <ext_net_id> to avoid 'pending' status for FIP
sudo bash -c 'cat > helloservice.yaml' << EOF
kind: Service
apiVersion: v1
metadata:
  name: hello
  annotations:
    service.beta.kubernetes.io/openstack-internal-load-balancer: "false"
    loadbalancer.openstack.org/floating-network-id: "<ext_net_id>"
spec:
  selector:
    app: hello-world
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 8080
EOF
kubectl create -f ./helloservice.yaml
watch kubectl get svc -o wide hello
NAME    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE     SELECTOR
hello   LoadBalancer   10.152.183.88   10.5.150.220   80:32315/TCP   2m47s   app=hello-world
$ curl http://10.5.150.220:80
Hello Kubernetes!

openstack loadbalancer list            #use 'kubectl delete svc hello' to delete lb
openstack loadbalancer amphora list
#vip=$(openstack loadbalancer list |grep ACTIVE |awk '{print $8}')
#public_network=$(openstack network show ext_net -f value -c id)
#fip=$(openstack floating ip create $public_network -f value -c floating_ip_address)
#openstack floating ip set $fip --fixed-ip-address $vip --port $(openstack port list --fixed-ip ip-address=$vip -c id -f value)

$ openstack floating ip list
+--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
| ID                                   | Floating IP Address | Fixed IP Address | Port                                 | Floating Network                     | Project                          |
+--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
| c284dfc3-d294-48ae-8ccf-20a7e47fe039 | 10.5.150.220        | 192.168.21.26    | ded911a8-f213-4884-a387-7efcf14c8a89 | 1a83b2d3-1c1b-4bc9-b882-f132b9ff9d87 | 0d1886170941437fa46fb34508e67d24 |
+--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+

$ openstack loadbalancer list
+--------------------------------------+------------------------------------------------------------------------+----------------------------------+---------------+---------------------+----------+
| id                                   | name                                                                   | project_id                       | vip_address   | provisioning_status | provider |
+--------------------------------------+------------------------------------------------------------------------+----------------------------------+---------------+---------------------+----------+
| 72d96673-8723-4dde-9035-66c3bd095632 | kube_service_kubernetes-n7cgun28wpbsfgiiza5qralulupdtspv_default_hello | 0d1886170941437fa46fb34508e67d24 | 192.168.21.26 | ACTIVE              | amphora  |
+--------------------------------------+------------------------------------------------------------------------+----------------------------------+---------------+---------------------+----------+

$ openstack loadbalancer amphora list
+--------------------------------------+--------------------------------------+-----------+------------+-----------------------------------------+---------------+
| id                                   | loadbalancer_id                      | status    | role       | lb_network_ip                           | ha_ip         |
+--------------------------------------+--------------------------------------+-----------+------------+-----------------------------------------+---------------+
| 068104f1-ffee-4b4a-88ab-2e46cee1cbbc | 72d96673-8723-4dde-9035-66c3bd095632 | ALLOCATED | STANDALONE | fc00:961f:bb53:993b:f816:3eff:fe0d:6433 | 192.168.21.26 |
+--------------------------------------+--------------------------------------+-----------+------------+-----------------------------------------+---------------+

$ nova list --all |grep amphora
| 966f0ec0-b48a-48b9-8078-d7406ee08311 | amphora-068104f1-ffee-4b4a-88ab-2e46cee1cbbc | 144901cff394489d9095b1361caa6872 | ACTIVE | -          | Running     | lb-mgmt-net=fc00:961f:bb53:993b:f816:3eff:fe0d:6433; private=192.168.21.174 |

#nova keypair-add --pub-key=~/.ssh/id_amphora.pub amphora-backdoor --user $(openstack user show octavia --domain service_domain -f value -c id)
$ nova keypair-list --user $(openstack user show octavia --domain service_domain -f value -c id)
+------------------+------+-------------------------------------------------+
| Name             | Type | Fingerprint                                     |
+------------------+------+-------------------------------------------------+
| amphora-backdoor | ssh  | d9:53:1e:eb:70:42:24:f3:01:e2:4c:9d:c9:97:bd:11 |
+------------------+------+-------------------------------------------------+

$ openstack router list
+--------------------------------------+-----------------+--------+-------+----------------------------------+-------------+-------+
| ID                                   | Name            | Status | State | Project                          | Distributed | HA    |
+--------------------------------------+-----------------+--------+-------+----------------------------------+-------------+-------+
| 05d8e256-ba53-4b32-b40d-17472ac09040 | provider-router | ACTIVE | UP    | 0d1886170941437fa46fb34508e67d24 | True        | False |
| 6876c9c3-d8c5-4b31-876b-fe830d4b5f0b | lb-mgmt         | ACTIVE | UP    | 144901cff394489d9095b1361caa6872 | False       | False |
+--------------------------------------+-----------------+--------+-------+----------------------------------+-------------+-------+
$ openstack subnet list
+--------------------------------------+------------------+--------------------------------------+--------------------------+
| ID                                   | Name             | Network                              | Subnet                   |
+--------------------------------------+------------------+--------------------------------------+--------------------------+
| 401b11c1-b209-4545-81f2-81dc93674616 | private_subnet   | 2c927db4-ee05-4d4a-b35b-f106cbd785c4 | 192.168.21.0/24          |
| 54365c1c-1987-4607-8e98-4341cff4795f | ext_net_subnet   | 1a83b2d3-1c1b-4bc9-b882-f132b9ff9d87 | 10.5.0.0/16              |
| 56847939-4776-4b39-bdac-e04a4a9b6555 | lb-mgmt-subnetv6 | 983933d9-3078-47ce-b581-961fbb53993b | fc00:961f:bb53:993b::/64 |
+--------------------------------------+------------------+--------------------------------------+--------------------------+

#nova list --all |grep amphora
#juju scp ~/.ssh/id_amphora* nova-compute/5:/home/ubuntu/
#amphora instance is on nova-compute/3 (juju-7917e4-octavia-9), but gateway for lb-mgmt-subnetv6 is on nova-compute/5 (neutron l3-agent-list-hosting-router lb-mgmt)
juju ssh nova-compute/5 -- sudo ip netns exec qrouter-6876c9c3-d8c5-4b31-876b-fe830d4b5f0b ping6 fc00:961f:bb53:993b:f816:3eff:fe0d:6433
juju ssh nova-compute/5 -- sudo ip netns exec qrouter-6876c9c3-d8c5-4b31-876b-fe830d4b5f0b ssh -6 -i ~/id_amphora ubuntu@fc00:961f:bb53:993b:f816:3eff:fe0d:6433 -v

ubuntu@amphora-068104f1-ffee-4b4a-88ab-2e46cee1cbbc:~$ sudo ip netns ls
amphora-haproxy (id: 0)
ubuntu@amphora-068104f1-ffee-4b4a-88ab-2e46cee1cbbc:~$ sudo ip netns exec amphora-haproxy ip addr show eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1508 qdisc fq state UP group default qlen 1000
    link/ether fa:16:3e:df:d2:3b brd ff:ff:ff:ff:ff:ff
    inet 192.168.21.174/24 brd 192.168.21.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 192.168.21.26/24 brd 192.168.21.255 scope global secondary eth1:0
       valid_lft forever preferred_lft forever
ubuntu@amphora-068104f1-ffee-4b4a-88ab-2e46cee1cbbc:~$ ps -ef |grep haproxy
root      1546     1  0 01:18 ?        00:00:01 /usr/sbin/haproxy -Ws -f /var/lib/octavia/72d96673-8723-4dde-9035-66c3bd095632/haproxy.cfg -f /var/lib/octavia/haproxy-default-user-group.conf -p /var/lib/octavia/72d96673-8723-4dde-9035-66c3bd095632/72d96673-8723-4dde-9035-66c3bd095632.pid -L 7eAyDHfLMNtMDh0lrSe_vQODo0g -sf 1800
ubuntu@amphora-068104f1-ffee-4b4a-88ab-2e46cee1cbbc:~$ sudo cat /var/lib/octavia/72d96673-8723-4dde-9035-66c3bd095632/haproxy.cfg
global
    daemon
    user nobody
    log /dev/log local0
    log /dev/log local1 notice
    stats socket /var/lib/octavia/72d96673-8723-4dde-9035-66c3bd095632.sock mode 0666 level user
    maxconn 1000000

defaults
    log global
    retries 3
    option redispatch
    option splice-request
    option splice-response
    option http-keep-alive

frontend 7f26223c-63f8-490a-8aa3-0141b112bacb
    option tcplog
    maxconn 1000000
    bind 192.168.21.26:80
    mode tcp
    default_backend 632d6cf2-020f-4c63-9ca7-4bc952f8f324:7f26223c-63f8-490a-8aa3-0141b112bacb
    timeout client 50000

backend 632d6cf2-020f-4c63-9ca7-4bc952f8f324:7f26223c-63f8-490a-8aa3-0141b112bacb
    mode tcp
    balance roundrobin
    fullconn 1000000
    option allbackups
    timeout connect 5000
    timeout server 50000
    server 150e2027-ec51-469d-b5f8-675575c76c79 192.168.21.114:32315 weight 1
    server 9b42d8f0-5719-48fe-b17c-2e7cd2b29b10 192.168.21.232:32315 weight 1
    server d9f1f6ba-895b-43c6-ad09-362f4993aa73 192.168.21.251:32315 weight 1
    server e7485d2c-1de5-4438-8b3b-0f74ba870895 192.168.21.87:32315 weight 1

$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
hello        LoadBalancer   10.152.183.88   10.5.150.220   80:32315/TCP   51m
kubernetes   ClusterIP      10.152.183.1    <none>         443/TCP        10h
$ juju status |grep worker
kubernetes-worker      1.18.4   waiting    4/5  kubernetes-worker      jujucharms  682  ubuntu  exposed
kubernetes-worker/0       waiting   allocating  6        192.168.21.128                  waiting for machine
kubernetes-worker/1*      active    idle        7        192.168.21.114  80/tcp,443/tcp  Kubernetes worker running.
kubernetes-worker/2       active    idle        8        192.168.21.232  80/tcp,443/tcp  Kubernetes worker running.
kubernetes-worker/3       active    idle        10       192.168.21.251  80/tcp,443/tcp  Kubernetes worker running.
kubernetes-worker/4       active    idle        11       192.168.21.87   80/tcp,443/tcp  Kubernetes worker running.

$ juju switch zhhuabj
mystack-regionone:admin/k8s -> zhhuabj:admin/octavia
$ nova list
+--------------------------------------+--------------------------+--------+------------+-------------+------------------------+
| ID                                   | Name                     | Status | Task State | Power State | Networks               |
+--------------------------------------+--------------------------+--------+------------+-------------+------------------------+
| 5cbd03dc-9e6b-42ea-a8a2-9ca725144df0 | juju-68d107-k8s-0        | ACTIVE | -          | Running     | private=192.168.21.219 |
| 5cc74c32-1547-44b3-8c2d-68fcf33e5080 | juju-68d107-k8s-1        | ACTIVE | -          | Running     | private=192.168.21.127 |
| 46dc9a58-1a51-4b45-912c-8c768a4d5f28 | juju-68d107-k8s-10       | ACTIVE | -          | Running     | private=192.168.21.251 |
| 773a845c-f87d-4352-9c74-b3f3d354590f | juju-68d107-k8s-11       | ACTIVE | -          | Running     | private=192.168.21.87  |
| 211a6abd-bc97-4faf-984f-987fa90f21f3 | juju-68d107-k8s-2        | ACTIVE | -          | Running     | private=192.168.21.78  |
| df6b642a-9af8-4468-b1bf-691401c4835f | juju-68d107-k8s-3        | ACTIVE | -          | Running     | private=192.168.21.158 |
| c6f82159-98c9-4884-a7b1-eb9e950618bb | juju-68d107-k8s-4        | ACTIVE | -          | Running     | private=192.168.21.242 |
| e81b9566-3627-4a05-9460-415e76db9483 | juju-68d107-k8s-5        | ACTIVE | -          | Running     | private=192.168.21.217 |
| 569087bd-9ef8-4e6f-a898-72b69daef6ea | juju-68d107-k8s-6        | ACTIVE | -          | Running     | private=192.168.21.128 |
| 38d8887c-54ae-4b2c-b0c4-7f64542af8f1 | juju-68d107-k8s-7        | ACTIVE | -          | Running     | private=192.168.21.114 |
| 178c5eee-945a-4295-8c94-5ad5d821f251 | juju-68d107-k8s-8        | ACTIVE | -          | Running     | private=192.168.21.232 |
| 09a9c0c1-c795-494b-ac21-49fa3a2ef070 | juju-68d107-k8s-9        | ACTIVE | -          | Running     | private=192.168.21.147 |
| 2d5bff68-21f1-46e0-b6b8-c83d97c8cc27 | juju-bb6752-controller-0 | ACTIVE | -          | Running     | private=192.168.21.21  |
+--------------------------------------+--------------------------+--------+------------+-------------+------------------------+

$ nova show juju-68d107-k8s-7 |grep security
| security_groups                      | default, juju-f49ad379-07f8-4564-847b-f7999f9df56c-39ead59a-0d9c-4664-8a4d-40a74168d107, juju-f49ad379-07f8-4564-847b-f7999f9df56c-39ead59a-0d9c-4664-8a4d-40a74168d107-7                                     |

ubuntu@zhhuabj-bastion:~$ openstack security group list --project $(openstack project list --domain admin_domain |grep admin |awk '{print $2}') |grep default
| bfc84b34-6f23-4561-87ff-58077f667bea | default                                                                           | Default security group | 0d1886170941437fa46fb34508e67d24 | []   |

ubuntu@amphora-068104f1-ffee-4b4a-88ab-2e46cee1cbbc:~$ sudo ip netns exec amphora-haproxy ping 192.168.21.114 -c 1
sudo: unable to resolve host amphora-068104f1-ffee-4b4a-88ab-2e46cee1cbbc
PING 192.168.21.114 (192.168.21.114) 56(84) bytes of data.
64 bytes from 192.168.21.114: icmp_seq=1 ttl=64 time=1.14 ms

ubuntu@amphora-068104f1-ffee-4b4a-88ab-2e46cee1cbbc:~$ sudo ip netns exec amphora-haproxy nc -w4 -vz 192.168.21.114 32315
Connection to 192.168.21.114 32315 port [tcp/*] succeeded!

#NOTE
os security group rule create --ingress --protocol tcp --remote-group b1eff65c-b6c1-4f09-8ed6-1b163e447318 --dst-port 32315:32315 --ethertype ipv4 3f95d2fa-5ba5-4719-96ee-a0e6211c7c46
# where remote-group is the sec group associated with the port on 192.168.100.xxx of the amphora VM
# and 3f95d2fa-5ba5-4719-96ee-a0e6211c7c46 is the default SG of the juju VMs in the k8s model

$ nova list --all |grep amp
| 966f0ec0-b48a-48b9-8078-d7406ee08311 | amphora-068104f1-ffee-4b4a-88ab-2e46cee1cbbc | 144901cff394489d9095b1361caa6872 | ACTIVE | -          | Running     | lb-mgmt-net=fc00:961f:bb53:993b:f816:3eff:fe0d:6433; private=192.168.21.174 |
$ nova show 966f0ec0-b48a-48b9-8078-d7406ee08311 |grep sec
| security_groups                      | lb-72d96673-8723-4dde-9035-66c3bd095632, lb-mgmt-sec-grp          |
$ openstack security group rule list lb-72d96673-8723-4dde-9035-66c3bd095632
+--------------------------------------+-------------+-----------+-----------+------------+-----------------------+
| ID                                   | IP Protocol | Ethertype | IP Range  | Port Range | Remote Security Group |
+--------------------------------------+-------------+-----------+-----------+------------+-----------------------+
| 1197c266-015b-4967-8680-35f76328fa85 | tcp         | IPv4      | 0.0.0.0/0 | 1025:1025  | None                  |
| 2f715ac6-f7fb-4823-871c-617559cf8d0d | tcp         | IPv4      | 0.0.0.0/0 | 80:80      | None                  |
| dc0d368e-74f6-4cf0-a863-0e56e071ad46 | None        | IPv4      | 0.0.0.0/0 |            | None                  |
| f38b32b6-3472-43bb-89dc-3f0ef221e95b | None        | IPv6      | ::/0      |            | None                  |
+--------------------------------------+-------------+-----------+-----------+------------+-----------------------+
$ openstack security group rule list lb-mgmt-sec-grp
+--------------------------------------+-------------+-----------+-----------+------------+-----------------------+
| ID                                   | IP Protocol | Ethertype | IP Range  | Port Range | Remote Security Group |
+--------------------------------------+-------------+-----------+-----------+------------+-----------------------+
| 4e913725-bc6b-441f-be3e-8ea668d8d2bd | None        | IPv6      | ::/0      |            | None                  |
| 698d9ee1-86d2-407f-8ec8-2f1453ad2427 | tcp         | IPv6      | ::/0      | 22:22      | None                  |
| b7605acb-feb7-4a47-ba48-c6a395fca934 | tcp         | IPv6      | ::/0      | 9443:9443  | None                  |
| c1b5fcc9-16ee-4f7a-9488-c2096b05941e | None        | IPv4      | 0.0.0.0/0 |            | None                  |
| dcbdaaf2-140c-41e6-8f4e-f36624628047 | icmpv6      | IPv6      | ::/0      |            | None                  |
+--------------------------------------+-------------+-----------+-----------+------------+-----------------------+

$ openstack server list |grep 114
| 38d8887c-54ae-4b2c-b0c4-7f64542af8f1 | juju-68d107-k8s-7        | ACTIVE | private=192.168.21.114 | bionic | m1.medium |
$ nova show 38d8887c-54ae-4b2c-b0c4-7f64542af8f1 |grep sec
| security_groups                      | default, juju-f49ad379-07f8-4564-847b-f7999f9df56c-39ead59a-0d9c-4664-8a4d-40a74168d107, juju-f49ad379-07f8-4564-847b-f7999f9df56c-39ead59a-0d9c-4664-8a4d-40a74168d107-7                                     |
$ openstack security group rule list juju-f49ad379-07f8-4564-847b-f7999f9df56c-39ead59a-0d9c-4664-8a4d-40a74168d107
+--------------------------------------+-------------+-----------+-----------+-------------+--------------------------------------+
| ID                                   | IP Protocol | Ethertype | IP Range  | Port Range  | Remote Security Group                |
+--------------------------------------+-------------+-----------+-----------+-------------+--------------------------------------+
| 00469e1d-109c-47ea-878a-403cc47f94f2 | tcp         | IPv6      | ::/0      | 1:65535     | eee9c463-5689-460e-9ca5-22f880e1a761 |
| 2134cf1f-a82d-43e4-a73f-8b1c06c2e400 | icmp        | IPv4      | 0.0.0.0/0 |             | eee9c463-5689-460e-9ca5-22f880e1a761 |
| 2a5e6886-4657-4227-b9e6-215a121c70c5 | tcp         | IPv6      | ::/0      | 22:22       | None                                 |
| 430c12ff-0a15-4878-90b0-67ff18ef5cb7 | None        | IPv4      | 0.0.0.0/0 |             | None                                 |
| 4a4644ef-6563-492f-8732-749a5b02eb78 | udp         | IPv4      | 0.0.0.0/0 | 1:65535     | eee9c463-5689-460e-9ca5-22f880e1a761 |
| 8435b75a-f40b-4adb-9eda-1bbef5aecca6 | icmp        | IPv6      | ::/0      |             | eee9c463-5689-460e-9ca5-22f880e1a761 |
| 9adf865c-04ed-40db-a506-6a9687efb1b0 | None        | IPv6      | ::/0      |             | None                                 |
| a68fac61-ed0c-4027-96ff-e98d182a287d | tcp         | IPv6      | ::/0      | 17070:17070 | None                                 |
| b475b293-b071-4ae9-a3ab-b8a29cbcce33 | tcp         | IPv4      | 0.0.0.0/0 | 17070:17070 | None                                 |
| c5450f4c-7c82-45a2-a676-4fe3ee413ff5 | tcp         | IPv4      | 0.0.0.0/0 | 1:65535     | eee9c463-5689-460e-9ca5-22f880e1a761 |
| cf8dd9b6-f59f-4d90-9bbf-76eeb1754680 | udp         | IPv6      | ::/0      | 1:65535     | eee9c463-5689-460e-9ca5-22f880e1a761 |
| f8246bec-4d07-4764-8f37-dd2c204e80e4 | tcp         | IPv4      | 0.0.0.0/0 | 22:22       | None                                 |
+--------------------------------------+-------------+-----------+-----------+-------------+--------------------------------------+
$ openstack security group rule list juju-f49ad379-07f8-4564-847b-f7999f9df56c-39ead59a-0d9c-4664-8a4d-40a74168d107-7
+--------------------------------------+-------------+-----------+-----------+------------+-----------------------+
| ID                                   | IP Protocol | Ethertype | IP Range  | Port Range | Remote Security Group |
+--------------------------------------+-------------+-----------+-----------+------------+-----------------------+
| 115688b5-ab4d-4614-afc1-db1458d67ea5 | tcp         | IPv4      | 0.0.0.0/0 | 443:443    | None                  |
| 77bb8ed0-2e7c-4e2f-8320-9acf59b50301 | tcp         | IPv4      | 0.0.0.0/0 | 80:80      | None                  |
| 8c412473-74f0-410f-bd16-7851404743bc | None        | IPv4      | 0.0.0.0/0 |            | None                  |
| ba9a07f9-07d6-4222-ae3c-57255d4a8f83 | None        | IPv6      | ::/0      |            | None                  |
+--------------------------------------+-------------+-----------+-----------+------------+-----------------------+
$ openstack security group list --project $(openstack project list --domain admin_domain |grep admin |awk '{print $2}') |grep default
| bfc84b34-6f23-4561-87ff-58077f667bea | default                                                                           | Default security group | 0d1886170941437fa46fb34508e67d24 | []   |

openstack security group rule list bfc84b34-6f23-4561-87ff-58077f667bea
+--------------------------------------+-------------+-----------+-----------+------------+--------------------------------------+
| ID                                   | IP Protocol | Ethertype | IP Range  | Port Range | Remote Security Group                |
+--------------------------------------+-------------+-----------+-----------+------------+--------------------------------------+
| 12a3c6b3-a4c4-470d-a4d2-dcf6cde5b740 | tcp         | IPv4      | 0.0.0.0/0 | 443:443    | None                                 |
| 13068f00-11d4-4cec-b145-a71b68ffc583 | icmp        | IPv4      | 0.0.0.0/0 |            | None                                 |
| 39ab4cb5-4bee-4272-9811-4e405ae568aa | None        | IPv6      | ::/0      |            | None                                 |
| 3d005483-bd84-467b-8cc6-427771e68645 | tcp         | IPv4      | 0.0.0.0/0 | 80:80      | None                                 |
| 436b644b-cf5f-4dfc-a5b0-f86159f8a3fb | None        | IPv4      | 0.0.0.0/0 |            | None                                 |
| 985dc09f-213c-426e-aedf-35b673fd5830 | tcp         | IPv4      | 0.0.0.0/0 | 53:53      | None                                 |
| 9d936527-04c0-490f-9249-27e2f156d452 | None        | IPv6      | ::/0      |            | bfc84b34-6f23-4561-87ff-58077f667bea |
| e7179f46-a322-43fd-aeef-fce38685e749 | tcp         | IPv4      | 0.0.0.0/0 | 22:22      | None                                 |
| f5b12670-7c76-42e7-9cd8-e0fd1ba5eaef | None        | IPv4      | 0.0.0.0/0 |            | bfc84b34-6f23-4561-87ff-58077f667bea |
| f9fdfe84-b239-4adf-96c5-94402debbf1c | None        | IPv6      | ::/0      |            | None                                 |
| fdb8a39b-747b-4b52-96a3-e8cc0110d6b7 | None        | IPv4      | 0.0.0.0/0 |            | None                                 |
+--------------------------------------+-------------+-----------+-----------+------------+--------------------------------------+

Update 2019-09-04 - full walkthrough of building an Octavia test environment

juju kill-controller zhhuabj -y -t 1s  #or delete vm directly
rm .local/share/juju/controllers.yaml
#modify your ~/juju_config/2.x/bootstrap.sh to add 'container-networking-method="provider"' in model_defaults variable and comment proxy parts
~/juju_config/2.x/gencloud.sh
./generate-bundle.sh --name octavia --create-model --run --octavia -r stein --dvr-snat --num-compute 2
./bin/add-data-ports.sh
juju config neutron-openvswitch data-port="br-data:ens7"
#refer here to create certs - https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-octavia.html
juju config octavia \
    lb-mgmt-issuing-cacert="$(base64 controller_ca.pem)" \
    lb-mgmt-issuing-ca-private-key="$(base64 controller_ca_key.pem)" \
    lb-mgmt-issuing-ca-key-passphrase=foobar \
    lb-mgmt-controller-cacert="$(base64 controller_ca.pem)" \
    lb-mgmt-controller-cert="$(base64 controller_cert_bundle.pem)"
juju run-action --wait octavia/0 configure-resources
BARE_METAL=TRUE ./configure
juju run-action octavia-diskimage-retrofit/0 --wait retrofit-image image-id=$(openstack image list |grep bionic |awk '{print $2}')

wget https://people.canonical.com/~zhhuabj/amphora-x64-haproxy.qcow2  #ssh root@<amphora-ip>, password:123qwe
#scp -i ~/.ssh/phykey amphora-x64-haproxy.qcow2 zhhuabj@people.canonical.com:/home/zhhuabj/public_html/
source ~/stsstack-bundles/openstack/novarc
glance image-create --tag octavia-amphora --disk-format qcow2 --name amphora-haproxy-xenial --file ./amphora-x64-haproxy.qcow2 --visibility public --container-format bare --progress
#Note, update 2019-12-06
# The image should be built with octavia-diskimage-retrofit so that the amphora client and API versions match; otherwise the service VM will not work
#https://github.com/openstack-charmers/octavia-diskimage-retrofit
#amphora vms have to be created using the uca that corresponds to the release of openstack deployed
#so that amphora client and api versions match, you can build one with the octavia-diskimage-retrofit charm
sudo snap install --edge --devmode octavia-diskimage-retrofit
sudo -s
wget https://cloud-images.ubuntu.com/minimal/releases/bionic/release/ubuntu-18.04-minimal-cloudimg-amd64.img
- or -
wget https://cloud-images.ubuntu.com/minimal/daily/bionic/current/bionic-minimal-cloudimg-amd64.img
sudo mv ubuntu-18.04-minimal-cloudimg-amd64.img /var/snap/octavia-diskimage-retrofit/common
sudo octavia-diskimage-retrofit /var/snap/octavia-diskimage-retrofit/common/ubuntu-18.04-minimal-cloudimg-amd64.img /var/snap/octavia-diskimage-retrofit/common/ubuntu-amphora-haproxy-amd64.qcow2 -d u stein
openstack image create --disk-format qcow2 --container-format bare --public --tag octavia-amphora --file /var/snap/octavia-diskimage-retrofit/common/ubuntu-amphora-haproxy-amd64.qcow2 amphora-bionic-x64-haproxy

#create a test backend
./tools/instance_launch.sh 1 xenial
./tools/sec_groups.sh
fix_ip=$(openstack server list |grep 'private=' |awk -F '=' '{print $2}' |awk '{print $1}')
ext_net=$(openstack network show ext_net -f value -c id)
fip=$(openstack floating ip create $ext_net -f value -c floating_ip_address)
openstack floating ip set $fip --fixed-ip-address $fix_ip --port $(openstack port list --fixed-ip ip-address=$fix_ip -c id -f value)
ssh -i ~/testkey.priv ubuntu@$fip -- sudo apt install python-minimal -y
ssh -i ~/testkey.priv ubuntu@$fip -- sudo python -m SimpleHTTPServer 80 &
curl $fip

#backend ip 192.168.21.252 (fip: 10.5.151.97)
#service vm fc00:a895:61e6:b86f:f816:3eff:fef7:26f5/192.168.21.54  (vip: 192.168.21.117, fip: 10.5.151.155)
sudo apt install python-octaviaclient python3-octaviaclient
openstack complete |sudo tee /etc/bash_completion.d/openstack
source <(openstack complete)
#No module named 'oslo_log'
openstack loadbalancer create --name lb1 --vip-subnet-id private_subnet
#lb_vip_port_id=$(openstack loadbalancer create -f value -c vip_port_id --name lb1 --vip-subnet-id private_subnet)
# Re-run the following until lb1 shows ACTIVE and ONLINE statuses:
openstack loadbalancer show lb1
nova list --all
openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1
openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
openstack loadbalancer member create --subnet-id private_subnet --address $fix_ip --protocol-port 80 pool1
openstack loadbalancer member list pool1
vip=$(openstack loadbalancer show lb1 -f value -c vip_address)
vip_fip=$(openstack floating ip create $ext_net -f value -c floating_ip_address)
openstack floating ip set $vip_fip --fixed-ip-address $vip --port $(openstack port list --fixed-ip ip-address=$vip -c id -f value)
curl $vip_fip

#ssh into service vm
nova list --all
juju ssh nova-compute/1 -- sudo ip netns exec qrouter-73d87977-2eaf-40ba-818d-6a17aecd1d16 ping6 fc00:a895:61e6:b86f:f816:3eff:fef7:26f5
#password: 123qwe
juju ssh nova-compute/1 -- sudo ip netns exec qrouter-73d87977-2eaf-40ba-818d-6a17aecd1d16 ssh -6 root@fc00:a895:61e6:b86f:f816:3eff:fef7:26f5
#need to add icmp firewall rule by hand when using ipv4 address
#juju ssh nova-compute/1 -- sudo ip netns exec qrouter-891c4da6-c03b-4a56-a901-a5efb1dbcd15 ssh root@192.168.21.54
#openstack security group rule create lb-4fca1a12-fd1d-4da3-bf7b-e48386c5c1a2 --protocol icmp --remote-ip 0.0.0.0/0
#openstack security group rule create lb-4fca1a12-fd1d-4da3-bf7b-e48386c5c1a2 --protocol tcp --dst-port 22

# cat /var/lib/octavia/27057bca-c504-4ca2-9bff-89b342767afd/haproxy.cfg
# Configuration for loadbalancer 4fca1a12-fd1d-4da3-bf7b-e48386c5c1a2
global
    daemon
    user nobody
    log /dev/log local0
    log /dev/log local1 notice
    stats socket /var/lib/octavia/27057bca-c504-4ca2-9bff-89b342767afd.sock mode 0666 level user
    maxconn 1000000

defaults
    log global
    retries 3
    option redispatch
    option splice-request
    option splice-response
    option http-keep-alive

frontend 27057bca-c504-4ca2-9bff-89b342767afd
    option httplog
    maxconn 1000000
    bind 192.168.21.117:80
    mode http
    default_backend 87d56822-1f5c-4a47-88d6-ddd5d038523d
    timeout client 50000

backend 87d56822-1f5c-4a47-88d6-ddd5d038523d
    mode http
    http-reuse safe
    balance roundrobin
    fullconn 1000000
    option allbackups
    timeout connect 5000
    timeout server 50000
    server 4dcd830b-33b2-49e7-b50b-e91b2ce65afb 192.168.21.252:80 weight 1

root@amphora-91c5098c-7578-4de7-b38e-d3712711bb15:~# cat /etc/netns/amphora-haproxy/network/interfaces.d/eth1.cfg
# Generated by Octavia agent
auto eth1 eth1:0
iface eth1 inet static
address 192.168.21.54
broadcast 192.168.21.255
netmask 255.255.255.0
gateway 192.168.21.1
mtu 1458
iface eth1:0 inet static
address 192.168.21.117
broadcast 192.168.21.255
netmask 255.255.255.0
# Add a source routing table to allow members to access the VIP
post-up /sbin/ip route add default via 192.168.21.1 dev eth1 onlink table 1
post-down /sbin/ip route del default via 192.168.21.1 dev eth1 onlink table 1
post-up /sbin/ip route add 192.168.21.0/24 dev eth1 src 192.168.21.117 scope link table 1
post-down /sbin/ip route del 192.168.21.0/24 dev eth1 src 192.168.21.117 scope link table 1
post-up /sbin/ip rule add from 192.168.21.117/32 table 1 priority 100
post-down /sbin/ip rule del from 192.168.21.117/32 table 1 priority 100
post-up /sbin/iptables -t nat -A POSTROUTING -p udp -o eth1 -j MASQUERADE
post-down /sbin/iptables -t nat -D POSTROUTING -p udp -o eth1 -j MASQUERADE

root@amphora-91c5098c-7578-4de7-b38e-d3712711bb15:~# ip netns exec amphora-haproxy ip -4 addr show eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1458 qdisc fq_codel state UP group default qlen 1000
    inet 192.168.21.54/24 brd 192.168.21.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 192.168.21.117/24 brd 192.168.21.255 scope global secondary eth1:0
       valid_lft forever preferred_lft forever
root@amphora-91c5098c-7578-4de7-b38e-d3712711bb15:~# ip netns exec amphora-haproxy ip route list
default via 192.168.21.1 dev eth1 onlink
192.168.21.0/24 dev eth1 proto kernel scope link src 192.168.21.54
root@amphora-91c5098c-7578-4de7-b38e-d3712711bb15:~# ip netns exec amphora-haproxy ip route list table 1
default via 192.168.21.1 dev eth1 onlink
192.168.21.0/24 dev eth1 scope link src 192.168.21.117
root@amphora-91c5098c-7578-4de7-b38e-d3712711bb15:~# ip netns exec amphora-haproxy ip route show table all
default via 192.168.21.1 dev eth1 table 1 onlink
192.168.21.0/24 dev eth1 table 1 scope link src 192.168.21.117
default via 192.168.21.1 dev eth1 onlink
192.168.21.0/24 dev eth1 proto kernel scope link src 192.168.21.54
broadcast 192.168.21.0 dev eth1 table local proto kernel scope link src 192.168.21.54
local 192.168.21.54 dev eth1 table local proto kernel scope host src 192.168.21.54
local 192.168.21.117 dev eth1 table local proto kernel scope host src 192.168.21.54
broadcast 192.168.21.255 dev eth1 table local proto kernel scope link src 192.168.21.54
ff00::/8 dev eth1 table local metric 256 pref medium
root@amphora-91c5098c-7578-4de7-b38e-d3712711bb15:~# ip netns exec amphora-haproxy iptables-save
# Generated by iptables-save v1.6.1 on Wed Sep  4 04:00:59 2019
*nat
:PREROUTING ACCEPT [3:448]
:INPUT ACCEPT [3:448]
:OUTPUT ACCEPT [1:60]
:POSTROUTING ACCEPT [1:60]
-A POSTROUTING -o eth1 -p udp -j MASQUERADE
COMMIT
# Completed on Wed Sep  4 04:00:59 2019

#backend ip 192.168.21.252 (fip: 10.5.151.97)
#service vm fc00:a895:61e6:b86f:f816:3eff:fef7:26f5/192.168.21.54  (vip: 192.168.21.117, fip: 10.5.151.155)

#ping backend from service vm
root@amphora-91c5098c-7578-4de7-b38e-d3712711bb15:~# ip netns exec amphora-haproxy ping 192.168.21.252
PING 192.168.21.252 (192.168.21.252) 56(84) bytes of data.
64 bytes from 192.168.21.252: icmp_seq=1 ttl=64 time=3.45 ms

#ping service vm vip from backend
ubuntu@xenial-030345:~$ ping -c 1 192.168.21.54
PING 192.168.21.54 (192.168.21.54) 56(84) bytes of data.

ubuntu@zhhuabj-bastion:~$ nova show aa09d2d0-a8fb-4a2f-bc8f-f39c6dff6713 |grep security_groups
| security_groups                      | lb-4fca1a12-fd1d-4da3-bf7b-e48386c5c1a2, lb-mgmt-sec-grp      |
ubuntu@zhhuabj-bastion:~$ openstack security group rule list lb-mgmt-sec-grp
+--------------------------------------+-------------+----------+------------+-----------------------+
| ID                                   | IP Protocol | IP Range | Port Range | Remote Security Group |
+--------------------------------------+-------------+----------+------------+-----------------------+
| 1bb77bae-6e4c-4982-8c00-8ebfafd896c7 | icmpv6      | None     |            | None                  |
| 407d8dd6-a0c3-406f-b16d-2453ae4ad015 | tcp         | None     | 9443:9443  | None                  |
| 4d927c00-b4aa-4033-ab66-b96fdb2d9722 | None        | None     |            | None                  |
| 771122a1-ad5e-4842-8ba3-dcd0b950d47a | None        | None     |            | None                  |
| 8b91fd37-dddf-4200-8835-261c203696d0 | tcp         | None     | 22:22      | None                  |
+--------------------------------------+-------------+----------+------------+-----------------------+
ubuntu@zhhuabj-bastion:~$ openstack security group rule list lb-4fca1a12-fd1d-4da3-bf7b-e48386c5c1a2
+--------------------------------------+-------------+----------+------------+-----------------------+
| ID                                   | IP Protocol | IP Range | Port Range | Remote Security Group |
+--------------------------------------+-------------+----------+------------+-----------------------+
| 4eb66163-44dc-44d2-b969-a890d35986a6 | tcp         | None     | 80:80      | None                  |
| 80e0d109-9f97-4559-bc20-de89c001725e | tcp         | None     | 1025:1025  | None                  |
| 92c09499-c9ad-4682-8697-9c17cc33e785 | None        | None     |            | None                  |
| feeb42a2-b49b-4f89-8b3e-5a30cee1e08f | None        | None     |            | None                  |
+--------------------------------------+-------------+----------+------------+-----------------------+

openstack security group rule create lb-4fca1a12-fd1d-4da3-bf7b-e48386c5c1a2 --protocol icmp --remote-ip 0.0.0.0/0
openstack security group rule create lb-4fca1a12-fd1d-4da3-bf7b-e48386c5c1a2 --protocol tcp --dst-port 22
ubuntu@zhhuabj-bastion:~$ openstack security group rule list lb-4fca1a12-fd1d-4da3-bf7b-e48386c5c1a2
+--------------------------------------+-------------+-----------+------------+-----------------------+
| ID                                   | IP Protocol | IP Range  | Port Range | Remote Security Group |
+--------------------------------------+-------------+-----------+------------+-----------------------+
| 4eb66163-44dc-44d2-b969-a890d35986a6 | tcp         | None      | 80:80      | None                  |
| 80e0d109-9f97-4559-bc20-de89c001725e | tcp         | None      | 1025:1025  | None                  |
| 92c09499-c9ad-4682-8697-9c17cc33e785 | None        | None      |            | None                  |
| 936b47f0-3897-4abe-89fe-37d7c11862ea | icmp        | 0.0.0.0/0 |            | None                  |
| bd832940-7e80-414d-b43b-51042084b934 | tcp         | 0.0.0.0/0 | 22:22      | None                  |
| feeb42a2-b49b-4f89-8b3e-5a30cee1e08f | None        | None      |            | None                  |
+--------------------------------------+-------------+-----------+------------+-----------------------+

ubuntu@xenial-030345:~$ ping -c 1 10.5.151.155
PING 10.5.151.155 (10.5.151.155) 56(84) bytes of data.
64 bytes from 10.5.151.155: icmp_seq=1 ttl=60 time=37.2 ms

Problems customers have hit:
1, arp issue - https://bugs.launchpad.net/neutron/+bug/1794991/comments/59 and https://bugs.launchpad.net/neutron/+bug/1853613
2, fip issue - the compute nodes running Octavia used a neutron-openvswitch charm without a data-port; with dvr-snat, neutron picks one compute node from the pool to host the snat-xxx namespace, and if it happens to pick an Octavia compute node without a data-port things break (e.g. when using the neutron-openvswitch-octavia application mentioned in https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1822558)
3, https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1843557
4, in the end the recommendation is to use an existing provider network (if a tenant cannot see it, share it with an RBAC policy, as sketched below)
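For point 4, sharing an existing provider network with a tenant via RBAC looks roughly like this (network and project are placeholders):

openstack network rbac create --target-project <tenant-project-id> --action access_as_shared --type network <provider-network>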

Update 2019-12-07

Hit another case where the service VM did not work: after creating the LB, the service VM was ACTIVE and its management-network IP worked, but the LB status was PENDING_STATE; the LB had a VIP and could be reached over ssh, yet after a while the service VM went to ERROR and was deleted. The cause that was eventually found:

During verification of the agent update, it was also found that firewall changes introduced caused DNS to become unreachable for the octavia load balancer instances. These rules have been updated to allow access, which allowed DNS to work inside the amphora VMs, which in turn, in combination with disabling a host and the update of the agent, returned the load balancers to a working state.

Update 2020-02-20

After adding the nbthread element with the method below, the LB would die shortly after being created successfully.

juju config octavia haproxy-template | base64 -d > /tmp/haproxy-custom.j2
vim /tmp/haproxy-custom.j2 # edit the nbthread option
nbproc 1
nbthread 2
cpu-map auto:1/1-2 0-1
maxconns=64000
juju config octavia haproxy-template="$(base64 /tmp/haproxy-custom.j2)"
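(To confirm the rendered config inside the amphora actually picked up the change, something like the following works, using the /var/lib/octavia/<id>/haproxy.cfg path shown elsewhere in this post:)

sudo grep -E 'nbproc|nbthread|cpu-map' /var/lib/octavia/*/haproxy.cfg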

The answer to this kind of problem can usually be found in octavia-worker.log:

2020-02-19 04:53:11.465 14180 ERROR octavia.controller.worker.controller_worker jinja2.exceptions.UndefinedError: 'dict object' has no attribute 'listener'

The cause is that this patch (https://review.opendev.org/#/c/673518/) introduced combined_listeners, which was meant to solve the following problem:

Since 1.8.x version, haproxy consumes at least 160MB at init time using the default configuration provided by octavia. When a LB is updated, haproxy configuration is updated and a signal is sent to haproxy to reload the configuration.
When haproxy reloads its configuration, it creates a new worker, does some allocations (up to 160MB), then destroys the previous worker. So during a short time, memory consumption is increased, and if 2 processes reload at the same time, it may fail with a "Cannot fork" error.

So in the custom template (https://paste.ubuntu.com/p/PGvD7fzjd2/), the old split_listeners reference loadbalancer.listener.pools should be changed to the combined_listeners form loadbalancer.listeners.pools.
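As a rough sketch of the kind of edit this implies (using only the names mentioned above; the real custom template in the paste may loop differently):

{# old split_listeners style #}
{% for pool in loadbalancer.listener.pools %}
    ...
{% endfor %}

{# new combined_listeners style #}
{% for pool in loadbalancer.listeners.pools %}
    ...
{% endfor %}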

Update 2020-04-27 -

Another problem, shown below: socket.getfqdn has issues handling the FQDN defined in /etc/hosts (see https://bugs.python.org/issue5004), so socket.getaddrinfo should be used instead of socket.getfqdn to resolve the FQDN (the neutron agent now registers using the FQDN). However, socket.getaddrinfo was observed to sometimes return the FQDN and sometimes not, which causes a binding error in Octavia and leaves o-hm0 without an IP.

ubuntu@juju-5b1810-octavia-6:~$ python3 -c 'import socket; print(socket.getaddrinfo("juju-5b1810-octavia-6", None, 0, socket.SOCK_DGRAM, 0, socket.AI_CANONNAME)[0][3])'
juju-5b1810-octavia-6
ubuntu@juju-5b1810-octavia-6:~$ python3 -c 'import socket; print(socket.getfqdn("juju-5b1810-octavia-6"))'
juju-5b1810-octavia-6.cloud.sts
ubuntu@juju-5b1810-octavia-6:~$ python3 -c 'import socket; print(socket.getaddrinfo("juju-5b1810-octavia-6", None, 0, socket.SOCK_DGRAM, 0, socket.AI_CANONNAME))'
[(<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_DGRAM: 2>, 17, 'juju-5b1810-octavia-6', ('10.5.0.14', 0)), (<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_DGRAM: 2>, 17, '', ('252.0.14.1', 0)), (<AddressFamily.AF_INET6: 10>, <SocketKind.SOCK_DGRAM: 2>, 17, '', ('fe80::f816:3eff:fe8a:6e3', 0, 0, 0)), (<AddressFamily.AF_INET6: 10>, <SocketKind.SOCK_DGRAM: 2>, 17, '', ('fe80::34b8:fff:feea:4717', 0, 0, 0)), (<AddressFamily.AF_INET6: 10>, <SocketKind.SOCK_DGRAM: 2>, 17, '', ('fe80::34b8:fff:feea:4717', 0, 0, 0))]
ubuntu@juju-5b1810-octavia-6:~$ hostname --fqdn
juju-5b1810-octavia-6
ubuntu@juju-5b1810-octavia-6:~$ hostname --all-fqdns
juju-5b1810-octavia-6.cloud.sts juju-5b1810-octavia-6
ubuntu@juju-5b1810-octavia-6:~$ hostname -f
juju-5b1810-octavia-6
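For comparison, a minimal helper along the lines recommended above (my own sketch, not the charm's code) that resolves the canonical name via getaddrinfo and falls back to the bare hostname when no canonical name comes back:

python3 - <<'EOF'
import socket

def canonical_name(host=None):
    # getaddrinfo with AI_CANONNAME instead of socket.getfqdn
    host = host or socket.gethostname()
    addrs = socket.getaddrinfo(host, None, 0, socket.SOCK_DGRAM, 0, socket.AI_CANONNAME)
    return addrs[0][3] or host

print(canonical_name())
EOF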

In particular, when /etc/resolv.conf inside the VM contains multiple search entries (e.g. MAAS DNS plus neutron ml2-dns) and the ml2-dns search domain is listed first, the l2-agent on the octavia unit cannot handle the FQDN correctly and o-hm0 ends up without an IP, see: https://bugs.launchpad.net/charm-octavia/+bug/1845303/comments/15

while true; do for X in {0..2}; do juju ssh octavia/$X "cat /etc/resolv.conf | grep -v "^#"; sudo python -c 'import socket; name=socket.gethostname(); addrs = socket.getaddrinfo(name, None, 0, socket.SOCK_DGRAM, 0, socket.AI_CANONNAME); print(addrs)'" 2>/dev/null; done; done

The fix is to set UseDomains=route in systemd-networkd.
UseDomains accepts a boolean or the special value "route". When set to true, the domain name received from the DHCP server (neutron dns) is used as a DNS search domain on that link. When set to "route", the domain name received from the DHCP server is only used for routing DNS queries, not as a search domain.
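A minimal sketch of such a drop-in on the octavia unit, assuming a netplan-generated systemd-networkd config for ens3 (the file path, interface name and section name are illustrative; older systemd releases use [DHCP] instead of [DHCPv4]):

sudo mkdir -p /etc/systemd/network/10-netplan-ens3.network.d
cat <<EOF | sudo tee /etc/systemd/network/10-netplan-ens3.network.d/use-domains.conf
[DHCPv4]
UseDomains=route
EOF
sudo systemctl restart systemd-networkd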

Generating keys

#https://www.dazhuanlan.com/2019/12/15/5df63aed10999
#generate ca key pairs
mkdir -p ca/{private,certs,newcerts} && cd ca
openssl genrsa -aes256 -passout pass:password -out private/ca.key.pem 4096
chmod 400 private/ca.key.pem
wget https://jamielinux.com/docs/openssl-certificate-authority/_downloads/root-config.txt -O openssl.cnf
sed -i "s,/root/ca,.,g" openssl.cnf
openssl req -config ./openssl.cnf -key private/ca.key.pem -new -x509 -days 7300 -sha256 -extensions v3_ca -passin pass:password -subj "/C=CN/ST=BJ/O=STS/OU=quqi.com/CN=quqi.com.rootca/emailAddress=quqi@mail.com" -out certs/ca.cert.pem
chmod 444 certs/ca.cert.pem
openssl x509 -noout -text -in certs/ca.cert.pem  #verify

#generate intermediate key pairs
mkdir -p intermediate/{certs,crl,csr,newcerts,private}
chmod 744 intermediate/private
touch index.txt && echo 1000 > serial && echo 1000 > crlnumber
openssl genrsa -aes256 -passout pass:password -out intermediate/private/intermediate.key.pem 4096
chmod 400 intermediate/private/intermediate.key.pem
cp ./openssl.cnf ./openssl-im.cnf
#modify the following section of openssl-im.cnf file
[ CA_default ]
dir             = .
private_key     = $dir/private/intermediate.key.pem
certificate     = $dir/certs/intermediate.cert.pem
crl             = $dir/crl/intermediate.crl.pem
policy          = policy_loose
openssl req -config ./openssl-im.cnf -new -sha256 -passin pass:password -subj "/C=CN/ST=BJ/O=STS/OU=quqi.com/CN=quqi.com.imca/emailAddress=quqi@mail.com" -key intermediate/private/intermediate.key.pem -out intermediate/csr/intermediate.csr.pem
openssl ca -config ./openssl.cnf -extensions v3_intermediate_ca -days 3650 -notext -md sha256 -passin pass:password -in intermediate/csr/intermediate.csr.pem -out intermediate/certs/intermediate.cert.pem
chmod 444 intermediate/certs/intermediate.cert.pem
openssl x509 -noout -text -in intermediate/certs/intermediate.cert.pem
openssl verify -CAfile certs/ca.cert.pem intermediate/certs/intermediate.cert.pem

#generate certificate chain
cat intermediate/certs/intermediate.cert.pem certs/ca.cert.pem > intermediate/certs/ca-chain.cert.pem
chmod 444 intermediate/certs/ca-chain.cert.pem

#generate client.quqi.com key pairs
openssl genrsa -out intermediate/private/client.quqi.com.key.pem 2048
chmod 444 intermediate/private/client.quqi.com.key.pem
openssl req -config ./openssl-im.cnf -key intermediate/private/client.quqi.com.key.pem -subj "/C=CN/ST=BJ/O=STS/OU=quqi.com/CN=client.quqi.com/emailAddress=quqi@mail.com" -new -sha256 -out intermediate/csr/client.quqi.com.csr.pem
openssl ca -config ./openssl.cnf -extensions server_cert -days 3650 -notext -md sha256 -passin pass:password -in intermediate/csr/client.quqi.com.csr.pem -out intermediate/certs/client.quqi.com.cert.pem
chmod 444 intermediate/certs/client.quqi.com.cert.pem
openssl x509 -noout -text -in intermediate/certs/client.quqi.com.cert.pem
openssl verify -CAfile intermediate/certs/ca-chain.cert.pem intermediate/certs/client.quqi.com.cert.pem

Certificates

openssl req -newkey rsa:4096 -x509 -sha256 -days 3650 -nodes -out ca.crt -keyout ca.key -subj "/C=US/ST=UK/L=London/O=Ubuntu/OU=IT/CN=CA"
openssl genrsa -out server.key
openssl req -new -key server.key -out server.csr -subj "/C=GB/ST=UK/L=London/O=Ubuntu/OU=Cloud/CN=server"
openssl x509 -req -in server.csr -out server.crt -sha256 -CA ca.crt -CAkey ca.key -CAcreateserial -days 3650
cat server.crt server.key > server.pem
openssl x509 -noout -text -in server.crt |grep CN
#openssl pkcs12 -export -inkey server.key -in server.crt -certfile ca.crt -passout pass: -out server.p12
#kubectl create secret generic keystone-auth-certs --from-file=cert-file=server.crt --from-file=key-file=server.key -n kube-system

sudo apt install python3-minimal -y
sudo bash -c 'cat >simple-https-server.py' <<EOF
#!/usr/bin/env python3
# coding=utf-8
import http.server, ssl
server_address = ('0.0.0.0', 443)
httpd = http.server.HTTPServer(server_address, http.server.SimpleHTTPRequestHandler)
httpd.socket = ssl.wrap_socket(httpd.socket,server_side=True,keyfile='server.key',certfile='server.crt',ssl_version=ssl.PROTOCOL_TLS)
httpd.serve_forever()
EOF
sudo bash -c 'cat >index.html' <<EOF
test1
EOF
sudo python3 simple-https-server.py
#nohup sudo python3 simple-https-server.py &

$ curl -k https://192.168.99.136
test1
$ curl --cacert ./ca.crt https://192.168.99.136
curl: (60) SSL: certificate subject name 'server' does not match target host name '192.168.99.136'
$ curl --resolve server:443:192.168.99.136 --cacert ./ca.crt https://server
test1

Note: when creating the key/cert for keystone, use a domain name rather than an IP to avoid SNI problems, and be sure to configure a hostname for keystone so that openstackclient also gets a URL carrying the domain (a sketch follows).
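A minimal sketch of doing that in a charm deployment (the same commands appear in the 20201030 section below):

DOMAIN=$(juju ssh keystone/0 -- hostname -f |sed s/[[:space:]]//g)
juju config keystone os-admin-hostname=$DOMAIN os-public-hostname=$DOMAIN os-internal-hostname=$DOMAIN
export OS_AUTH_URL=https://${DOMAIN}:5000/v3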

20200515 update - loadbalancer-topology=ACTIVE_STANDBY

After running 'juju config octavia loadbalancer-topology=ACTIVE_STANDBY':

ubuntu@zhhuabj-bastion:~$ openstack loadbalancer list
+--------------------------------------+------------------------------------------------------------------------+----------------------------------+---------------+---------------------+----------+
| id                                   | name                                                                   | project_id                       | vip_address   | provisioning_status | provider |
+--------------------------------------+------------------------------------------------------------------------+----------------------------------+---------------+---------------------+----------+
| dc343e21-0dca-4be3-8815-c5645e07c28d | kube_service_kubernetes-tkwt84oxsw1bvxt0xorlgp3ibcrwgjo9_default_hello | 40cd6bca224f46c9b34c0f6813c1f2d0 | 192.168.21.63 | ACTIVE              | amphora  |
+--------------------------------------+------------------------------------------------------------------------+----------------------------------+---------------+---------------------+----------+
ubuntu@zhhuabj-bastion:~$ openstack loadbalancer amphora list
+--------------------------------------+--------------------------------------+-----------+--------+-----------------------------------------+---------------+
| id                                   | loadbalancer_id                      | status    | role   | lb_network_ip                           | ha_ip         |
+--------------------------------------+--------------------------------------+-----------+--------+-----------------------------------------+---------------+
| a82b27d0-68cb-49a7-9387-4df3dbe617ca | dc343e21-0dca-4be3-8815-c5645e07c28d | ALLOCATED | BACKUP | fc00:9084:1613:154e:f816:3eff:fee3:7bf8 | 192.168.21.63 |
| d03e0ab4-a23c-4a4c-939c-3bcaf5356e30 | dc343e21-0dca-4be3-8815-c5645e07c28d | ALLOCATED | MASTER | fc00:9084:1613:154e:f816:3eff:fe70:dc9c | 192.168.21.63 |
+--------------------------------------+--------------------------------------+-----------+--------+-----------------------------------------+---------------+
ubuntu@zhhuabj-bastion:~$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
hello        LoadBalancer   10.152.183.26   10.5.150.54   80:32371/TCP   9m17s
kubernetes   ClusterIP      10.152.183.1    <none>        443/TCP        16h

juju scp ~/.ssh/id_amphora* nova-compute/8:/home/ubuntu/
juju ssh nova-compute/8 -- sudo ip netns exec qrouter-b03302b5-fc48-4ef7-8ded-ba17ac20c6da ssh -6 -i ~/id_amphora ubuntu@fc00:9084:1613:154e:f816:3eff:fe70:dc9c
ubuntu@amphora-d03e0ab4-a23c-4a4c-939c-3bcaf5356e30:~$ sudo cat /var/lib/octavia/dc343e21-0dca-4be3-8815-c5645e07c28d/haproxy.cfg
sudo: unable to resolve host amphora-d03e0ab4-a23c-4a4c-939c-3bcaf5356e30
# Configuration for loadbalancer dc343e21-0dca-4be3-8815-c5645e07c28d
global
    daemon
    user nobody
    log /dev/log local0
    log /dev/log local1 notice
    stats socket /var/lib/octavia/dc343e21-0dca-4be3-8815-c5645e07c28d.sock mode 0666 level user
    maxconn 1000000
defaults
    log global
    retries 3
    option redispatch
    option splice-request
    option splice-response
    option http-keep-alive
peers dc343e210dca4be38815c5645e07c28d_peers
    peer dVi2pNT1gedR2yc35WJnwbb5fQI 192.168.21.60:1025
    peer ejcAi8u6hFfukxfJNty7MxS8unY 192.168.21.241:1025
frontend 3d6a3f5a-c35d-4718-809d-e1cce82b059b
    option tcplog
    maxconn 1000000
    bind 192.168.21.63:80
    mode tcp
    default_backend c28d0103-75cd-42f2-ba7b-ee7c3946eb81:3d6a3f5a-c35d-4718-809d-e1cce82b059b
    timeout client 50000
backend c28d0103-75cd-42f2-ba7b-ee7c3946eb81:3d6a3f5a-c35d-4718-809d-e1cce82b059b
    mode tcp
    balance roundrobin
    fullconn 1000000
    option allbackups
    timeout connect 5000
    timeout server 50000
    server 193a2e05-12a5-4aa3-baf2-1bcc0847324c 192.168.21.160:32371 weight 1
    server 79ea1870-45e9-4934-9a11-5826271dec96 192.168.21.252:32371 weight 1
ubuntu@amphora-d03e0ab4-a23c-4a4c-939c-3bcaf5356e30:~$ sudo cat /var/lib/octavia/vrrp/octavia-keepalived.conf
sudo: unable to resolve host amphora-d03e0ab4-a23c-4a4c-939c-3bcaf5356e30
vrrp_script check_script {
    script /var/lib/octavia/vrrp/check_script.sh
    interval 5
    fall 2
    rise 2
}
vrrp_instance dc343e210dca4be38815c5645e07c28d {
    state MASTER
    interface eth1
    virtual_router_id 1
    priority 100
    nopreempt
    accept
    garp_master_refresh 5
    garp_master_refresh_repeat 2
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 36750c5
    }
    unicast_src_ip 192.168.21.241
    unicast_peer {
        192.168.21.60
    }
    virtual_ipaddress {
        192.168.21.63
    }
    virtual_routes {
        192.168.21.0/24 dev eth1 src 192.168.21.63 scope link table 1
    }
    virtual_rules {
        from 192.168.21.63/32 table 1 priority 100
    }
    track_script {
        check_script
    }
}
ubuntu@amphora-d03e0ab4-a23c-4a4c-939c-3bcaf5356e30:~$ sudo ip netns exec amphora-haproxy ip addr show
sudo: unable to resolve host amphora-d03e0ab4-a23c-4a4c-939c-3bcaf5356e30
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1508 qdisc fq state UP group default qlen 1000
    link/ether fa:16:3e:67:47:a7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.21.241/24 brd 192.168.21.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 192.168.21.63/32 scope global eth1
       valid_lft forever preferred_lft forever

juju ssh nova-compute/8 -- sudo ip netns exec qrouter-b03302b5-fc48-4ef7-8ded-ba17ac20c6da ssh -6 -i ~/id_amphora ubuntu@fc00:9084:1613:154e:f816:3eff:fee3:7bf8
ubuntu@amphora-a82b27d0-68cb-49a7-9387-4df3dbe617ca:~$ sudo cat /var/lib/octavia/dc343e21-0dca-4be3-8815-c5645e07c28d/haproxy.cfg
sudo: unable to resolve host amphora-a82b27d0-68cb-49a7-9387-4df3dbe617ca
# Configuration for loadbalancer dc343e21-0dca-4be3-8815-c5645e07c28d
global
    daemon
    user nobody
    log /dev/log local0
    log /dev/log local1 notice
    stats socket /var/lib/octavia/dc343e21-0dca-4be3-8815-c5645e07c28d.sock mode 0666 level user
    maxconn 1000000
defaults
    log global
    retries 3
    option redispatch
    option splice-request
    option splice-response
    option http-keep-alive
peers dc343e210dca4be38815c5645e07c28d_peers
    peer dVi2pNT1gedR2yc35WJnwbb5fQI 192.168.21.60:1025
    peer ejcAi8u6hFfukxfJNty7MxS8unY 192.168.21.241:1025
frontend 3d6a3f5a-c35d-4718-809d-e1cce82b059b
    option tcplog
    maxconn 1000000
    bind 192.168.21.63:80
    mode tcp
    default_backend c28d0103-75cd-42f2-ba7b-ee7c3946eb81:3d6a3f5a-c35d-4718-809d-e1cce82b059b
    timeout client 50000
backend c28d0103-75cd-42f2-ba7b-ee7c3946eb81:3d6a3f5a-c35d-4718-809d-e1cce82b059b
    mode tcp
    balance roundrobin
    fullconn 1000000
    option allbackups
    timeout connect 5000
    timeout server 50000
    server 193a2e05-12a5-4aa3-baf2-1bcc0847324c 192.168.21.160:32371 weight 1
    server 79ea1870-45e9-4934-9a11-5826271dec96 192.168.21.252:32371 weight 1
ubuntu@amphora-a82b27d0-68cb-49a7-9387-4df3dbe617ca:~$ sudo cat /var/lib/octavia/vrrp/octavia-keepalived.conf
sudo: unable to resolve host amphora-a82b27d0-68cb-49a7-9387-4df3dbe617ca
vrrp_script check_script {
    script /var/lib/octavia/vrrp/check_script.sh
    interval 5
    fall 2
    rise 2
}
vrrp_instance dc343e210dca4be38815c5645e07c28d {
    state BACKUP
    interface eth1
    virtual_router_id 1
    priority 90
    nopreempt
    accept
    garp_master_refresh 5
    garp_master_refresh_repeat 2
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 36750c5
    }
    unicast_src_ip 192.168.21.60
    unicast_peer {
        192.168.21.241
    }
    virtual_ipaddress {
        192.168.21.63
    }
    virtual_routes {
        192.168.21.0/24 dev eth1 src 192.168.21.63 scope link table 1
    }
    virtual_rules {
        from 192.168.21.63/32 table 1 priority 100
    }
    track_script {
        check_script
    }
}
ubuntu@amphora-a82b27d0-68cb-49a7-9387-4df3dbe617ca:~$ sudo ip netns exec amphora-haproxy ip addr show
sudo: unable to resolve host amphora-a82b27d0-68cb-49a7-9387-4df3dbe617ca
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1508 qdisc fq state UP group default qlen 1000
    link/ether fa:16:3e:b7:fb:25 brd ff:ff:ff:ff:ff:ff
    inet 192.168.21.60/24 brd 192.168.21.255 scope global eth1
       valid_lft forever preferred_lft forever

debug

nova keypair-list --user $(openstack user show octavia --domain service_domain -f value -c id)
#nova keypair-add --pub-key=~/.ssh/id_amphora.pub amphora-backdoor --user $(openstack user show octavia --domain service_domain -f value -c id)
GW_HOST=$(neutron l3-agent-list-hosting-router lb-mgmt -f value -c host)
juju scp ~/.ssh/id_amphora* ${GW_HOST}:/home/ubuntu/
AMP_IP6=$(nova list --all |grep amphora |awk -F '=' '{print $2}' |awk '{print $1}')
juju ssh $GW_HOST -- sudo ip netns exec qrouter-4d236939-4586-499b-b620-b7f6923d605a ssh -6 -i ~/.ssh/id_amphora ubuntu@$AMP_IP6 -v
sudo ip netns exec amphora-haproxy nc -w4 -vz 10.24.45.133 80
juju config octavia amp-ssh-key-name=amphora-backdoor

select * from amphora;
select * from load_balancer;
select * from member;
select * from pool;
SELECT load_balancer.id, load_balancer.enabled, \
load_balancer.provisioning_status AS lb_prov_status, \
load_balancer.operating_status AS lb_op_status, \
listener.id AS list_id, \
listener.operating_status AS list_op_status, \
listener.enabled AS list_enabled, \
listener.protocol AS list_protocol, \
pool.id AS pool_id, \
pool.operating_status AS pool_op_status, \
member.id AS member_id, \
member.operating_status AS mem_op_status from \
amphora JOIN load_balancer ON \
amphora.load_balancer_id = load_balancer.id LEFT JOIN \
listener ON load_balancer.id = listener.load_balancer_id \
LEFT JOIN pool ON load_balancer.id = pool.load_balancer_id \
LEFT JOIN member ON pool.id = member.pool_id WHERE \
amphora.id = '5ceb9e07d203464992634e5f6d6bc778';

20201030 update

#./generate-bundle.sh --name ovn-octavia --series bionic --release ussuri --octavia --ovn --create-model --run
#juju config neutron-api default-tenant-network-type=gre
#openstack --insecure endpoint list
#DOMAIN=$(juju ssh keystone/0 -- hostname -f |sed s/[[:space:]]//g)
#juju config keystone os-admin-hostname=$DOMAIN os-public-hostname=$DOMAIN os-internal-hostname=$DOMAIN
#source novarc
#export OS_AUTH_URL=https://${DOMAIN}:5000/v3
#export OS_CACERT=/etc/ssl/certs/    #need install cacert.pem into system level as well
#juju ssh keystone/0 -- openssl x509 -noout -text -in /etc/apache2/ssl/keystone/cert_10.5.1.45 |grep CN
#curl --resolve ${DOMAIN}:5000:10.5.1.45 --cacert ~/stsstack-bundles/openstack/ssl/openstack-ssl/results/cacert.pem https://${DOMAIN}:5000
#./tools/vault-unseal-and-authorise.sh
#./tools/configure_octavia.sh && openstack port show octavia-health-manager-octavia-0-listen-port

./generate-bundle.sh --name octavia:stsstack --create-model --octavia -s focal --num-compute 1
./generate-bundle.sh --name octavia:stsstack --replay --run
./configure
source novarc
./tools/instance_launch.sh 1 cirros2
./tools/float_all.sh
./tools/sec_groups.sh
./tools/configure_octavia.sh
openstack port show octavia-health-manager-octavia-0-listen-port

# The first common problem: o-hm0 gets a "binding failed" error, usually because the host was not registered with its FQDN
# (https://bugs.launchpad.net/charm-octavia/+bug/1902765, workaround: openstack port set --host <octavia-unit-fqdn> <port-uuid>),
# i.e. the host on the port does not match the host the neutron-ovs-agent registered with.
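A minimal sketch of that workaround (port name as used below; the FQDN must match what the neutron agent registered):

OCTAVIA_FQDN=$(juju ssh octavia/0 -- hostname -f |sed s/[[:space:]]//g)
PORT_ID=$(openstack port show octavia-health-manager-octavia-0-listen-port -f value -c id)
openstack port set --host ${OCTAVIA_FQDN} ${PORT_ID}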
#fix binding failed error 'Cannot get tag for port o-hm0 from its other_config: {}'
neutron router-gateway-clear lb-mgmt
neutron router-interface-delete lb-mgmt lb-mgmt-subnetv6
neutron subnet-delete lb-mgmt-subnetv6
neutron port-delete octavia-health-manager-octavia-0-listen-port
neutron net-delete lb-mgmt-net
neutron router-delete lb-mgmt
#./tools/create_ipv4_octavia.sh
openstack network create lb-mgmt-net --tag charm-octavia
openstack subnet create --tag charm-octavia --subnet-range 21.0.0.0/29 --dhcp  --ip-version 4 --network lb-mgmt-net lb-mgmt-subnet
openstack router create lb-mgmt --tag charm-octavia
openstack router add subnet lb-mgmt lb-mgmt-subnet   #neutron router-interface-add lb-mgmt lb-mgmt-subnet
#openstack security group create lb-mgmt-sec-grp --project $(openstack security group show lb-mgmt-sec-grp -f value -c project_id)
openstack security group rule create --protocol udp --dst-port 5555 lb-mgmt-sec-grp
openstack security group rule create --protocol tcp --dst-port 22 lb-mgmt-sec-grp
openstack security group rule create --protocol tcp --dst-port 9443 lb-mgmt-sec-grp
openstack security group rule create --protocol icmp lb-mgmt-sec-grp
openstack security group show lb-mgmt-sec-grp
#openstack security group create lb-health-mgr-sec-grp --project $(openstack security group show lb-mgmt-sec-grp -f value -c project_id)
openstack security group rule create --protocol udp --dst-port 5555 lb-health-mgr-sec-grp
openstack security group rule create --protocol icmp lb-health-mgr-sec-grp
LB_HOST=$(juju ssh octavia/0 -- hostname -f)  #should be fqdn name juju-305883-octavia-10.cloud.sts
juju ssh octavia/0 -- sudo ovs-vsctl del-port br-int o-hm0
neutron port-delete mgmt-port
neutron port-create --name mgmt-port --security-group $(openstack security group show lb-health-mgr-sec-grp -f value -c id) --device-owner Octavia:health-mgr --binding:host_id="juju-8284e4-ovn-octavia-7.cloud.sts" lb-mgmt-net --tenant-id $(openstack security group show lb-health-mgr-sec-grp -f value -c project_id)
neutron port-show mgmt-port |grep binding
juju ssh octavia/0 -- sudo ovs-vsctl del-port br-int o-hm0
juju ssh octavia/0 -- sudo ovs-vsctl --may-exist add-port br-int o-hm0 -- set Interface o-hm0 type=internal -- set Interface o-hm0 external-ids:iface-status=active -- set Interface o-hm0 external-ids:attached-mac=$(neutron port-show mgmt-port -f value -c mac_address) -- set Interface o-hm0 external-ids:iface-id=$(neutron port-show mgmt-port -f value -c id)
juju ssh octavia/0 -- sudo ip link set dev o-hm0 address $(neutron port-show mgmt-port -f value -c mac_address)

#finally need to change amp_boot_network_list in octavia.conf, then run 'systemctl restart octavia*'

#./tools/upload_octavia_amphora_image.sh --release focal
juju run-action octavia-diskimage-retrofit/0 --wait retrofit-image source-image=$(openstack image list |grep focal |awk '{print $2}')
#openstack loadbalancer create --name lb2 --vip-network-id lb-mgmt
sudo snap install --edge --devmode octavia-diskimage-retrofit
sudo -s
cd /var/snap/octavia-diskimage-retrofit/common/tmp
wget https://cloud-images.ubuntu.com/minimal/releases/focal/release/ubuntu-20.04-minimal-cloudimg-amd64.img
/snap/bin/octavia-diskimage-retrofit ubuntu-20.04-minimal-cloudimg-amd64.img ubuntu-amphora-haproxy-amd64.qcow2 -d u ussuri
exit
# this image is just for ussuri
openstack image create --disk-format qcow2 --container-format bare --public --tag octavia-amphora --file /var/snap/octavia-diskimage-retrofit/common/tmp/ubuntu-amphora-haproxy-amd64.qcow2 amphora-bionic-x64-haproxy

#./tools/create_octavia_lb.sh --name lb1 --member-vm bionic-081730 --protocol HTTP --protocol-port 80
sudo apt install python3-octaviaclient -y
openstack loadbalancer create --name lb1 --vip-subnet-id private_subnet
while true; do
  [[ `openstack loadbalancer show lb1 --column provisioning_status --format value` = ACTIVE ]] && break
  echo "waiting for lb1"
done
openstack loadbalancer listener create --name lb1-listener --protocol HTTP --protocol-port 80 lb1
while true; do
  [[ `openstack loadbalancer listener show lb1-listener --column provisioning_status --format value` = ACTIVE ]] && break
  echo "waiting for lb1-listener"
done
openstack loadbalancer pool create --name lb1-pool --lb-algorithm LEAST_CONNECTIONS \
--session-persistence type=SOURCE_IP --listener lb1-listener --protocol HTTP
while true; do
  [[ `openstack loadbalancer pool show lb1-pool --column provisioning_status --format value` = ACTIVE ]] && break
  echo "waiting for lb1-pool"
done
member1_IP='192.168.21.48'
member_id=$(openstack loadbalancer member create --subnet-id private_subnet --address $member1_IP --protocol-port 80 --format value --column id lb1-pool)
while true; do
[[ $(openstack loadbalancer member show --format value --column provisioning_status lb1-pool ${member_id}) = ACTIVE ]] && break
echo "waiting for member (${member_id})"
done
openstack loadbalancer healthmonitor create --name lb1-monitor --timeout 120 --max-retries 3 --delay 5 --type PING lb1-pool
openstack loadbalancer healthmonitor list

# The second common problem: if lb1 has no management IP, o-hm0 cannot receive UDP heartbeats from lb1's mgmt address on port 5555;
# you will see the following log repeatedly and the amphora_health table stays empty (a heartbeat check sketch follows the SQL queries below).
2020-10-29 10:13:25.201 31215 DEBUG futurist.periodics [-] Submitting periodic callback 'octavia.cmd.health_manager.hm_health_check.<locals>.periodic_health_check' _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:642

# debug sql
select * from health_monitor;
select * from load_balancer;
select * from listener;
select * from pool;
select * from member;
SELECT load_balancer.id, load_balancer.enabled, \
load_balancer.provisioning_status AS lb_prov_status, \
load_balancer.operating_status AS lb_op_status, \
listener.id AS list_id, \
listener.operating_status AS list_op_status, \
listener.enabled AS list_enabled, \
listener.protocol AS list_protocol, \
pool.id AS pool_id, \
pool.operating_status AS pool_op_status, \
member.id AS member_id, \
member.operating_status AS mem_op_status from \
amphora JOIN load_balancer ON \
amphora.load_balancer_id = load_balancer.id LEFT JOIN \
listener ON load_balancer.id = listener.load_balancer_id \
LEFT JOIN pool ON load_balancer.id = pool.load_balancer_id \
LEFT JOIN member ON pool.id = member.pool_id WHERE \
amphora.id = '44c230a150db410082ef209e2b6392fb';
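To confirm whether the heartbeats mentioned above are actually arriving, a quick check on the octavia unit (tcpdump usage is standard but illustrative here; the health manager listens on o-hm0:5555):

juju ssh octavia/0 -- sudo tcpdump -c 5 -ni o-hm0 udp port 5555
select * from amphora_health;   # stays empty while no heartbeats arrive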

20210305 update

./generate-bundle.sh --name octavia --series bionic --release ussuri --dvr --l3ha --octavia --num-compute 3 --create-model --run
juju config neutron-api enable-dvr=True
juju config neutron-openvswitch use-dvr-snat=True
juju config neutron-api enable-l3ha=true
juju config neutron-openvswitch enable-local-dhcp-and-metadata=True
./configure
juju config octavia loadbalancer-topology=ACTIVE_STANDBY
source novarc
#juju run-action octavia-diskimage-retrofit/0 --wait retrofit-image source-image=<uuid>'
./tools/configure_octavia.sh
./tools/upload_octavia_amphora_image.sh --release ussuri  #use a pre-created image
./tools/instance_launch.sh 1 bionic
./tools/float_all.sh
./tools/sec_groups.sh
ssh -i ~/testkey.priv ubuntu@10.5.153.58 -- sudo apt update && sudo apt install apache2 -y
./tools/create_octavia_lb.sh --name lb1 --member-vm bionic-003049 --protocol HTTP --protocol-port 80

Some data

$ openstack loadbalancer show lb1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| availability_zone   | None                                 |
| created_at          | 2021-06-01T06:20:13                  |
| description         |                                      |
| flavor_id           | None                                 |
| id                  | 6e3e3679-038e-46b5-a76e-f9b83d48d1b5 |
| listeners           | 3bc7d0b8-aaf5-44ce-a5fe-28d9c9183fe0 |
| name                | lb1                                  |
| operating_status    | OFFLINE                              |
| pools               | 6b3ecd81-55e1-4bde-af49-f937e5823942 |
| project_id          | ca3908bc516741eeae3adacffe93a5d8     |
| provider            | amphora                              |
| provisioning_status | ACTIVE                               |
| updated_at          | 2021-06-01T06:28:38                  |
| vip_address         | 192.168.21.254                       |
| vip_network_id      | 76e499f9-23ec-46f7-b761-e61f170d0e08 |
| vip_port_id         | 1a567bbc-6309-4e12-987b-503c23c7e145 |
| vip_qos_policy_id   | None                                 |
| vip_subnet_id       | 6f84f38a-1e45-4525-acce-b964923c0486 |
+---------------------+--------------------------------------+

$ openstack loadbalancer listener show lb1-listener
+-----------------------------+--------------------------------------+
| Field                       | Value                                |
+-----------------------------+--------------------------------------+
| admin_state_up              | True                                 |
| connection_limit            | -1                                   |
| created_at                  | 2021-06-01T06:22:55                  |
| default_pool_id             | 6b3ecd81-55e1-4bde-af49-f937e5823942 |
| default_tls_container_ref   | None                                 |
| description                 |                                      |
| id                          | 3bc7d0b8-aaf5-44ce-a5fe-28d9c9183fe0 |
| insert_headers              | None                                 |
| l7policies                  |                                      |
| loadbalancers               | 6e3e3679-038e-46b5-a76e-f9b83d48d1b5 |
| name                        | lb1-listener                         |
| operating_status            | OFFLINE                              |
| project_id                  | ca3908bc516741eeae3adacffe93a5d8     |
| protocol                    | HTTP                                 |
| protocol_port               | 80                                   |
| provisioning_status         | ACTIVE                               |
| sni_container_refs          | []                                   |
| timeout_client_data         | 50000                                |
| timeout_member_connect      | 5000                                 |
| timeout_member_data         | 50000                                |
| timeout_tcp_inspect         | 0                                    |
| updated_at                  | 2021-06-01T06:28:38                  |
| client_ca_tls_container_ref | None                                 |
| client_authentication       | NONE                                 |
| client_crl_container_ref    | None                                 |
| allowed_cidrs               | None                                 |
| tls_ciphers                 | None                                 |
+-----------------------------+--------------------------------------+

$ openstack loadbalancer pool show lb1-pool
+----------------------+--------------------------------------+
| Field                | Value                                |
+----------------------+--------------------------------------+
| admin_state_up       | True                                 |
| created_at           | 2021-06-01T06:23:08                  |
| description          |                                      |
| healthmonitor_id     | afbc71bd-5f2b-44c6-ac48-e8ca10fe3f6e |
| id                   | 6b3ecd81-55e1-4bde-af49-f937e5823942 |
| lb_algorithm         | ROUND_ROBIN                          |
| listeners            | 3bc7d0b8-aaf5-44ce-a5fe-28d9c9183fe0 |
| loadbalancers        | 6e3e3679-038e-46b5-a76e-f9b83d48d1b5 |
| members              | d526f4bf-4d3a-4310-8c09-8c3c1a33a0a1 |
| name                 | lb1-pool                             |
| operating_status     | OFFLINE                              |
| project_id           | ca3908bc516741eeae3adacffe93a5d8     |
| protocol             | HTTP                                 |
| provisioning_status  | ACTIVE                               |
| session_persistence  | None                                 |
| updated_at           | 2021-06-01T06:28:38                  |
| tls_container_ref    | None                                 |
| ca_tls_container_ref | None                                 |
| crl_container_ref    | None                                 |
| tls_enabled          | False                                |
| tls_ciphers          | None                                 |
+----------------------+--------------------------------------+

$ openstack loadbalancer member list lb1-pool
+--------------------------------------+------+----------------------------------+---------------------+---------------+---------------+------------------+--------+
| id                                   | name | project_id                       | provisioning_status | address       | protocol_port | operating_status | weight |
+--------------------------------------+------+----------------------------------+---------------------+---------------+---------------+------------------+--------+
| d526f4bf-4d3a-4310-8c09-8c3c1a33a0a1 |      | ca3908bc516741eeae3adacffe93a5d8 | ACTIVE              | 192.168.21.34 |            80 | OFFLINE          |      1 |
+--------------------------------------+------+----------------------------------+---------------------+---------------+---------------+------------------+--------+

$ openstack loadbalancer amphora list --loadbalancer $(openstack loadbalancer show lb1 -f value -c id)
+--------------------------------------+--------------------------------------+-----------+--------+-----------------------------------------+----------------+
| id                                   | loadbalancer_id                      | status    | role   | lb_network_ip                           | ha_ip          |
+--------------------------------------+--------------------------------------+-----------+--------+-----------------------------------------+----------------+
| 650ace64-1e10-491e-bb9c-ddec3ec536f2 | 6e3e3679-038e-46b5-a76e-f9b83d48d1b5 | ALLOCATED | MASTER | fc00:bd3f:2b32:4808:f816:3eff:fe12:91c5 | 192.168.21.254 |
| afb96987-9328-4c24-8b02-015393da211c | 6e3e3679-038e-46b5-a76e-f9b83d48d1b5 | ALLOCATED | BACKUP | fc00:bd3f:2b32:4808:f816:3eff:fe10:6360 | 192.168.21.254 |
+--------------------------------------+--------------------------------------+-----------+--------+-----------------------------------------+----------------+

$ openstack loadbalancer amphora show 650ace64-1e10-491e-bb9c-ddec3ec536f2
+-----------------+-----------------------------------------+
| Field           | Value                                   |
+-----------------+-----------------------------------------+
| id              | 650ace64-1e10-491e-bb9c-ddec3ec536f2    |
| loadbalancer_id | 6e3e3679-038e-46b5-a76e-f9b83d48d1b5    |
| compute_id      | c116e8d6-85c7-4d29-adab-9b22f5332910    |
| lb_network_ip   | fc00:bd3f:2b32:4808:f816:3eff:fe12:91c5 |
| vrrp_ip         | 192.168.21.46                           |
| ha_ip           | 192.168.21.254                          |
| vrrp_port_id    | e44dd694-fc44-45dd-858a-f4c80ca9b006    |
| ha_port_id      | 1a567bbc-6309-4e12-987b-503c23c7e145    |
| cert_expiration | 2021-07-01T06:20:19                     |
| cert_busy       | False                                   |
| role            | MASTER                                  |
| status          | ALLOCATED                               |
| vrrp_interface  | eth1                                    |
| vrrp_id         | 1                                       |
| vrrp_priority   | 100                                     |
| cached_zone     | nova                                    |
| created_at      | 2021-06-01T06:20:19                     |
| updated_at      | 2021-06-01T06:22:49                     |
| image_id        | 59b3fa63-b0a7-4e81-8670-0448184b7a0e    |
| compute_flavor  | b80aed58-cd0b-4af7-ad86-15e74864f105    |
+-----------------+-----------------------------------------+

$ openstack loadbalancer amphora show afb96987-9328-4c24-8b02-015393da211c
+-----------------+-----------------------------------------+
| Field           | Value                                   |
+-----------------+-----------------------------------------+
| id              | afb96987-9328-4c24-8b02-015393da211c    |
| loadbalancer_id | 6e3e3679-038e-46b5-a76e-f9b83d48d1b5    |
| compute_id      | 97937653-1edb-466f-adb1-b14be73fe27a    |
| lb_network_ip   | fc00:bd3f:2b32:4808:f816:3eff:fe10:6360 |
| vrrp_ip         | 192.168.21.243                          |
| ha_ip           | 192.168.21.254                          |
| vrrp_port_id    | 1695e67c-8f77-4860-832e-3b453faae766    |
| ha_port_id      | 1a567bbc-6309-4e12-987b-503c23c7e145    |
| cert_expiration | 2021-07-01T06:20:19                     |
| cert_busy       | False                                   |
| role            | BACKUP                                  |
| status          | ALLOCATED                               |
| vrrp_interface  | eth1                                    |
| vrrp_id         | 1                                       |
| vrrp_priority   | 90                                      |
| cached_zone     | nova                                    |
| created_at      | 2021-06-01T06:20:19                     |
| updated_at      | 2021-06-01T06:22:49                     |
| image_id        | 59b3fa63-b0a7-4e81-8670-0448184b7a0e    |
| compute_flavor  | b80aed58-cd0b-4af7-ad86-15e74864f105    |
+-----------------+-----------------------------------------+

$ openstack server show c116e8d6-85c7-4d29-adab-9b22f5332910
+-------------------------------------+----------------------------------------------------------------------------+
| Field                               | Value                                                                      |
+-------------------------------------+----------------------------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                                     |
| OS-EXT-AZ:availability_zone         | nova                                                                       |
| OS-EXT-SRV-ATTR:host                | juju-c2f77a-octavia-10.cloud.sts                                           |
| OS-EXT-SRV-ATTR:hypervisor_hostname | juju-c2f77a-octavia-10.cloud.sts                                           |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000002                                                          |
| OS-EXT-STS:power_state              | Running                                                                    |
| OS-EXT-STS:task_state               | None                                                                       |
| OS-EXT-STS:vm_state                 | active                                                                     |
| OS-SRV-USG:launched_at              | 2021-06-01T06:20:50.000000                                                 |
| OS-SRV-USG:terminated_at            | None                                                                       |
| accessIPv4                          |                                                                            |
| accessIPv6                          |                                                                            |
| addresses                           | lb-mgmt-net=fc00:bd3f:2b32:4808:f816:3eff:fe12:91c5; private=192.168.21.46 |
| config_drive                        | True                                                                       |
| created                             | 2021-06-01T06:20:22Z                                                       |
| flavor                              | charm-octavia (b80aed58-cd0b-4af7-ad86-15e74864f105)                       |
| hostId                              | 4e173318de05c5ead1fb8ff79ec0eadb51bc3a72a818ce197d8e59fc                   |
| id                                  | c116e8d6-85c7-4d29-adab-9b22f5332910                                       |
| image                               | octavia-amphora (59b3fa63-b0a7-4e81-8670-0448184b7a0e)                     |
| key_name                            | amphora-backdoor                                                           |
| name                                | amphora-650ace64-1e10-491e-bb9c-ddec3ec536f2                               |
| progress                            | 0                                                                          |
| project_id                          | 1f2f7a5140e548389177d7df6e84e8fa                                           |
| properties                          |                                                                            |
| security_groups                     | name='lb-mgmt-sec-grp'                                                     |
|                                     | name='lb-6e3e3679-038e-46b5-a76e-f9b83d48d1b5'                             |
| status                              | ACTIVE                                                                     |
| updated                             | 2021-06-01T06:20:51Z                                                       |
| user_id                             | b0f127415d4a4a328d54cbb3d3edc443                                           |
| volumes_attached                    |                                                                            |
+-------------------------------------+----------------------------------------------------------------------------+

20210601 - immutable and cannot be updated

As explained in [2], all objects in Octavia should and will end in a consistent state: ACTIVE or ERROR.
PENDING_* means the object is actively being worked on by a controller that has locked the object to make
sure others do not make changes to it while the controller is working on it. In other words, when we see this
immutable error, the LB's or listener's state must have been PENDING_* at the moment of the error according to [3] and [4],
and this state is transient. How can an Octavia object be in a PENDING_* state? The code in [5] shows that creating a
health monitor changes the pool status to PENDING_* and back to ACTIVE once the health monitor creation finishes.
So HM (health monitor) creation itself is fine; clients should simply wait until the LB/pool is back to ACTIVE before
adding members, otherwise they will hit the immutable error because the state will still be something like
PENDING_UPDATE (see the sketch after the references).

[2] https://bugs.launchpad.net/octavia/+bug/1498130/comments/14
[3] https://github.com/openstack/octavia/blob/stable/ussuri/octavia/api/v2/controllers/load_balancer.py#L100
[4] https://github.com/openstack/octavia/blob/stable/ussuri/octavia/api/v2/controllers/listener.py#L103
[5] https://github.com/openstack/octavia/blob/stable/ussuri/octavia/controller/worker/v2/flows/health_monitor_flows.py#L42-L45
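A minimal sketch of such a wait before adding a member, reusing the polling pattern from the LB-creation steps earlier in this doc (pool name, subnet and member address are illustrative):

while true; do
  [[ $(openstack loadbalancer pool show lb1-pool --column provisioning_status --format value) = ACTIVE ]] && break
  echo "waiting for lb1-pool to leave PENDING_*"
  sleep 2
done
openstack loadbalancer member create --subnet-id private_subnet --address 192.168.21.48 --protocol-port 80 lb1-pool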

20210603 - repairing LB state

Sometimes an LB is stuck in ERROR state, some of its amphorae have role None, and for some amphorae the VM no longer shows up in nova; this can be repaired as follows.

Only after deleting the amphora row whose role is None will the LB enter the PENDING_UPDATE state.
$ os loadbalancer amphora list | grep 6f4734ff-c766-452d-88bb-bc0cfa3f925c
| 14fd746f-b718-42ba-9c26-ee2d1783287b | 6f4734ff-c766-452d-88bb-bc0cfa3f925c | ALLOCATED | BACKUP | fc00:302b:ae3a:c95c:f816:3eff:fe79:b82b | 172.16.1.200 |
| 889f2f73-1ce7-4d56-b770-346ae3c4711a | 6f4734ff-c766-452d-88bb-bc0cfa3f925c | ALLOCATED | MASTER | fc00:302b:ae3a:c95c:f816:3eff:fee7:e31e | 172.16.1.200 |
| 3ae4b5d1-29d8-4327-817f-bbdd2fe16ba4 | 6f4734ff-c766-452d-88bb-bc0cfa3f925c | ALLOCATED | None | fc00:302b:ae3a:c95c:f816:3eff:febd:1916 | 172.16.1.200 |

# Delete the entry from amphora_health so that Octavia Health manager does not failover LB
delete from amphora_health where amphora_id = '3ae4b5d1-29d8-4327-817f-bbdd2fe16ba4';
update amphora set status='DELETED' where id = '3ae4b5d1-29d8-4327-817f-bbdd2fe16ba4';
#update load_balancer set provisioning_status='ACTIVE' where id='<LB ID>';
openstack loadbalancer amphora failover 231a72a2-362e-4d94-8c7e-df2a524a5bdc
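The SQL above is run against the octavia database; a minimal sketch, assuming a MySQL unit reachable via juju and a database named octavia (unit name and access method are illustrative, adapt to how the deployment exposes MySQL):

juju ssh mysql/0 -- sudo mysql octavia -e "select id, role, status from amphora where load_balancer_id='6f4734ff-c766-452d-88bb-bc0cfa3f925c';"
juju ssh mysql/0 -- sudo mysql octavia -e "delete from amphora_health where amphora_id='3ae4b5d1-29d8-4327-817f-bbdd2fe16ba4';"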

20211208 update

./generate-bundle.sh -s focal -n octavia:stsstack --use-stable-charms --octavia  --num-compute 1 --octavia-ipv4 --run
./tools/vault-unseal-and-authorise.sh
./configure
source novarc
./tools/sec_groups.sh
juju run-action octavia-diskimage-retrofit/0 --wait retrofit-image source-image=$(openstack image list |grep focal |awk '{print $2}')
./tools/configure_octavia.sh && ./tools/create_ipv4_octavia.sh
./tools/instance_launch.sh 1 focal
./tools/float_all.sh
ssh -i ~/testkey.priv ubuntu@10.5.151.133 -- sudo apt install apache2 -y
./tools/create_octavia_lb.sh --name lb1 --member-vm focal-061609 --protocol HTTP --protocol-port 80

#Policy does not allow this request to be performed
openstack role list |grep load-balancer
openstack role add --project $(openstack project show --domain admin_domain admin -fvalue -cid) --user admin load-balancer_admin
openstack role assignment list --project admin --user admin --names

To change an existing environment to l3ha:

./generate-bundle.sh -s focal -n octavia:stsstack --use-stable-charms --octavia --dvr --l3ha --num-compute 3 --run
./configure
source novarc
openstack router set --disable lb-mgmt
openstack router set --no-ha lb-mgmt
openstack router set --ha lb-mgmt
openstack router set --enable lb-mgmt
neutron l3-agent-list-hosting-router lb-mgmt

This time the problem is that o-hm0 is in DOWN state:

  • o-hm0 has the fc00 IPv6 mgmt address, it is just in DOWN state. So neutron-keepalived-state-change should be fine since it triggers active/standby normally, and keepalived should be fine too since the IP is there.
  • It is probably an l3-agent problem that leaves the port not UP; the l3-agent problem may be CPU load caused by too many security groups.
  • Disabling HA for the mgmt network may solve the problem.
  • If the problem is only on the IPv6 side, the mgmt network can also be switched from IPv6 to IPv4 (see the sketch after this list).
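A minimal sketch of those checks/workarounds, reusing commands that appear elsewhere in this doc:

openstack port show octavia-health-manager-octavia-0-listen-port   # does the o-hm0 port ever go ACTIVE?
juju ssh octavia/0 -- uptime                                        # rough check of CPU load on the unit
# disable HA on the lb-mgmt router (same sequence as in the l3ha section above)
openstack router set --disable lb-mgmt
openstack router set --no-ha lb-mgmt
openstack router set --enable lb-mgmt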

20220403 update - Certificate does not have key usage extension

Certificates used by openvpn need the key usage / extendedKeyUsage extensions, otherwise it reports: Certificate does not have key usage extension
1, This does not work

# https://blog.dreamtobe.cn/openwrt_openvpn/
openssl req -newkey rsa:4096 -x509 -sha256 -days 3650 -nodes -out ca1.crt -keyout ca1.key -subj "/C=CN/ST=BJ/L=BJ/O=STS/OU=HUA/CN=CA"
openssl genrsa -out server1.key
openssl req -new -key server1.key -out server1.csr -subj "/C=CN/ST=BJ/L=BJ/O=STS/OU=HUA/CN=server1"
openssl x509 -req -in server1.csr -out server1.crt -sha256 -CA ca1.crt -CAkey ca1.key -CAcreateserial -days 3650
openssl genrsa -out client1.key
openssl req -new -key client1.key -out client1.csr -subj "/C=CN/ST=BJ/L=BJ/O=STS/OU=HUA/CN=client1"
openssl x509 -req -in client1.csr -out client1.crt -sha256 -CA ca1.crt -CAkey ca1.key -CAcreateserial -days 3650
chmod 0600 ca1.key && chmod 0600 server1.key && chmod 0600 client1.key
openssl dhparam -out dh2048.pem 2048

2, This does not work either (see the note after this block for why)

openssl req -newkey rsa:4096 -x509 -sha256 -days 3650 -nodes -out ca2.crt -keyout ca2.key -subj "/C=CN/ST=BJ/L=BJ/O=STS/OU=HUA/CN=CA"
openssl genrsa -out server2.key
openssl req -new -key server2.key -out server2.csr -subj "/C=CN/ST=BJ/L=BJ/O=STS/OU=HUA/CN=server2" -addext "keyUsage = digitalSignature, keyEncipherment" -addext "extendedKeyUsage = serverAuth"
openssl x509 -req -in server2.csr -out server2.crt -sha256 -CA ca2.crt -CAkey ca2.key -CAcreateserial -days 3650
openssl genrsa -out client2.key
openssl req -new -key client2.key -out client2.csr -subj "/C=CN/ST=BJ/L=BJ/O=STS/OU=HUA/CN=client2" -addext "keyUsage = digitalSignature, keyEncipherment" -addext "extendedKeyUsage = clientAuth"
openssl x509 -req -in client2.csr -out client2.crt -sha256 -CA ca2.crt -CAkey ca2.key -CAcreateserial -days 3650
chmod 0600 ca2.key && chmod 0600 server2.key && chmod 0600 client2.key
openssl dhparam -out dh2048.pem 2048
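Approach 2 most likely still fails because `openssl x509 -req` does not copy extensions from the CSR into the signed certificate by default, so the -addext values never reach the final cert. A sketch of carrying them through at signing time (the -extfile contents are illustrative; -copy_extensions needs OpenSSL 3.0+):

openssl x509 -req -in server2.csr -out server2.crt -sha256 -CA ca2.crt -CAkey ca2.key -CAcreateserial -days 3650 \
  -extfile <(printf "keyUsage=digitalSignature,keyEncipherment\nextendedKeyUsage=serverAuth")
# or, with OpenSSL 3.0+:
openssl x509 -req -in server2.csr -out server2.crt -sha256 -CA ca2.crt -CAkey ca2.key -CAcreateserial -days 3650 -copy_extensions copyall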

3, This works

PKI_DIR="/etc/openvpn/ssl"
mkdir -p ${PKI_DIR}
chmod -R 0600 ${PKI_DIR}
cd ${PKI_DIR}
touch index.txt; echo 1000 > serial
mkdir newcerts
cp /etc/ssl/openssl.cnf ${PKI_DIR}
PKI_CNF=${PKI_DIR}/openssl.cnf
sed -i '/^dir/   s:=.*:= /etc/openvpn/ssl:'                      ${PKI_CNF}
sed -i '/.*Name/ s:= match:= optional:'                    ${PKI_CNF}
sed -i '/organizationName_default/    s:= .*:= WWW Ltd.:'  ${PKI_CNF}
sed -i '/stateOrProvinceName_default/ s:= .*:= London:'    ${PKI_CNF}
sed -i '/countryName_default/         s:= .*:= GB:'        ${PKI_CNF}
sed -i '/default_days/   s:=.*:= 3650:'                    ${PKI_CNF} ## default usu.: -days 365
sed -i '/default_bits/   s:=.*:= 4096:'                    ${PKI_CNF} ## default usu.: -newkey rsa:2048
cat >> ${PKI_CNF} <<"EOF"
###############################################################################
### Check via: openssl x509 -text -noout -in *.crt | grep 509 -A 1
[ my-server ]
#  X509v3 Key Usage:          Digital Signature, Key Encipherment
#  X509v3 Extended Key Usage: TLS Web Server Authentication
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth

[ my-client ]
#  X509v3 Key Usage:          Digital Signature
#  X509v3 Extended Key Usage: TLS Web Client Authentication
keyUsage = digitalSignature
extendedKeyUsage = clientAuth
EOF
openssl req -batch -nodes -new -keyout "ca.key" -out "ca.crt" -x509 -config ${PKI_CNF}  ## x509 (self-signed) for the CA
openssl req -batch -nodes -new -keyout "my-server.key" -out "my-server.csr" -subj "/CN=my-server" -config ${PKI_CNF}
openssl ca  -batch -keyfile "ca.key" -cert "ca.crt" -in "my-server.csr" -out "my-server.crt" -config ${PKI_CNF} -extensions my-server
openssl req -batch -nodes -new -keyout "my-client.key" -out "my-client.csr" -subj "/CN=my-client" -config ${PKI_CNF}
openssl ca  -batch -keyfile "ca.key" -cert "ca.crt" -in "my-client.csr" -out "my-client.crt" -config ${PKI_CNF} -extensions my-client
chmod 0600 "ca.key"
chmod 0600 "my-server.key"
chmod 0600 "my-client.key"
openssl dhparam -out dh2048.pem 2048

One more thing to note: although both 192.168.99.1 and 10.8.0.1 (tun0) are on the router, and 192.168.99.1:53 is reachable, the DNS service on 192.168.99.1:53 cannot be used from the client side; access 10.8.0.1:53 instead.

push "dhcp-option DNS 10.8.0.1"
push "dhcp-option DOMAIN lan"

# /etc/dnsmasq.conf
domain-needed
bogus-priv
local-service
listen-address=127.0.0.1
listen-address=10.8.0.1
local=/myvpn.example.com/
addn-hosts=/etc/hosts.openvpn

20220611 - creating certificates

mkdir -p ss && cd ss
#openssl req -newkey rsa:4096 -x509 -sha256 -days 3650 -nodes -out ca.crt -keyout ca.key -subj "/C=US/ST=Washington/L=Redmond/O=Microsoft/OU=IT/CN=CA"
#for DOMAIN in server
#do
#  openssl genrsa -out $DOMAIN.key
#  openssl req -new -key $DOMAIN.key -out $DOMAIN.csr -subj "/C=US/ST=Washington/L=Redmond/O=Microsoft/OU=IT/CN=$DOMAIN"
#  openssl x509 -req -in $DOMAIN.csr -out $DOMAIN.crt -sha256 -CA ca.crt -CAkey ca.key -CAcreateserial -days 3650
#done
openssl ecparam -out ca.key -name secp384r1 -genkey
openssl req -new -sha256 -key ca.key -out ca.csr -subj "/C=US/ST=Washington/L=Redmond/O=Microsoft/OU=IT/CN=CA" -addext "keyUsage=critical,keyCertSign,cRLSign,digitalSignature,keyEncipherment" -addext "extendedKeyUsage=serverAuth,clientAuth,codeSigning,emailProtection,timeStamping"
openssl x509 -req -sha256 -days 3650 -in ca.csr -signkey ca.key -out ca.crt
openssl ecparam -out example.com.key -name secp384r1 -genkey
openssl req -new -sha256 -key example.com.key -out example.com.csr -subj "/C=US/ST=Washington/L=Redmond/O=Microsoft/OU=IT/CN=example.com" -addext "keyUsage=critical,keyCertSign,cRLSign,digitalSignature,keyEncipherment" -addext "extendedKeyUsage=serverAuth,clientAuth,codeSigning,emailProtection,timeStamping"
openssl x509 -req -in example.com.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out example.com.crt -days 3650 -sha256
chmod +r example.com.key
openssl dhparam -out dhparam 2048

apt install nginx -y
cp dhparam /etc/nginx/dhparam
cat << EOF | tee /etc/nginx/sites-available/default
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    location / {
        return 301 https://\$host\$request_uri;
    }
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com;
    ssl_certificate /root/ss/example.com.crt;
    ssl_certificate_key /root/ss/example.com.key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:MozSSL:10m;
    ssl_session_tickets off;
    ssl_dhparam /root/ss/dhparam;
    ssl_ecdh_curve secp384r1;
    ssl_protocols TLSv1.2;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers on;
    location /abcdefgh {
        proxy_redirect off;
        proxy_pass http://127.0.0.1:8008;
        proxy_http_version 1.1;
        proxy_set_header Upgrade \$http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host \$http_host;
    }
}
EOF

If the client should verify the certificate (i.e. without having to add a no-verify flag), you need to:

sudo cp ./ca.crt /usr/share/ca-certificates/extras/ss.crt
sudo dpkg-reconfigure ca-certificates
sudo update-ca-certificates --fresh
#export SSL_CERT_DIR=/etc/ssl/certs

debug tests:

openssl x509 -noout -text -in ./ca.crt
openssl verify -CAfile ./ca.crt example.com.crt
nmap --script ssl-enum-ciphers -p 443 <IP>
OPENSSL_CONF=/etc/ssl/ openssl s_client -CApath /etc/ssl/certs/ -connect <IP>:443 -tls1_2 -servername example.com
curl -k <IP>:443
curl --resolve example.com:443:<IP> --cacert ./ca.crt https://example.com
curl --proxy socks5://127.0.0.1:7072 https://ifconfig.me
google-chrome --proxy-server="socks5://127.0.0.1:7072"

20220923 update

To verify: https://bugs.launchpad.net/charm-octavia/+bug/1946325

./generate-bundle.sh --name octavia:stsstack --create-model --octavia -s focal --num-compute 3 --ovn --run
./tools/vault-unseal-and-authorise.sh
#juju config neutron-api enable-dvr=True
juju config octavia loadbalancer-topology=ACTIVE_STANDBY
./configure
./tools/configure_octavia.sh
openstack port show octavia-health-manager-octavia-0-listen-port
source novarc
./tools/instance_launch.sh 1 focal
./tools/float_all.sh
./tools/sec_groups.sh
juju run-action octavia-diskimage-retrofit/0 --wait retrofit-image source-image=$(openstack image list |grep focal |awk '{print $2}')

#change to use IPv4 lb mgmt network
neutron router-gateway-clear lb-mgmt
neutron router-interface-delete lb-mgmt lb-mgmt-subnetv6
neutron subnet-delete lb-mgmt-subnetv6
neutron port-delete octavia-health-manager-octavia-0-listen-port
neutron net-delete lb-mgmt-net
neutron router-delete lb-mgmt
openstack security group delete lb-mgmt-sec-grp
openstack security group delete lb-health-mgr-sec-grp
./tools/create_ipv4_octavia.sh
LB_HOST=$(juju ssh octavia/0 -- hostname -f)  #should be fqdn name juju-4e8193-octavia-11.cloud.sts
neutron port-delete mgmt-port
#if device-owner is neutron:LOADBALANCERV2 rather than Octavia:Health_mgr it will hit lp bug 1946325
neutron port-create --name mgmt-port --security-group $(openstack security group show lb-health-mgr-sec-grp -f value -c id) --device-owner neutron:LOADBALANCERV2 --binding:host_id="juju-8284e4-ovn-octavia-7.cloud.sts" lb-mgmt-net --tenant-id $(openstack security group show lb-health-mgr-sec-grp -f value -c project_id)
neutron port-show mgmt-port |grep binding
juju ssh octavia/0 -- sudo ovs-vsctl del-port br-int o-hm0
juju ssh octavia/0 -- sudo ovs-vsctl --may-exist add-port br-int o-hm0 -- set Interface o-hm0 type=internal -- set Interface o-hm0 external-ids:iface-status=active -- set Interface o-hm0 external-ids:attached-mac=$(neutron port-show mgmt-port -f value -c mac_address) -- set Interface o-hm0 external-ids:iface-id=$(neutron port-show mgmt-port -f value -c id)
juju ssh octavia/0 -- sudo ip link set dev o-hm0 address $(neutron port-show mgmt-port -f value -c mac_address)

# o-hm0 has no IP due to lp bug 1946325
juju ssh octavia/0 -- ip addr show o-hm0
$ openstack port list |grep mgmt-port
| e95093cf-443a-4ec4-a862-b2fb60aca402 | mgmt-port | fa:16:3e:2a:a7:5e | ip_address='10.100.0.97', subnet_id='6ad182bc-3fba-47e0-8fc1-067b5aeb443a'   | DOWN   |
juju ssh ovn-central/1 --  sudo ovn-nbctl list dhcp_options
# ovn-nbctl show |grep mgmt-port -B1 -A1
switch 0d5a5c77-da58-4df5-9f07-ed40ccfdfcc6 (neutron-717cef9c-edda-422f-90d2-359f300c7800) (aka lb-mgmt-net)
    port e95093cf-443a-4ec4-a862-b2fb60aca402 (aka mgmt-port)
        addresses: ["fa:16:3e:2a:a7:5e 10.100.0.97"]
# ovn-nbctl lsp-get-dhcpv4-options e95093cf-443a-4ec4-a862-b2fb60aca402
<empty>

#repeat above commands to change device-owner to Octavia:Health_mgr
openstack port show mgmt-port
# ovn-nbctl lsp-get-dhcpv4-options a87475d4-820b-4d36-9b1f-5afa7f2161ef
ea359d5c-abb7-4c74-8fb5-fc9346776def (10.100.0.0/24)

Because the lb_mgmt network here uses IPv6 in slaac mode, the dhcp_options here are empty:

openstack loadbalancer create --name lb1 --vip-subnet-id private_subnet

$ nova list --all
+--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+-----------------------------------------------------+
| ID                                   | Name                                         | Tenant ID                        | Status | Task State | Power State | Networks                                            |
+--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+-----------------------------------------------------+
| c6e52d76-df37-48f7-bb91-8d8db30202b2 | amphora-3b9f62ad-363d-4863-bb84-e34470106dba | 9c3196d35bba49edb33e105fe4207435 | ACTIVE | -          | Running     | lb-mgmt-net=fc00:ee1f:1550:bae9:f816:3eff:fe8f:c8b7 |
| b4e438d0-caba-4911-88de-70a379462eb2 | amphora-a16755f8-cebe-44ac-aaec-66be7173b02b | 9c3196d35bba49edb33e105fe4207435 | ACTIVE | -          | Running     | lb-mgmt-net=fc00:ee1f:1550:bae9:f816:3eff:fe66:11f9 |
| 46605694-f14a-4884-b11b-8bd59394150c | focal-022224                                 | ae39f87adfc4440695df01ac66decc2b | ACTIVE | -          | Running     | private=192.168.21.157, 10.5.153.203                |
+--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+-----------------------------------------------------+

$ openstack port list |grep fc00
| 2b8c231f-f5b1-4c77-9d39-6671e6e80c13 |                                                      | fa:16:3e:8f:c8:b7 | ip_address='fc00:ee1f:1550:bae9:f816:3eff:fe8f:c8b7', subnet_id='ebb70198-5da8-47fa-91fa-f7d5c9d66a22' | ACTIVE |
| 394763a6-d506-4928-8cde-85f2b10d8345 |                                                      | fa:16:3e:66:11:f9 | ip_address='fc00:ee1f:1550:bae9:f816:3eff:fe66:11f9', subnet_id='ebb70198-5da8-47fa-91fa-f7d5c9d66a22' | ACTIVE |
| 4d8e072c-7742-4d2b-8504-b00c3abfc216 | octavia-health-manager-octavia-0-listen-port         | fa:16:3e:91:4f:78 | ip_address='fc00:ee1f:1550:bae9:f816:3eff:fe91:4f78', subnet_id='ebb70198-5da8-47fa-91fa-f7d5c9d66a22' | DOWN   |
| b56b4fd1-9eda-4bb7-bf75-03d4f772c88c |                                                      | fa:16:3e:a1:39:30 | ip_address='fc00:ee1f:1550:bae9:f816:3eff:fea1:3930', subnet_id='ebb70198-5da8-47fa-91fa-f7d5c9d66a22' | DOWN   |
| d115e1ba-e1da-473f-ba48-8c87e1214d9e |                                                      | fa:16:3e:34:82:e1 | ip_address='fc00:ee1f:1550:bae9::', subnet_id='ebb70198-5da8-47fa-91fa-f7d5c9d66a22'                   | ACTIVE |

# ovn-nbctl lsp-get-dhcpv6-options 2b8c231f-f5b1-4c77-9d39-6671e6e80c13
<empty>
# ovn-nbctl lsp-get-dhcpv6-options 394763a6-d506-4928-8cde-85f2b10d8345
<empty>

# ovn-nbctl list Logical_Switch_Port 2b8c231f-f5b1-4c77-9d39-6671e6e80c13
_uuid               : d084afcd-b48b-4ab9-9d4a-4fbdf8eec526
addresses           : ["fa:16:3e:8f:c8:b7 fc00:ee1f:1550:bae9:f816:3eff:fe8f:c8b7"]
dhcpv4_options      : []
dhcpv6_options      : []
dynamic_addresses   : []
enabled             : true
external_ids        : {"neutron:cidrs"="fc00:ee1f:1550:bae9:f816:3eff:fe8f:c8b7/64", "neutron:device_id"="c6e52d76-df37-48f7-bb91-8d8db30202b2", "neutron:device_owner"="compute:nova", "neutron:network_name"=neutron-610cf760-36f4-4346-8323-ee1f1550bae9, "neutron:port_name"="", "neutron:project_id"="9c3196d35bba49edb33e105fe4207435", "neutron:revision_number"="4", "neutron:security_group_ids"="b7b81287-f482-427e-aa10-e42f42602ea6"}
ha_chassis_group    : []
name                : "2b8c231f-f5b1-4c77-9d39-6671e6e80c13"
options             : {mcast_flood_reports="true", requested-chassis=juju-f1ec8c-octavia-8.cloud.sts}
parent_name         : []
port_security       : ["fa:16:3e:8f:c8:b7 fc00:ee1f:1550:bae9:f816:3eff:fe8f:c8b7"]
tag                 : []
tag_request         : []
type                : ""
up                  : true

# ovn-nbctl list Logical_Switch_Port 394763a6-d506-4928-8cde-85f2b10d8345
_uuid               : 616abd09-bf62-4ed8-8a22-19e179e02694
addresses           : ["fa:16:3e:66:11:f9 fc00:ee1f:1550:bae9:f816:3eff:fe66:11f9"]
dhcpv4_options      : []
dhcpv6_options      : []
dynamic_addresses   : []
enabled             : true
external_ids        : {"neutron:cidrs"="fc00:ee1f:1550:bae9:f816:3eff:fe66:11f9/64", "neutron:device_id"="b4e438d0-caba-4911-88de-70a379462eb2", "neutron:device_owner"="compute:nova", "neutron:network_name"=neutron-610cf760-36f4-4346-8323-ee1f1550bae9, "neutron:port_name"="", "neutron:project_id"="9c3196d35bba49edb33e105fe4207435", "neutron:revision_number"="4", "neutron:security_group_ids"="b7b81287-f482-427e-aa10-e42f42602ea6"}
ha_chassis_group    : []
name                : "394763a6-d506-4928-8cde-85f2b10d8345"
options             : {mcast_flood_reports="true", requested-chassis=juju-f1ec8c-octavia-8.cloud.sts}
parent_name         : []
port_security       : ["fa:16:3e:66:11:f9 fc00:ee1f:1550:bae9:f816:3eff:fe66:11f9"]
tag                 : []
tag_request         : []
type                : ""
up                  : true

OVN supported DHCP options - https://docs.openstack.org/neutron/latest/ovn/dhcp_opts.html
neutron port-update <PORT_ID> --extra-dhcp-opt opt_name='server-ip-address',opt_value='10.0.0.1'
neutron port-update <PORT_ID> --extra-dhcp-opt ip_version=4,opt_name=dhcp_disabled,opt_value=false
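
A quick way to confirm where such options end up is to look at the OVN northbound DB directly; a minimal sketch (reusing one of the port UUIDs from the listing above, only standard ovn-nbctl commands):

# List all DHCP_Options rows and check which (if any) a given port points at
ovn-nbctl --columns=cidr,options,external_ids list DHCP_Options
ovn-nbctl lsp-get-dhcpv4-options 2b8c231f-f5b1-4c77-9d39-6671e6e80c13
ovn-nbctl lsp-get-dhcpv6-options 2b8c231f-f5b1-4c77-9d39-6671e6e80c13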

#Using the native DHCP feature provided by OVN - https://docs.openstack.org/networking-ovn/ocata/design/native_dhcp.html
OVN implements native DHCPv6 support similar to DHCPv4. When a v6 subnet is created, the OVN ML2 driver inserts a new entry into the DHCP_Options table only when the subnet's 'ipv6_address_mode' is not 'slaac' and enable_dhcp is True.

def is_dhcp_options_ignored(subnet):
    # Don't insert DHCP_Options entry for v6 subnet with 'SLAAC' as
    # 'ipv6_address_mode', since DHCPv6 shouldn't work for this mode.
    return (subnet['ip_version'] == const.IP_VERSION_6 and
            subnet.get('ipv6_address_mode') == const.IPV6_SLAAC)
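
To see this behaviour from the CLI, one can compare a SLAAC v6 subnet against a stateful one; a minimal sketch (the network name 'private', the range fc00:2::/64 and the subnet name are made up for illustration):

# A dhcpv6-stateful subnet gets a DHCP_Options row, a slaac subnet does not
openstack subnet create --network private --ip-version 6 \
  --ipv6-address-mode dhcpv6-stateful --ipv6-ra-mode dhcpv6-stateful \
  --subnet-range fc00:2::/64 v6-stateful-test
# Only the stateful subnet should show up here:
ovn-nbctl --columns=cidr,options,external_ids list DHCP_Options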

So a SLAAC IPv6 subnet has no DHCP at all (the prefix and DNS information are not handed out by a DHCPv6 server); the prefix can only be advertised by the router, which means the problem must be on the router side.
For a router port (lrp-xxx), if an HA Chassis Group is defined the port is bound to the highest-priority chassis; otherwise the binding is set via requested-chassis (create_port|update_port -> _get_port_options).
1, As the output below shows, the lrp-xxx port has no requested-chassis attribute.

# ovn-nbctl list Logical_Router_Port |grep fc00 -A2 -B8
_uuid               : 6cb05a94-f924-429d-a73a-4e897a63ec6d
enabled             : []
external_ids        : {"neutron:network_name"=neutron-610cf760-36f4-4346-8323-ee1f1550bae9, "neutron:revision_number"="3", "neutron:router_name"="6eab025d-9409-4728-b1e8-f2bf1d7e9a63", "neutron:subnet_ids"="ebb70198-5da8-47fa-91fa-f7d5c9d66a22"}
gateway_chassis     : []
ha_chassis_group    : []
ipv6_ra_configs     : {address_mode=slaac, mtu="1492", send_periodic="true"}
mac                 : "fa:16:3e:34:82:e1"
name                : lrp-d115e1ba-e1da-473f-ba48-8c87e1214d9e
networks            : ["fc00:ee1f:1550:bae9::/64"]
options             : {}
peer                : []

$ openstack port list |grep 'd115e1ba-e1da-473f-ba48-8c87e1214d9e'
| d115e1ba-e1da-473f-ba48-8c87e1214d9e |                                                      | fa:16:3e:34:82:e1 | ip_address='fc00:ee1f:1550:bae9::', subnet_id='ebb70198-5da8-47fa-91fa-f7d5c9d66a22'                   | ACTIVE |

# ovn-nbctl lrp-list neutron-6eab025d-9409-4728-b1e8-f2bf1d7e9a63
6cb05a94-f924-429d-a73a-4e897a63ec6d (lrp-d115e1ba-e1da-473f-ba48-8c87e1214d9e)
# ovn-nbctl lrp-get-gateway-chassis lrp-d115e1ba-e1da-473f-ba48-8c87e1214d9e
<empty>
# ovn-nbctl list HA_Chassis
<empty>

# ovn-sbctl list Port_Binding
...
_uuid               : c80810c4-35cb-48c0-a219-d23208451db7
chassis             : []
datapath            : 3410bb60-8689-4258-b631-c311f28bd2ab
encap               : []
external_ids        : {"neutron:cidrs"="fc00:ee1f:1550:bae9::/64", "neutron:device_id"="6eab025d-9409-4728-b1e8-f2bf1d7e9a63", "neutron:device_owner"="network:router_interface", "neutron:network_name"=neutron-610cf760-36f4-4346-8323-ee1f1550bae9, "neutron:port_name"="", "neutron:project_id"="9c3196d35bba49edb33e105fe4207435", "neutron:revision_number"="3", "neutron:security_group_ids"=""}
gateway_chassis     : []
ha_chassis_group    : []
logical_port        : "d115e1ba-e1da-473f-ba48-8c87e1214d9e"
mac                 : [router]
nat_addresses       : []
options             : {peer=lrp-d115e1ba-e1da-473f-ba48-8c87e1214d9e}
parent_port         : []
tag                 : []
tunnel_key          : 2
type                : patch
virtual_parent      : []

_uuid               : fa2f2c28-d331-44f7-b232-d4194041eb40
chassis             : []
datapath            : 75d0ddbd-186b-487e-98e9-78243b916807
encap               : []
external_ids        : {}
gateway_chassis     : []
ha_chassis_group    : []
logical_port        : lrp-d115e1ba-e1da-473f-ba48-8c87e1214d9e
mac                 : ["fa:16:3e:34:82:e1 fc00:ee1f:1550:bae9::/64"]
nat_addresses       : []
options             : {ipv6_ra_address_mode=slaac, ipv6_ra_max_interval="600", ipv6_ra_min_interval="200", ipv6_ra_mtu="1492", ipv6_ra_prefixes="fc00:ee1f:1550:bae9::/64", ipv6_ra_prf=MEDIUM, ipv6_ra_send_periodic="true", ipv6_ra_src_addr="fe80::f816:3eff:fe34:82e1", ipv6_ra_src_eth="fa:16:3e:34:82:e1", peer="d115e1ba-e1da-473f-ba48-8c87e1214d9e"}
parent_port         : []
tag                 : []
tunnel_key          : 1
type                : patch
virtual_parent      : []

2, A non-lrp-xxx port (e.g. a port on the lb-mgmt-net subnet), by contrast, does have the requested-chassis attribute, as the Port_Binding record below shows (a per-port comparison of the two bindings is sketched right after that output).

_uuid               : bb44be8d-2bc5-42de-b8cd-5a526036c6a1
chassis             : 2dbc4b5b-85cd-4b5e-a96e-fa8b47dc3620
datapath            : 3410bb60-8689-4258-b631-c311f28bd2ab
encap               : []
external_ids        : {"neutron:cidrs"="fc00:ee1f:1550:bae9:f816:3eff:fe8f:c8b7/64", "neutron:device_id"="c6e52d76-df37-48f7-bb91-8d8db30202b2", "neutron:device_owner"="compute:nova", "neutron:network_name"=neutron-610cf760-36f4-4346-8323-ee1f1550bae9, "neutron:port_name"="", "neutron:project_id"="9c3196d35bba49edb33e105fe4207435", "neutron:revision_number"="4", "neutron:security_group_ids"="b7b81287-f482-427e-aa10-e42f42602ea6"}
gateway_chassis     : []
ha_chassis_group    : []
logical_port        : "2b8c231f-f5b1-4c77-9d39-6671e6e80c13"
mac                 : ["fa:16:3e:8f:c8:b7 fc00:ee1f:1550:bae9:f816:3eff:fe8f:c8b7"]
nat_addresses       : []
options             : {mcast_flood_reports="true", requested-chassis=juju-f1ec8c-octavia-8.cloud.sts}
parent_port         : []
tag                 : []
tunnel_key          : 5
type                : ""
virtual_parent      : []
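
Instead of scanning the full 'ovn-sbctl list Port_Binding' output, the two bindings can be compared per logical port; a minimal sketch, reusing the lrp name and the amphora port UUID shown above:

# The router port for the SLAAC subnet has an empty chassis column,
# while the amphora VIF is pinned via requested-chassis
ovn-sbctl --columns=logical_port,chassis,type,options find Port_Binding \
    logical_port=lrp-d115e1ba-e1da-473f-ba48-8c87e1214d9e
ovn-sbctl --columns=logical_port,chassis,type,options find Port_Binding \
    logical_port=2b8c231f-f5b1-4c77-9d39-6671e6e80c13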

After the compute node in this environment was rebooted, 'nova list --all' shows the amphora VMs as SHUTOFF, while 'openstack loadbalancer amphora list' shows them as ERROR.

$ openstack server list --all |grep amphora
| b4e438d0-caba-4911-88de-70a379462eb2 | amphora-a16755f8-cebe-44ac-aaec-66be7173b02b | SHUTOFF  | lb-mgmt-net=fc00:ee1f:1550:bae9:f816:3eff:fe66:11f9; private=192.168.21.103 | amphora-haproxy-x86_64-ubuntu-20.04-20211207 | charm-octavia |
| c6e52d76-df37-48f7-bb91-8d8db30202b2 | amphora-3b9f62ad-363d-4863-bb84-e34470106dba | SHUTOFF | lb-mgmt-net=fc00:ee1f:1550:bae9:f816:3eff:fe8f:c8b7; private=192.168.21.198 | amphora-haproxy-x86_64-ubuntu-20.04-20211207 | charm-octavia |
$ openstack loadbalancer amphora list
+--------------------------------------+--------------------------------------+--------+--------+-----------------------------------------+----------------+
| id                                   | loadbalancer_id                      | status | role   | lb_network_ip                           | ha_ip          |
+--------------------------------------+--------------------------------------+--------+--------+-----------------------------------------+----------------+
| 3b9f62ad-363d-4863-bb84-e34470106dba | db800b07-89fa-4db9-a0e7-f2855c1949a7 | ERROR  | MASTER | fc00:ee1f:1550:bae9:f816:3eff:fe8f:c8b7 | 192.168.21.165 |
| a16755f8-cebe-44ac-aaec-66be7173b02b | db800b07-89fa-4db9-a0e7-f2855c1949a7 | ERROR  | BACKUP | fc00:ee1f:1550:bae9:f816:3eff:fe66:11f9 | 192.168.21.165 |
+--------------------------------------+--------------------------------------+--------+--------+-----------------------------------------+----------------+
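
Before touching the Octavia database, the stopped amphora VMs need to be powered on again; a minimal sketch for doing that in bulk (the 'amphora-' name filter simply matches the naming convention seen above, adjust if yours differs):

# Start every SHUTOFF amphora VM once the compute node is back
for ID in $(openstack server list --all-projects --status SHUTOFF -f value -c ID -c Name \
            | awk '/amphora-/{print $1}'); do
    openstack server start ${ID}
done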

After starting the amphorae with 'nova reboot' (or 'openstack server start', as sketched above), 'nova list --all' shows their status back to normal, but 'openstack loadbalancer amphora list' still reports ERROR. The status was finally recovered with the following script.

DB_PASS=$(juju run -u mysql/leader leader-get mysql.passwd)
DB_IP=$(juju ssh mysql/leader -- ip addr show ens3 |grep global |awk '{print $2}' |awk -F/ '{print $1}')
LB_IDS=$(openstack loadbalancer amphora list --status ERROR -c loadbalancer_id -f value | sort | uniq)
for LB_ID in ${LB_IDS}; do
    # update LB status to ACTIVE
    juju ssh mysql/leader -- "sudo mysql --database=octavia --user=root --password=${DB_PASS} \
        --execute=\"update load_balancer set provisioning_status='ACTIVE' where id='${LB_ID}';\""
    # get the IDs of the broken amphorae belonging to this LB
    AMPHORA_IDS=$(openstack loadbalancer amphora list --status ERROR --loadbalancer ${LB_ID} -c id -f value)
    for AMPHORA_ID in ${AMPHORA_IDS}; do
        # failover broken amphora VMs
        openstack loadbalancer amphora failover ${AMPHORA_ID}
        sleep 60
    done
done

During this process, the following intermediate states can be seen:

$ openstack loadbalancer amphora list
+--------------------------------------+--------------------------------------+----------------+--------+-----------------------------------------+----------------+
| id                                   | loadbalancer_id                      | status         | role   | lb_network_ip                           | ha_ip          |
+--------------------------------------+--------------------------------------+----------------+--------+-----------------------------------------+----------------+
| 3b9f62ad-363d-4863-bb84-e34470106dba | db800b07-89fa-4db9-a0e7-f2855c1949a7 | PENDING_DELETE | MASTER | fc00:ee1f:1550:bae9:f816:3eff:fe8f:c8b7 | 192.168.21.165 |
| a16755f8-cebe-44ac-aaec-66be7173b02b | db800b07-89fa-4db9-a0e7-f2855c1949a7 | ERROR          | BACKUP | fc00:ee1f:1550:bae9:f816:3eff:fe66:11f9 | 192.168.21.165 |
| 66509096-5275-430f-92bd-4e441d62e356 | db800b07-89fa-4db9-a0e7-f2855c1949a7 | BOOTING        | None   | None                                    | None           |
+--------------------------------------+--------------------------------------+----------------+--------+-----------------------------------------+----------------+
$ openstack loadbalancer amphora list
+--------------------------------------+--------------------------------------+-----------+--------+-----------------------------------------+----------------+
| id                                   | loadbalancer_id                      | status    | role   | lb_network_ip                           | ha_ip          |
+--------------------------------------+--------------------------------------+-----------+--------+-----------------------------------------+----------------+
| a16755f8-cebe-44ac-aaec-66be7173b02b | db800b07-89fa-4db9-a0e7-f2855c1949a7 | ERROR     | BACKUP | fc00:ee1f:1550:bae9:f816:3eff:fe66:11f9 | 192.168.21.165 |
| 66509096-5275-430f-92bd-4e441d62e356 | db800b07-89fa-4db9-a0e7-f2855c1949a7 | ALLOCATED | MASTER | fc00:ee1f:1550:bae9:f816:3eff:fea6:990f | 192.168.21.165 |
+--------------------------------------+--------------------------------------+-----------+--------+-----------------------------------------+----------------+
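
After the failover of both amphorae completes, it is worth confirming that the load balancer itself is healthy again; a minimal check, reusing the loadbalancer_id from the listings above:

LB_ID=db800b07-89fa-4db9-a0e7-f2855c1949a7
openstack loadbalancer show ${LB_ID} -c provisioning_status -c operating_status
openstack loadbalancer amphora list --loadbalancer ${LB_ID}
openstack loadbalancer status show ${LB_ID}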
