I. Controller node architecture:

II. Initialize the environment:

1. Configure IP addresses:

Node 1:
ip addr add dev eth0 192.168.142.110/24
echo 'ip addr add dev eth0 192.168.142.110/24' >> /etc/rc.local
chmod +x /etc/rc.d/rc.local
Node 2:
ip addr add dev eth0 192.168.142.111/24
echo 'ip addr add dev eth0 192.168.142.111/24' >> /etc/rc.local
chmod +x /etc/rc.d/rc.local
Node 3:
ip addr add dev eth0 192.168.142.112/24
echo 'ip addr add dev eth0 192.168.142.112/24' >> /etc/rc.local
chmod +x /etc/rc.d/rc.local

2. Change the hostnames:

Set the hostname on each node and add the cluster entries to /etc/hosts:
Node 1:
hostnamectl --static --transient set-hostname controller1
Node 2:
hostnamectl --static --transient set-hostname controller2
Node 3:
hostnamectl --static --transient set-hostname controller3
/etc/hosts (identical on all three nodes):
192.168.142.110 controller1
192.168.142.111 controller2
192.168.142.112 controller3

3. Disable the firewall and SELinux:

systemctl disable firewalld
systemctl stop firewalld
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config 
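The sed command above only takes effect after a reboot; to put SELinux into permissive mode for the current session as well, the following can additionally be run (a suggested extra step, not in the original procedure):
setenforce 0
getenforce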

4. Configure time synchronization:

yum install ntp -y
ntpdate cn.pool.ntp.org
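ntpdate only sets the clock once. To keep the three controllers synchronized continuously, one option is to also enable the ntpd daemon that the ntp package installs (a suggested addition):
systemctl enable ntpd
systemctl start ntpd
ntpq -p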

5. Install base packages:

yum install -y centos-release-openstack-ocata
yum upgrade -y
yum install -y python-openstackclient

III. Install the base services:

1. Install Pacemaker

(Steps 1-4 are executed on all three nodes.)
(1) Configure passwordless SSH login:
Node 1:
ssh-keygen -t rsa
ssh-copy-id root@controller2
ssh-copy-id root@controller3
Node 2:
ssh-keygen -t rsa
ssh-copy-id root@controller1
ssh-copy-id root@controller3
Node 3:
ssh-keygen -t rsa
ssh-copy-id root@controller1
ssh-copy-id root@controller2
(2) Install Pacemaker:
yum install -y pcs pacemaker corosync fence-agents-all resource-agents
(3) Start the pcsd service (enable it at boot):
systemctl start pcsd.service
systemctl enable pcsd.service
(4) Create the cluster user:
echo 'password' | passwd --stdin hacluster  (this user is created automatically when pcs is installed)
(5) Authenticate the cluster nodes with each other:
pcs cluster auth controller1 controller2 controller3 -u hacluster -p password  (the user entered here must be the hacluster user created automatically by pcs; other users will not be accepted)
(6) Create and start a cluster named openstack-ha:
pcs cluster setup --start --name openstack-ha controller1 controller2 controller3
(7) Enable cluster autostart:
pcs cluster enable --all

(8) View and set cluster properties:
View the current cluster status:
pcs cluster status
Verify the Corosync installation and its current state:
corosync-cfgtool -s
corosync-cmapctl | grep members
pcs status corosync
Check that the configuration is valid (no output means it is correct):
crm_verify -L -V
Disable STONITH:
pcs property set stonith-enabled=false
When quorum is lost, ignore it:
pcs property set no-quorum-policy=ignore
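As a quick sanity check, the property changes and the overall cluster state can be read back afterwards (suggested verification only, not part of the original text):
pcs property list
pcs status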

2. Install and configure HAProxy:

(Steps 1-3 are executed on all three nodes.)
(1) Install HAProxy:
yum install -y haproxy lrzsz
(2) Initialize the environment:
echo "net.ipv4.ip_nonlocal_bind=1" > /etc/sysctl.d/haproxy.conf
sysctl -p
echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind
cat >/etc/sysctl.d/tcp_keepalive.conf << EOF
net.ipv4.tcp_keepalive_intvl = 1
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_time = 5
EOF
sysctl net.ipv4.tcp_keepalive_intvl=1
sysctl net.ipv4.tcp_keepalive_probes=5
sysctl net.ipv4.tcp_keepalive_time=5
mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak  &&cd /etc/haproxy/
(Upload the haproxy.cfg file.)

The haproxy configuration file:

global
    daemon
    group    haproxy
    maxconn  4000
    pidfile  /var/run/haproxy.pid
    user     haproxy
    stats    socket /var/lib/haproxy/stats
    log      192.168.142.110 local0
defaults
    mode tcp
    maxconn 10000
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout check 10s
listen stats
    mode          http
    bind          192.168.142.110:8080
    stats         enable
    stats         hide-version
    stats uri     /haproxy?openstack
    stats realm   Haproxy\Statistics
    stats admin if TRUE
    stats auth    admin:admin
    stats refresh 10s
frontend vip-db
    bind 192.168.142.201:3306
    timeout client 90m
    default_backend db-vms-galera
frontend vip-qpid
    bind 192.168.142.215:5672
    timeout client 120s
    default_backend qpid-vms
frontend vip-horizon
    bind 192.168.142.211:80
    timeout client 180s
    cookie SERVERID insert indirect nocache
    default_backend horizon-vms
frontend vip-ceilometer
    bind 192.168.142.214:8777
    timeout client 90s
    default_backend ceilometer-vms
frontend vip-rabbitmq
    option clitcpka
    bind 192.168.142.202:5672
    timeout client 900m
    default_backend rabbitmq-vms
frontend vip-keystone-admin
    bind 192.168.142.203:35357
    default_backend keystone-admin-vms
backend keystone-admin-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:35357 check inter 1s
    server controller2-vm 192.168.142.111:35357 check inter 1s
    server controller3-vm 192.168.142.112:35357 check inter 1s
frontend vip-keystone-public
    bind 192.168.142.203:5000
    default_backend keystone-public-vms
backend keystone-public-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:5000 check inter 1s
    server controller2-vm 192.168.142.111:5000 check inter 1s
    server controller3-vm 192.168.142.112:5000 check inter 1s
frontend vip-glance-api
    bind 192.168.142.205:9191
    default_backend glance-api-vms
backend glance-api-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:9191 check inter 1s
    server controller2-vm 192.168.142.111:9191 check inter 1s
    server controller3-vm 192.168.142.112:9191 check inter 1s
frontend vip-glance-registry
    bind 192.168.142.205:9292
    default_backend glance-registry-vms
backend glance-registry-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:9292 check inter 1s
    server controller2-vm 192.168.142.111:9292 check inter 1s
    server controller3-vm 192.168.142.112:9292 check inter 1s
frontend vip-cinder
    bind 192.168.142.206:8776
    default_backend cinder-vms
backend cinder-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:8776 check inter 1s
    server controller2-vm 192.168.142.111:8776 check inter 1s
    server controller3-vm 192.168.142.112:8776 check inter 1s
frontend vip-swift
    bind 192.168.142.208:8080
    default_backend swift-vms
backend swift-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:8080 check inter 1s
    server controller2-vm 192.168.142.111:8080 check inter 1s
    server controller3-vm 192.168.142.112:8080 check inter 1s
frontend vip-neutron
    bind 192.168.142.209:9696
    default_backend neutron-vms
backend neutron-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:9696 check inter 1s
    server controller2-vm 192.168.142.111:9696 check inter 1s
    server controller3-vm 192.168.142.112:9696 check inter 1s
frontend vip-nova-vnc-novncproxy
    bind 192.168.142.210:6080
    default_backend nova-vnc-novncproxy-vms
backend nova-vnc-novncproxy-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:6080 check inter 1s
    server controller2-vm 192.168.142.111:6080 check inter 1s
    server controller3-vm 192.168.142.112:6080 check inter 1s
frontend vip-nova-vnc-xvpvncproxy
    bind 192.168.142.210:6081
    default_backend nova-vnc-xvpvncproxy-vms
backend nova-vnc-xvpvncproxy-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:6081 check inter 1s
    server controller2-vm 192.168.142.111:6081 check inter 1s
    server controller3-vm 192.168.142.112:6081 check inter 1s
frontend vip-nova-metadata
    bind 192.168.142.210:8775
    default_backend nova-metadata-vms
backend nova-metadata-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:8775 check inter 1s
    server controller2-vm 192.168.142.111:8775 check inter 1s
    server controller3-vm 192.168.142.112:8775 check inter 1s
frontend vip-nova-api
    bind 192.168.142.210:8774
    default_backend nova-api-vms
backend nova-api-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:8774 check inter 1s
    server controller2-vm 192.168.142.111:8774 check inter 1s
    server controller3-vm 192.168.142.112:8774 check inter 1s
backend horizon-vms
    balance roundrobin
    timeout server 108s
    server controller1-vm 192.168.142.110:80 check inter 1s
    server controller2-vm 192.168.142.111:80 check inter 1s
    server controller3-vm 192.168.142.112:80 check inter 1s
frontend vip-heat-cfn
    bind 192.168.142.212:8000
    default_backend heat-cfn-vms
backend heat-cfn-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:8000 check inter 1s
    server controller2-vm 192.168.142.111:8000 check inter 1s
    server controller3-vm 192.168.142.112:8000 check inter 1s
frontend vip-heat-cloudw
    bind 192.168.142.212:8004
    default_backend heat-cloudw-vms
backend heat-cloudw-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:8004 check inter 1s
    server controller2-vm 192.168.142.111:8004 check inter 1s
    server controller3-vm 192.168.142.112:8004 check inter 1s
frontend vip-heat-srv
    bind 192.168.142.212:8004
    default_backend heat-srv-vms
backend heat-srv-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:8004 check inter 1s
    server controller2-vm 192.168.142.111:8004 check inter 1s
    server controller3-vm 192.168.142.112:8004 check inter 1s
backend ceilometer-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:8777 check inter 1s
    server controller2-vm 192.168.142.111:8777 check inter 1s
    server controller3-vm 192.168.142.112:8777 check inter 1s
backend qpid-vms
    stick-table type ip size 2
    stick on dst
    timeout server 120s
    server controller1-vm 192.168.142.110:5672 check inter 1s
    server controller2-vm 192.168.142.111:5672 check inter 1s
    server controller3-vm 192.168.142.112:5672 check inter 1s
backend db-vms-galera
    option httpchk
    option tcpka
    stick-table type ip size 1000
    stick on dst
    timeout server 90m
    server controller1-vm 192.168.142.110:3306 check inter 1s port 9200 backup on-marked-down shutdown-sessions
    server controller2-vm 192.168.142.111:3306 check inter 1s port 9200 backup on-marked-down shutdown-sessions
    server controller3-vm 192.168.142.112:3306 check inter 1s port 9200 backup on-marked-down shutdown-sessions
backend rabbitmq-vms
    option srvtcpka
    balance roundrobin
    timeout server 900m
    server controller1-vm 192.168.142.110:5672 check inter 1s
    server controller2-vm 192.168.142.111:5672 check inter 1s
    server controller3-vm 192.168.142.112:5672 check inter 1s

haproxy.cfg
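Before handing the file over to Pacemaker, it can be useful to let HAProxy validate the syntax; a minimal check, assuming the file is in place at /etc/haproxy/haproxy.cfg:
haproxy -c -f /etc/haproxy/haproxy.cfg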

(3) Create the Pacemaker resource for HAProxy:
pcs resource create lb-haproxy systemd:haproxy --clone
pcs resource enable lb-haproxy

(4) Create the VIPs (Python script):

import os
components=['db','rabbitmq','keystone','memcache','glance','cinder','swift-brick','swift','neutron','nova','horizon','heat','mongodb','ceilometer','qpid']
offset=201
internal_network='192.168.142'
for section in components:
    if section in ['memcache','swift-brick','mongodb']:
        pass
    else:
        print("%s:%s.%s"%(section,internal_network,offset))
#        os.system('pcs resource delete vip-%s '%section)
        os.system('pcs resource create vip-%s IPaddr2 ip=%s.%s nic=eth0'%(section,internal_network,offset))
        os.system('pcs constraint order start vip-%s then lb-haproxy-clone kind=Optional'%section)
        os.system('pcs constraint colocation add vip-%s with lb-haproxy-clone'%section)
    offset += 1
os.system('pcs cluster stop --all')
os.system('pcs cluster start --all')

create_vip.py
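A possible way to run and verify the script (the file name create_vip.py and the interface eth0 follow the listing above; run it on one node only):
python create_vip.py
pcs resource show | grep vip-
ip addr show eth0
The vip-* resources should appear in the pcs output, and the node that currently holds a given VIP shows it as a secondary address on eth0.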

3. Install MariaDB:

(Steps 1-6 are executed on all three nodes.)
(1) Install mariadb-galera:
yum install mariadb-galera-server xinetd rsync -y
pcs resource disable lb-haproxy

(2) Configure the Galera cluster check:
cat > /etc/sysconfig/clustercheck << EOF
MYSQL_USERNAME="clustercheck"
MYSQL_PASSWORD="hagluster"
MYSQL_HOST="localhost"
MYSQL_PORT="3306"
EOF

(3) Create the cluster check user:
systemctl start mariadb.service
mysql_secure_installation
mysql -e "CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY 'hagluster';"
systemctl stop mariadb.service

(4) Configure the galera.cnf file:

cat > /etc/my.cnf.d/galera.cnf << EOF
[mysqld]
skip-name-resolve=1
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
bind_address=192.168.142.110
wsrep_cluster_address = "gcomm://"
wsrep_cluster_address = "gcomm://192.168.142.111,192.168.142.112"
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name="galera_cluster"
wsrep_slave_threads=1
wsrep_certify_nonPK=1
wsrep_max_ws_rows=131072
wsrep_max_ws_size=1073741824
wsrep_debug=0
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_drupal_282555_workaround=0
wsrep_causal_reads=0
wsrep_notify_cmd=
wsrep_sst_method=rsync
wsrep_on=ON
EOF

galera.cnf for node 1

cat > /etc/my.cnf.d/galera.cnf << EOF
[mysqld]
skip-name-resolve=1
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
bind_address=192.168.142.111
wsrep_cluster_address = "gcomm://192.168.142.110,192.168.142.112"
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name="galera_cluster"
wsrep_slave_threads=1
wsrep_certify_nonPK=1
wsrep_max_ws_rows=131072
wsrep_max_ws_size=1073741824
wsrep_debug=0
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_drupal_282555_workaround=0
wsrep_causal_reads=0
wsrep_notify_cmd=
wsrep_sst_method=rsync
wsrep_on=ON
EOF

galera.cnf for node 2

cat > /etc/my.cnf.d/galera.cnf << EOF
[mysqld]
skip-name-resolve=1
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
bind_address=192.168.142.112
wsrep_cluster_address = "gcomm://192.168.142.110,192.168.142.111"
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name="galera_cluster"
wsrep_slave_threads=1
wsrep_certify_nonPK=1
wsrep_max_ws_rows=131072
wsrep_max_ws_size=1073741824
wsrep_debug=0
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_drupal_282555_workaround=0
wsrep_causal_reads=0
wsrep_notify_cmd=
wsrep_sst_method=rsync
wsrep_on=ON
EOF

galera.cnf for node 3

(5) Configure the HTTP-based database health check:
cat > /etc/xinetd.d/galera-monitor << EOF
service galera-monitor
{
        port            = 9200
        disable         = no
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = root
        group           = root
        groups          = yes
        server          = /usr/bin/clustercheck
        type            = UNLISTED
        per_source      = UNLIMITED
        log_on_success  =
        log_on_failure  = HOST
        flags           = REUSE
}
EOF
systemctl enable xinetd
systemctl start xinetd
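Once the Galera resource is running (step 7 below), the xinetd listener can be tested from any node; clustercheck should answer with HTTP 200 when the local node is synced and 503 otherwise (suggested check only):
curl -i http://192.168.142.110:9200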

(6) Set ownership:
chown mysql:mysql -R /var/log/mariadb
chown mysql:mysql -R /var/lib/mysql
chown mysql:mysql -R  /var/run/mariadb/

(7) Create the Galera cluster resource:
pcs resource create galera galera enable_creation=true wsrep_cluster_address="gcomm://controller1,controller2,controller3" additional_parameters='--open-files-limit=16384' meta master-max=3 ordered=true op promote timeout=300s on-fail=block --master
pcs resource enable lb-haproxy
pcs constraint order start lb-haproxy-clone then start galera-master
(8) Check the service:
clustercheck
pcs resource show
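To confirm that all three nodes have joined the Galera cluster, the wsrep status can also be queried with the clustercheck account defined earlier; a cluster size of 3 is expected (suggested check, not part of the original text):
mysql -u clustercheck -phagluster -e "SHOW STATUS LIKE 'wsrep_cluster_size';"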

Create the databases and grant privileges:

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'heat';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'heat';
flush privileges;
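One way to apply these statements is to save them to a file and feed them to the mysql client on any controller, for example (the file name is arbitrary):
mysql -u root -p < create_openstack_dbs.sql
Alternatively, they can be pasted into an interactive mysql session.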


4. Install memcached:

(Step 1 is executed on all three controller nodes.)
(1) Install memcached:
yum install -y memcached
(2) Create the Pacemaker resource:
pcs resource create memcached systemd:memcached --clone interleave=true
pcs status
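To confirm that memcached answers on each node, its stats can be queried over the local port (assuming nc from the nmap-ncat package is available; this check is not part of the original procedure):
echo stats | nc -w 1 127.0.0.1 11211 | head -n 5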

5. Install RabbitMQ:

(Steps 1-3 are executed on all three nodes.)
(1) Install RabbitMQ:
yum install -y rabbitmq-server
(2) Add the rabbitmq-env.conf file:
Node 1:
cat > /etc/rabbitmq/rabbitmq-env.conf << EOF
NODE_IP_ADDRESS=192.168.142.110
EOF
Node 2:
cat > /etc/rabbitmq/rabbitmq-env.conf << EOF
NODE_IP_ADDRESS=192.168.142.111
EOF
Node 3:
cat > /etc/rabbitmq/rabbitmq-env.conf << EOF
NODE_IP_ADDRESS=192.168.142.112
EOF
(3) Create the directory and set ownership:
mkdir -p  /var/lib/rabbitmq
chown -R rabbitmq:rabbitmq /var/lib/rabbitmq
(4) Create the Pacemaker resource:
pcs resource create rabbitmq-cluster ocf:rabbitmq:rabbitmq-server-ha --master erlang_cookie=DPMDALGUKEOMPTHWPYKC node_port=5672 op monitor interval=30 timeout=120 op monitor interval=27 role=Master timeout=120 op monitor interval=103 role=Slave timeout=120 OCF_CHECK_LEVEL=30 op start interval=0 timeout=120 op stop interval=0 timeout=120 op promote interval=0 timeout=60 op demote interval=0 timeout=60 op notify interval=0 timeout=60 meta notify=true ordered=false interleave=false master-max=1 master-node-max=1
(5) Configure mirrored queues, the user, and permissions (on the master node):
rabbitmqctl set_policy ha-all "." '{"ha-mode":"all", "ha-sync-mode":"automatic"}' --apply-to all --priority 0
rabbitmqctl add_user openstack openstack
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
(6) Verify the configuration:
rabbitmqctl list_policies
rabbitmqctl list_users
rabbitmqctl list_permissions
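It is also worth confirming that all three nodes joined the RabbitMQ cluster and that Pacemaker promoted exactly one master (suggested checks only):
rabbitmqctl cluster_status
pcs status | grep -A 3 rabbitmq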

6. Install MongoDB:

(Steps 1-3 are executed on all three nodes.)
(1) Install the packages:
yum -y install mongodb mongodb-server
(2) Modify the configuration file:
sed -i "s/bind_ip = 127.0.0.1/bind_ip = 0.0.0.0/g" /etc/mongod.conf
sed -i "s/#replSet = arg/replSet = ceilometer/g" /etc/mongod.conf
sed -i "s/#smallfiles = true/smallfiles = true/g" /etc/mongod.conf

(3) Test that MongoDB starts:
systemctl start mongod
systemctl status mongod
systemctl stop mongod

(4) Create the Pacemaker resource:
pcs resource create mongodb systemd:mongod op start timeout=300s --clone
(5) Write the replica-set creation script:
cat >> /root/mongo_replica_setup.js << EOF
rs.initiate()
sleep(10000)
rs.add("controller1");
rs.add("controller2");
rs.add("controller3");
EOF

(6) Run the script:
mongo /root/mongo_replica_setup.js

(7) Check whether the replica set was created successfully:
mongo   # in the interactive shell, run: rs.status()

ceilometer:PRIMARY> rs.status()
{
    "set" : "ceilometer",
    "date" : ISODate("2017-12-07T06:41:00Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "controller1:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 1163,
            "optime" : Timestamp(1512628074, 2),
            "optimeDate" : ISODate("2017-12-07T06:27:54Z"),
            "electionTime" : Timestamp(1512628064, 2),
            "electionDate" : ISODate("2017-12-07T06:27:44Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "controller2:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 786,
            "optime" : Timestamp(1512628074, 2),
            "optimeDate" : ISODate("2017-12-07T06:27:54Z"),
            "lastHeartbeat" : ISODate("2017-12-07T06:40:59Z"),
            "lastHeartbeatRecv" : ISODate("2017-12-07T06:40:59Z"),
            "pingMs" : 1,
            "syncingTo" : "controller1:27017"
        },
        {
            "_id" : 2,
            "name" : "controller3:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 786,
            "optime" : Timestamp(1512628074, 2),
            "optimeDate" : ISODate("2017-12-07T06:27:54Z"),
            "lastHeartbeat" : ISODate("2017-12-07T06:40:59Z"),
            "lastHeartbeatRecv" : ISODate("2017-12-07T06:40:59Z"),
            "pingMs" : 1,
            "syncingTo" : "controller1:27017"
        }
    ],
    "ok" : 1
}

Status output

IV. Install the OpenStack services:

1. Install and configure Keystone:

(Steps 1-3 are executed on all three controller nodes.)
(1) Install the packages:
yum install -y openstack-keystone httpd mod_wsgi openstack-utils python-keystoneclient
(2) Modify the configuration file:
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:keystone@192.168.142.201/keystone
openstack-config --set /etc/keystone/keystone.conf token provider  fernet
openstack-config --set /etc/keystone/keystone.conf memcache servers  controller1:11211,controller2:11211,controller3:11211
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_host controller1,controller2,controller3
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit heartbeat_timeout_threshold 60
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_password openstack
Check the configuration:
cat /etc/keystone/keystone.conf | grep -v "^#" | grep -v "^$"
(3) Modify the httpd configuration:
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
sed -i "s/Listen 80/Listen 81/g" /etc/httpd/conf/httpd.conf
Node 1:
sed -i "s/#ServerName www.example.com:80/ServerName controller1:80/g" /etc/httpd/conf/httpd.conf
sed -i "s/Listen 5000/Listen 192.168.142.110:5000/g" /etc/httpd/conf.d/wsgi-keystone.conf
sed -i "s/Listen 35357/Listen 192.168.142.110:35357/g" /etc/httpd/conf.d/wsgi-keystone.conf
Node 2:
sed -i "s/#ServerName www.example.com:80/ServerName controller2:80/g" /etc/httpd/conf/httpd.conf
sed -i "s/Listen 5000/Listen 192.168.142.111:5000/g" /etc/httpd/conf.d/wsgi-keystone.conf
sed -i "s/Listen 35357/Listen 192.168.142.111:35357/g" /etc/httpd/conf.d/wsgi-keystone.conf
Node 3:
sed -i "s/#ServerName www.example.com:80/ServerName controller3:80/g" /etc/httpd/conf/httpd.conf
sed -i "s/Listen 5000/Listen 192.168.142.112:5000/g" /etc/httpd/conf.d/wsgi-keystone.conf
sed -i "s/Listen 35357/Listen 192.168.142.112:35357/g" /etc/httpd/conf.d/wsgi-keystone.conf
(4) Create the Pacemaker resource:
pcs resource create keystone-http systemd:httpd --clone interleave=true
(5) Sync the database:
su -s /bin/sh -c "keystone-manage db_sync" keystone
(6) Create the admin account (run on node 1):
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
keystone-manage bootstrap --bootstrap-password keystone --bootstrap-admin-url http://192.168.142.203:35357/v3/ --bootstrap-internal-url http://192.168.142.203:5000/v3/ --bootstrap-public-url http://192.168.142.203:5000/v3/ --bootstrap-region-id RegionOne
(7) Configure environment variables:
export OS_USERNAME=admin
export OS_PASSWORD=keystone
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://192.168.142.203:35357/v3
export OS_IDENTITY_API_VERSION=3
(8) Copy the key files to the other two nodes:
On nodes 2 and 3:
mkdir /etc/keystone/credential-keys/
mkdir /etc/keystone/fernet-keys/
On node 1:
scp /etc/keystone/credential-keys/* root@controller2:/etc/keystone/credential-keys/
scp /etc/keystone/credential-keys/* root@controller3:/etc/keystone/credential-keys/
scp /etc/keystone/fernet-keys/* root@controller2:/etc/keystone/fernet-keys/
scp /etc/keystone/fernet-keys/* root@controller3:/etc/keystone/fernet-keys/
On nodes 2 and 3:
chown -R keystone:keystone /etc/keystone/credential-keys/
chown -R keystone:keystone /etc/keystone/fernet-keys/
chmod 700 /etc/keystone/credential-keys/
chmod 700 /etc/keystone/fernet-keys/
(9) Create projects and users:
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password demo demo
openstack role create user
openstack role add --project demo --user demo user
openstack --os-auth-url http://192.168.142.203:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
openstack --os-auth-url http://192.168.142.203:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue --os-password demo
(10) Create the user, service, and endpoints for Glance:
openstack user create --domain default --password glance glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://192.168.142.205:9292
openstack endpoint create --region RegionOne image internal http://192.168.142.205:9292
openstack endpoint create --region RegionOne image admin http://192.168.142.205:9292
(11) Create the user, services, and endpoints for Cinder:
openstack user create --domain default --password cinder cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
openstack endpoint create --region RegionOne volumev2 public http://192.168.142.206:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://192.168.142.206:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://192.168.142.206:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://192.168.142.206:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://192.168.142.206:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://192.168.142.206:8776/v3/%\(project_id\)s
(12) Create the user, service, and endpoints for Neutron:
openstack user create --domain default --password neutron neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://192.168.142.209:9696
openstack endpoint create --region RegionOne network internal http://192.168.142.209:9696
openstack endpoint create --region RegionOne network admin http://192.168.142.209:9696
(13) Create the users, services, and endpoints for Nova and Placement:
openstack user create --domain default --password nova nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://192.168.142.210:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://192.168.142.210:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://192.168.142.210:8774/v2.1
openstack user create --domain default --password placement placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://192.168.142.210:8778
openstack endpoint create --region RegionOne placement internal http://192.168.142.210:8778
openstack endpoint create --region RegionOne placement admin http://192.168.142.210:8778
(14) Create the users, services, and endpoints for Heat:
openstack user create --domain default --password heat heat
openstack role add --project service --user heat admin
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration"  cloudformation
openstack endpoint create --region RegionOne orchestration public http://192.168.142.212:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://192.168.142.212:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://192.168.142.212:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://192.168.142.212:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://192.168.142.212:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://192.168.142.212:8000/v1
(15) Create the user for Ceilometer:
openstack user create --domain default --password ceilometer ceilometer

Verify the configuration:

(1) Verify that the Keystone service works:
unset OS_AUTH_URL OS_PASSWORD
openstack --os-auth-url http://192.168.142.203:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
openstack --os-auth-url http://192.168.142.203:5000/v3 --os-project-domain-name default --os-user-domain-name default  --os-project-name demo --os-username demo token issue
(2) List the users:
openstack user list
(3) List the services:
openstack service list
(4) List the service catalog:
openstack endpoint list
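Since a later step sources /root/adminrc, it is convenient to persist the admin credentials from step (7) in that file; a minimal sketch using the values from above:
cat > /root/adminrc << EOF
export OS_USERNAME=admin
export OS_PASSWORD=keystone
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://192.168.142.203:35357/v3
export OS_IDENTITY_API_VERSION=3
EOF
source /root/adminrc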

2. Install Glance:

(Steps 1-3 are executed on all three controller nodes.)
(1) Install the packages:
yum install -y openstack-glance
cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak
(2) Modify glance-api.conf:
openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:glance@192.168.142.201/glance
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://192.168.142.203:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://192.168.142.203:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password glance
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid  openstack
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password  openstack
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_hosts controller1,controller2,controller3
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit heartbeat_timeout_threshold 60
openstack-config --set /etc/glance/glance-api.conf DEFAULT registry_host 192.168.142.205
Node 1:
openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_host 192.168.142.110
Node 2:
openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_host 192.168.142.111
Node 3:
openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_host 192.168.142.112
(3) Modify glance-registry.conf:
openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:glance@192.168.142.201/glance
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://192.168.142.203:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://192.168.142.203:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password glance
openstack-config --set  /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set  /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password openstack
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_hosts controller1,controller2,controller3
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit heartbeat_timeout_threshold 60
openstack-config --set /etc/glance/glance-registry.conf DEFAULT registry_host 192.168.142.205
Node 1:
openstack-config --set /etc/glance/glance-registry.conf DEFAULT bind_host 192.168.142.110
Node 2:
openstack-config --set /etc/glance/glance-registry.conf DEFAULT bind_host 192.168.142.111
Node 3:
openstack-config --set /etc/glance/glance-registry.conf DEFAULT bind_host 192.168.142.112
(4) Create the NFS resource:
pcs resource create glance-fs Filesystem  device="192.168.72.100:/home/glance" directory="/var/lib/glance"  fstype="nfs" options="v3" --clone
chown glance:nobody /var/lib/glance
(5) Sync the database:
su -s /bin/sh -c "glance-manage db_sync" glance

(6) Create the Pacemaker resources:
pcs resource create glance-registry systemd:openstack-glance-registry --clone interleave=true
pcs resource create glance-api systemd:openstack-glance-api --clone interleave=true
pcs constraint order start glance-fs-clone then glance-registry-clone
pcs constraint colocation add glance-registry-clone with glance-fs-clone
pcs constraint order start glance-registry-clone then glance-api-clone
pcs constraint colocation add glance-api-clone with glance-registry-clone
pcs constraint order start keystone-http-clone then glance-registry-clone
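To verify the Glance HA setup end to end, a small test image can be uploaded through the VIP; for example with a CirrOS image (the image URL and version here are only an example, not part of the original text):
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
openstack image list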

3. Install and configure Cinder:

4. Install and configure Neutron:

(1) Install the packages:
yum install -y openstack-neutron openstack-neutron-openvswitch openstack-neutron-ml2 python-neutronclient which
(2) Modify neutron.conf:
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
Node 1:
openstack-config --set  /etc/neutron/neutron.conf DEFAULT bind_host 192.168.142.110
Node 2:
openstack-config --set  /etc/neutron/neutron.conf DEFAULT bind_host 192.168.142.111
Node 3:
openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_host 192.168.142.112
###### DEFAULT ######
openstack-config --set  /etc/neutron/neutron.conf DEFAULT rpc_backend  rabbit
openstack-config --set  /etc/neutron/neutron.conf DEFAULT auth_strategy  keystone
openstack-config --set  /etc/neutron/neutron.conf DEFAULT core_plugin  ml2
openstack-config --set  /etc/neutron/neutron.conf DEFAULT service_plugins  router
openstack-config --set  /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes  True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
###### keystone ######
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_url http://192.168.142.203:35357
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_uri http://192.168.142.203:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_type  password
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_domain_name  default
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken user_domain_name  default
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_name  service
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken username  neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutron
###### database ######
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:neutron@192.168.142.201:3306/neutron
###### rabbit ######
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller1,controller2,controller3
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit heartbeat_timeout_threshold 60
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid  openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password openstack
###### nova ######
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://192.168.142.203:35357/
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_id default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_id default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password nova

(3) Modify ml2_conf.ini:
cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak

Node 1:
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 192.168.142.110
Node 2:
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 192.168.142.111
Node 3:
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 192.168.142.112
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,gre,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks external
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs bridge_mappings external:br-ex
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini agent tunnel_types gre
rm -rf /etc/neutron/plugin.ini && ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
(4) L3 and DHCP high-availability settings:
openstack-config --set /etc/neutron/neutron.conf DEFAULT l3_ha True
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_automatic_l3agent_failover True
openstack-config --set /etc/neutron/neutron.conf DEFAULT max_l3_agents_per_router 3
openstack-config --set /etc/neutron/neutron.conf DEFAULT min_l3_agents_per_router 2
openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 3
(5) Start Open vSwitch and set up the bridges:
systemctl enable openvswitch
systemctl start openvswitch

ovs-vsctl del-br br-int
ovs-vsctl del-br br-ex
ovs-vsctl del-br br-tun

ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex

ovs-vsctl add-port br-ex ens8
ethtool -K ens8 gro off

openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini agent l2_population False

(6) Modify metadata_agent.ini:
cp -a /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini_bak
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_uri http://192.168.142.203:5000
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://192.168.142.203:35357
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_host 192.168.142.203
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region regionOne
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_tenant_name services
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_user neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_password neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT project_domain_id default
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT user_domain_id default
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip 192.168.142.210
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_port 8775
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret neutron_shared_secret
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_workers 4
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_backlog 2048
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT verbose True
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT debug True

(7) Modify dhcp_agent.ini:
cp -a /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini_bak
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_delete_namespaces False
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf
echo "dhcp-option-force=26,1454" > /etc/neutron/dnsmasq-neutron.conf
pkill dnsmasq

(8) Modify l3_agent.ini:
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT handle_internal_only_routers True
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT send_arp_for_ha 3
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT router_delete_namespaces False
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge br-ex
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT verbose True
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT debug True

rm -rf /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig && cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service

(9) Modify /etc/sysctl.conf:
sed -e '/^net.bridge/d' -e '/^net.ipv4.conf/d' -i /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables=1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables=1" >> /etc/sysctl.conf
echo "net.ipv4.conf.all.rp_filter=0" >> /etc/sysctl.conf
echo "net.ipv4.conf.default.rp_filter=0" >> /etc/sysctl.conf
sysctl -p >> /dev/null

# source adminrc
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

systemctl start neutron-server

systemctl stop neutron-server

pcs resource delete neutron-server-api --force
pcs resource delete neutron-scale --force
pcs resource delete neutron-ovs-cleanup --force
pcs resource delete neutron-netns-cleanup --force
pcs resource delete neutron-openvswitch-agent --force
pcs resource delete neutron-dhcp-agent --force
pcs resource delete neutron-l3-agent --force
pcs resource delete neutron-metadata-agent --force
pcs resource create neutron-server-api systemd:neutron-server op start timeout=180 --clone interleave=true

pcs resource create neutron-scale ocf:neutron:NeutronScale --clone globally-unique=true clone-max=3 interleave=true
pcs resource create neutron-ovs-cleanup ocf:neutron:OVSCleanup --clone interleave=true
pcs resource create neutron-netns-cleanup ocf:neutron:NetnsCleanup --clone interleave=true
pcs resource create neutron-openvswitch-agent  systemd:neutron-openvswitch-agent --clone interleave=true
pcs resource create neutron-dhcp-agent systemd:neutron-dhcp-agent --clone interleave=true
pcs resource create neutron-l3-agent systemd:neutron-l3-agent --clone interleave=true
pcs resource create neutron-metadata-agent systemd:neutron-metadata-agent --clone interleave=true
pcs constraint order start keystone-http-clone then neutron-server-api-clone
pcs constraint order start neutron-scale-clone then neutron-openvswitch-agent-clone
pcs constraint colocation add neutron-openvswitch-agent-clone with neutron-scale-clone
pcs constraint order start neutron-scale-clone then neutron-ovs-cleanup-clone
pcs constraint colocation add neutron-ovs-cleanup-clone with neutron-scale-clone

pcs constraint order start neutron-ovs-cleanup-clone then neutron-netns-cleanup-clone
pcs constraint colocation add neutron-netns-cleanup-clone with neutron-ovs-cleanup-clone
pcs constraint order start neutron-netns-cleanup-clone then neutron-openvswitch-agent-clone
pcs constraint colocation add neutron-openvswitch-agent-clone with neutron-netns-cleanup-clone
pcs constraint order start neutron-openvswitch-agent-clone then neutron-dhcp-agent-clone
pcs constraint colocation add neutron-dhcp-agent-clone with neutron-openvswitch-agent-clone
pcs constraint order start neutron-dhcp-agent-clone then neutron-l3-agent-clone
pcs constraint colocation add neutron-l3-agent-clone with neutron-dhcp-agent-clone
pcs constraint order start neutron-l3-agent-clone then neutron-metadata-agent-clone
pcs constraint colocation add neutron-metadata-agent-clone with neutron-l3-agent-clone
pcs constraint order start neutron-server-api-clone then neutron-scale-clone
pcs constraint order start keystone-http-clone then neutron-scale-clone
source /root/adminrc
loop=0; while ! neutron net-list > /dev/null 2>&1 && [ "$loop" -lt 60 ]; do
    echo waiting neutron to be stable
    loop=$((loop + 1))
    sleep 5
done
if neutron router-list
then
    neutron router-gateway-clear admin-router ext-net
    neutron router-interface-delete admin-router admin-subnet
    neutron router-delete admin-router
    neutron subnet-delete admin-subnet
    neutron subnet-delete ext-subnet
    neutron net-delete admin-net
    neutron net-delete ext-net
fi
neutron net-create ext-net --router:external --provider:physical_network external --provider:network_type flat
neutron subnet-create ext-net 192.168.115.0/24 --name ext-subnet --allocation-pool start=192.168.115.200,end=192.168.115.250  --disable-dhcp --gateway 192.168.115.254
neutron net-create admin-net
neutron subnet-create admin-net 192.128.1.0/24 --name admin-subnet --gateway 192.128.1.1
neutron router-create admin-router
neutron router-interface-add admin-router admin-subnet
neutron router-gateway-set admin-router ext-net
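After the networks and router are created, the agents and the new objects can be checked from any controller (suggested verification only):
neutron agent-list
neutron net-list
neutron router-list
pcs status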

Reposted from: https://www.cnblogs.com/chimeiwangliang/p/7985888.html
