Table of Contents

  • 1. Deployment architecture
  • 2. Deployment plan
    • 2.1 MySQL service
      • 2.1.1 Architecture
      • 2.1.2 Planning
      • 2.1.3 Implementation
        • 2.1.3.1 Deploy the PXC cluster
        • 2.1.3.2 Deploy the master-slave (MS) setup
        • 2.1.3.3 Deploy MyCat
        • 2.1.3.4 Create tables and test data
        • 2.1.3.5 Deploy HAProxy
    • 2.2 Deploy the Redis cluster
      • 2.2.1 Planning
      • 2.2.2 Implementation
    • 2.3 Deploy the Elasticsearch cluster
      • 2.3.1 Planning
      • 2.3.2 Implementation
      • 2.3.3 Document mapping
      • 2.3.4 Import data
    • 2.4 Deploy the RocketMQ cluster
      • 2.4.1 Planning
      • 2.4.2 Implementation
    • 2.5 Build the ZooKeeper cluster
      • 2.5.1 Planning
      • 2.5.2 Implementation
  • 3. Project packaging
    • 3.1 Package the Spring Boot projects
      • Step 1: add the Spring Boot packaging plugin
      • Step 2: run the package command
    • 3.2 Build Ant Design Pro
    • 3.3 nginx configuration for the itcast-haoke-manage-web system
    • 3.4 Configure virtual domain names
    • 3.5 nginx reverse proxy for WebSocket

1. Deployment architecture

Notes:

  • The number of nodes in each cluster should be set according to the actual situation.

  • Not every system of the project is shown in the architecture diagram.

2. Deployment plan

In a real project, all services need to be inventoried before going live. Based on the expected number of users and the level of concurrency, the required servers are estimated and purchased, and only then is the deployment carried out.

Since we are in a learning environment with limited resources, the services have to be distributed across the servers we already have.

Three servers are currently available: 192.168.1.7, 192.168.1.18, and 192.168.1.19.

2.1 MySQL service

2.1.1 Architecture

2.1.2 Planning

Service       Port               Server        Container name
MySQL-node01  13306              192.168.1.18  pxc_node1
MySQL-node02  13307              192.168.1.18  pxc_node2
MySQL-node03  13308              192.168.1.18  pxc_node3
MySQL-node04  13309              192.168.1.18  pxc_node4
MySQL-node05  13310              192.168.1.19  ms_node1
MySQL-node06  13311              192.168.1.19  ms_node2
MyCat-node01  11986,18068,19068  192.168.1.19  mycat_node01
MyCat-node02  11987,18069,19069  192.168.1.19  mycat_node02
HAProxy       4001,4002          192.168.1.19  haproxy

2.1.3 Implementation

2.1.3.1 Deploy the PXC cluster

# Create the data volumes (stored under /var/lib/docker/volumes)
docker volume create haoke-v1
docker volume create haoke-v2
docker volume create haoke-v3
docker volume create haoke-v4
docker volume create haoke-v5
docker volume create haoke-v6

# Pull the image
docker pull percona/percona-xtradb-cluster:5.7
docker tag percona/percona-xtradb-cluster:5.7 pxc

# Create the network
docker network create --subnet=172.30.0.0/24 pxc-network

# Cluster 1, first node
docker create -p 13306:3306 -v haoke-v1:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=root -e CLUSTER_NAME=pxc --name=pxc_node1 \
--net=pxc-network --ip=172.30.0.2 pxc

# Cluster 1, second node (adds the CLUSTER_JOIN parameter)
docker create -p 13307:3306 -v haoke-v2:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=root -e CLUSTER_NAME=pxc --name=pxc_node2 \
-e CLUSTER_JOIN=pxc_node1 --net=pxc-network --ip=172.30.0.3 pxc

# Cluster 2, first node
docker create -p 13308:3306 -v haoke-v3:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=root -e CLUSTER_NAME=pxc --name=pxc_node3 \
--net=pxc-network --ip=172.30.0.4 pxc

# Cluster 2, second node (adds the CLUSTER_JOIN parameter)
docker create -p 13309:3306 -v haoke-v4:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=root -e CLUSTER_NAME=pxc --name=pxc_node4 \
-e CLUSTER_JOIN=pxc_node3 --net=pxc-network --ip=172.30.0.5 pxc

# Start the containers
docker start pxc_node1 && docker logs -f pxc_node1
docker start pxc_node2 && docker logs -f pxc_node2
docker start pxc_node3 && docker logs -f pxc_node3
docker start pxc_node4 && docker logs -f pxc_node4

# Check the cluster nodes (run inside MySQL)
show status like 'wsrep_cluster%';
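
If both two-node clusters started correctly, wsrep_cluster_size should report 2 on every node. A minimal check from 192.168.1.18 (assuming a mysql client is installed on the host; root/root are the credentials from the create commands above):

# query the cluster size on all four PXC nodes
for port in 13306 13307 13308 13309; do
  mysql -h127.0.0.1 -P${port} -uroot -proot -e "show status like 'wsrep_cluster_size';"
done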

2.1.3.2 Deploy the master-slave (MS) setup

# master
mkdir /data/mysql/haoke/master01/conf -p
vim /data/mysql/haoke/master01/conf/my.cnf

# my.cnf content
[mysqld]
log-bin=mysql-bin   # enable the binary log
server-id=1         # server id, must be unique
sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION'

# Create the container
docker create --name ms_node1 -v haoke-v5:/var/lib/mysql \
-v /data/mysql/haoke/master01/conf:/etc/my.cnf.d -p 13310:3306 \
-e MYSQL_ROOT_PASSWORD=root percona:5.7.23

# Start it
docker start ms_node1 && docker logs -f ms_node1

# Create the replication account and grant privileges
create user 'haoke'@'%' identified by 'haoke';
grant replication slave on *.* to 'haoke'@'%';
flush privileges;

# Check the master status
show master status;

# slave
mkdir /data/mysql/haoke/slave01/conf -p
vim /data/mysql/haoke/slave01/conf/my.cnf

# my.cnf content
[mysqld]
server-id=2         # server id, must be unique
sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION'

# Create the container
docker create --name ms_node2 -v haoke-v6:/var/lib/mysql \
-v /data/mysql/haoke/slave01/conf:/etc/my.cnf.d -p 13311:3306 \
-e MYSQL_ROOT_PASSWORD=root percona:5.7.23

# Start it
docker start ms_node2 && docker logs -f ms_node2

# Point the slave at the master (use the replication account created above and the
# file/position reported by `show master status`)
CHANGE MASTER TO
master_host='xxxxxx',
master_user='haoke',
master_password='haoke',
master_port=13310,
master_log_file='xxxxx',
master_log_pos=xxxx;

# Start replication
start slave;

# Check the slave status
show slave status;
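
Once start slave has been executed, replication health can be checked from 192.168.1.19 (a minimal sketch, assuming a mysql client on the host and the root/root credentials used above):

mysql -h127.0.0.1 -P13311 -uroot -proot -e "show slave status\G" \
  | grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"
# Both *_Running values should be Yes.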

2.1.3.3 Deploy MyCat

In the database, tb_house_resources (the house listing table) is sharded across the two PXC clusters, while the other tables go through the read-write-splitting master-slave pair.

<!-- server.xml -->
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mycat:server SYSTEM "server.dtd">
<mycat:server xmlns:mycat="http://io.mycat/">
    <system>
        <property name="nonePasswordLogin">0</property>
        <property name="useHandshakeV10">1</property>
        <property name="useSqlStat">0</property>
        <property name="useGlobleTableCheck">0</property>
        <property name="sequnceHandlerType">2</property>
        <property name="subqueryRelationshipCheck">false</property>
        <property name="processorBufferPoolType">0</property>
        <property name="handleDistributedTransactions">0</property>
        <property name="useOffHeapForMerge">1</property>
        <property name="memoryPageSize">64k</property>
        <property name="spillsFileBufferSize">1k</property>
        <property name="useStreamOutput">0</property>
        <property name="systemReserveMemorySize">384m</property>
        <property name="useZKSwitch">false</property>
    </system>
    <!-- the haoke user and the virtual logical schema -->
    <user name="haoke" defaultAccount="true">
        <property name="password">haoke123</property>
        <property name="schemas">haoke</property>
    </user>
</mycat:server>
<!-- schema.xml -->
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <!-- table configuration -->
    <schema name="haoke" checkSQLschema="false" sqlMaxLimit="100">
        <table name="tb_house_resources" dataNode="dn1,dn2" rule="mod-long" />
        <table name="tb_ad" dataNode="dn3"/>
        <table name="tb_estate" dataNode="dn3"/>
    </schema>
    <!-- shard (dataNode) configuration -->
    <dataNode name="dn1" dataHost="cluster1" database="haoke" />
    <dataNode name="dn2" dataHost="cluster2" database="haoke" />
    <dataNode name="dn3" dataHost="cluster3" database="haoke" />
    <dataHost name="cluster1" maxCon="1000" minCon="10" balance="2" writeType="1"
              dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="W1" url="192.168.1.18:13306" user="root" password="root">
            <readHost host="W1R1" url="192.168.1.18:13307" user="root" password="root" />
        </writeHost>
    </dataHost>
    <dataHost name="cluster2" maxCon="1000" minCon="10" balance="2" writeType="1"
              dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="W2" url="192.168.1.18:13308" user="root" password="root">
            <readHost host="W2R1" url="192.168.1.18:13309" user="root" password="root" />
        </writeHost>
    </dataHost>
    <dataHost name="cluster3" maxCon="1000" minCon="10" balance="3" writeType="1"
              dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="W2" url="192.168.1.19:13310" user="root" password="root">
            <readHost host="W2R1" url="192.168.1.19:13311" user="root" password="root" />
        </writeHost>
    </dataHost>
</mycat:schema>
<!-- rule.xml -->
<function name="mod-long" class="io.mycat.route.function.PartitionByMod">
    <property name="count">2</property>
</function>

# Node 1
vim wrapper.conf
# set the JMX port
wrapper.java.additional.7=-Dcom.sun.management.jmxremote.port=11986

vim server.xml
# set the service port and the management port
<property name="serverPort">18068</property>
<property name="managerPort">19068</property>

# Node 2
vim wrapper.conf
# set the JMX port
wrapper.java.additional.7=-Dcom.sun.management.jmxremote.port=11987

vim server.xml
# set the service port and the management port
<property name="serverPort">18069</property>
<property name="managerPort">19069</property>

# Start each node
./startup_nowrap.sh && tail -f ../logs/mycat.log

2.1.3.4 Create tables and test data

CREATE TABLE `tb_ad` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `type` int(10) DEFAULT NULL COMMENT '广告类型',
  `title` varchar(100) DEFAULT NULL COMMENT '描述',
  `url` varchar(200) DEFAULT NULL COMMENT '图片URL地址',
  `created` datetime DEFAULT NULL,
  `updated` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE = InnoDB AUTO_INCREMENT = 5 CHARSET = utf8 COMMENT '广告表';

CREATE TABLE `tb_estate` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `name` varchar(100) DEFAULT NULL COMMENT '楼盘名称',
  `province` varchar(10) DEFAULT NULL COMMENT '所在省',
  `city` varchar(10) DEFAULT NULL COMMENT '所在市',
  `area` varchar(10) DEFAULT NULL COMMENT '所在区',
  `address` varchar(100) DEFAULT NULL COMMENT '具体地址',
  `year` varchar(10) DEFAULT NULL COMMENT '建筑年代',
  `type` varchar(10) DEFAULT NULL COMMENT '建筑类型',
  `property_cost` varchar(10) DEFAULT NULL COMMENT '物业费',
  `property_company` varchar(20) DEFAULT NULL COMMENT '物业公司',
  `developers` varchar(20) DEFAULT NULL COMMENT '开发商',
  `created` datetime DEFAULT NULL COMMENT '创建时间',
  `updated` datetime DEFAULT NULL COMMENT '更新时间',
  PRIMARY KEY (`id`)
) ENGINE = InnoDB AUTO_INCREMENT = 1006 CHARSET = utf8 COMMENT '楼盘表';

CREATE TABLE `tb_house_resources` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `title` varchar(100) DEFAULT NULL COMMENT '房源标题',
  `estate_id` bigint(20) DEFAULT NULL COMMENT '楼盘id',
  `building_num` varchar(5) DEFAULT NULL COMMENT '楼号(栋)',
  `building_unit` varchar(5) DEFAULT NULL COMMENT '单元号',
  `building_floor_num` varchar(5) DEFAULT NULL COMMENT '门牌号',
  `rent` int(10) DEFAULT NULL COMMENT '租金',
  `rent_method` tinyint(1) DEFAULT NULL COMMENT '租赁方式,1-整租,2-合租',
  `payment_method` tinyint(1) DEFAULT NULL COMMENT '支付方式,1-付一押一,2-付三押一,3-付六押一,4-年付押一,5-其它',
  `house_type` varchar(255) DEFAULT NULL COMMENT '户型,如: 2 室 1 厅 1 卫',
  `covered_area` varchar(10) DEFAULT NULL COMMENT '建筑面积',
  `use_area` varchar(10) DEFAULT NULL COMMENT '使用面积',
  `floor` varchar(10) DEFAULT NULL COMMENT '楼层,如:8/26',
  `orientation` varchar(2) DEFAULT NULL COMMENT '朝向:东、南、西、北',
  `decoration` tinyint(1) DEFAULT NULL COMMENT '装修,1-精装,2-简装,3-毛坯',
  `facilities` varchar(50) DEFAULT NULL COMMENT '配套设施, 如:1,2,3',
  `pic` varchar(1000) DEFAULT NULL COMMENT '图片,最多 5 张',
  `house_desc` varchar(200) DEFAULT NULL COMMENT '描述',
  `contact` varchar(10) DEFAULT NULL COMMENT '联系人',
  `mobile` varchar(11) DEFAULT NULL COMMENT '手机号',
  `time` tinyint(1) DEFAULT NULL COMMENT '看房时间,1-上午,2-中午,3-下午,4-晚上,5-全天',
  `property_cost` varchar(10) DEFAULT NULL COMMENT '物业费',
  `created` datetime DEFAULT NULL,
  `updated` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE = InnoDB AUTO_INCREMENT = 10 CHARSET = utf8 COMMENT '房源表';

INSERT INTO `tb_ad` (`id`, `type`, `title`, `url`, `created`, `updated`) VALUES
('1', '1', 'UniCity万科天空之城', 'http://itcast-haoke.oss-cn-qingdao.aliyuncs.com/images/2018/11/26/15432029097062227.jpg', '2018-11-26 11:28:49', '2018-11-26 11:28:51');
INSERT INTO `tb_ad` (`id`, `type`, `title`, `url`, `created`, `updated`) VALUES
('2', '1', '天和尚海庭前', 'http://itcast-haoke.oss-cn-qingdao.aliyuncs.com/images/2018/11/26/1543202958579877.jpg', '2018-11-26 11:29:27', '2018-11-26 11:29:29');
INSERT INTO `tb_ad` (`id`, `type`, `title`, `url`, `created`, `updated`) VALUES
('3', '1', '[奉贤 南桥] 光语著', 'http://itcast-haoke.oss-cn-qingdao.aliyuncs.com/images/2018/11/26/15432029946721854.jpg', '2018-11-26 11:30:04', '2018-11-26 11:30:06');
INSERT INTO `tb_ad` (`id`, `type`, `title`, `url`, `created`, `updated`) VALUES
('4', '1', '[上海周边 嘉兴] 融创海逸长洲', 'http://itcast-haoke.oss-cn-qingdao.aliyuncs.com/images/2018/11/26/15432030275359146.jpg', '2018-11-26 11:30:49', '2018-11-26 11:30:53');
INSERT INTO `tb_estate` (`id`, `name`, `province`, `city`, `area`, `address`, `year`, `type`, `property_cost`, `property_company`, `developers`, `created`, `updated`) VALUES
('1001', '中远两湾城', '上海市', '上海市', '普陀区', '远景路97弄', '2001', '塔楼/板楼', '1.5', '上海中远物业管理发展有限公司', '上海万业企业股份有限公司', '2018-11-06 23:00:20', '2018-11-06 23:00:23');
INSERT INTO `tb_estate` (`id`, `name`, `province`, `city`, `area`, `address`, `year`, `type`, `property_cost`, `property_company`, `developers`, `created`, `updated`) VALUES
('1002', '上海康城', '上海市', '上海市', '闵行区', '莘松路958弄', '2001', '塔楼/板楼', '1.5', '盛孚物业', '闵行房地产', '2018-11-06 23:02:30', '2018-11-27 23:02:33');
INSERT INTO `tb_estate` (`id`, `name`, `province`, `city`, `area`, `address`, `year`, `type`, `property_cost`, `property_company`, `developers`, `created`, `updated`) VALUES
('1003', '保利西子湾', '上海市', '上海市', '松江区', '广富林路1188弄', '2008', '塔楼/板楼', '1.75', '上海保利物业管理', '上海城乾房地产开发有限公司', '2018-11-06 23:04:22', '2018-11-06 23:04:25');
INSERT INTO `tb_estate` (`id`, `name`, `province`, `city`, `area`, `address`, `year`, `type`, `property_cost`, `property_company`, `developers`, `created`, `updated`) VALUES
('1004', '万科城市花园', '上海市', '上海市', '松江区', '广富林路1188弄', '2002', '塔楼/板楼', '1.5', '上海保利物业管理', '上海城乾房地产开发有限公司', '2018-11-13 16:43:40', '2018-11-13 16:43:42');
INSERT INTO `tb_estate` (`id`, `name`, `province`, `city`, `area`, `address`, `year`, `type`, `property_cost`, `property_company`, `developers`, `created`, `updated`) VALUES
('1005', '上海阳城', '上海市', '上海市', '闵行区', '罗锦路888弄', '2002', '塔楼/板楼', '1.5', '上海莲阳物业管理有限公司', '上海莲城房地产开发有限公司', '2018-11-06 23:23:52', '2018-11-06 23:23:55');
INSERT INTO `tb_house_resources` (`id`, `title`, `estate_id`, `building_num`, `building_unit`, `building_floor_num`, `rent`, `rent_method`, `payment_method`, `house_type`, `covered_area`, `use_area`, `floor`, `orientation`, `decoration`, `facilities`, `pic`, `house_desc`, `contact`, `mobile`, `time`, `property_cost`, `created`, `updated`) VALUES
('1', '东方曼哈顿 3室2厅 16000元', '1005', '2', '1', '1', '1111', '1', '1', '1室1厅1卫1厨1阳台', '2', '2', '1/2', '南', '1', '1,2,3,8,9', NULL, '这个经纪人很懒,没写核心卖点', '张三', '11111111111', '1', '11', '2018-11-16 01:16:00', '2018-11-16 01:16:00');
INSERT INTO `tb_house_resources` (`id`, `title`, `estate_id`, `building_num`, `building_unit`, `building_floor_num`, `rent`, `rent_method`, `payment_method`, `house_type`, `covered_area`, `use_area`, `floor`, `orientation`, `decoration`, `facilities`, `pic`, `house_desc`, `contact`, `mobile`, `time`, `property_cost`, `created`, `updated`) VALUES
('2', '康城 3室2厅1卫', '1002', '1', '2', '3', '2000', '1', '2', '3室2厅1卫1厨2阳台', '100', '80', '2/20', '南', '1', '1,2,3,7,6', NULL, '拎包入住', '张三', '18888888888', '5', '1.5', '2018-11-16 01:34:02', '2018-11-16 01:34:02');
INSERT INTO `tb_house_resources` (`id`, `title`, `estate_id`, `building_num`, `building_unit`, `building_floor_num`, `rent`, `rent_method`, `payment_method`, `house_type`, `covered_area`, `use_area`, `floor`, `orientation`, `decoration`, `facilities`, `pic`, `house_desc`, `contact`, `mobile`, `time`, `property_cost`, `created`, `updated`) VALUES
('3', '2', '1002', '2', '2', '2', '2', '1', '1', '1室1厅1卫1厨1阳台', '22', '11', '1/5', '南', '1', '1,2,3', NULL, '11', '22', '33', '1', '3', '2018-11-16 21:15:29', '2018-11-16 21:15:29');
INSERT INTO `tb_house_resources` (`id`, `title`, `estate_id`, `building_num`, `building_unit`, `building_floor_num`, `rent`, `rent_method`, `payment_method`, `house_type`, `covered_area`, `use_area`, `floor`, `orientation`, `decoration`, `facilities`, `pic`, `house_desc`, `contact`, `mobile`, `time`, `property_cost`, `created`, `updated`) VALUES
('4', '11', '1002', '1', '1', '1', '1', '1', '1', '1室1厅1卫1厨1阳台', '11', '1', '1/1', '南', '1', '1,2,3', NULL, '11', '1', '1', '1', '1', '2018-11-16 21:16:50', '2018-11-16 21:16:50');
INSERT INTO `tb_house_resources` (`id`, `title`, `estate_id`, `building_num`, `building_unit`, `building_floor_num`, `rent`, `rent_method`, `payment_method`, `house_type`, `covered_area`, `use_area`, `floor`, `orientation`, `decoration`, `facilities`, `pic`, `house_desc`, `contact`, `mobile`, `time`, `property_cost`, `created`, `updated`) VALUES
('5', '最新修改房源5', '1002', '1', '1', '1', '3000', '1', '1', '1室1厅1卫1厨1阳台', '80', '1', '1/1', '南', '1', '1,2,3', 'http://itcast-haoke.oss-cn-qingdao.aliyuncs.com/images/2018/12/04/15439353467987363.jpg,http://itcast-haoke.oss-cn-qingdao.aliyuncs.com/images/2018/12/04/15439354795233043.jpg', '11', '1', '1', '1', '1', '2018-11-16 21:17:02', '2018-12-04 23:05:19');
INSERT INTO `tb_house_resources` (`id`, `title`, `estate_id`, `building_num`, `building_unit`, `building_floor_num`, `rent`, `rent_method`, `payment_method`, `house_type`, `covered_area`, `use_area`, `floor`, `orientation`, `decoration`, `facilities`, `pic`, `house_desc`, `contact`, `mobile`, `time`, `property_cost`, `created`, `updated`) VALUES
('6', '房源标题', '1002', '1', '1', '11', '1', '1', '1', '1室1厅1卫1厨1阳台', '11', '1', '1/1', '南', '1', '1,2,3', 'http://itcast-haoke.oss-cn-qingdao.aliyuncs.com/images/2018/11/16/15423743004743329.jpg,http://itcast-haoke.oss-cn-qingdao.aliyuncs.com/images/2018/11/16/15423743049233737.jpg', '11', '2', '2', '1', '1', '2018-11-16 21:18:41', '2018-11-16 21:18:41');
INSERT INTO `tb_house_resources` (`id`, `title`, `estate_id`, `building_num`, `building_unit`, `building_floor_num`, `rent`, `rent_method`, `payment_method`, `house_type`, `covered_area`, `use_area`, `floor`, `orientation`, `decoration`, `facilities`, `pic`, `house_desc`, `contact`, `mobile`, `time`, `property_cost`, `created`, `updated`) VALUES
('7', '房源标题', '1002', '1', '1', '11', '1', '1', '1', '1室1厅1卫1厨1阳台', '11', '1', '1/1', '南', '1', '1,2,3', 'http://itcast-haoke.oss-cn-qingdao.aliyuncs.com/images/2018/11/16/15423743004743329.jpg,http://itcast-haoke.oss-cn-qingdao.aliyuncs.com/images/2018/11/16/15423743049233737.jpg', '11', '2', '2', '1', '1', '2018-11-16 21:18:41', '2018-11-16 21:18:41');
INSERT INTO `tb_house_resources` (`id`, `title`, `estate_id`, `building_num`, `building_unit`, `building_floor_num`, `rent`, `rent_method`, `payment_method`, `house_type`, `covered_area`, `use_area`, `floor`, `orientation`, `decoration`, `facilities`, `pic`, `house_desc`, `contact`, `mobile`, `time`, `property_cost`, `created`, `updated`) VALUES
('8', '3333', '1002', '1', '1', '1', '1', '1', '1', '1室1厅1卫1厨1阳台', '1', '1', '1/1', '南', '1', '1,2,3', 'http://itcast-haoke.oss-cn-qingdao.aliyuncs.com/images/2018/11/17/15423896060254118.jpg,http://itcast-haoke.oss-cn-qingdao.aliyuncs.com/images/2018/11/17/15423896084306516.jpg', '1', '1', '1', '1', '1', '2018-11-17 01:33:35', '2018-12-06 10:22:20');
INSERT INTO `tb_house_resources` (`id`, `title`, `estate_id`, `building_num`, `building_unit`, `building_floor_num`, `rent`, `rent_method`, `payment_method`, `house_type`, `covered_area`, `use_area`, `floor`, `orientation`, `decoration`, `facilities`, `pic`, `house_desc`, `contact`, `mobile`, `time`, `property_cost`, `created`, `updated`) VALUES
('9', '康城 精品房源2', '1002', '1', '2', '3', '1000', '1', '1', '1室1厅1卫1厨1阳台', '50', '40', '3/20', '南', '1', '1,2,3', 'http://itcast-haoke.oss-cn-qingdao.aliyuncs.com/images/2018/11/30/15435106627858721.jpg,http://itcast-haoke.oss-cn-qingdao.aliyuncs.com/images/2018/11/30/15435107119124432.jpg', '精品房源', '李四', '18888888888', '1', '1', '2018-11-21 18:31:35', '2018-11-30 00:58:46');
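
Because tb_house_resources uses the mod-long rule, the rows inserted through MyCat should be split between the two PXC clusters. A quick way to eyeball the distribution (a minimal sketch, querying the write hosts of cluster1 and cluster2 directly with the root/root accounts):

mysql -h192.168.1.18 -P13306 -uroot -proot -e "select count(*) from haoke.tb_house_resources;"
mysql -h192.168.1.18 -P13308 -uroot -proot -e "select count(*) from haoke.tb_house_resources;"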

2.1.3.5 Deploy HAProxy

# Pull the image
docker pull haproxy:1.9.3

# Create a directory for the configuration file
mkdir /haoke/haproxy

# Create the container
docker create --name haproxy --net host -v /haoke/haproxy:/usr/local/etc/haproxy haproxy:1.9.3

Write the configuration file:

# Create the file
vim /haoke/haproxy/haproxy.cfg
# with the following content
global
    log 127.0.0.1 local2
    maxconn 4000
    daemon

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000

listen admin_stats
    bind 0.0.0.0:4001
    mode http
    stats uri /dbs
    stats realm Global\ statistics
    stats auth admin:admin123

listen proxy-mysql
    bind 0.0.0.0:4002
    mode tcp
    balance roundrobin
    option tcplog
    # proxy the MyCat nodes
    server mycat_1 192.168.1.19:18068 check port 18068 maxconn 2000
    server mycat_2 192.168.1.19:18069 check port 18069 maxconn 2000

Start the container:

# start it
docker restart haproxy && docker logs -f haproxy
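
To confirm the proxy is working, check the stats page and send a query through the MySQL front-end port (a minimal sketch, assuming curl and a mysql client on the host):

# stats page, credentials from the cfg above
curl -u admin:admin123 http://192.168.1.19:4001/dbs
# MySQL traffic through HAProxy to MyCat
mysql -h192.168.1.19 -P4002 -uhaoke -phaoke123 -e "show databases;"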

2.2 Deploy the Redis cluster

The Redis cluster uses a 3-master, 3-replica architecture.

2.2.1 Planning

Service       Port   Server        Container name
Redis-node01  6379   192.168.1.18  redis-node01
Redis-node02  6380   192.168.1.18  redis-node02
Redis-node03  6381   192.168.1.18  redis-node03
Redis-node04  16379  192.168.1.19  redis-node04
Redis-node05  16380  192.168.1.19  redis-node05
Redis-node06  16381  192.168.1.19  redis-node06

2.2.2 Implementation

# Create the data volumes
docker volume create redis-node01
docker volume create redis-node02
docker volume create redis-node03
docker volume create redis-node04
docker volume create redis-node05
docker volume create redis-node06

# Create the containers
docker create --name redis-node01 --net host \
-v redis-node01:/data redis:5.0.2 --cluster-enabled yes \
--cluster-config-file nodes-node-01.conf --port 6379

docker create --name redis-node02 --net host \
-v redis-node02:/data redis:5.0.2 --cluster-enabled yes \
--cluster-config-file nodes-node-02.conf --port 6380

docker create --name redis-node03 --net host \
-v redis-node03:/data redis:5.0.2 --cluster-enabled yes \
--cluster-config-file nodes-node-03.conf --port 6381

docker create --name redis-node04 --net host \
-v redis-node04:/data redis:5.0.2 --cluster-enabled yes \
--cluster-config-file nodes-node-04.conf --port 16379

docker create --name redis-node05 --net host \
-v redis-node05:/data redis:5.0.2 --cluster-enabled yes \
--cluster-config-file nodes-node-05.conf --port 16380

docker create --name redis-node06 --net host \
-v redis-node06:/data redis:5.0.2 --cluster-enabled yes \
--cluster-config-file nodes-node-06.conf --port 16381

# Start the containers
docker start redis-node01 redis-node02 redis-node03 redis-node04 redis-node05 redis-node06

# Run the cluster setup from inside redis-node01
docker exec -it redis-node01 /bin/bash
# 192.168.1.18 and 192.168.1.19 are the host IP addresses
redis-cli --cluster create 192.168.1.18:6379 192.168.1.18:6380 192.168.1.18:6381 192.168.1.19:16379 192.168.1.19:16380 192.168.1.19:16381 --cluster-replicas 1
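
After the create command reports that all 16384 slots are covered, the cluster state can be double-checked from any node (a minimal sketch using the redis-cli already inside the container):

docker exec -it redis-node01 redis-cli -c -h 192.168.1.18 -p 6379 cluster info
docker exec -it redis-node01 redis-cli -c -h 192.168.1.18 -p 6379 cluster nodes
# Expect cluster_state:ok and three master plus three slave entries.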

2.3 Deploy the Elasticsearch cluster

Deploy a 3-node Elasticsearch cluster.

2.3.1 Planning

Service    Port       Server        Container name
es-node01  9200,9300  192.168.1.7   es-node01
es-node02  9200,9300  192.168.1.18  es-node02
es-node03  9200,9300  192.168.1.19  es-node03

2.3.2 Implementation

# elasticsearch.yml (adjust node.name and network.host for node02/node03):
cluster.name: es-haoke-cluster
node.name: node01
node.master: true
node.data: true
network.host: 192.168.1.7
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.7","192.168.1.18","192.168.1.19"]
discovery.zen.minimum_master_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"

# jvm.options
-Xms512m
-Xmx512m

# Unzip the IK and pinyin plugin packages to /haoke/es-cluster/ik and /haoke/es-cluster/pinyin
docker create --name es-node01 --net host \
-v /haoke/es-cluster/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /haoke/es-cluster/jvm.options:/usr/share/elasticsearch/config/jvm.options \
-v /haoke/es-cluster/data:/usr/share/elasticsearch/data \
-v /haoke/es-cluster/ik:/usr/share/elasticsearch/plugins/ik \
-v /haoke/es-cluster/pinyin:/usr/share/elasticsearch/plugins/pinyin elasticsearch:6.5.4

docker create --name es-node02 --net host \
-v /haoke/es-cluster/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /haoke/es-cluster/jvm.options:/usr/share/elasticsearch/config/jvm.options \
-v /haoke/es-cluster/data:/usr/share/elasticsearch/data \
-v /haoke/es-cluster/ik:/usr/share/elasticsearch/plugins/ik \
-v /haoke/es-cluster/pinyin:/usr/share/elasticsearch/plugins/pinyin elasticsearch:6.5.4

docker create --name es-node03 --net host \
-v /haoke/es-cluster/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /haoke/es-cluster/jvm.options:/usr/share/elasticsearch/config/jvm.options \
-v /haoke/es-cluster/data:/usr/share/elasticsearch/data \
-v /haoke/es-cluster/ik:/usr/share/elasticsearch/plugins/ik \
-v /haoke/es-cluster/pinyin:/usr/share/elasticsearch/plugins/pinyin elasticsearch:6.5.4

# Start and check the logs
docker start es-node01 && docker logs -f es-node01
docker start es-node02 && docker logs -f es-node02
docker start es-node03 && docker logs -f es-node03
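
Once all three containers are running, the cluster membership can be verified over the REST API (a minimal curl sketch):

curl http://192.168.1.7:9200/_cluster/health?pretty
curl http://192.168.1.7:9200/_cat/nodes?v
# Expect a green status and three nodes in the list.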

2.3.3 Document mapping

PUT http://192.168.1.7:9200/haoke/
{
  "settings": {
    "index": {
      "number_of_shards": 6,
      "number_of_replicas": 1,
      "analysis": {
        "analyzer": {
          "pinyin_analyzer": {
            "tokenizer": "my_pinyin"
          }
        },
        "tokenizer": {
          "my_pinyin": {
            "type": "pinyin",
            "keep_separate_first_letter": false,
            "keep_full_pinyin": true,
            "keep_original": true,
            "limit_first_letter_length": 16,
            "lowercase": true,
            "remove_duplicated_term": true
          }
        }
      }
    }
  },
  "mappings": {
    "house": {
      "dynamic": false,
      "properties": {
        "title": {
          "type": "text",
          "analyzer": "ik_max_word",
          "fields": {
            "pinyin": {
              "type": "text",
              "analyzer": "pinyin_analyzer"
            }
          }
        },
        "image": {"type": "keyword", "index": false},
        "orientation": {"type": "keyword", "index": false},
        "houseType": {"type": "keyword", "index": false},
        "rentMethod": {"type": "keyword", "index": false},
        "time": {"type": "keyword", "index": false},
        "rent": {"type": "keyword", "index": false},
        "floor": {"type": "keyword", "index": false}
      }
    }
  }
}
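
The same request can be issued with curl; the sketch below assumes the JSON body above has been saved to a file named haoke-mapping.json:

curl -X PUT -H "Content-Type: application/json" \
  --data-binary @haoke-mapping.json http://192.168.1.7:9200/haoke
# verify the index settings and mapping
curl http://192.168.1.7:9200/haoke/_mapping?pretty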

2.3.4 Import data

@Test
public void testBulk() throws Exception {
    // POST to the _bulk endpoint of the haoke index / house type
    Request request = new Request("POST", "/haoke/house/_bulk");
    List<String> lines = FileUtils.readLines(new File("F:\\code\\data.json"), "UTF-8");
    String createStr = "{\"index\":{\"_index\":\"haoke\",\"_type\":\"house\"}}";
    StringBuilder sb = new StringBuilder();
    int count = 0;
    for (String line : lines) {
        sb.append(createStr).append("\n").append(line).append("\n");
        count++;
        if (count >= 100) { // send a batch of 100 documents
            request.setJsonEntity(sb.toString());
            Response response = this.restClient.performRequest(request);
            System.out.println("request done -> " + response.getStatusLine());
            System.out.println(EntityUtils.toString(response.getEntity()));
            count = 0;
            sb = new StringBuilder();
        }
    }
    if (sb.length() > 0) { // flush the final partial batch
        request.setJsonEntity(sb.toString());
        Response response = this.restClient.performRequest(request);
        System.out.println("request done -> " + response.getStatusLine());
    }
}
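
After the test finishes, the number of indexed documents can be compared with the number of lines in data.json (a minimal curl sketch):

curl http://192.168.1.7:9200/haoke/_count?pretty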

2.4 Deploy the RocketMQ cluster

Build a cluster with 2 masters and 2 slaves.

2.4.1 Planning

Service            Port   Server        Container name
rmqserver01        9876   192.168.1.7   rmqserver01
rmqserver02        9877   192.168.1.7   rmqserver02
rmqbroker01        10911  192.168.1.19  rmqbroker01
rmqbroker02        10811  192.168.1.19  rmqbroker02
rmqbroker01-slave  10711  192.168.1.18  rmqbroker01-slave
rmqbroker02-slave  10611  192.168.1.18  rmqbroker02-slave
rocketmq-console   8082   192.168.1.7   rocketmq-console

2.4.2 Implementation

# Create the 2 name servers
# nameserver1
docker create -p 9876:9876 --name rmqserver01 \
-e "JAVA_OPT_EXT=-server -Xms128m -Xmx128m -Xmn128m" \
-e "JAVA_OPTS=-Duser.home=/opt" \
-v /haoke/rmq/rmqserver01/logs:/opt/logs \
-v /haoke/rmq/rmqserver01/store:/opt/store \
foxiswho/rocketmq:server-4.3.2

# nameserver2
docker create -p 9877:9876 --name rmqserver02 \
-e "JAVA_OPT_EXT=-server -Xms128m -Xmx128m -Xmn128m" \
-e "JAVA_OPTS=-Duser.home=/opt" \
-v /haoke/rmq/rmqserver02/logs:/opt/logs \
-v /haoke/rmq/rmqserver02/store:/opt/store \
foxiswho/rocketmq:server-4.3.2

# Create the first master broker
# master broker01
docker create --net host --name rmqbroker01 \
-e "JAVA_OPTS=-Duser.home=/opt" \
-e "JAVA_OPT_EXT=-server -Xms128m -Xmx128m -Xmn128m" \
-v /haoke/rmq/rmqbroker01/conf/broker.conf:/etc/rocketmq/broker.conf \
-v /haoke/rmq/rmqbroker01/logs:/opt/logs \
-v /haoke/rmq/rmqbroker01/store:/opt/store \
foxiswho/rocketmq:broker-4.3.2

# /haoke/rmq/rmqbroker01/conf/broker.conf
namesrvAddr=192.168.1.7:9876;192.168.1.7:9877
brokerClusterName=haokeCluster
brokerName=broker01
brokerId=0
deleteWhen=04
fileReservedTime=48
brokerRole=SYNC_MASTER
flushDiskType=ASYNC_FLUSH
brokerIP1=192.168.1.19
brokerIP2=192.168.1.19
listenPort=10911
# Create the second master broker
# master broker02
docker create --net host --name rmqbroker02 \
-e "JAVA_OPTS=-Duser.home=/opt" \
-e "JAVA_OPT_EXT=-server -Xms128m -Xmx128m -Xmn128m" \
-v /haoke/rmq/rmqbroker02/conf/broker.conf:/etc/rocketmq/broker.conf \
-v /haoke/rmq/rmqbroker02/logs:/opt/logs \
-v /haoke/rmq/rmqbroker02/store:/opt/store \
foxiswho/rocketmq:broker-4.3.2

# /haoke/rmq/rmqbroker02/conf/broker.conf
namesrvAddr=192.168.1.7:9876;192.168.1.7:9877
brokerClusterName=haokeCluster
brokerName=broker02
brokerId=0
deleteWhen=04
fileReservedTime=48
brokerRole=SYNC_MASTER
flushDiskType=ASYNC_FLUSH
brokerIP1=192.168.1.19
brokerIP2=192.168.1.19
listenPort=10811
# Create the first slave broker
# slave of broker01
docker create --net host --name rmqbroker03 \
-e "JAVA_OPTS=-Duser.home=/opt" \
-e "JAVA_OPT_EXT=-server -Xms128m -Xmx128m -Xmn128m" \
-v /haoke/rmq/rmqbroker03/conf/broker.conf:/etc/rocketmq/broker.conf \
-v /haoke/rmq/rmqbroker03/logs:/opt/logs \
-v /haoke/rmq/rmqbroker03/store:/opt/store \
foxiswho/rocketmq:broker-4.3.2

# /haoke/rmq/rmqbroker03/conf/broker.conf
namesrvAddr=192.168.1.7:9876;192.168.1.7:9877
brokerClusterName=haokeCluster
brokerName=broker01
brokerId=1
deleteWhen=04
fileReservedTime=48
brokerRole=SLAVE
flushDiskType=ASYNC_FLUSH
brokerIP1=192.168.1.18
brokerIP2=192.168.1.18
listenPort=10711
# Create the second slave broker
# slave of broker02
docker create --net host --name rmqbroker04 \
-e "JAVA_OPTS=-Duser.home=/opt" \
-e "JAVA_OPT_EXT=-server -Xms128m -Xmx128m -Xmn128m" \
-v /haoke/rmq/rmqbroker04/conf/broker.conf:/etc/rocketmq/broker.conf \
-v /haoke/rmq/rmqbroker04/logs:/opt/logs \
-v /haoke/rmq/rmqbroker04/store:/opt/store \
foxiswho/rocketmq:broker-4.3.2

# /haoke/rmq/rmqbroker04/conf/broker.conf
namesrvAddr=192.168.1.7:9876;192.168.1.7:9877
brokerClusterName=haokeCluster
brokerName=broker02
brokerId=1
deleteWhen=04
fileReservedTime=48
brokerRole=SLAVE
flushDiskType=ASYNC_FLUSH
brokerIP1=192.168.1.18
brokerIP2=192.168.1.18
listenPort=10611
# Start the containers
docker start rmqserver01 rmqserver02
docker start rmqbroker01 rmqbroker02
docker start rmqbroker03 rmqbroker04

# Deploy rocketmq-console
# Pull the image
docker pull styletang/rocketmq-console-ng:1.0.0

# Create and start the container
docker run -e "JAVA_OPTS=-Drocketmq.namesrv.addr=192.168.1.7:9876;192.168.1.7:9877 -Dcom.rocketmq.sendMessageWithVIPChannel=false" \
-p 8082:8080 -t styletang/rocketmq-console-ng:1.0.0
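
To confirm that all four brokers registered with the name servers, open the console at http://192.168.1.7:8082, or run mqadmin from inside one of the broker containers (a minimal sketch; it assumes the mqadmin script is on the PATH of the foxiswho broker image):

docker exec -it rmqbroker01 sh -c "mqadmin clusterList -n '192.168.1.7:9876;192.168.1.7:9877'"
# haokeCluster should list broker01 and broker02, each with BID 0 (master) and 1 (slave).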

2.5 Build the ZooKeeper cluster

Build a 3-node ZooKeeper cluster.

2.5.1 Planning

Service  Port            Server        Container name
zk01     2181,2888,3888  192.168.1.7   zk01
zk02     2181,2888,3888  192.168.1.18  zk02
zk03     2181,2888,3888  192.168.1.19  zk03

2.5.2 Implementation

# Pull the zk image
docker pull zookeeper:3.4

# Create the containers (one per server)
docker create --name zk01 --net host -e ZOO_MY_ID=1 \
-e ZOO_SERVERS="server.1=192.168.1.7:2888:3888 server.2=192.168.1.18:2888:3888 server.3=192.168.1.19:2888:3888" zookeeper:3.4

docker create --name zk02 --net host -e ZOO_MY_ID=2 \
-e ZOO_SERVERS="server.1=192.168.1.7:2888:3888 server.2=192.168.1.18:2888:3888 server.3=192.168.1.19:2888:3888" zookeeper:3.4

docker create --name zk03 --net host -e ZOO_MY_ID=3 \
-e ZOO_SERVERS="server.1=192.168.1.7:2888:3888 server.2=192.168.1.18:2888:3888 server.3=192.168.1.19:2888:3888" zookeeper:3.4

# Start the containers
docker start zk01 && docker logs -f zk01
docker start zk02 && docker logs -f zk02
docker start zk03 && docker logs -f zk03
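
Each node's role can be checked with ZooKeeper's four-letter commands once the containers are running (a minimal sketch, assuming nc is installed on the host):

for host in 192.168.1.7 192.168.1.18 192.168.1.19; do
  echo srvr | nc ${host} 2181 | grep Mode   # expect one leader and two followers
done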

3. Project packaging

Domain name planning for the project (virtual domain names):

Service                         Domain                Server
itcast-haoke-manage-api-server  api.manage.haoke.com  192.168.1.7
itcast-haoke-manage-web         manage.haoke.com      192.168.1.7
itcast-haoke-web                www.haoke.com         192.168.1.18
itcast-haoke-im                 im.haoke.com          192.168.1.19

3.1 Package the Spring Boot projects

Step 1: add the Spring Boot packaging plugin

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <configuration>
                <!-- replace with the actual fully qualified main class -->
                <mainClass>cn.itcast.haoke.dubbo.server.DubboProvider</mainClass>
            </configuration>
            <executions>
                <execution>
                    <goals>
                        <goal>repackage</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Step 2: run the package command

mvn install -Dmaven.test.skip=true
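
The repackage goal produces an executable jar under target/. A minimal sketch of starting it on the target server (the jar name and the 18080 port are assumptions, taken here from the nginx proxy configuration below):

# run the packaged API server in the background
nohup java -jar itcast-haoke-manage-api-server.jar --server.port=18080 > api-server.log 2>&1 &
tail -f api-server.log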

3.2 Build Ant Design Pro

# Build with the umi command; a set of static pages is generated on success
umi build

The generated static pages are served through nginx, and the data requests are also proxied through nginx.

# Install nginx (install the build dependencies, then compile in the unpacked nginx source directory)
apt install libpcre3 libpcre3-dev zlib1g-dev openssl libssl-dev
./configure
make install

# Start it
cd /usr/local/nginx/sbin/
./nginx
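
The build output then has to be copied to the directory that the nginx configuration in section 3.3 serves as its root (a minimal sketch; it assumes umi writes the build to ./dist, its default output directory):

mkdir -p /haoke/publish/manage-web
cp -r dist/* /haoke/publish/manage-web/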

3.3 nginx configuration for the itcast-haoke-manage-web system

server {
    listen 80;
    server_name manage.haoke.com;
    # charset koi8-r;
    # access_log logs/host.access.log main;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    location ^~ /haoke/ {
        # the trailing / matters: it strips the /haoke/ prefix before proxying
        proxy_pass http://192.168.1.7:18080/;
        proxy_connect_timeout 600;
        proxy_read_timeout 600;
    }
    # requests that do not start with /haoke are served from the static build
    location / {
        root /haoke/publish/manage-web;
    }
}

3.4 Configure virtual domain names

server {
    listen 80;
    server_name api.manage.haoke.com;
    # charset koi8-r;
    # access_log logs/host.access.log main;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    location / {
        proxy_pass http://192.168.1.7:18080;
        proxy_connect_timeout 600;
        proxy_read_timeout 600;
    }
}
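
Because these domains are virtual, the machines that access the systems need hosts entries pointing them at the servers from the table in section 3 (append to /etc/hosts, or the Windows hosts file):

192.168.1.7   api.manage.haoke.com
192.168.1.7   manage.haoke.com
192.168.1.18  www.haoke.com
192.168.1.19  im.haoke.com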

3.5 nginx reverse proxy for WebSocket

server {
    listen 80;
    server_name im.haoke.com;
    # charset koi8-r;
    # access_log logs/host.access.log main;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    location / {
        proxy_pass http://192.168.1.19:18081;
        proxy_connect_timeout 600;
        proxy_read_timeout 600;
        # WebSocket upgrade headers; HTTP/1.1 is required for the upgrade
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
