Setting up an Elasticsearch cluster on Linux
Prerequisites
Install Docker.
Install Docker Compose.
Note: the host should have at least 8 GB of RAM; Elasticsearch is memory-hungry (each node uses roughly 2-3 GB).
Option 1: single machine, multiple nodes
Follow the official guide (an Elasticsearch cluster built with docker-compose).
Steps
Create a working directory (referred to as "this directory" below) and change into it:
mkdir -p /usr/local/src/es/docker
cd /usr/local/src/es/docker
Create a docker-compose.yml file in this directory with the following contents:
version: "2.2"

services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es02\n"\
          "    dns:\n"\
          "      - es02\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es03\n"\
          "    dns:\n"\
          "      - es03\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es02,es03
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test: ["CMD-SHELL", "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'"]
      interval: 10s
      timeout: 10s
      retries: 120

  es02:
    depends_on:
      - es01
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata02:/usr/share/elasticsearch/data
    environment:
      - node.name=es02
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es02/es02.key
      - xpack.security.http.ssl.certificate=certs/es02/es02.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es02/es02.key
      - xpack.security.transport.ssl.certificate=certs/es02/es02.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test: ["CMD-SHELL", "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'"]
      interval: 10s
      timeout: 10s
      retries: 120

  es03:
    depends_on:
      - es02
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata03:/usr/share/elasticsearch/data
    environment:
      - node.name=es03
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es02
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es03/es03.key
      - xpack.security.http.ssl.certificate=certs/es03/es03.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es03/es03.key
      - xpack.security.transport.ssl.certificate=certs/es03/es03.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test: ["CMD-SHELL", "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'"]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    depends_on:
      es01:
        condition: service_healthy
      es02:
        condition: service_healthy
      es03:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test: ["CMD-SHELL", "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'"]
      interval: 10s
      timeout: 10s
      retries: 120

volumes:
  certs:
    driver: local
  esdata01:
    driver: local
  esdata02:
    driver: local
  esdata03:
    driver: local
  kibanadata:
    driver: local
Important!
The first line of the file above, version: "2.2", can trigger the following error at startup: Non-string key at top level: true
[root@ycj docker]# docker-compose up -d
Non-string key at top level: true
[root@ycj docker]#
If you hit this error, simply delete the version: "2.2" line from the file.
Create a .env file in the same directory with the following contents:
# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=123456789

# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=123456789

# Version of Elastic products
STACK_VERSION=8.2.0

# Set the cluster name
CLUSTER_NAME=docker-cluster

# Set to 'basic' or 'trial' to automatically start the 30-day trial
LICENSE=basic
#LICENSE=trial

# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200
#ES_PORT=127.0.0.1:9200

# Port to expose Kibana to the host
KIBANA_PORT=5601
#KIBANA_PORT=80

# Increase or decrease based on the available host memory (in bytes)
MEM_LIMIT=1073741824

# Project namespace (defaults to the current folder name if not set)
#COMPOSE_PROJECT_NAME=myproject
Notes:
- The .env file is optional. If you omit it, every ${...} placeholder in docker-compose.yml must be replaced with a literal value; for example, the ${CLUSTER_NAME} variable has to be written out explicitly.
- docker-compose.yml resolves its variables from the .env file.
- The .env file and docker-compose.yml must be in the same directory.
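To see how Compose-style ${VAR} substitution behaves, you can reproduce the lookup in plain shell. This is an illustrative sketch only (the .env.demo file name is made up for the demo); Compose itself performs the substitution automatically:

```shell
# Create a miniature .env-style file with two of the variables from above.
cat > .env.demo <<'EOF'
CLUSTER_NAME=docker-cluster
STACK_VERSION=8.2.0
EOF

# Source it so the variables become available, then expand them the way
# docker-compose would when reading docker-compose.yml.
set -a; . ./.env.demo; set +a
echo "image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}"
echo "cluster: ${CLUSTER_NAME}"
rm .env.demo
```

This prints the image tag and cluster name with the placeholders filled in, which is exactly what Compose does with the real .env file.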
Start the stack from this directory:
docker-compose up -d
This one command pulls the images, creates the containers, and starts everything in a single step.
Other commands
# stop and remove the cluster
docker-compose down
# stop the cluster and also remove the networks, containers, and volumes
docker-compose down -v
Copy the cluster's CA certificate from the es01 container into the current directory on the host. The container path /usr/share/elasticsearch/config/certs/ca/ca.crt is the one configured in docker-compose.yml.
docker cp docker-es01-1:/usr/share/elasticsearch/config/certs/ca/ca.crt .
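With the CA certificate copied out, you can verify the cluster over HTTPS. A minimal sketch, assuming the ES_PORT=9200 mapping and the ELASTIC_PASSWORD value (123456789) from the .env file above; it only works against a running cluster:

```shell
# Query cluster health over HTTPS, trusting the cluster's own CA.
# Replace 123456789 with the ELASTIC_PASSWORD from your .env file.
curl --cacert ca.crt -u elastic:123456789 'https://localhost:9200/_cluster/health?pretty'
```

A healthy three-node cluster reports "status" : "green" and "number_of_nodes" : 3 in the JSON response.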
The following requirements and recommendations apply when running Elasticsearch in Docker in production.
Set vm.max_map_count to at least 262144
For production use, the kernel setting vm.max_map_count must be at least 262144. How to set it depends on your platform.
Linux
To check the currently configured value of vm.max_map_count, run:
grep vm.max_map_count /etc/sysctl.conf
vm.max_map_count=262144
To apply the setting on a running system, run:
sysctl -w vm.max_map_count=262144
To make the change permanent, update the value in /etc/sysctl.conf.
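The persistence step above can be sketched as follows. Note this demo writes to a temporary file so it is safe to try anywhere; on a real host the target file is /etc/sysctl.conf, followed by sysctl -p to reload it:

```shell
# Append the setting only if it is not already present, then confirm it.
CONF=$(mktemp)    # stands in for /etc/sysctl.conf in this demo
grep -q '^vm.max_map_count' "$CONF" || echo 'vm.max_map_count=262144' >> "$CONF"
LINE=$(grep vm.max_map_count "$CONF")
echo "$LINE"
rm "$CONF"
```

The grep-before-append guard keeps the file from accumulating duplicate entries if the step is run more than once.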
Option 2: multiple machines, one node per machine
Steps
1. Prepare three machines: 192.168.110.141 through 192.168.110.143.
2. On each machine, create the working directories:
mkdir -p /home/es/docker/data
and make the data directory writable:
chmod 777 /home/es/docker/data
3. On each machine, create a docker-compose.yml file in that directory:
192.168.110.141
version: '2.2'
services:
  es01:                         # service name
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.3   # image to use
    container_name: es01        # container name
    #restart: always            # restart-on-failure policy
    environment:
      - node.name=es01                  # node name; must be unique within the cluster
      - cluster.name=es-docker-cluster  # cluster name; nodes sharing it form one cluster, so it must match on all three nodes
      - discovery.seed_hosts=es02,es03  # addresses of master-eligible peers (new in ES 7.0); used to elect a new master if the current one fails
      - cluster.initial_master_nodes=es01,es02,es03 # nodes eligible for the first master election when bootstrapping a new cluster (new in ES 7.0)
      - bootstrap.memory_lock=true      # lock memory to avoid swapping; recommended by the official docs
      #- network.bind_host=0.0.0.0      # IP to bind to (IPv4 or IPv6); defaults to 0.0.0.0
      #- network.host=127.0.0.1
      - network.publish_host=es01       # address other cluster members use to reach this node; usually the host machine's IP
      - cluster.join.timeout=180s
      - cluster.publish.timeout=180s
      #- network.publish_host=192.168.4.170
      #- network.host=127.0.0.1
      - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"  # JVM heap size; lower it if memory is tight
    ulimits:                    # memory-lock limits
      memlock:
        soft: -1                # unlimited
        hard: -1                # unlimited
    volumes:
      - ./data:/usr/share/elasticsearch/data   # bind-mount the data directory created in step 2
    ports:
      - 9200:9200               # HTTP port; can be accessed directly from a browser
      - 9300:9300               # transport port used for node-to-node TCP communication within the cluster
    extra_hosts:
      - "es01:192.168.110.141"
      - "es02:192.168.110.142"
      - "es03:192.168.110.143"
    networks:
      - elastic
  kib01:
    image: docker.elastic.co/kibana/kibana:7.13.3
    container_name: kib01
    ports:
      - 5601:5601
    extra_hosts:
      - "es01:192.168.110.141"
      - "es02:192.168.110.142"
      - "es03:192.168.110.143"
    environment:
      ELASTICSEARCH_URL: http://es01:9200
      ELASTICSEARCH_HOSTS: '["http://es01:9200","http://es02:9200","http://es03:9200"]'
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
192.168.110.142
version: '2.2'
services:
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.3
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      #- network.bind_host=192.168.4.171
      #- network.host=127.0.0.1
      - network.publish_host=es02
      - cluster.join.timeout=180s
      - cluster.publish.timeout=180s
      #- network.publish_host=192.168.4.170
      #- network.host=127.0.0.1
      - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data:/usr/share/elasticsearch/data   # bind-mount the data directory created in step 2
    ports:
      - 9200:9200
      - 9300:9300
    extra_hosts:
      - "es01:192.168.110.141"
      - "es02:192.168.110.142"
      - "es03:192.168.110.143"
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
192.168.110.143
version: '2.2'
services:
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.3
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      #- network.bind_host=192.168.4.171
      #- network.host=127.0.0.1
      - network.publish_host=es03
      - cluster.join.timeout=180s
      - cluster.publish.timeout=180s
      #- network.publish_host=192.168.4.170
      #- network.host=127.0.0.1
      - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data:/usr/share/elasticsearch/data   # bind-mount the data directory created in step 2
    ports:
      - 9200:9200
      - 9300:9300
    extra_hosts:
      - "es01:192.168.110.141"
      - "es02:192.168.110.142"
      - "es03:192.168.110.143"
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
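The three compose files above differ only in node.name, discovery.seed_hosts, and network.publish_host: each node's seed list is simply the other two nodes. A small shell sketch makes the pattern explicit:

```shell
# For each node, derive its seed list as "the other two node names".
for NODE in es01 es02 es03; do
  PEERS=$(printf '%s\n' es01 es02 es03 | grep -v "^${NODE}$" | paste -sd, -)
  echo "${NODE}: discovery.seed_hosts=${PEERS}"
done
```

This prints the exact seed_hosts value used in each of the three files, which is a handy sanity check before copying the configs to the machines.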
4. On each machine, bring the node up with docker-compose:
docker-compose -f docker-compose.yml up -d
# if the command above is not available, use this one instead (pick whichever works)
docker compose -f docker-compose.yml up -d
If startup fails, the most likely cause is that vm.max_map_count has not been configured.
Linux
To check the currently configured value of vm.max_map_count, run:
grep vm.max_map_count /etc/sysctl.conf
vm.max_map_count=262144
To apply the setting on a running system, run:
sysctl -w vm.max_map_count=262144
To make the change permanent, update the value in /etc/sysctl.conf.
Once the cluster has started successfully, you can install plugins.
Installing the ik analyzer
Download the ik analyzer package from the project repository; the package version must match your Elasticsearch version:
https://github.com/medcl/elasticsearch-analysis-ik
or use the gitee mirror:
https://gitee.com/mirrors/elasticsearch-analysis-ik
Download elasticsearch-analysis-ik-7.13.3.zip and copy it to the /root/ directory.
Install the ik analyzer on each of the three nodes:
# copy the ik analyzer package into the es container (shown for es01; repeat for es02 and es03)
docker cp elasticsearch-analysis-ik-7.13.3.zip es01:/root/
# install the ik analyzer inside es01
docker exec -it es01 elasticsearch-plugin install file:///root/elasticsearch-analysis-ik-7.13.3.zip
# restart the container so the plugin is loaded
docker restart es01
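After the install, a quick way to confirm the analyzer actually works is to call the _analyze API with one of the ik analyzers. A sketch, assuming the node on 192.168.110.141 is back up; the sample text is arbitrary:

```shell
# Ask the cluster to tokenize a Chinese phrase with the ik_smart analyzer.
curl -s -X POST 'http://192.168.110.141:9200/_analyze' \
  -H 'Content-Type: application/json' \
  -d '{"analyzer": "ik_smart", "text": "中华人民共和国"}'
```

If the plugin is installed correctly, the response contains word-level tokens; without ik, requesting the ik_smart analyzer would return an error instead.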
Verifying the installation
Open http://192.168.110.141:9200/_cat/plugins in a browser; the ik plugin should be listed in the output.
Configuring the ES cluster in Spring Boot
Add the dependency:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>
Option 1 (many of these properties are deprecated in newer Spring Boot versions):
spring:
  data:
    elasticsearch:
      cluster-name: es-docker-cluster
      cluster-nodes: 192.168.110.141:9300,192.168.110.142:9300,192.168.110.143:9300
Option 2 (use this when option 1 does not work):
spring:
  elasticsearch:
    rest:
      uris: http://192.168.110.141:9200,http://192.168.110.142:9200,http://192.168.110.143:9200 # the ES cluster addresses; separate multiple addresses with commas
By default, Elasticsearch's Java transport client talks to port 9300, while HTTP clients use port 9200.
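The difference between the two ports is easy to observe directly. A sketch, assuming a node is running on 192.168.110.141: port 9200 answers HTTP with cluster information, while 9300 speaks the binary transport protocol and rejects HTTP requests:

```shell
# REST API port: returns a JSON document describing the node and cluster.
curl http://192.168.110.141:9200
# Transport port: not an HTTP endpoint, so an HTTP request is refused
# (recent ES versions reply with a "This is not an HTTP port" message).
curl http://192.168.110.141:9300
```

This is why option 1 (transport client) points at 9300 and option 2 (REST client) points at 9200.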