The versions used here are Elasticsearch 6.2.1, Logstash 6.2.1, and MySQL.

Part 1: Elasticsearch

1. Introduction

Elasticsearch is a search server built on Lucene. It provides a distributed, multi-tenant full-text search engine behind a RESTful web interface. Elasticsearch is developed in Java, released as open source under the Apache License, and is a popular enterprise search engine. It is widely used in cloud environments, delivering real-time search while being stable, reliable, fast, and easy to install and use. Official clients are available for Java, .NET (C#), PHP, Python, Apache Groovy, Ruby, and many other languages. According to the DB-Engines ranking, Elasticsearch is the most popular enterprise search engine, followed by Apache Solr, which is also based on Lucene.

Advantages:
1. Elasticsearch is a highly scalable distributed search server built on Lucene and works out of the box.
2. It hides Lucene's complexity and exposes a RESTful interface for indexing and searching.
3. It scales well: clusters of hundreds of servers can handle petabytes of data.
4. Indexing and search are near real time.
Below we walk through integrating ES with Spring Boot.

Installation (requires JDK 1.8 or later):
Download and install Elasticsearch.


bin: script directory, including the start/stop executables
config: configuration file directory
data: index directory, where index files are stored
logs: log directory
modules: module directory, containing ES's functional modules
plugins: plugin directory; ES supports a plugin mechanism

2. Configure the three files under config, then start Elasticsearch (if the configuration is wrong, elasticsearch.bat may fail to start):

elasticsearch.yml: configures Elasticsearch runtime parameters
jvm.options: configures Elasticsearch JVM settings
log4j2.properties: configures Elasticsearch logging

1.elasticsearch.yml

cluster.name: (cluster name; defaults to "elasticsearch" — change it to something meaningful)
node.name: (node name; usually one node per physical server. ES assigns a random name by default; pick a meaningful one for easier management)
network.host: 0.0.0.0
http.port: 9200 (HTTP port for external requests; defaults to 9200)
transport.tcp.port: 9300 (port for communication between cluster nodes)
node.master: true
node.data: true
#discovery.zen.ping.unicast.hosts: ["0.0.0.0:9300", "0.0.0.0:9301", "0.0.0.0:9302"] (initial list of master-eligible nodes in the cluster)
discovery.zen.minimum_master_nodes: 1 (minimum number of master-eligible nodes; the formula is (master_eligible_nodes / 2) + 1 — e.g. with 3 master-eligible nodes, set this to 2)
node.ingest: true
bootstrap.memory_lock: false
node.max_local_storage_nodes: 1 (maximum number of nodes allowed to share this data path; with one node per machine set it to 1, in development you may set it higher to start several nodes on one machine)
path.data: D:\ElasticSearch\elasticsearch-3\data
path.logs: D:\ElasticSearch\elasticsearch-3\logs
http.cors.enabled: true
http.cors.allow-origin: /.*/
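To make the quorum formula for discovery.zen.minimum_master_nodes concrete — (master_eligible_nodes / 2) + 1 — here is a minimal sketch; the class and method names are mine, purely for illustration:

```java
public class QuorumCalc {
    // Quorum formula from the configuration note above:
    // (master_eligible_nodes / 2) + 1, using integer division
    static int minimumMasterNodes(int masterEligibleNodes) {
        return masterEligibleNodes / 2 + 1;
    }

    public static void main(String[] args) {
        System.out.println(minimumMasterNodes(1)); // single-node dev setup above -> 1
        System.out.println(minimumMasterNodes(3)); // three master-eligible nodes -> 2
    }
}
```

Setting this too low on a multi-node cluster risks split-brain; setting it higher than the node count prevents the cluster from electing a master at all.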

2. jvm.options

## JVM configuration

################################################################
## IMPORTANT: JVM heap size
################################################################
##
## You should always set the min and max JVM heap
## size to the same value. For example, to set
## the heap to 4 GB, set:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms1g
-Xmx1g

################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################

## GC configuration
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly

## optimizations

# pre-touch memory pages used by the JVM during initialization
-XX:+AlwaysPreTouch

## basic

# explicitly set the stack size
-Xss1m

# set to headless, just in case
-Djava.awt.headless=true

# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8

# use our provided JNA always versus the system one
-Djna.nosys=true

# turn off a JDK optimization that throws away stack traces for common
# exceptions because stack traces are important for debugging
-XX:-OmitStackTraceInFastThrow

# flags to configure Netty
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0

# log4j 2
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true

-Djava.io.tmpdir=${ES_TMPDIR}

## heap dumps

# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps
# ensure the directory exists and has sufficient space
#-XX:HeapDumpPath=/heap/dump/path

## JDK 8 GC logging

8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:logs/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m

# JDK 9+ GC logging
9-:-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m
# due to internationalization enhancements in JDK 9 Elasticsearch need to set the provider to COMPAT otherwise
# time/date parsing will break in an incompatible way for some date patterns and locals
9-:-Djava.locale.providers=COMPAT

3. log4j2.properties

status = error

# log action execution errors for easier debugging
logger.action.name = org.elasticsearch.action
logger.action.level = debug

appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n

appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 128MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.fileIndex = nomax
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-*
appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB

rootLogger.level = info
rootLogger.appenderRef.console.ref = console
rootLogger.appenderRef.rolling.ref = rolling

appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.log
appender.deprecation_rolling.layout.type = PatternLayout
appender.deprecation_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n
appender.deprecation_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation-%i.log.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size = 1GB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 4

logger.deprecation.name = org.elasticsearch.deprecation
logger.deprecation.level = warn
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
logger.deprecation.additivity = false

appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog.log
appender.index_search_slowlog_rolling.layout.type = PatternLayout
appender.index_search_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.-10000m%n
appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog-%d{yyyy-MM-dd}.log
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.time.interval = 1
appender.index_search_slowlog_rolling.policies.time.modulate = true

logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity = false

appender.index_indexing_slowlog_rolling.type = RollingFile
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog.log
appender.index_indexing_slowlog_rolling.layout.type = PatternLayout
appender.index_indexing_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.-10000m%n
appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog-%d{yyyy-MM-dd}.log
appender.index_indexing_slowlog_rolling.policies.type = Policies
appender.index_indexing_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_indexing_slowlog_rolling.policies.time.interval = 1
appender.index_indexing_slowlog_rolling.policies.time.modulate = true

logger.index_indexing_slowlog.name = index.indexing.slowlog.index
logger.index_indexing_slowlog.level = trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
logger.index_indexing_slowlog.additivity = false

4. Start Elasticsearch

Go to the bin directory and run elasticsearch.bat from cmd.
Verify it started successfully (e.g. by opening http://localhost:9200, the HTTP port configured above):

3. Install the head visualization plugin

The head plugin is a visual management plugin for ES. It monitors the state of the cluster and lets you interact with the ES service through a web client, e.g. to create mappings and indexes. The project lives at https://github.com/mobz/elasticsearch-head.
Since ES 6.0, the head plugin runs on Node.js (install Node.js first).
After downloading, place it alongside Elasticsearch, open cmd in its root directory, and run npm run start.


Click Indices, then New Index to create an index. The number of shards depends on how much data you have — one shard can hold roughly 10,000 rows — and the number of replicas can be set to 0.
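The "roughly 10,000 rows per shard" figure above is the author's rule of thumb, not an Elasticsearch limit (real shard sizing depends on document size and query load). If you want to turn that rule of thumb into a quick estimate, a hypothetical sketch (class and method names are mine):

```java
public class ShardEstimate {
    // Rough shard count following the rule of thumb above (~10,000 rows per shard).
    // This is only an illustration of the arithmetic, not official sizing guidance.
    static int shardsFor(long rowCount, long rowsPerShard) {
        // ceiling division, with a minimum of one shard
        return (int) Math.max(1, (rowCount + rowsPerShard - 1) / rowsPerShard);
    }

    public static void main(String[] args) {
        System.out.println(shardsFor(25_000, 10_000)); // 25k rows -> 3 shards
    }
}
```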

4. Install the IK analyzer

1. Download from https://github.com/medcl/elasticsearch-analysis-ik.
Unzip it and copy the extracted files into an ik directory under the plugins folder of your ES installation.

Part 2: Installing and Using Logstash

1. Install Logstash
Logstash is open-source software in the Elastic stack. It can ingest data from multiple sources at once, transform it, and send it to Elasticsearch to build indexes. This project uses Logstash to sync data from MySQL into an ES index. Download Logstash.
Note that the Logstash version must match your ES version.

2. Install logstash-input-jdbc
logstash-input-jdbc is written in Ruby, so install Ruby first: https://rubyinstaller.org/downloads/
Logstash 5.x ships with logstash-input-jdbc built in, but the 6.x releases do not, so it must be installed manually:

.\logstash-plugin.bat install logstash-input-jdbc

3. Configure the index template JSON and mysql.conf
1. mysql.conf

input {
  stdin {}
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/database?useUnicode=true&characterEncoding=utf-8&useSSL=true&serverTimezone=UTC"
    # the user we wish to execute our statement as
    jdbc_user => "root"
    jdbc_password => "root"
    # the path to our downloaded jdbc driver
    jdbc_driver_library => "D:/Maven/mysql/mysql-connector-java/5.1.41/mysql-connector-java-5.1.41.jar"
    # the name of the driver class for mysql
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
    # SQL file to execute
    #statement_filepath => "/conf/course.sql"
    statement => "select * from wp_ex_source_goods_tb_cat_copy where timestamp > date_add(:sql_last_value,INTERVAL 8 HOUR)"
    # schedule (cron-like; here: every minute)
    schedule => "* * * * *"
    record_last_run => true
    last_run_metadata_path => "D:/ElasticSearch/logstash-6.2.1/config/logstash_metadata"
  }
}
output {
  elasticsearch {
    # ES host and port
    hosts => "localhost:9200"
    #hosts => ["localhost:9200","localhost:9202","localhost:9203"]
    # ES index name
    index => "goods"
    document_id => "%{cid}"
    document_type => "doc"
    template => "D:/ElasticSearch/logstash-6.2.1/config/goods_template.json"
    template_name => "goods"
    template_overwrite => "true"
  }
  stdout {
    # log output
    codec => json_lines
  }
}

2. goods_template.json (the names here are mine)

{
  "mappings" : {
    "doc" : {
      "properties" : {
        "cid": { "type": "text" },
        "name": { "type": "keyword" },
        "is_parent": { "type": "text" },
        "parent_id": { "type": "text" },
        "level": { "type": "text" },
        "pathid": { "type": "text" },
        "path": { "type": "text" }
      }
    }
  },
  "template" : "goods"
}

4. Start Logstash to pull MySQL data into ES:

Run logstash.bat:

.\logstash.bat -f ..\config\mysql.conf

Logstash pulls changes in near real time based on the database's timestamp column, comparing it against the time stored in the logstash_metadata file under config.
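The incremental pull above boils down to: on each scheduled run, select only rows whose timestamp is newer than the last recorded run, then persist the new high-water mark. A standalone Java sketch of that idea (the class and method names are mine, not Logstash API):

```java
import java.time.Instant;
import java.util.List;

public class IncrementalPull {
    // Simulates what the jdbc input's query does on each scheduled run:
    //   select * from ... where timestamp > :sql_last_value
    // lastRun plays the role of the value Logstash persists in logstash_metadata.
    static long changedRows(List<Instant> rowTimestamps, Instant lastRun) {
        return rowTimestamps.stream()
                .filter(ts -> ts.isAfter(lastRun))
                .count();
    }

    public static void main(String[] args) {
        List<Instant> rows = List.of(
                Instant.parse("2020-01-01T00:00:00Z"),
                Instant.parse("2020-06-01T00:00:00Z"));
        // only the row newer than the last run would be re-synced
        System.out.println(changedRows(rows, Instant.parse("2020-03-01T00:00:00Z")));
    }
}
```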

Part 3: Integrating Elasticsearch with Spring Boot

Core files:

pom.xml — note: the ES server and client versions must match, otherwise requests may get no response or throw NullPointerException.

<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>6.5.4</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>6.5.4</version>
</dependency>
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <optional>true</optional>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-io</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
</dependency>

application.yml

server:
  port: 8083
spring:
  application:
    name: sbes
cat: # arbitrary prefix
  elasticsearch:
    hostlist: ${eshostlist:127.0.0.1:9200}
    es:
      index: goods
      type: doc
      source_field: cid,name,is_parent,parent_id,level,pathid,path,timestamp

ElasticsearchConfig.java

/**
 * @author Administrator
 * @version 1.0
 **/
@Configuration
public class ElasticsearchConfig {

    @Value("${cat.elasticsearch.hostlist}")
    private String hostlist;

    @Bean
    public RestHighLevelClient restHighLevelClient() {
        // parse the hostlist setting
        String[] split = hostlist.split(",");
        // build an HttpHost array holding each host/port pair
        HttpHost[] httpHostArray = new HttpHost[split.length];
        for (int i = 0; i < split.length; i++) {
            String item = split[i];
            httpHostArray[i] = new HttpHost(item.split(":")[0], Integer.parseInt(item.split(":")[1]), "http");
        }
        // create the RestHighLevelClient
        return new RestHighLevelClient(RestClient.builder(httpHostArray));
    }

    // the project mainly uses RestHighLevelClient; the low-level client is unused for now
    @Bean
    public RestClient restClient() {
        // parse the hostlist setting
        String[] split = hostlist.split(",");
        // build an HttpHost array holding each host/port pair
        HttpHost[] httpHostArray = new HttpHost[split.length];
        for (int i = 0; i < split.length; i++) {
            String item = split[i];
            httpHostArray[i] = new HttpHost(item.split(":")[0], Integer.parseInt(item.split(":")[1]), "http");
        }
        return RestClient.builder(httpHostArray).build();
    }
}
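The host-list parsing inside ElasticsearchConfig is easy to test on its own. Here is the same logic extracted into a standalone sketch (the class name is mine):

```java
public class HostlistParser {
    // Mirrors the parsing in ElasticsearchConfig:
    // "127.0.0.1:9200,127.0.0.1:9201" -> { {"127.0.0.1","9200"}, {"127.0.0.1","9201"} }
    static String[][] parse(String hostlist) {
        String[] items = hostlist.split(",");
        String[][] hosts = new String[items.length][2];
        for (int i = 0; i < items.length; i++) {
            String[] parts = items[i].split(":");
            hosts[i][0] = parts[0]; // host
            hosts[i][1] = parts[1]; // port (still a string here)
        }
        return hosts;
    }

    public static void main(String[] args) {
        String[][] hosts = parse("127.0.0.1:9200,127.0.0.1:9201");
        System.out.println(hosts.length + " hosts, first port " + hosts[0][1]);
    }
}
```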

EsCatService.java:
Note: a paginated query fails once it reaches past 10,000 documents, so a small adjustment is needed through the head console
(9200 is the ES address, goods is the index name):

@Service
public class EsCatService {

    @Value("${cat.elasticsearch.es.index}")
    private String es_index;
    @Value("${cat.elasticsearch.es.type}")
    private String es_type;
    @Value("${cat.elasticsearch.es.source_field}")
    private String source_field;

    @Autowired
    RestHighLevelClient restHighLevelClient;

    public QueryResponseResult<TbCatCopy> list(int page, int size, String keyword) throws IOException {
        // build the search request
        SearchRequest searchRequest = new SearchRequest(es_index);
        // set the search type
        searchRequest.types(es_type);
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
        // source field filtering
        String[] source_fields = source_field.split(",");
        searchSourceBuilder.fetchSource(source_fields, new String[]{});
        // keyword search
        if (StringUtils.isNotEmpty(keyword)) {
            // match the keyword against the name field
            MultiMatchQueryBuilder multiMatchQueryBuilder = QueryBuilders.multiMatchQuery(keyword, "name");
//            // set the minimum match percentage
//            multiMatchQueryBuilder.minimumShouldMatch("70%");
//            // boost one of the fields
//            multiMatchQueryBuilder.field("name", 10);
            boolQueryBuilder.must(multiMatchQueryBuilder);
        }
        // attach the boolean query
        searchSourceBuilder.query(boolQueryBuilder);
        if (page <= 0) {
            page = 1;
        }
        if (size <= 0) {
            size = 10;
        }
        // starting record offset
        int from = (page - 1) * size;
        searchSourceBuilder.from(from);
        searchSourceBuilder.size(size);
        searchRequest.source(searchSourceBuilder);
        QueryResult<TbCatCopy> queryResult = new QueryResult();
        // result list
        List<TbCatCopy> list = new ArrayList<>();
        // execute the search
        SearchResponse searchResponse = restHighLevelClient.search(searchRequest);
        // get the response hits
        SearchHits hits = searchResponse.getHits();
        // total hit count
        long totalHits = hits.getTotalHits();
        queryResult.setTotal(totalHits);
        // process the hits
        SearchHit[] searchHits = hits.getHits();
        for (SearchHit hit : searchHits) {
            TbCatCopy tbCatCopy = new TbCatCopy();
            Map<String, Object> sourceAsMap = hit.getSourceAsMap();
            Integer cid = (Integer) sourceAsMap.get("cid");
            tbCatCopy.setCid(cid);
            String name = (String) sourceAsMap.get("name");
            tbCatCopy.setName(name);
            Integer is_parent = (Integer) sourceAsMap.get("is_parent");
            tbCatCopy.setIs_parent(is_parent);
            Integer parent_id = (Integer) sourceAsMap.get("parent_id");
            tbCatCopy.setParent_id(parent_id);
            Integer level = (Integer) sourceAsMap.get("level");
            tbCatCopy.setLevel(level);
            String pathid = (String) sourceAsMap.get("pathid");
            tbCatCopy.setPathid(pathid);
            String path = (String) sourceAsMap.get("path");
            tbCatCopy.setPath(path);
            list.add(tbCatCopy);
        }
        queryResult.setList(list);
        return new QueryResponseResult<TbCatCopy>(CommonCode.SUCCESS, queryResult);
    }
}
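The pagination arithmetic in EsCatService.list(), together with a guard for the 10,000-document limit mentioned above (index.max_result_window defaults to 10,000 in Elasticsearch), can be isolated into a small sketch; the class and constant names are mine:

```java
public class PageWindow {
    // Elasticsearch's default index.max_result_window
    static final int MAX_RESULT_WINDOW = 10_000;

    // Same from/size arithmetic as EsCatService.list(),
    // including the defaults for non-positive page/size
    static int from(int page, int size) {
        if (page <= 0) page = 1;
        if (size <= 0) size = 10;
        return (page - 1) * size;
    }

    // true if the requested page fits inside the default result window
    static boolean withinWindow(int page, int size) {
        return from(page, size) + size <= MAX_RESULT_WINDOW;
    }

    public static void main(String[] args) {
        System.out.println(from(3, 10));             // page 3, size 10 -> offset 20
        System.out.println(withinWindow(1001, 10));  // past the 10k window -> false
    }
}
```

Rather than raising max_result_window by hand in head, deep pagination is usually better served by scroll or search_after, but that is beyond the scope of this post.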

Java query operations in ES

Overview of the main query types:

matchAllQuery — match all documents
queryStringQuery — Lucene query-string search over fields
wildcardQuery — wildcard query; * matches multiple characters, ? matches a single character
termQuery — exact term query
matchQuery — analyzed match query on a field
idsQuery — query by document IDs
fuzzyQuery — fuzzy (similarity) query
includeLower / includeUpper — bounds for range queries
boolQuery — compound (boolean) query
SortOrder — result sort order
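To make the wildcardQuery semantics above concrete, here is an illustrative stand-in that mimics the * / ? rules with a regex. This is not the Elasticsearch implementation, only a demonstration of the matching semantics:

```java
import java.util.regex.Pattern;

public class WildcardDemo {
    // wildcardQuery semantics: '*' matches any run of characters,
    // '?' matches exactly one character; everything else is literal.
    static boolean wildcardMatch(String term, String pattern) {
        StringBuilder re = new StringBuilder();
        for (char c : pattern.toCharArray()) {
            if (c == '*') re.append(".*");
            else if (c == '?') re.append('.');
            else re.append(Pattern.quote(String.valueOf(c)));
        }
        return Pattern.matches(re.toString(), term);
    }

    public static void main(String[] args) {
        System.out.println(wildcardMatch("elastic", "ela*"));    // true
        System.out.println(wildcardMatch("elastic", "el?stic")); // true
    }
}
```

Note that in real Elasticsearch, leading-wildcard patterns like *search can be very expensive, since they cannot use the term dictionary efficiently.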

Part 4: Rendering the page with Vue.js

If you are not familiar with Vue.js, see my earlier three-level cascading-dropdown post (springboot + vue.js/ajax + mysql + SpringDataJpa/Mybatis), which uses Vue.js and also shows the Ajax approach.

1. proxyTable configuration:

proxyTable: {
  '/cat': {
    // dev environment
    target: 'http://localhost:8083', // backend host
    changeOrigin: true, // allow cross-origin
    pathRewrite: {
      '^/cat': '' // strip the /cat prefix before forwarding
    }
  }
}
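The pathRewrite rule means the backend never sees the /cat prefix: the dev server forwards /cat/list to http://localhost:8083/list. A tiny sketch of that rewrite (the class and method names are mine, just to pin down the regex behavior):

```java
public class PathRewrite {
    // Mirrors pathRewrite: {'^/cat': ''} from the proxyTable config:
    // strip a leading "/cat" before the request is forwarded upstream.
    static String rewrite(String path) {
        return path.replaceFirst("^/cat", "");
    }

    public static void main(String[] args) {
        System.out.println(rewrite("/cat/list")); // forwarded as /list
        System.out.println(rewrite("/other"));    // untouched
    }
}
```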

2. Vue code

<template>
  <div>
    <div class="usertable">
      <el-row class="usersavebtn">
        <el-input v-model="input" placeholder="Enter a keyword" style="width:20%"></el-input>
        <el-button type="primary" icon="el-icon-search" @click="init()">Search</el-button>
      </el-row>
      <el-table :data="tableData" stripe style="width: 100%">
        <el-table-column prop="cid" label="ID" width="150"></el-table-column>
        <el-table-column prop="name" label="Product name" width="200"></el-table-column>
        <el-table-column prop="level" label="Level" width="150"></el-table-column>
        <el-table-column prop="pathid" label="Category ID" width="200"></el-table-column>
        <el-table-column prop="path" label="Category" width="350"></el-table-column>
      </el-table>
      <el-pagination
        background
        layout="prev, pager, next"
        :total="total"
        :page-size="params.size"
        :current-page="params.page"
        @current-change="changePage"
        style="float: right"
      ></el-pagination>
    </div>
  </div>
</template>

<script>
const axios = require('axios')
export default {
  name: 'catpage',
  data () {
    return {
      tableData: [],
      dialogFormVisible: false,
      formLabelWidth: '120px',
      total: 0,
      params: {
        page: 0,
        size: 10
      },
      input: ''
      // form validation rules
    }
  },
  created: function () {
    this.init()
  },
  methods: {
    changePage: function (page) {
      this.params.page = page
      this.init()
    },
    init: function (h) {
      var app = this
      var a = h == undefined ? app : h
      var keyword = a.input
      var pageNum = a.params.page
      var pageSize = a.params.size
      console.log('init')
      axios.get('/cat/list', {
        params: {
          'page': pageNum,
          'size': pageSize,
          'keyword': keyword
        }
      }).then(function (response) {
        // handle success
        console.log('============', response)
        a.tableData = response.data.queryResult.list // assign the returned list
        a.total = response.data.queryResult.total // assign the total hit count
      }).catch(function (error) {
        // handle error
        console.log(error)
      })
    }
  }
}
</script>

<style scoped>
.usertable {
  margin: 0 auto;
  width: 70%;
  position: relative;
}
</style>

3. Result:
