Elasticsearch 7.x Study Notes
Table of Contents
- 1 Elasticsearch Overview
- 1.1 Background
- 1.2 What ES Is
- 1.3 ES vs. Solr
- 1.4 Inverted Index
- 2 Installing ES
- 3 Basic ES Operations
- 3.1 The Structure of ES
- 3.1.1 Index
- 3.1.2 Type
- 3.1.3 Document
- 3.1.4 Field
- 3.2 ES RESTful Syntax
- 3.2.1 Using the RESTful Syntax
- 3.2.2 Types a Field Can Be Given in ES
- 4 Operating Elasticsearch from Java
- 4.1 Environment Setup and Base Classes
- 4.2 Preparing Indexes and Documents
- 4.2.1 Indexes
- 4.2.1.1 Creating an Index
- 4.2.1.2 Checking Whether an Index Exists
- 4.2.1.3 Deleting an Index
- 4.2.2 Documents
- 4.2.2.1 Adding a Document
- 4.2.2.2 Bulk Adding
- 4.2.2.3 Updating a Document
- 4.2.2.4 Deleting a Document
- 4.2.3 Queries
- 4.2.3.1 term
- 4.2.3.2 terms
- 4.2.3.3 match_all
- 4.2.3.4 match
- 4.2.3.5 multi_match
- 4.2.3.6 id
- 4.2.3.7 ids
- 4.2.3.8 prefix
- 4.2.3.9 fuzzy
- 4.2.3.10 wildcard
- 4.2.3.11 range
- 4.2.3.12 regexp
- 4.2.3.13 Deep Paging with scroll
- 4.2.3.14 delete-by-query
- 4.2.3.15 bool
- 4.2.3.16 boosting
- 4.2.3.17 filter
- 4.2.3.18 Highlight Queries
- 4.2.3.19 Aggregation Queries
- 4.2.3.20 Distinct-Count (cardinality) Aggregation
- 4.2.3.21 Range Statistics
- 4.2.3.22 Extended Stats Aggregation
- 4.2.4 Geo (Latitude/Longitude) Search
- 4.2.4.1 Preparing the Data
- 4.2.4.2 ES Geo Query Types
- 5 Elasticsearch Cluster
1 Elasticsearch Overview
1.1 Background
Searching through massive data sets with MySQL is far too slow.
Even an imprecise keyword should still retrieve the data the user wants.
The matched keywords should be displayed highlighted (e.g. in red).
1.2 What ES Is
ES is a search-engine framework written in Java on top of Lucene. It provides distributed full-text search, exposes a unified RESTful web interface, and the official clients offer APIs for many languages.
Lucene: the underlying search-engine library itself.
Distributed: ES emphasizes horizontal scalability.
Full-text search: text is analyzed into terms, and each term is stored in a term dictionary; at search time the keyword is looked up in that dictionary to find matching content (an inverted index).
RESTful web interface: operating ES is simple — you send an HTTP request, and the HTTP method and request body determine the operation performed.
Widely used: GitHub, Wikipedia, and many others.
1.3 ES vs. Solr
- Solr is somewhat faster than ES when querying static data, but its query performance drops sharply once the data changes in real time, while ES performance stays roughly constant.
- A Solr cluster needs ZooKeeper for coordination, whereas ES supports clustering out of the box with no third-party dependency.
- Solr's community was very active early on, though documentation aimed at Chinese users was scarce; since ES appeared, its community has grown rapidly and its documentation is very complete.
- ES integrates well with today's cloud-computing and big-data ecosystems.
1.4 Inverted Index
- The stored data is analyzed (tokenized) in some way, and the resulting terms are kept in a separate term dictionary.
- When a user queries, the query keywords are analyzed as well.
- The query terms are matched against the term dictionary, yielding the ids of the matching documents.
- The documents are then fetched by id from wherever they are stored.
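The four steps above can be sketched with plain Java collections. This is a toy model, not how Lucene stores postings on disk, and the whitespace split stands in for a real analyzer:

```java
import java.util.*;

public class InvertedIndexDemo {
    // term dictionary: term -> ids of documents containing that term
    private final Map<String, Set<Integer>> postings = new HashMap<>();
    private final Map<Integer, String> docs = new HashMap<>();

    // Step 1: tokenize the document and record each term in the term dictionary.
    public void add(int id, String text) {
        docs.put(id, text);
        for (String term : text.toLowerCase().split("\\s+")) {
            postings.computeIfAbsent(term, t -> new HashSet<>()).add(id);
        }
    }

    // Steps 2-4: tokenize the query, look each term up, union the ids, fetch the docs.
    public List<String> search(String query) {
        Set<Integer> ids = new TreeSet<>();
        for (String term : query.toLowerCase().split("\\s+")) {
            ids.addAll(postings.getOrDefault(term, Set.of()));
        }
        List<String> result = new ArrayList<>();
        for (int id : ids) result.add(docs.get(id));
        return result;
    }

    public static void main(String[] args) {
        InvertedIndexDemo idx = new InvertedIndexDemo();
        idx.add(1, "elasticsearch is a search engine");
        idx.add(2, "lucene powers the search engine");
        System.out.println(idx.search("search engine")); // both docs match
    }
}
```

The point of the structure is that lookup cost depends on the number of query terms, not on the number or size of the stored documents.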
2 Installing ES

version: "3.1"
services:
  elasticsearch:
    image: elasticsearch:7.7.0
    restart: always
    container_name: elasticsearch
    ports:
      - 9200:9200
    environment:
      - ES_JAVA_OPTS=-Xms256m -Xmx256m
      - discovery.type=single-node
  kibana:
    image: kibana:7.7.0
    restart: always
    container_name: kibana
    ports:
      - 5601:5601
    environment:
      - elasticsearch_url=http://112.124.21.177
    depends_on:
      - elasticsearch
3 Basic ES Operations
3.1 The Structure of ES
index → type → document → field
3.1.1 Index
- An ES service can hold many indexes.
- Each index is split into 1 shard by default (5 shards by default before 7.0).
- Each primary shard has at least one replica shard.
- Replica shards do not serve searches by default; they only help out when the search load on ES is especially heavy.
- A replica shard must live on a different node from its primary.
3.1.2 Type
- An index has one default type, _doc (5.x allowed several types per index, 6.x only one).
3.1.3 Document
- A type can contain many documents, analogous to the rows of a MySQL table.
3.1.4 Field
- A document can contain many fields, analogous to the columns of a row in a MySQL table.
3.2 ES RESTful Syntax
3.2.1 Using the RESTful Syntax
- GET requests
http://ip:port/index — get information about an index
http://ip:port/index/type/doc_id — get a specific document
- POST requests
http://ip:port/index/_search — search documents; the query conditions go in the request body as JSON
http://ip:port/index/_update/doc_id — update a document; the fields to change go in the request body as JSON
- PUT requests
http://ip:port/index — create an index; the index settings go in the request body
- DELETE requests
http://ip:port/index — delete an index
http://ip:port/index/type/doc_id — delete a specific document
3.2.2 Types a Field Can Be Given in ES
https://www.elastic.co/guide/en/elasticsearch/reference/7.7/mapping-types.html
4 Operating Elasticsearch from Java
4.1 Environment Setup and Base Classes

<!-- 1. elasticsearch -->
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>7.7.0</version>
</dependency>
<!-- 2. elasticsearch high-level REST client -->
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.7.0</version>
</dependency>
<!-- 3. junit -->
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
</dependency>
<!-- 4. lombok -->
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.16.22</version>
</dependency>

public class EsClient {
    public static RestHighLevelClient getClient() {
        HttpHost host = new HttpHost("112.124.21.177", 9200);
        RestClientBuilder builder = RestClient.builder(host);
        return new RestHighLevelClient(builder);
    }
}
4.2 Preparing Indexes and Documents
Index: sms-logs-index
4.2.1 Indexes
4.2.1.1 Creating an Index
- ES way
PUT /sms-logs-index
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  },
  "mappings": {
    "properties": {
      "createDate": { "type": "date", "format": "yyyy-MM-dd" },
      "sendDate":   { "type": "date", "format": "yyyy-MM-dd" },
      "longCode":   { "type": "keyword" },
      "mobile":     { "type": "keyword" },
      "corpName":   { "type": "text", "analyzer": "ik_max_word" },
      "smsContent": { "type": "text", "analyzer": "ik_max_word" },
      "state":      { "type": "integer" },
      "operatorId": { "type": "integer" },
      "province":   { "type": "keyword" },
      "ipAddr":     { "type": "ip" },
      "replyTotal": { "type": "integer" },
      "fee":        { "type": "long" }
    }
  }
}
- Java way
public void createIndex() throws Exception {
    // 1. Index settings
    Settings.Builder settings = Settings.builder()
            .put("number_of_shards", 1)
            .put("number_of_replicas", 1);
    // 2. Index mappings
    XContentBuilder mappings = JsonXContent.contentBuilder()
            .startObject()
                .startObject("properties")
                    .startObject("corpName").field("type", "text").field("analyzer", "ik_max_word").endObject()
                    .startObject("createDate").field("type", "date").field("format", "yyyy-MM-dd").endObject()
                    .startObject("fee").field("type", "long").endObject()
                    .startObject("ipAddr").field("type", "ip").endObject()
                    .startObject("longCode").field("type", "keyword").endObject()
                    .startObject("mobile").field("type", "keyword").endObject()
                    .startObject("operatorId").field("type", "integer").endObject()
                    .startObject("province").field("type", "keyword").endObject()
                    .startObject("replyTotal").field("type", "integer").endObject()
                    .startObject("sendDate").field("type", "date").field("format", "yyyy-MM-dd").endObject()
                    .startObject("smsContent").field("type", "text").field("analyzer", "ik_max_word").endObject()
                    .startObject("state").field("type", "integer").endObject()
                .endObject()
            .endObject();
    // 3. Wrap settings and mappings in one request object
    CreateIndexRequest request = new CreateIndexRequest(INDEX)
            .settings(settings)
            .mapping(mappings);
    // 4. Execute against ES with the client
    CreateIndexResponse response = client.indices().create(request, RequestOptions.DEFAULT);
    System.out.println("response: " + response.toString());
}
4.2.1.2 Checking Whether an Index Exists
- ES way
HEAD /sms-logs-index
- Java way
public void exists() throws IOException {
    GetIndexRequest request = new GetIndexRequest(INDEX);
    boolean exists = client.indices().exists(request, RequestOptions.DEFAULT);
    System.out.println(exists);
}
4.2.1.3 Deleting an Index
- ES way
DELETE /test
- Java way
public void delete() throws IOException {
    DeleteIndexRequest request = new DeleteIndexRequest("test");
    AcknowledgedResponse delete = client.indices().delete(request, RequestOptions.DEFAULT);
    System.out.println(delete.isAcknowledged());
}
4.2.2 Documents
4.2.2.1 Adding a Document
- ES way
# Add a document with an auto-generated id
POST /book/_doc
{
  "name": "五三教辅",
  "author": "黄云辉",
  "count": 100000,
  "on-sale": "2001-01-01",
  "descr": "买我必上清华"
}
# Add a document with an explicit id
PUT /book/_doc/1
{
  "name": "红楼梦",
  "author": "曹雪芹",
  "count": 10000000,
  "on-sale": "2501-01-01",
  "descr": "中国古代章回体长篇小说,中国古典四大名著之一,一般认为是清代作家曹雪芹所著。小说以贾、史、王、薛四大家族的兴衰为背景,以富贵公子贾宝玉为视角,以贾宝玉与林黛玉、薛宝钗的爱情婚姻悲剧为主线,描绘了一批举止见识出于须眉之上的闺阁佳人的人生百态,展现了真正的人性美和悲剧美"
}
- Java way
public void createDoc() throws IOException {
    Student student = new Student("2", "张三2", 22, new Date());
    String json = JSONObject.toJSONString(student);
    IndexRequest request = new IndexRequest(INDEX, "_doc", student.getId());
    request.source(json, XContentType.JSON);
    IndexResponse response = client.index(request, RequestOptions.DEFAULT);
    System.out.println(response.toString());
}
4.2.2.2 Bulk Adding
- ES way
POST _bulk
{ "index" : { "_index" : "test", "_id" : "1" } }
{ "field1" : "value1" }
{ "delete" : { "_index" : "test", "_id" : "2" } }
{ "create" : { "_index" : "test", "_id" : "3" } }
{ "field1" : "value3" }
{ "update" : {"_id" : "1", "_index" : "test"} }
{ "doc" : {"field2" : "value2"} }
- Java way
public void bulkCreateDoc() throws Exception {
    // 1. Prepare several documents
    String longCode = "1008687";
    String mobile = "18659113636";
    List<String> companies = new ArrayList<>();
    companies.add("腾讯课堂");
    companies.add("阿里旺旺");
    companies.add("海尔电器");
    companies.add("海尔智家公司");
    companies.add("格力汽车");
    companies.add("苏宁易购");
    List<String> provinces = new ArrayList<>();
    provinces.add("北京");
    provinces.add("重庆");
    provinces.add("上海");
    provinces.add("晋城");
    // 2. Add every document to a single BulkRequest
    BulkRequest bulkRequest = new BulkRequest();
    for (int i = 1; i < 16; i++) {
        Thread.sleep(1000);
        SmsLogs s1 = new SmsLogs();
        s1.setId(i);
        s1.setCreateDate(new Date());
        s1.setSendDate(new Date());
        s1.setLongCode(longCode + i);
        s1.setMobile(mobile + 2 * i);
        s1.setCorpName(companies.get(i % 5));
        s1.setSmsContent(SmsLogs.doc.substring((i - 1) * 100, i * 100));
        s1.setState(i % 2);
        s1.setOperatorId(i % 3);
        s1.setProvince(provinces.get(i % 4));
        s1.setIpAddr("127.0.0." + i);
        s1.setReplyTotal(i * 3);
        s1.setFee(i * 6 + "");
        String json1 = JSONObject.toJSONString(s1);
        bulkRequest.add(new IndexRequest(INDEX, "_doc", s1.getId().toString())
                .source(json1, XContentType.JSON));
        System.out.println("doc " + i + ": " + s1.toString());
    }
    // 3. Execute with the client
    BulkResponse responses = client.bulk(bulkRequest, RequestOptions.DEFAULT);
    // 4. Print the result
    System.out.println(responses.getItems().toString());
}
4.2.2.3 Updating a Document
- ES way
# full replacement
PUT /test/_doc/1
{
  "name": "c++"
}
# partial (doc-based) update
POST /test/_update/1
{
  "doc": {
    "name": "c++"
  }
}
- Java way
public void updateDoc() throws IOException {
    Map<String, Object> map = new HashMap<>(16);
    map.put("name", "李四");
    UpdateRequest request = new UpdateRequest(INDEX, "1");
    request.doc(map);
    UpdateResponse response = client.update(request, RequestOptions.DEFAULT);
    System.out.println(response.toString());
}
4.2.2.4 Deleting a Document
- ES way
DELETE /test/_doc/1
- Java way
public void delDoc() throws IOException {
    DeleteRequest request = new DeleteRequest(INDEX, "1");
    DeleteResponse delete = client.delete(request, RequestOptions.DEFAULT);
    System.out.println(delete.getResult());
}
4.2.3 Queries
4.2.3.1 term
A term query is an exact match: the keyword is not analyzed before searching; it is taken as-is and matched against the document term dictionary.
- ES way
POST /sms-logs-index/_search
{
  "from": 0,
  "size": 5,
  "query": {
    "term": {
      "province": { "value": "北京" }
    }
  }
}
- Java way
public void searchTermDoc() throws IOException {
    SearchRequest request = new SearchRequest(INDEX);
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.from(0);
    builder.size(5);
    builder.query(QueryBuilders.termQuery("province", "北京"));
    request.source(builder);
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    SearchHit[] hits = response.getHits().getHits();
    for (SearchHit hit : hits) {
        Map<String, Object> source = hit.getSourceAsMap();
        System.out.println(source);
    }
}
4.2.3.2 terms
terms works like term: the keyword is not analyzed before searching; it is matched directly against the term dictionary.
terms matches one field against several values:
term:  where province = '北京'
terms: where province = '北京' or province = ? (like SQL's IN)
It can also target text fields; the lookup in the term dictionary simply skips analysis.
- ES way
POST /sms-logs-index/_search
{
  "query": {
    "terms": {
      "province": ["北京", "晋城"]
    }
  }
}
- Java way
public void searchTermsDoc() throws IOException {
    SearchRequest request = new SearchRequest(INDEX);
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.termsQuery("province", "北京", "重庆", "上海"));
    request.source(builder);
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    SearchHit[] hits = response.getHits().getHits();
    for (SearchHit hit : hits) {
        System.out.println(hit.getIndex());
        System.out.println(hit.getId());
        System.out.println(hit.getFields());
        System.out.println(hit.getSourceAsMap());
    }
}
4.2.3.3 match_all
Returns everything; no query condition is specified.
- ES way
POST /sms-logs-index/_search
{
  "query": {
    "match_all": {}
  }
}
- Java way
public void searchMatchAllDoc() throws IOException {
    SearchRequest request = new SearchRequest(INDEX);
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.matchAllQuery());
    request.source(builder);
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    SearchHit[] hits = response.getHits().getHits();
    for (SearchHit hit : hits) {
        System.out.println(hit.getSourceAsMap());
    }
}
4.2.3.4 match
match is a high-level query: it behaves differently depending on the type of the field being queried.
If the field is a date or a number, the query string is converted to a date or number before matching.
If the field is not analyzed (keyword), match does not analyze your keyword either.
If the field is analyzed (text), match analyzes your input the same way and looks the resulting terms up in the term dictionary.
Under the hood, a match query runs as several term queries whose results are combined for you.
- ES way
POST /sms-logs-index/_search
{
  "query": {
    "match": {
      "smsContent": "伟大战士"
    }
  }
}
# boolean match query: must contain both 战士 and 团队
POST /sms-logs-index/_search
{
  "query": {
    "match": {
      "smsContent": {
        "query": "战士 团队",
        "operator": "and"
      }
    }
  }
}
- Java way
public void searchMatchDoc() throws IOException {
    SearchRequest request = new SearchRequest(INDEX);
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.matchQuery("smsContent", "在空地"));
    request.source(builder);
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    SearchHit[] hits = response.getHits().getHits();
    for (SearchHit hit : hits) {
        System.out.println(hit.getSourceAsMap());
    }
}

// boolean match query
public void booleanMatchSearch() throws IOException {
    // 1. Create the request
    SearchRequest request = new SearchRequest(INDEX);
    // 2. Build the query
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.matchQuery("smsContent", "战士 团队").operator(Operator.AND));
    builder.size(20);
    request.source(builder);
    // 3. Execute
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    // 4. Print the hits
    for (SearchHit hit : response.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
    System.out.println(response.getHits().getHits().length);
}
4.2.3.5 multi_match
match searches one field; multi_match searches one piece of text across several fields.
- ES way
POST /sms-logs-index/_search
{
  "query": {
    "multi_match": {
      "query": "北京",
      "fields": ["province", "smsContent"]
    }
  }
}
- Java way
public void searchMultiMatch() throws IOException {
    SearchRequest request = new SearchRequest(INDEX);
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.multiMatchQuery("北京", "province", "smsContent"));
    request.source(builder);
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    SearchHit[] hits = response.getHits().getHits();
    for (SearchHit hit : hits) {
        System.out.println(hit.getSourceAsMap());
    }
}
4.2.3.6 id
- ES way
GET /sms-logs-index/_doc/1
- Java way
public void getById() throws IOException {
    GetRequest request = new GetRequest(INDEX, "1");
    GetResponse response = client.get(request, RequestOptions.DEFAULT);
    System.out.println(response.getSourceAsMap());
}
4.2.3.7 ids
- ES way
POST /sms-logs-index/_search
{
  "query": {
    "ids": {
      "values": ["1", "2", "3"]
    }
  }
}
- Java way
public void getByIds() throws IOException {
    SearchRequest request = new SearchRequest(INDEX);
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.idsQuery().addIds("1", "2", "3"));
    request.source(builder);
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    SearchHit[] hits = response.getHits().getHits();
    for (SearchHit hit : hits) {
        System.out.println(hit.getSourceAsMap());
    }
}
4.2.3.8 prefix
A prefix query matches documents whose field value starts with the given keyword.
- ES way
POST /sms-logs-index/_search
{
  "query": {
    "prefix": {
      "province": { "value": "上" }
    }
  }
}
- Java way
public void searchPrefixDoc() throws IOException {
    SearchRequest request = new SearchRequest(INDEX);
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.prefixQuery("province", "上"));
    request.source(builder);
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    SearchHit[] hits = response.getHits().getHits();
    for (SearchHit hit : hits) {
        System.out.println(hit.getSourceAsMap());
    }
}
4.2.3.9 fuzzy
A fuzzy query tolerates typos: you can type an approximation of a term and ES matches close variants. The results are not stable.
- ES way
POST /sms-logs-index/_search
{
  "query": {
    "fuzzy": {
      "corpName": { "value": "海尔电气" }
    }
  }
}
- Java way
public void searchFuzzyDoc() throws IOException {
    SearchRequest request = new SearchRequest(INDEX);
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.fuzzyQuery("corpName", "海尔电气"));
    request.source(builder);
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    SearchHit[] hits = response.getHits().getHits();
    for (SearchHit hit : hits) {
        System.out.println(hit.getSourceAsMap());
    }
}
4.2.3.10 wildcard
A wildcard query works like SQL's LIKE: the pattern may contain the wildcard * (any number of characters) and the placeholder ? (exactly one character).
- ES way
POST /sms-logs-index/_search
{
  "query": {
    "wildcard": {
      "corpName": { "value": "海尔??" }
    }
  }
}
- Java way
public void searchWildCardDoc() throws IOException {
    SearchRequest request = new SearchRequest(INDEX);
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.wildcardQuery("corpName", "海尔*"));
    request.source(builder);
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    SearchHit[] hits = response.getHits().getHits();
    for (SearchHit hit : hits) {
        System.out.println(hit.getSourceAsMap());
    }
}
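The * / ? semantics can be checked locally by translating a wildcard pattern into an anchored regex. This is only a sketch of the matching rule; ES applies it to indexed terms, not raw strings:

```java
import java.util.regex.Pattern;

public class WildcardDemo {
    // '*' matches any number of characters, '?' matches exactly one;
    // every other character is quoted so it matches literally.
    public static boolean wildcardMatch(String term, String pattern) {
        StringBuilder regex = new StringBuilder();
        for (char c : pattern.toCharArray()) {
            if (c == '*') regex.append(".*");
            else if (c == '?') regex.append('.');
            else regex.append(Pattern.quote(String.valueOf(c)));
        }
        return Pattern.matches(regex.toString(), term);
    }

    public static void main(String[] args) {
        System.out.println(wildcardMatch("海尔智家", "海尔??")); // true: each '?' is one character
        System.out.println(wildcardMatch("海尔电器", "海尔*"));  // true: '*' is any suffix
        System.out.println(wildcardMatch("格力汽车", "海尔*"));  // false
    }
}
```

A leading * forces ES to scan the whole term dictionary, which is why such patterns are expensive.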
4.2.3.11 range
A range query restricts a field with greater-than / less-than bounds; it is mostly used on numeric and date fields.
- ES way
POST /sms-logs-index/_search
{
  "query": {
    "range": {
      "fee": {
        "gte": 10,
        "lte": 20
      }
    }
  }
}
- Java way
public void searchRangDoc() throws IOException {
    SearchRequest request = new SearchRequest(INDEX);
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.rangeQuery("fee").gte(10).lte(20));
    request.source(builder);
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    SearchHit[] hits = response.getHits().getHits();
    for (SearchHit hit : hits) {
        System.out.println(hit.getSourceAsMap());
    }
}
4.2.3.12 regexp
A regexp query matches content with a regular expression you write.
Note: prefix, fuzzy, wildcard, and regexp queries are relatively slow; avoid them when performance matters.
- ES way
POST /sms-logs-index/_search
{
  "query": {
    "regexp": {
      "mobile": "186[0-9]{9}"
    }
  }
}
- Java way
public void searchRegexpDoc() throws IOException {
    SearchRequest request = new SearchRequest(INDEX);
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.regexpQuery("mobile", "186[0-9]{9}"));
    request.source(builder);
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    SearchHit[] hits = response.getHits().getHits();
    for (SearchHit hit : hits) {
        System.out.println(hit.getSourceAsMap());
    }
}
4.2.3.13 Deep Paging with scroll
ES limits from + size: their sum may not exceed 10,000, and performance degrades badly as it grows.
How it works:
How from + size retrieves data:
1. Analyze the user's keywords into terms.
2. Look the terms up in the term dictionary, producing a set of document ids.
3. Fetch the data from each shard (relatively slow).
4. Sort the results by score (relatively slow).
5. Discard everything outside the from/size window.
6. Return the result.
How scroll + size retrieves data:
1. Analyze the user's keywords into terms.
2. Look the terms up in the term dictionary, producing a set of document ids.
3. Store the document ids in a search context.
4. Fetch size documents from ES; their ids are removed from the context.
5. For the next page, read further ids straight from the context.
6. Repeat steps 4 and 5.
scroll is not suitable for real-time queries.
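Why from + size gets expensive can be simulated with plain collections. This is a toy model with one list of integer "scores" per shard; the field names are illustrative, not ES API:

```java
import java.util.*;
import java.util.stream.*;

public class DeepPagingDemo {
    // Each shard must return its top (from + size) hits, because any of them
    // could belong to the requested global page; the coordinator merges them
    // all and then throws the first `from` away.
    public static List<Integer> fromSizePage(List<List<Integer>> shards, int from, int size) {
        List<Integer> merged = new ArrayList<>();
        for (List<Integer> shard : shards) {
            // shard-local top (from + size), highest score first
            merged.addAll(shard.stream().sorted(Comparator.reverseOrder())
                    .limit(from + size).collect(Collectors.toList()));
        }
        merged.sort(Comparator.reverseOrder());
        // keep only the requested window; everything before `from` was fetched for nothing
        return merged.stream().skip(from).limit(size).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<List<Integer>> shards = List.of(
                List.of(90, 70, 50, 30), List.of(80, 60, 40, 20));
        // page 3 of size 2: each shard ships 6 candidates to produce just 2 hits
        System.out.println(fromSizePage(shards, 4, 2)); // [50, 40]
    }
}
```

With many shards and a deep from, the per-shard work grows linearly with the page depth, which is exactly what scroll avoids by keeping the candidate ids in a server-side context.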
- ES way
# scroll query: returns the first page, stores the document ids in a server-side context with the given time-to-live
POST /sms-logs-index/_search?scroll=1m
{
  "query": { "match_all": {} },
  "size": 2,
  "sort": [ { "fee": { "order": "desc" } } ]
}
# fetch the next page with the scroll id
POST _search/scroll
{
  "scroll_id": "DnF1ZXJ5VGhlbkZldGNoAwAAAAAAABbqFk04VlZ1cjlUU2t1eHpsQWNRY1YwWWcAAAAAAAAW7BZNOFZWdXI5VFNrdXh6bEFjUWNWMFlnAAAAAAAAFusWTThWVnVyOVRTa3V4emxBY1FjVjBZZw==",
  "scroll": "1m"
}
# delete the scroll context
DELETE _search/scroll/DnF1ZXJ5VGhlbkZldGNoAwAAAAAAABchFk04VlZ1cjlUU2t1eHpsQWNRY1YwWWcAAAAAAAAXIBZNOFZWdXI5VFNrdXh6bEFjUWNWMFlnAAAAAAAAFx8WTThWVnVyOVRTa3V4emxBY1FjVjBZZw==
- Java way
public void searchScrollDoc() throws IOException {
    // 1. Create the request and set the scroll context's time-to-live
    SearchRequest request = new SearchRequest(INDEX);
    request.scroll(TimeValue.timeValueMinutes(1L));
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.size(4);
    builder.sort("fee", SortOrder.DESC);
    builder.query(QueryBuilders.matchAllQuery());
    request.source(builder);
    // 2. First page; the response carries the scroll id
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    String scrollId = response.getScrollId();
    System.out.println("------------- first page ---------------------");
    for (SearchHit hit : response.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
    // 3. Keep pulling pages from the scroll context until it is exhausted
    while (true) {
        SearchScrollRequest scrollRequest = new SearchScrollRequest(scrollId);
        scrollRequest.scroll(TimeValue.timeValueMinutes(1L));
        SearchResponse scroll = client.scroll(scrollRequest, RequestOptions.DEFAULT);
        SearchHit[] hits = scroll.getHits().getHits();
        if (hits != null && hits.length != 0) {
            System.out.println("------------- next page ---------------------");
            for (SearchHit hit : hits) {
                System.out.println(hit.getSourceAsMap());
            }
        } else {
            System.out.println("------------- done ---------------------");
            break;
        }
    }
    // 4. Clear the scroll context
    ClearScrollRequest clearScrollRequest = new ClearScrollRequest();
    clearScrollRequest.addScrollId(scrollId);
    ClearScrollResponse clearScrollResponse = client.clearScroll(clearScrollRequest, RequestOptions.DEFAULT);
    System.out.println("scroll cleared: " + clearScrollResponse.isSucceeded());
}
4.2.3.14 delete-by-query
Deletes in bulk the documents matched by a term, match, or other query.
Note: if the deletion would cover most of an index, it is usually better to create a new index and re-index only the documents you want to keep.
- ES way
POST /sms-logs-index/_delete_by_query
{
  "query": {
    "range": {
      "fee": {
        "gte": 10,
        "lte": 20
      }
    }
  }
}
- Java way
public void deleteByQuery() throws IOException {
    DeleteByQueryRequest request = new DeleteByQueryRequest(INDEX);
    request.setQuery(QueryBuilders.rangeQuery("fee").lt(50));
    BulkByScrollResponse response = client.deleteByQuery(request, RequestOptions.DEFAULT);
    System.out.println(response.toString());
}
4.2.3.15 bool
A compound query that combines several conditions with boolean logic:
must: all of the conditions must match (AND).
must_not: none of the conditions may match (NOT).
should: at least one of the conditions matches (OR).
- ES way
POST /sms-logs-index/_search
{
  "query": {
    "bool": {
      "should": [
        { "term": { "province": { "value": "晋城" } } },
        { "term": { "province": { "value": "北京" } } }
      ],
      "must_not": [
        { "term": { "operatorId": { "value": "2" } } }
      ],
      "must": [
        { "match": { "smsContent": "战士" } },
        { "match": { "smsContent": "的" } }
      ]
    }
  }
}
- Java way
public void boolSearch() throws IOException {
    SearchRequest request = new SearchRequest(INDEX);
    SearchSourceBuilder builder = new SearchSourceBuilder();
    BoolQueryBuilder boolQueryBuilder = new BoolQueryBuilder();
    boolQueryBuilder.should(QueryBuilders.termQuery("province", "北京"));
    boolQueryBuilder.should(QueryBuilders.termQuery("province", "晋城"));
    boolQueryBuilder.mustNot(QueryBuilders.termQuery("operatorId", 2));
    boolQueryBuilder.must(QueryBuilders.matchQuery("smsContent", "战士"));
    boolQueryBuilder.must(QueryBuilders.matchQuery("smsContent", "的"));
    builder.query(boolQueryBuilder);
    request.source(builder);
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    for (SearchHit hit : response.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}
4.2.3.16 boosting
A boosting query lets you influence the score of the results.
positive: only documents matching the positive query make it into the result set.
negative: documents that match both positive and negative get their score lowered.
negative_boost: the damping factor; it must be less than 1.
How the score is computed at query time:
- The more often the search keyword appears in a document, the higher the score.
- The shorter the matched field's content, the higher the score.
- The keyword you search is analyzed too; the more of its terms match the term dictionary, the higher the score.
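The first two scoring rules above can be mimicked with a toy term-frequency score normalized by field length. This is only an illustration of the intuition; Lucene's actual formula (BM25 in 7.x) is more involved:

```java
import java.util.*;

public class ScoreDemo {
    // Toy relevance score: more occurrences of the term -> higher score;
    // the same number of occurrences in a shorter field -> higher score.
    public static double score(String doc, String term) {
        String[] tokens = doc.toLowerCase().split("\\s+");
        long tf = Arrays.stream(tokens).filter(t -> t.equals(term)).count();
        if (tf == 0) return 0.0;
        // length normalization: divide by the square root of the field length
        return tf / Math.sqrt(tokens.length);
    }

    public static void main(String[] args) {
        System.out.println(score("the soldier saw a soldier", "soldier")); // tf = 2, length 5
        System.out.println(score("a soldier", "soldier"));                 // tf = 1, length 2
    }
}
```

With such a score in hand, the effect of boosting is just multiplying the final score of negative-matching documents by negative_boost.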
- ES way
POST /sms-logs-index/_search
{
  "query": {
    "boosting": {
      "positive": {
        "match": { "smsContent": "战士" }
      },
      "negative": {
        "match": { "smsContent": "实力" }
      },
      "negative_boost": 0.5
    }
  }
}
- Java way
public void boostSearch() throws IOException {
    SearchRequest request = new SearchRequest(INDEX);
    SearchSourceBuilder builder = new SearchSourceBuilder();
    BoostingQueryBuilder boost = QueryBuilders.boostingQuery(
            QueryBuilders.matchQuery("smsContent", "战士"),
            QueryBuilders.matchQuery("smsContent", "实力"))
        .negativeBoost(0.2f);
    builder.query(boost);
    request.source(builder);
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    for (SearchHit hit : response.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}
4.2.3.17 filter
query context: computes a relevance score for each document from the query conditions, sorts by score, and is not cached.
filter context: selects documents by the conditions without computing scores, and frequently used filters are cached.
- ES way
POST /sms-logs-index/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "corpName": "格力汽车" } },
        { "range": { "fee": { "gte": 50 } } }
      ]
    }
  }
}
- Java way
public void filterSearch() throws IOException {
    SearchRequest request = new SearchRequest(INDEX);
    SearchSourceBuilder builder = new SearchSourceBuilder();
    BoolQueryBuilder boolQueryBuilder = new BoolQueryBuilder();
    boolQueryBuilder.filter(QueryBuilders.termQuery("corpName", "格力汽车"));
    boolQueryBuilder.filter(QueryBuilders.rangeQuery("fee").gte(50));
    builder.query(boolQueryBuilder);
    request.source(builder);
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    for (SearchHit hit : response.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}
4.2.3.18 Highlight Queries
A highlight query renders the user's keywords in a distinctive style so the user can see why a result matched.
The highlighted data is itself a field of the document; that field is returned separately to the user in the highlight section.
ES provides a highlight property at the same level as query.
fragment_size: how many characters of highlighted context to return.
pre_tags: the opening tag.
post_tags: the closing tag.
- ES way
POST /sms-logs-index/_search
{
  "query": {
    "match": { "smsContent": "战士" }
  },
  "highlight": {
    "fields": {
      "smsContent": {}
    },
    "pre_tags": "<font color='red'>",
    "post_tags": "</font>",
    "fragment_size": 10
  }
}
- Java way
public void highLightSearch() throws IOException {
    SearchRequest request = new SearchRequest(INDEX);
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.matchQuery("smsContent", "战士"));
    HighlightBuilder highlightBuilder = new HighlightBuilder();
    highlightBuilder.field("smsContent", 10)
            .preTags("<font color='red'>")
            .postTags("</font>");
    builder.highlighter(highlightBuilder);
    request.source(builder);
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    for (SearchHit hit : response.getHits().getHits()) {
        System.out.println(hit.getHighlightFields().get("smsContent"));
    }
}
4.2.3.19 Aggregation Queries
ES aggregations resemble MySQL aggregate queries but are far more powerful; ES offers many kinds of statistics.
# RESTful shape of an ES aggregation
POST /index/_search
{
  "aggs": {
    "<aggregation name>": {
      "<agg_type>": {
        "<property>": "<value>"
      }
    }
  }
}
4.2.3.20 Distinct-Count (cardinality) Aggregation
cardinality deduplicates a given field over the matched documents and counts how many distinct values there are.
- ES way
POST /sms-logs-index/_search
{
  "aggs": {
    "provinceAggs": {
      "cardinality": {
        "field": "province"
      }
    }
  }
}
- Java way
public void aggCardinalityC() throws IOException {
    SearchRequest request = new SearchRequest(INDEX);
    SearchSourceBuilder builder = new SearchSourceBuilder();
    AggregationBuilder aggregationBuilder = AggregationBuilders.cardinality("provinceAggs").field("province");
    builder.aggregation(aggregationBuilder);
    request.source(builder);
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    Aggregations aggregations = response.getAggregations();
    Cardinality provinceAggs = aggregations.get("provinceAggs");
    System.out.println(provinceAggs.getValue());
}
4.2.3.21 Range Statistics
Counts the documents that fall into given ranges — for example, how many documents have a field value in 0~100, 100~200, and 200~300 respectively.
Range statistics work on plain numbers, on dates, and on IP addresses:
numbers: range
dates:   date_range
ip:      ip_range
- ES way
# numeric ranges: "from" is inclusive, "to" is exclusive
POST /sms-logs-index/_search
{
  "aggs": {
    "rangAggs": {
      "range": {
        "field": "fee",
        "ranges": [
          { "to": 30 },
          { "from": 30, "to": 50 },
          { "from": 50 }
        ]
      }
    }
  }
}
# date ranges
POST /sms-logs-index/_search
{
  "aggs": {
    "rangAggs": {
      "date_range": {
        "field": "sendDate",
        "format": "yyyy-MM-dd",
        "ranges": [
          { "to": "2020-08-25" },
          { "from": "2020-08-25", "to": "2021-08-25" },
          { "from": "2021-08-25" }
        ]
      }
    }
  }
}
# ip ranges
POST /sms-logs-index/_search
{
  "aggs": {
    "agg": {
      "ip_range": {
        "field": "ipAddr",
        "ranges": [
          { "to": "127.0.0.8" },
          { "from": "127.0.0.8" }
        ]
      }
    }
  }
}
- Java way
public void aggRangeC() throws IOException {
    SearchRequest request = new SearchRequest(INDEX);
    SearchSourceBuilder builder = new SearchSourceBuilder();
    AggregationBuilder aggregationBuilder = AggregationBuilders.range("feeAggs")
            .field("fee")
            .addUnboundedTo(30)
            .addRange(30, 60)
            .addUnboundedFrom(60);
    builder.aggregation(aggregationBuilder);
    request.source(builder);
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    Aggregations aggregations = response.getAggregations();
    Range feeAggs = aggregations.get("feeAggs");
    for (Bucket bucket : feeAggs.getBuckets()) {
        System.out.println(bucket.getDocCount());
    }
}
4.2.3.22 Extended Stats Aggregation
extended_stats returns min, max, avg, sum, variance, and other statistics of a numeric field in one aggregation.
- ES way
POST /sms-logs-index/_search
{
  "aggs": {
    "agg": {
      "extended_stats": {
        "field": "fee"
      }
    }
  }
}
- Java way
public void aggExtendedStatsC() throws IOException {
    SearchRequest request = new SearchRequest(INDEX);
    SearchSourceBuilder builder = new SearchSourceBuilder();
    AggregationBuilder aggregationBuilder = AggregationBuilders.extendedStats("feeAggs").field("fee");
    builder.aggregation(aggregationBuilder);
    request.source(builder);
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    Aggregations aggregations = response.getAggregations();
    ExtendedStats feeAggs = aggregations.get("feeAggs");
    System.out.println("max: " + feeAggs.getMax() + ", avg: " + feeAggs.getAvg());
}
4.2.4 Geo (Latitude/Longitude) Search
4.2.4.1 Preparing the Data
PUT /map
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  },
  "mappings": {
    "properties": {
      "name":     { "type": "text" },
      "location": { "type": "geo_point" }
    }
  }
}
PUT /map/_doc/1
{ "name": "天安门", "location": { "lon": 116.403694, "lat": 39.914492 } }
PUT /map/_doc/2
{ "name": "百望山", "location": { "lon": 116.26284, "lat": 40.036576 } }
PUT /map/_doc/3
{ "name": "北京动物园", "location": { "lon": 116.347352, "lat": 39.947468 } }
4.2.4.2 ES Geo Query Types
geo_distance: straight-line (great-circle) distance search around a point.
geo_bounding_box: two points define a rectangle; returns the data inside it.
geo_polygon: several points define a polygon; returns all data inside it.
- ES way
# geo_distance
POST /map/_search
{
  "query": {
    "geo_distance": {
      "location": { "lon": 116.43438, "lat": 39.909816 },
      "distance": 2700,
      "distance_type": "arc"
    }
  }
}
# geo_bounding_box
POST /map/_search
{
  "query": {
    "geo_bounding_box": {
      "location": {
        "top_left":     { "lon": 116.278722, "lat": 40.005937 },
        "bottom_right": { "lon": 116.433661, "lat": 39.909705 }
      }
    }
  }
}
# geo_polygon
POST /map/_search
{
  "query": {
    "geo_polygon": {
      "location": {
        "points": [
          { "lon": 116.220296, "lat": 40.075013 },
          { "lon": 116.346777, "lat": 40.044751 },
          { "lon": 116.236106, "lat": 39.981533 }
        ]
      }
    }
  }
}
- Java way (the original used the name geoPoint() for all three methods, which would not compile; they are renamed per query type here)
public void geoDistanceQuery() throws IOException {
    SearchRequest request = new SearchRequest("map");
    SearchSourceBuilder builder = new SearchSourceBuilder();
    GeoDistanceQueryBuilder location = QueryBuilders.geoDistanceQuery("location");
    location.distance("3000");
    location.point(39.909816, 116.43438);
    builder.query(location);
    request.source(builder);
    SearchResponse search = client.search(request, RequestOptions.DEFAULT);
    for (SearchHit hit : search.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}

public void geoBoundingBoxQuery() throws IOException {
    SearchRequest request = new SearchRequest("map");
    SearchSourceBuilder builder = new SearchSourceBuilder();
    GeoBoundingBoxQueryBuilder location = QueryBuilders.geoBoundingBoxQuery("location");
    location.topLeft().reset(40.005937, 116.278722);
    location.bottomRight().reset(39.909705, 116.433661);
    builder.query(location);
    request.source(builder);
    SearchResponse search = client.search(request, RequestOptions.DEFAULT);
    for (SearchHit hit : search.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}

public void geoPolygonQuery() throws IOException {
    SearchRequest request = new SearchRequest("map");
    SearchSourceBuilder builder = new SearchSourceBuilder();
    List<GeoPoint> points = new ArrayList<>();
    points.add(new GeoPoint(40.075013, 116.220296));
    points.add(new GeoPoint(40.044751, 116.346777));
    points.add(new GeoPoint(39.981533, 116.236106));
    builder.query(QueryBuilders.geoPolygonQuery("location", points));
    request.source(builder);
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    for (SearchHit hit : response.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}
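The "arc" distance type corresponds to a great-circle distance. It can be sketched with the haversine formula, approximating the Earth as a 6,371 km sphere (this is an illustration of the geometry, not the exact algorithm ES uses):

```java
public class GeoDistanceDemo {
    // Great-circle ("arc") distance in meters between two lat/lon points.
    private static final double EARTH_RADIUS_M = 6_371_000.0;

    public static double haversineMeters(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // 天安门 (39.914492, 116.403694) vs. the query point used in the geo_distance example
        double d = haversineMeters(39.914492, 116.403694, 39.909816, 116.43438);
        System.out.println(d + " m"); // roughly 2.7 km, so it falls within "distance": 2700
    }
}
```

This explains why the document for 天安门 matches the 2700 m geo_distance query above while the two farther points do not.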
5 Elasticsearch Cluster
The two most important cluster settings are node.name and network.host; both must be unique per node. node.name is the node name, used mainly to tell nodes apart in Elasticsearch's own logs.
discovery.zen.ping.unicast.hosts lists the cluster's nodes; either IP addresses or (resolvable) hostnames work.
elasticsearch.yml:
cluster.name: my-els                  # cluster name
node.name: els-node1                  # node name, descriptive only, used to distinguish nodes in the logs
#node.master: true                    # whether the node may be elected master / whether it stores data
#node.data: true
path.data: /opt/elasticsearch/data    # default data path
path.logs: /opt/elasticsearch/log     # default log path
network.host: 192.168.60.201          # this node's IP address
http.port: 9200                       # client-facing port; 9300 is the cluster transport port
# cluster transport port
transport.tcp.port: 9300
transport.tcp.compress: true
discovery.zen.ping.unicast.hosts: ["192.168.60.201", "192.168.60.202", "192.168.60.203"]
# the cluster's nodes; hostnames such as els or els.shuaiguoxia.com also work if every node can resolve them; the total number of nodes should be odd
discovery.zen.minimum_master_nodes: 2 # minimum nodes for a master election; must be N/2 + 1, where N is the number of master-eligible nodes, not the total cluster size
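The N/2 + 1 rule can be stated in code (a hypothetical helper, where N counts master-eligible nodes only; integer division gives the required floor):

```java
public class QuorumDemo {
    // minimum_master_nodes must be a majority of the master-eligible nodes:
    // floor(N / 2) + 1. Anything smaller allows a split-brain, because two
    // disjoint halves of the cluster could each elect their own master.
    public static int minimumMasterNodes(int masterEligibleNodes) {
        return masterEligibleNodes / 2 + 1;
    }

    public static void main(String[] args) {
        System.out.println(minimumMasterNodes(3)); // 2, matching the three-node config above
    }
}
```

For the three master-eligible nodes in the config above this gives 2, which is exactly the value set for discovery.zen.minimum_master_nodes.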