Elasticsearch Source Code Analysis — Thread Pools (Part 11)

Reposted from: https://www.felayman.com/articles/2017/11/10/1510291570687.html

Thread Pools

Each node maintains several thread pools to control how threads consume memory; they are configured and managed per node. Some of the pools also have an associated queue, which lets pending requests be held rather than discarded.

The thread pool setup lives in org.elasticsearch.node.Node; the core line is:

~~~java
final ThreadPool threadPool = new ThreadPool(settings, executorBuilders.toArray(new ExecutorBuilder[0]));
~~~

Its concrete implementation (the ThreadPool constructor) is:

~~~java
super(settings);
assert Node.NODE_NAME_SETTING.exists(settings);
final Map<String, ExecutorBuilder> builders = new HashMap<>();
final int availableProcessors = EsExecutors.boundedNumberOfProcessors(settings);
final int halfProcMaxAt5 = halfNumberOfProcessorsMaxFive(availableProcessors);
final int halfProcMaxAt10 = halfNumberOfProcessorsMaxTen(availableProcessors);
final int genericThreadPoolMax = boundedBy(4 * availableProcessors, 128, 512);
builders.put(Names.GENERIC, new ScalingExecutorBuilder(Names.GENERIC, 4, genericThreadPoolMax, TimeValue.timeValueSeconds(30)));
builders.put(Names.INDEX, new FixedExecutorBuilder(settings, Names.INDEX, availableProcessors, 200));
builders.put(Names.BULK, new FixedExecutorBuilder(settings, Names.BULK, availableProcessors, 200)); // now that we reuse bulk for index/delete ops
builders.put(Names.GET, new FixedExecutorBuilder(settings, Names.GET, availableProcessors, 1000));
builders.put(Names.SEARCH, new FixedExecutorBuilder(settings, Names.SEARCH, searchThreadPoolSize(availableProcessors), 1000));
builders.put(Names.MANAGEMENT, new ScalingExecutorBuilder(Names.MANAGEMENT, 1, 5, TimeValue.timeValueMinutes(5)));
// no queue as this means clients will need to handle rejections on listener queue even if the operation succeeded
// the assumption here is that the listeners should be very lightweight on the listeners side
builders.put(Names.LISTENER, new FixedExecutorBuilder(settings, Names.LISTENER, halfProcMaxAt10, -1));
builders.put(Names.FLUSH, new ScalingExecutorBuilder(Names.FLUSH, 1, halfProcMaxAt5, TimeValue.timeValueMinutes(5)));
builders.put(Names.REFRESH, new ScalingExecutorBuilder(Names.REFRESH, 1, halfProcMaxAt10, TimeValue.timeValueMinutes(5)));
builders.put(Names.WARMER, new ScalingExecutorBuilder(Names.WARMER, 1, halfProcMaxAt5, TimeValue.timeValueMinutes(5)));
builders.put(Names.SNAPSHOT, new ScalingExecutorBuilder(Names.SNAPSHOT, 1, halfProcMaxAt5, TimeValue.timeValueMinutes(5)));
builders.put(Names.FETCH_SHARD_STARTED, new ScalingExecutorBuilder(Names.FETCH_SHARD_STARTED, 1, 2 * availableProcessors, TimeValue.timeValueMinutes(5)));
builders.put(Names.FORCE_MERGE, new FixedExecutorBuilder(settings, Names.FORCE_MERGE, 1, -1));
builders.put(Names.FETCH_SHARD_STORE, new ScalingExecutorBuilder(Names.FETCH_SHARD_STORE, 1, 2 * availableProcessors, TimeValue.timeValueMinutes(5)));
for (final ExecutorBuilder<?> builder : customBuilders) {
    if (builders.containsKey(builder.name())) {
        throw new IllegalArgumentException("builder with name [" + builder.name() + "] already exists");
    }
    builders.put(builder.name(), builder);
}
this.builders = Collections.unmodifiableMap(builders);

threadContext = new ThreadContext(settings);
final Map<String, ExecutorHolder> executors = new HashMap<>();
for (@SuppressWarnings("unchecked") final Map.Entry<String, ExecutorBuilder> entry : builders.entrySet()) {
    final ExecutorBuilder.ExecutorSettings executorSettings = entry.getValue().getSettings(settings);
    final ExecutorHolder executorHolder = entry.getValue().build(executorSettings, threadContext);
    if (executors.containsKey(executorHolder.info.getName())) {
        throw new IllegalStateException("duplicate executors with name [" + executorHolder.info.getName() + "] registered");
    }
    logger.debug("created thread pool: {}", entry.getValue().formatInfo(executorHolder.info));
    executors.put(entry.getKey(), executorHolder);
}
executors.put(Names.SAME, new ExecutorHolder(DIRECT_EXECUTOR, new Info(Names.SAME, ThreadPoolType.DIRECT)));
this.executors = unmodifiableMap(executors);

this.scheduler = new ScheduledThreadPoolExecutor(1, EsExecutors.daemonThreadFactory(settings, "scheduler"), new EsAbortPolicy());
this.scheduler.setExecuteExistingDelayedTasksAfterShutdownPolicy(false);
this.scheduler.setContinueExistingPeriodicTasksAfterShutdownPolicy(false);
this.scheduler.setRemoveOnCancelPolicy(true);

TimeValue estimatedTimeInterval = ESTIMATED_TIME_INTERVAL_SETTING.get(settings);
this.cachedTimeThread = new CachedTimeThread(EsExecutors.threadName(settings, "[timer]"), estimatedTimeInterval.millis());
this.cachedTimeThread.start();
~~~
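
The sizing helpers referenced in this constructor (boundedBy, halfNumberOfProcessorsMaxFive, halfNumberOfProcessorsMaxTen, searchThreadPoolSize) are small clamping functions. Here is a sketch of what they compute, paraphrased from the ES 5.x ThreadPool source; treat the exact bodies as an approximation rather than a verbatim copy:

~~~java
// Clamp value into the range [min, max].
static int boundedBy(int value, int min, int max) {
    return Math.min(max, Math.max(min, value));
}

// Half the processor count, but no less than 1 and no more than 5.
static int halfNumberOfProcessorsMaxFive(int numberOfProcessors) {
    return boundedBy((numberOfProcessors + 1) / 2, 1, 5);
}

// Half the processor count, but no less than 1 and no more than 10.
static int halfNumberOfProcessorsMaxTen(int numberOfProcessors) {
    return boundedBy((numberOfProcessors + 1) / 2, 1, 10);
}

// Default search pool size: int((# of processors * 3) / 2) + 1.
public static int searchThreadPoolSize(int availableProcessors) {
    return ((availableProcessors * 3) / 2) + 1;
}
~~~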

As the source shows, Elasticsearch creates a number of thread pools under different names. These names are all kept as constants in the static inner class ThreadPool.Names:

~~~java
public static class Names {
    public static final String SAME = "same";
    public static final String GENERIC = "generic";
    public static final String LISTENER = "listener";
    public static final String GET = "get";
    public static final String INDEX = "index";
    public static final String BULK = "bulk";
    public static final String SEARCH = "search";
    public static final String MANAGEMENT = "management";
    public static final String FLUSH = "flush";
    public static final String REFRESH = "refresh";
    public static final String WARMER = "warmer";
    public static final String SNAPSHOT = "snapshot";
    public static final String FORCE_MERGE = "force_merge";
    public static final String FETCH_SHARD_STARTED = "fetch_shard_started";
    public static final String FETCH_SHARD_STORE = "fetch_shard_store";
}
~~~

Elasticsearch further divides these pools into three types: direct, fixed, and scaling. The types are defined in the ThreadPoolType enum in the same class:

~~~java
public enum ThreadPoolType {
    DIRECT("direct"),
    FIXED("fixed"),
    SCALING("scaling");

    private final String type;
    // constructor and getter/setter omitted
}
~~~

By default, each of the pools above is assigned one of these types in a static initializer:

~~~java
static {
    HashMap<String, ThreadPoolType> map = new HashMap<>();
    map.put(Names.SAME, ThreadPoolType.DIRECT);
    map.put(Names.GENERIC, ThreadPoolType.SCALING);
    map.put(Names.LISTENER, ThreadPoolType.FIXED);
    map.put(Names.GET, ThreadPoolType.FIXED);
    map.put(Names.INDEX, ThreadPoolType.FIXED);
    map.put(Names.BULK, ThreadPoolType.FIXED);
    map.put(Names.SEARCH, ThreadPoolType.FIXED);
    map.put(Names.MANAGEMENT, ThreadPoolType.SCALING);
    map.put(Names.FLUSH, ThreadPoolType.SCALING);
    map.put(Names.REFRESH, ThreadPoolType.SCALING);
    map.put(Names.WARMER, ThreadPoolType.SCALING);
    map.put(Names.SNAPSHOT, ThreadPoolType.SCALING);
    map.put(Names.FORCE_MERGE, ThreadPoolType.FIXED);
    map.put(Names.FETCH_SHARD_STARTED, ThreadPoolType.SCALING);
    map.put(Names.FETCH_SHARD_STORE, ThreadPoolType.SCALING);
    THREAD_POOL_TYPES = Collections.unmodifiableMap(map);
}
~~~

What each thread pool is for

  • GENERIC

    Used for generic operations (for example, background node discovery). Thread pool type: scaling.

  • INDEX

    Used for index/delete operations. Thread pool type: fixed, with a size of # of available processors and a queue size of 200. The maximum size for this pool is 1 + # of available processors.

  • BULK

    Used for bulk operations. Thread pool type: fixed, with a size of # of available processors and a queue size of 200. The maximum size for this pool is 1 + # of available processors.

  • GET

    Used for get operations. Thread pool type: fixed, with a size of # of available processors and a queue size of 1000.

  • SEARCH

    Used for count/search/suggest operations. Thread pool type: fixed, with a size of int((# of available processors * 3) / 2) + 1 and a queue size of 1000.

  • MANAGEMENT

    Not covered by the official documentation at the time of writing (it only appears in newer versions). From the constructor above, it is a scaling pool with core 1, max 5, and a 5-minute keep-alive.

  • LISTENER

    Mainly used for executing a Java client action's listener when it is set to run threaded. Thread pool type: fixed (note the FixedExecutorBuilder in the constructor above, even though it is sometimes described as scaling), with a size of min(10, (# of available processors) / 2) and an unbounded queue.

  • FLUSH

    Used for flush operations. Thread pool type: scaling, with a 5-minute keep-alive and a max of min(5, (# of available processors) / 2), per halfProcMaxAt5 in the constructor above.

  • REFRESH

    Used for refresh operations. Thread pool type: scaling, with a 5-minute keep-alive and a max of min(10, (# of available processors) / 2).

  • WARMER

    Used for segment warm-up operations. Thread pool type: scaling, with a 5-minute keep-alive and a max of min(5, (# of available processors) / 2).

  • SNAPSHOT

    Used for snapshot/restore operations. Thread pool type: scaling, with a 5-minute keep-alive and a max of min(5, (# of available processors) / 2).

  • FETCH_SHARD_STARTED

    Not covered by the official documentation at the time of writing. From the constructor above: thread pool type scaling, core 1, max 2 * # of available processors, keep-alive 5 minutes.

  • FORCE_MERGE

    Not covered by the official documentation at the time of writing. From the constructor above: thread pool type fixed, with a single thread and an unbounded queue.

  • FETCH_SHARD_STORE

    Not covered by the official documentation at the time of writing. From the constructor above: thread pool type scaling, core 1, max 2 * # of available processors, keep-alive 5 minutes.

  • SAME

    Not covered by the official documentation at the time of writing. The constructor above registers it with DIRECT_EXECUTOR, i.e. a direct pool that runs tasks on the calling thread. (A short sketch of dispatching work onto these named pools follows this list.)
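
These names are how other Elasticsearch components select an executor at runtime. As a minimal sketch, here is a hypothetical helper that dispatches a task onto the search pool; ThreadPool.executor(String) and ThreadPool.Names come from the source above, while SearchDispatcher, submitSearchTask, and the task itself are made up for illustration:

~~~java
import org.elasticsearch.threadpool.ThreadPool;

// Hypothetical helper class, for illustration only.
class SearchDispatcher {
    // Run a search-phase task on the SEARCH pool. With the defaults above,
    // at most int((processors * 3) / 2) + 1 tasks run concurrently and up
    // to 1000 wait in the queue; past that, execute() rejects the task.
    static void submitSearchTask(ThreadPool threadPool, Runnable searchPhase) {
        threadPool.executor(ThreadPool.Names.SEARCH).execute(searchPhase);
    }
}
~~~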

Thread pool types

  • direct

    A direct executor does not support shutdown: tasks run directly on the calling thread, so once registered it effectively stays alive for as long as the node does.

  • fixed

    A fixed pool holds a fixed number of threads to process requests, with a (configurable) queue that holds pending requests when no thread is free.

  • scaling

    A scaling pool holds a dynamic number of threads, varying between the values of its core and max parameters.
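
To make the fixed/scaling distinction concrete, here is a rough JDK-only approximation. Elasticsearch actually builds its pools through EsExecutors with a custom queue and rejection policy, so treat this as a sketch of the semantics, not the real implementation:

~~~java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolTypeDemo {
    public static void main(String[] args) {
        // Roughly "fixed": a constant 8 threads plus a bounded queue of 200,
        // like the index/bulk pools on an 8-core machine. Once the queue is
        // full, new submissions are rejected.
        ThreadPoolExecutor fixed = new ThreadPoolExecutor(
                8, 8, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(200),
                new ThreadPoolExecutor.AbortPolicy());

        // Roughly "scaling": grows on demand from core (1) to max (4)
        // threads, and idle threads are reclaimed after the 5-minute
        // keep-alive, like the flush pool. A SynchronousQueue only
        // approximates ES's grow-before-queue behavior.
        ThreadPoolExecutor scaling = new ThreadPoolExecutor(
                1, 4, 5L, TimeUnit.MINUTES,
                new SynchronousQueue<>());
        scaling.allowCoreThreadTimeOut(true);

        fixed.shutdown();
        scaling.shutdown();
    }
}
~~~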

If you change the log level to DEBUG while debugging the source, you can also watch these pools being created. (The output below is from an 8-core machine: index, bulk, and get are sized 8; search is (8 * 3) / 2 + 1 = 13; fetch_shard_started and fetch_shard_store max out at 2 * 8 = 16.)

[2017-09-27T14:31:47,558][DEBUG][o.e.t.ThreadPool         ] [x2LMQHg] created thread pool: name [force_merge], size [1], queue size [unbounded]
[2017-09-27T14:31:47,560][DEBUG][o.e.t.ThreadPool         ] [x2LMQHg] created thread pool: name [fetch_shard_started], core [1], max [16], keep alive [5m]
[2017-09-27T14:31:47,561][DEBUG][o.e.t.ThreadPool         ] [x2LMQHg] created thread pool: name [listener], size [4], queue size [unbounded]
[2017-09-27T14:31:47,565][DEBUG][o.e.t.ThreadPool         ] [x2LMQHg] created thread pool: name [index], size [8], queue size [200]
[2017-09-27T14:31:47,565][DEBUG][o.e.t.ThreadPool         ] [x2LMQHg] created thread pool: name [refresh], core [1], max [4], keep alive [5m]
[2017-09-27T14:31:47,566][DEBUG][o.e.t.ThreadPool         ] [x2LMQHg] created thread pool: name [generic], core [4], max [128], keep alive [30s]
[2017-09-27T14:31:47,566][DEBUG][o.e.t.ThreadPool         ] [x2LMQHg] created thread pool: name [warmer], core [1], max [4], keep alive [5m]
[2017-09-27T14:31:47,566][DEBUG][o.e.t.ThreadPool         ] [x2LMQHg] created thread pool: name [search], size [13], queue size [1k]
[2017-09-27T14:31:47,567][DEBUG][o.e.t.ThreadPool         ] [x2LMQHg] created thread pool: name [flush], core [1], max [4], keep alive [5m]
[2017-09-27T14:31:47,567][DEBUG][o.e.t.ThreadPool         ] [x2LMQHg] created thread pool: name [fetch_shard_store], core [1], max [16], keep alive [5m]
[2017-09-27T14:31:47,567][DEBUG][o.e.t.ThreadPool         ] [x2LMQHg] created thread pool: name [management], core [1], max [5], keep alive [5m]
[2017-09-27T14:31:47,568][DEBUG][o.e.t.ThreadPool         ] [x2LMQHg] created thread pool: name [get], size [8], queue size [1k]
[2017-09-27T14:31:47,568][DEBUG][o.e.t.ThreadPool         ] [x2LMQHg] created thread pool: name [bulk], size [8], queue size [200]
[2017-09-27T14:31:47,568][DEBUG][o.e.t.ThreadPool         ] [x2LMQHg] created thread pool: name [snapshot], core [1], max [4], keep alive [5m]

References

  • Thread Pool (Elasticsearch Reference)

Reposted from: https://www.cnblogs.com/bonelee/p/8417033.html
