Druid connection pool monitoring
Importing the Grafana dashboard
Dashboard ID: 11157
Configuration
Druid's connection pool statistics are not exported out of the box, so they have to be collected into Prometheus separately.
Add the dependencies
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid-spring-boot-starter</artifactId>
    <version>1.2.12</version>
</dependency>
<dependency>
    <groupId>com.baomidou</groupId>
    <artifactId>mybatis-plus-boot-starter</artifactId>
    <version>3.5.2</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>
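With the actuator and Prometheus registry on the classpath, the scrape endpoint still has to be exposed. A minimal `application.yml` sketch, assuming Spring Boot's default endpoint path `/actuator/prometheus`:

```yaml
management:
  endpoints:
    web:
      exposure:
        include: prometheus,health
```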
DruidCollector
import com.alibaba.druid.pool.DruidDataSource;
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;

import java.util.List;
import java.util.function.ToDoubleFunction;

public class DruidCollector {

    private static final String LABEL_NAME = "pool";

    private final List<DruidDataSource> dataSources;
    private final MeterRegistry registry;

    DruidCollector(List<DruidDataSource> dataSources, MeterRegistry registry) {
        this.registry = registry;
        this.dataSources = dataSources;
    }

    void register() {
        this.dataSources.forEach(druidDataSource -> {
            // basic configuration
            createGauge(druidDataSource, "druid_initial_size", "Initial size", datasource -> (double) datasource.getInitialSize());
            createGauge(druidDataSource, "druid_min_idle", "Min idle", datasource -> (double) datasource.getMinIdle());
            createGauge(druidDataSource, "druid_max_active", "Max active", datasource -> (double) datasource.getMaxActive());
            // connection pool core metrics
            createGauge(druidDataSource, "druid_active_count", "Active count", datasource -> (double) datasource.getActiveCount());
            createGauge(druidDataSource, "druid_active_peak", "Active peak", datasource -> (double) datasource.getActivePeak());
            createGauge(druidDataSource, "druid_pooling_peak", "Pooling peak", datasource -> (double) datasource.getPoolingPeak());
            createGauge(druidDataSource, "druid_pooling_count", "Pooling count", datasource -> (double) datasource.getPoolingCount());
            createGauge(druidDataSource, "druid_wait_thread_count", "Wait thread count", datasource -> (double) datasource.getWaitThreadCount());
            // connection pool detail metrics
            createGauge(druidDataSource, "druid_not_empty_wait_count", "Not empty wait count", datasource -> (double) datasource.getNotEmptyWaitCount());
            createGauge(druidDataSource, "druid_not_empty_wait_millis", "Not empty wait millis", datasource -> (double) datasource.getNotEmptyWaitMillis());
            createGauge(druidDataSource, "druid_not_empty_thread_count", "Not empty thread count", datasource -> (double) datasource.getNotEmptyWaitThreadCount());
            createGauge(druidDataSource, "druid_logic_connect_count", "Logic connect count", datasource -> (double) datasource.getConnectCount());
            createGauge(druidDataSource, "druid_logic_close_count", "Logic close count", datasource -> (double) datasource.getCloseCount());
            createGauge(druidDataSource, "druid_logic_connect_error_count", "Logic connect error count", datasource -> (double) datasource.getConnectErrorCount());
            createGauge(druidDataSource, "druid_physical_connect_count", "Physical connect count", datasource -> (double) datasource.getCreateCount());
            createGauge(druidDataSource, "druid_physical_close_count", "Physical close count", datasource -> (double) datasource.getDestroyCount());
            createGauge(druidDataSource, "druid_physical_connect_error_count", "Physical connect error count", datasource -> (double) datasource.getCreateErrorCount());
            // sql execution core metrics
            createGauge(druidDataSource, "druid_error_count", "Error count", datasource -> (double) datasource.getErrorCount());
            createGauge(druidDataSource, "druid_execute_count", "Execute count", datasource -> (double) datasource.getExecuteCount());
            // transaction metrics
            createGauge(druidDataSource, "druid_start_transaction_count", "Start transaction count", datasource -> (double) datasource.getStartTransactionCount());
            createGauge(druidDataSource, "druid_commit_count", "Commit count", datasource -> (double) datasource.getCommitCount());
            createGauge(druidDataSource, "druid_rollback_count", "Rollback count", datasource -> (double) datasource.getRollbackCount());
            // sql execution detail
            createGauge(druidDataSource, "druid_prepared_statement_open_count", "Prepared statement open count", datasource -> (double) datasource.getPreparedStatementCount());
            createGauge(druidDataSource, "druid_prepared_statement_closed_count", "Prepared statement closed count", datasource -> (double) datasource.getClosedPreparedStatementCount());
            createGauge(druidDataSource, "druid_ps_cache_access_count", "PS cache access count", datasource -> (double) datasource.getCachedPreparedStatementAccessCount());
            createGauge(druidDataSource, "druid_ps_cache_hit_count", "PS cache hit count", datasource -> (double) datasource.getCachedPreparedStatementHitCount());
            createGauge(druidDataSource, "druid_ps_cache_miss_count", "PS cache miss count", datasource -> (double) datasource.getCachedPreparedStatementMissCount());
            createGauge(druidDataSource, "druid_execute_query_count", "Execute query count", datasource -> (double) datasource.getExecuteQueryCount());
            createGauge(druidDataSource, "druid_execute_update_count", "Execute update count", datasource -> (double) datasource.getExecuteUpdateCount());
            createGauge(druidDataSource, "druid_execute_batch_count", "Execute batch count", datasource -> (double) datasource.getExecuteBatchCount());
            // non-core metrics, some are static configuration
            createGauge(druidDataSource, "druid_max_wait", "Max wait", datasource -> (double) datasource.getMaxWait());
            createGauge(druidDataSource, "druid_max_wait_thread_count", "Max wait thread count", datasource -> (double) datasource.getMaxWaitThreadCount());
            createGauge(druidDataSource, "druid_login_timeout", "Login timeout", datasource -> (double) datasource.getLoginTimeout());
            createGauge(druidDataSource, "druid_query_timeout", "Query timeout", datasource -> (double) datasource.getQueryTimeout());
            createGauge(druidDataSource, "druid_transaction_query_timeout", "Transaction query timeout", datasource -> (double) datasource.getTransactionQueryTimeout());
        });
    }

    private void createGauge(DruidDataSource weakRef, String metric, String help, ToDoubleFunction<DruidDataSource> measure) {
        // Micrometer holds the state object through a weak reference; measuring via the
        // lambda parameter (instead of capturing the data source) keeps it collectible.
        Gauge.builder(metric, weakRef, measure)
             .description(help)
             .tag(LABEL_NAME, weakRef.getName())
             .register(this.registry);
    }
}
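The core of the collector is the `Gauge.builder(name, stateObject, measuringFunction)` pattern. A minimal, self-contained sketch of that pattern using Micrometer's `SimpleMeterRegistry`, with a plain `AtomicInteger` standing in for a `DruidDataSource` (the class and metric names here are illustrative, not part of the article's code):

```java
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

import java.util.concurrent.atomic.AtomicInteger;

public class GaugeDemo {

    static double readGauge() {
        SimpleMeterRegistry registry = new SimpleMeterRegistry();
        AtomicInteger activeCount = new AtomicInteger(3);
        // Same shape as createGauge(): metric name, state object,
        // measuring function, and a "pool" tag to distinguish data sources.
        Gauge.builder("druid_active_count", activeCount, AtomicInteger::doubleValue)
             .description("Active count")
             .tag("pool", "demo")
             .register(registry);
        // Reading the gauge re-invokes the measuring function on the state object.
        return registry.get("druid_active_count").tag("pool", "demo").gauge().value();
    }

    public static void main(String[] args) {
        System.out.println(readGauge()); // prints 3.0
    }
}
```

Because the gauge re-reads the state object on every scrape, no scheduled task is needed to keep the metrics current.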
DruidMetricsConfiguration
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import javax.sql.DataSource;

import com.alibaba.druid.pool.DruidDataSource;
import io.micrometer.core.instrument.MeterRegistry;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.context.annotation.Configuration;

@Configuration
@ConditionalOnClass({DruidDataSource.class, MeterRegistry.class})
@Slf4j
public class DruidMetricsConfiguration {

    private final MeterRegistry registry;

    public DruidMetricsConfiguration(MeterRegistry registry) {
        this.registry = registry;
    }

    @Autowired
    public void bindMetricsRegistryToDruidDataSources(Collection<DataSource> dataSources) throws SQLException {
        List<DruidDataSource> druidDataSources = new ArrayList<>(dataSources.size());
        for (DataSource dataSource : dataSources) {
            // unwrap() returns the underlying DruidDataSource when the bean wraps one
            DruidDataSource druidDataSource = dataSource.unwrap(DruidDataSource.class);
            if (druidDataSource != null) {
                druidDataSources.add(druidDataSource);
            }
        }
        DruidCollector druidCollector = new DruidCollector(druidDataSources, registry);
        druidCollector.register();
        log.info("finished registering Druid metrics with Micrometer");
    }
}
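Once the application exposes the metrics, Prometheus has to be pointed at the actuator endpoint before dashboard 11157 can chart anything. A minimal `prometheus.yml` scrape-job sketch; the job name and target are placeholders to replace with your own:

```yaml
scrape_configs:
  - job_name: 'druid-app'                 # illustrative job name
    metrics_path: '/actuator/prometheus'  # Spring Boot actuator default
    static_configs:
      - targets: ['localhost:8080']       # replace with your application host:port
```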