I. Preface

I have recently been working on monitoring MySQL InnoDB Cluster and found that mysqld_exporter has no metric that captures the cluster's state, so I decided to read the source code and hack in my own. Source link: https://github.com/prometheus/mysqld_exporter

II. Reading the Source Code

1. Entry point: the main function

As you can see, MySQL Exporter exposes two URLs: /, which prints some basic information, and /metrics, which serves the collected metrics.

func main() {
    // Generate ON/OFF flags for all scrapers.
    scraperFlags := map[collector.Scraper]*bool{}
    for scraper, enabledByDefault := range scrapers {
        defaultOn := "false"
        if enabledByDefault {
            defaultOn = "true"
        }

        f := kingpin.Flag(
            "collect."+scraper.Name(),
            scraper.Help(),
        ).Default(defaultOn).Bool()

        scraperFlags[scraper] = f
    }

    // Parse flags.
    promlogConfig := &promlog.Config{}
    flag.AddFlags(kingpin.CommandLine, promlogConfig)
    kingpin.Version(version.Print("mysqld_exporter"))
    kingpin.HelpFlag.Short('h')
    kingpin.Parse()
    logger := promlog.New(promlogConfig)

    // landingPage contains the HTML served at '/'.
    // TODO: Make this nicer and more informative.
    var landingPage = []byte(`<html>
<head><title>MySQLd exporter</title></head>
<body>
<h1>MySQLd exporter</h1>
<p><a href='` + *metricPath + `'>Metrics</a></p>
</body>
</html>
`)

    level.Info(logger).Log("msg", "Starting msqyld_exporter", "version", version.Info())
    level.Info(logger).Log("msg", "Build context", version.BuildContext())

    dsn = os.Getenv("DATA_SOURCE_NAME")
    if len(dsn) == 0 {
        var err error
        if dsn, err = parseMycnf(*configMycnf); err != nil {
            level.Info(logger).Log("msg", "Error parsing my.cnf", "file", *configMycnf, "err", err)
            os.Exit(1)
        }
    }

    // Register only scrapers enabled by flag.
    enabledScrapers := []collector.Scraper{}
    for scraper, enabled := range scraperFlags {
        if *enabled {
            level.Info(logger).Log("msg", "Scraper enabled", "scraper", scraper.Name())
            enabledScrapers = append(enabledScrapers, scraper)
        }
    }
    handlerFunc := newHandler(collector.NewMetrics(), enabledScrapers, logger)
    http.Handle(*metricPath, promhttp.InstrumentMetricHandler(prometheus.DefaultRegisterer, handlerFunc))
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        w.Write(landingPage)
    })

    level.Info(logger).Log("msg", "Listening on address", "address", *listenAddress)
    srv := &http.Server{Addr: *listenAddress}
    if err := web.ListenAndServe(srv, *webConfig, logger); err != nil {
        level.Error(logger).Log("msg", "Error starting HTTP server", "err", err)
        os.Exit(1)
    }
}
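One thing worth noting before moving on: main reads the DSN from the DATA_SOURCE_NAME environment variable and only falls back to parsing a my.cnf-style file (the --config.my-cnf flag) when that variable is empty, so something like DATA_SOURCE_NAME='user:password@(localhost:3306)/' in go-sql-driver format has to be set before the exporter can scrape anything.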

2. The handler behind /metrics

Let's look at the handler for /metrics; it is produced by newHandler:

func newHandler(metrics collector.Metrics, scrapers []collector.Scraper, logger log.Logger) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        filteredScrapers := scrapers
        params := r.URL.Query()["collect[]"]
        // Use request context for cancellation when connection gets closed.
        ctx := r.Context()
        // If a timeout is configured via the Prometheus header, add it to the context.
        if v := r.Header.Get("X-Prometheus-Scrape-Timeout-Seconds"); v != "" {
            timeoutSeconds, err := strconv.ParseFloat(v, 64)
            if err != nil {
                level.Error(logger).Log("msg", "Failed to parse timeout from Prometheus header", "err", err)
            } else {
                if *timeoutOffset >= timeoutSeconds {
                    // Ignore timeout offset if it doesn't leave time to scrape.
                    level.Error(logger).Log("msg", "Timeout offset should be lower than prometheus scrape timeout", "offset", *timeoutOffset, "prometheus_scrape_timeout", timeoutSeconds)
                } else {
                    // Subtract timeout offset from timeout.
                    timeoutSeconds -= *timeoutOffset
                }
                // Create new timeout context with request context as parent.
                var cancel context.CancelFunc
                ctx, cancel = context.WithTimeout(ctx, time.Duration(timeoutSeconds*float64(time.Second)))
                defer cancel()
                // Overwrite request with timeout context.
                r = r.WithContext(ctx)
            }
        }
        level.Debug(logger).Log("msg", "collect[] params", "params", strings.Join(params, ","))

        // Check if we have some "collect[]" query parameters.
        if len(params) > 0 {
            filters := make(map[string]bool)
            for _, param := range params {
                filters[param] = true
            }

            filteredScrapers = nil
            for _, scraper := range scrapers {
                if filters[scraper.Name()] {
                    filteredScrapers = append(filteredScrapers, scraper)
                }
            }
        }

        registry := prometheus.NewRegistry()
        registry.MustRegister(collector.New(ctx, dsn, metrics, filteredScrapers, logger))

        gatherers := prometheus.Gatherers{
            prometheus.DefaultGatherer,
            registry,
        }
        // Delegate http serving to Prometheus client library, which will call collector.Collect.
        h := promhttp.HandlerFor(gatherers, promhttp.HandlerOpts{})
        h.ServeHTTP(w, r)
    }
}

3. The interfaces

The key is that registry.MustRegister expects an argument that implements the Collector interface; in other words, every time metrics need to be collected, the Collect method of that interface is called:

type Collector interface {
    Describe(chan<- *Desc)
    Collect(chan<- Metric)
}

It is easy to see that the collector scrapes all enabled sources concurrently (a simplified sketch of that fan-out follows the interface definition below), and that each concrete metric source implements the Scraper interface:

// Scraper is minimal interface that let's you add new prometheus metrics to mysqld_exporter.
type Scraper interface {
    // Name of the Scraper. Should be unique.
    Name() string

    // Help describes the role of the Scraper.
    // Example: "Collect from SHOW ENGINE INNODB STATUS"
    Help() string

    // Version of MySQL from which scraper is available.
    Version() float64

    // Scrape collects data from database connection and sends it over channel as prometheus metric.
    Scrape(ctx context.Context, db *sql.DB, ch chan<- prometheus.Metric, logger log.Logger) error
}
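Where does the concurrency happen? The exporter's own Collector implementation lives in collector/exporter.go; the sketch below is a simplified rendering of its fan-out loop rather than a verbatim copy (the exporterSketch type and the scrapeAll name are mine, and most error handling is trimmed). It shows the essential idea: every enabled Scraper runs in its own goroutine and writes metrics onto the channel handed to Collect.

package collector

import (
    "context"
    "database/sql"
    "sync"

    "github.com/go-kit/kit/log"
    "github.com/go-kit/kit/log/level"
    "github.com/prometheus/client_golang/prometheus"
)

// exporterSketch is a stand-in for the real Exporter type; only the fields
// the fan-out loop needs are shown here.
type exporterSketch struct {
    ctx      context.Context
    logger   log.Logger
    scrapers []Scraper
}

// scrapeAll is a simplified sketch of the loop inside the exporter's Collect
// path: each enabled scraper runs in its own goroutine and pushes metrics
// onto ch; the WaitGroup keeps Collect open until every scraper is done.
func (e *exporterSketch) scrapeAll(db *sql.DB, ch chan<- prometheus.Metric) {
    var wg sync.WaitGroup
    defer wg.Wait()
    for _, scraper := range e.scrapers {
        wg.Add(1)
        go func(scraper Scraper) {
            defer wg.Done()
            if err := scraper.Scrape(e.ctx, db, ch, e.logger); err != nil {
                level.Error(e.logger).Log("msg", "Error from scraper", "scraper", scraper.Name(), "err", err)
            }
        }(scraper)
    }
}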

So it is straightforward: to add a new metric we only need to implement this interface. The real work happens in Scrape, which queries the database and extracts the values we need by whatever means fits, such as text parsing or regular expressions. Let's look at a simple collector:

slave_status.go

// Copyright 2018 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// Scrape `SHOW SLAVE STATUS`.

package collector

import (
    "context"
    "database/sql"
    "fmt"
    "strings"

    "github.com/go-kit/kit/log"
    "github.com/prometheus/client_golang/prometheus"
)

const (
    // Subsystem.
    slaveStatus = "slave_status"
)

var slaveStatusQueries = [2]string{"SHOW ALL SLAVES STATUS", "SHOW SLAVE STATUS"}
var slaveStatusQuerySuffixes = [3]string{" NONBLOCKING", " NOLOCK", ""}

func columnIndex(slaveCols []string, colName string) int {
    for idx := range slaveCols {
        if slaveCols[idx] == colName {
            return idx
        }
    }
    return -1
}

func columnValue(scanArgs []interface{}, slaveCols []string, colName string) string {
    var columnIndex = columnIndex(slaveCols, colName)
    if columnIndex == -1 {
        return ""
    }
    return string(*scanArgs[columnIndex].(*sql.RawBytes))
}

// ScrapeSlaveStatus collects from `SHOW SLAVE STATUS`.
type ScrapeSlaveStatus struct{}

// Name of the Scraper. Should be unique.
func (ScrapeSlaveStatus) Name() string {
    return slaveStatus
}

// Help describes the role of the Scraper.
func (ScrapeSlaveStatus) Help() string {
    return "Collect from SHOW SLAVE STATUS"
}

// Version of MySQL from which scraper is available.
func (ScrapeSlaveStatus) Version() float64 {
    return 5.1
}

// Scrape collects data from database connection and sends it over channel as prometheus metric.
func (ScrapeSlaveStatus) Scrape(ctx context.Context, db *sql.DB, ch chan<- prometheus.Metric, logger log.Logger) error {
    var (
        slaveStatusRows *sql.Rows
        err             error
    )
    // Try the both syntax for MySQL/Percona and MariaDB
    for _, query := range slaveStatusQueries {
        slaveStatusRows, err = db.QueryContext(ctx, query)
        if err != nil { // MySQL/Percona
            // Leverage lock-free SHOW SLAVE STATUS by guessing the right suffix
            for _, suffix := range slaveStatusQuerySuffixes {
                slaveStatusRows, err = db.QueryContext(ctx, fmt.Sprint(query, suffix))
                if err == nil {
                    break
                }
            }
        } else { // MariaDB
            break
        }
    }
    if err != nil {
        return err
    }
    defer slaveStatusRows.Close()

    slaveCols, err := slaveStatusRows.Columns()
    if err != nil {
        return err
    }

    for slaveStatusRows.Next() {
        // As the number of columns varies with mysqld versions,
        // and sql.Scan requires []interface{}, we need to create a
        // slice of pointers to the elements of slaveData.
        scanArgs := make([]interface{}, len(slaveCols))
        for i := range scanArgs {
            scanArgs[i] = &sql.RawBytes{}
        }

        if err := slaveStatusRows.Scan(scanArgs...); err != nil {
            return err
        }

        masterUUID := columnValue(scanArgs, slaveCols, "Master_UUID")
        masterHost := columnValue(scanArgs, slaveCols, "Master_Host")
        channelName := columnValue(scanArgs, slaveCols, "Channel_Name")       // MySQL & Percona
        connectionName := columnValue(scanArgs, slaveCols, "Connection_name") // MariaDB

        for i, col := range slaveCols {
            if value, ok := parseStatus(*scanArgs[i].(*sql.RawBytes)); ok { // Silently skip unparsable values.
                ch <- prometheus.MustNewConstMetric(
                    prometheus.NewDesc(
                        prometheus.BuildFQName(namespace, slaveStatus, strings.ToLower(col)),
                        "Generic metric from SHOW SLAVE STATUS.",
                        []string{"master_host", "master_uuid", "channel_name", "connection_name"},
                        nil,
                    ),
                    prometheus.UntypedValue,
                    value,
                    masterHost, masterUUID, channelName, connectionName,
                )
            }
        }
    }
    return nil
}

// check interface
var _ Scraper = ScrapeSlaveStatus{}
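Coming back to the goal from the preface, here is a sketch of what a cluster-state scraper could look like, written against the same Scraper interface. Everything in it is my own illustration rather than code from the repository: the ScrapeGRMemberState name, the metric name, and the choice to read member state from performance_schema.replication_group_members (the table through which Group Replication, the layer underneath InnoDB Cluster, reports each member's state).

package collector

import (
    "context"
    "database/sql"

    "github.com/go-kit/kit/log"
    "github.com/prometheus/client_golang/prometheus"
)

// grMemberStateQuery reads the state of every group member as reported by
// Group Replication (hypothetical scraper, not part of mysqld_exporter).
const grMemberStateQuery = `
    SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE
      FROM performance_schema.replication_group_members`

// grMemberStateDesc exposes one gauge per member: 1 when the member reports
// ONLINE, 0 otherwise. Metric and label names are illustrative.
var grMemberStateDesc = prometheus.NewDesc(
    prometheus.BuildFQName(namespace, "group_replication", "member_online"),
    "Whether the group replication member reports state ONLINE (1) or not (0).",
    []string{"member_host", "member_port", "member_state"}, nil,
)

// ScrapeGRMemberState is a hypothetical scraper for InnoDB Cluster /
// Group Replication member state.
type ScrapeGRMemberState struct{}

// Name of the Scraper. Should be unique.
func (ScrapeGRMemberState) Name() string { return "group_replication_member_state" }

// Help describes the role of the Scraper.
func (ScrapeGRMemberState) Help() string {
    return "Collect member state from performance_schema.replication_group_members"
}

// Group Replication ships with MySQL 5.7 and later.
func (ScrapeGRMemberState) Version() float64 { return 5.7 }

// Scrape queries the members table and emits one gauge per member.
func (ScrapeGRMemberState) Scrape(ctx context.Context, db *sql.DB, ch chan<- prometheus.Metric, logger log.Logger) error {
    rows, err := db.QueryContext(ctx, grMemberStateQuery)
    if err != nil {
        return err
    }
    defer rows.Close()

    for rows.Next() {
        var host, port, state sql.NullString
        if err := rows.Scan(&host, &port, &state); err != nil {
            return err
        }
        online := 0.0
        if state.String == "ONLINE" {
            online = 1.0
        }
        ch <- prometheus.MustNewConstMetric(grMemberStateDesc, prometheus.GaugeValue, online, host.String, port.String, state.String)
    }
    return rows.Err()
}

// check interface
var _ Scraper = ScrapeGRMemberState{}

To actually wire such a scraper in, it would also need an entry in the scrapers map shown in the next section, e.g. collector.ScrapeGRMemberState{}: true, so that main can generate its --collect.* flag.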

4. The set of collectors

From the code above we already know how mysqld_exporter collects metrics: each metric source is just another implementation of the interface, which makes the exporter easy to extend. It helps to think of the monitored scope in terms of sets. First, the exporter records a default scope, set A, in the scrapers map:

var scrapers = map[collector.Scraper]bool{
    collector.ScrapeGlobalStatus{}:                        true,
    collector.ScrapeGlobalVariables{}:                     true,
    collector.ScrapeSlaveStatus{}:                         true,
    collector.ScrapeProcesslist{}:                         false,
    collector.ScrapeUser{}:                                false,
    collector.ScrapeTableSchema{}:                         false,
    collector.ScrapeInfoSchemaInnodbTablespaces{}:         false,
    collector.ScrapeInnodbMetrics{}:                       false,
    collector.ScrapeAutoIncrementColumns{}:                false,
    collector.ScrapeBinlogSize{}:                          false,
    collector.ScrapePerfTableIOWaits{}:                    false,
    collector.ScrapePerfIndexIOWaits{}:                    false,
    collector.ScrapePerfTableLockWaits{}:                  false,
    collector.ScrapePerfEventsStatements{}:                false,
    collector.ScrapePerfEventsStatementsSum{}:             false,
    collector.ScrapePerfEventsWaits{}:                     false,
    collector.ScrapePerfFileEvents{}:                      false,
    collector.ScrapePerfFileInstances{}:                   false,
    collector.ScrapePerfMemoryEvents{}:                    false,
    collector.ScrapePerfReplicationGroupMembers{}:         false,
    collector.ScrapePerfReplicationGroupMemberStats{}:     false,
    collector.ScrapePerfReplicationApplierStatsByWorker{}: false,
    collector.ScrapeUserStat{}:                            false,
    collector.ScrapeClientStat{}:                          false,
    collector.ScrapeTableStat{}:                           false,
    collector.ScrapeSchemaStat{}:                          false,
    collector.ScrapeInnodbCmp{}:                           true,
    collector.ScrapeInnodbCmpMem{}:                        true,
    collector.ScrapeQueryResponseTime{}:                   true,
    collector.ScrapeEngineTokudbStatus{}:                  false,
    collector.ScrapeEngineInnodbStatus{}:                  false,
    collector.ScrapeHeartbeat{}:                           false,
    collector.ScrapeSlaveHosts{}:                          false,
    collector.ScrapeReplicaHost{}:                         false,
}

The exporter also lets you define a scope B through command-line flags at startup. When B is absent, A applies; when B is present, B takes effect and A is ignored. Finally, when Prometheus scrapes the exporter it can pass collect[] parameters that define a scope C. When C is absent, the effective scope is A or B (whichever is in force); when C is present, the effective scope is the intersection of C with A or B (whichever is in force).
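For example, starting the exporter with --collect.engine_innodb_status turns that scraper on (main generates the flag from the scraper's Name()), and a scrape of http://localhost:9104/metrics?collect[]=global_status&collect[]=engine_innodb_status then returns only those two collectors, provided both are enabled; 9104 is the exporter's default listen port, and both the flag suffix and the collect[] value are simply the scraper's Name().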

III. References

1. https://jiajunhuang.com/articles/2018_12_16-prometheus_mysqld_exporter.md.html

2. https://github.com/prometheus/mysqld_exporter
