Configuring an InfluxDB Cluster with Docker

by leiyong, 2022-4-27

1. Document History

2. Installing InfluxDB

Start two independent InfluxDB 1.8 instances on the same host; server1 publishes ports 8086/8088 and server2 publishes 8090/8092:

docker run -d --name influxdb-server1 \
-e INFLUXDB_ADMIN_USER=my_admin \
-e INFLUXDB_ADMIN_USER_PASSWORD=my_password \
-e INFLUXDB_ADMIN_USER_TOKEN=token123 \
-e INFLUXDB_USER=my_user \
-e INFLUXDB_USER_PASSWORD=my_password \
-e INFLUXDB_DB=my_database \
-e INFLUXDB_HTTP_AUTH_ENABLED=true \
--net="bridge" \
-p 8086:8086/tcp \
-p 8088:8088/tcp \
bitnami/influxdb:1.8.5-debian-10-r257

docker run -d --name influxdb-server2 \
-e INFLUXDB_ADMIN_USER=my_admin \
-e INFLUXDB_ADMIN_USER_PASSWORD=my_password \
-e INFLUXDB_ADMIN_USER_TOKEN=token123 \
-e INFLUXDB_USER=my_user \
-e INFLUXDB_USER_PASSWORD=my_password \
-e INFLUXDB_DB=my_database \
-e INFLUXDB_HTTP_AUTH_ENABLED=true \
--net="bridge" \
-p 8090:8086/tcp \
-p 8092:8088/tcp \
bitnami/influxdb:1.8.5-debian-10-r257
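
Before wiring the two nodes together, it is worth checking that both instances answer. A minimal sanity check using the standard InfluxDB 1.x /ping endpoint, which returns 204 No Content when a node is healthy:

# a healthy node answers HTTP 204 No Content
curl -i http://localhost:8086/ping   # influxdb-server1
curl -i http://localhost:8090/ping   # influxdb-server2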

3. Installing SyncFlux

SyncFlux is an open-source InfluxDB data synchronization and replication tool with an HTTP API. Its main goal is to recover lost data from an HA InfluxDB 1.X cluster; here it runs as a daemon that monitors both nodes and re-syncs the slave from the master when needed.

Create the configuration file:

nano /mylocal/conf/syncflux.toml

# -*- toml -*-

# -------GENERAL SECTION ---------
# syncflux can work in several ways;
# not all General config parameters apply to all modes.
#  modes
#  "hamonitor" => enables syncflux as a daemon to sync
#                2 Influx 1.X OSS db and sync data between them
#                when needed (does active monitoring)
#  "copy" => executes syncflux as a new process to copy data
#            between master and slave databases
#  "replicaschema" => executes syncflux as a new process to create
#             the database/s and all its related retention policies
#  "fullcopy" => does database/rp replication and afterwards a data copy

[General]
# ------------------------
# logdir (only valid on hamonitor action)
#  the directory where to place logs
#  will place the main log

#  logdir = "./log"

# ------------------------
# loglevel (valid only for hamonitor actions)
#  sets the log level; valid values are:
#  fatal, error, warn, info, debug, trace
# on copy/fullcopy actions loglevel is mapped to
#  (nothing) = Warning
#  -v = Info
#  -vv = debug
#  -vvv = trace

loglevel = "debug"

# -----------------------------
# sync-mode (only valid on hamonitor action)
#  NOTE: right now only "onlyslave" (one-way sync) is valid
#  (two-way sync is planned for the future)

sync-mode = "onlyslave"

# ---------------------------
# master-db chooses one of the configured InfluxDBs as the master DB;
# this parameter is overridden by the command-line -master parameter

master-db = "influxdb01"

# ---------------------------
# slave-db chooses one of the configured InfluxDBs as the slave DB;
# this parameter is overridden by the command-line -slave parameter

slave-db = "influxdb02"

# ------------------------------
# check-interval
# the interval for health checking of both master and slave databases

check-interval = "10s"

# ------------------------------
# min-sync-interval
# the interval at which the HA monitor checks that both are ok, changes
# the state of the cluster if not, and performs all needed recovery actions

min-sync-interval = "20s"

# ---------------------------------------------
# initial-replication
# tells syncflux whether some type of replication is needed
# on the slave database from the master database on initialization
# (only valid on hamonitor action)
#
# none:   no replication
# schema: database and retention policies will be recreated on the slave database
# data:   data for all retention policies will be replicated
#         be careful: this full data copy could take hours or days
# both:   will replicate first the schema and then the full data

initial-replication = "none"

# monitor-retry-interval
#
# syncflux can only begin work when the master and slave databases are both up;
# if either of them is down, syncflux will retry forever, once per monitor-retry-interval.

monitor-retry-interval = "1m"

# data-chuck-duration
#
# duration of each small read-from-master -> write-to-slave chunk of data;
# smaller chunks use less memory in the syncflux process
# and fewer resources on both the master and slave databases;
# larger chunks improve sync speed

data-chuck-duration = "5m"

# max-retention-interval
#
# for infinite (or very long) retention policies, full replication has to begin
# somewhere in time; this parameter sets the max retention.

max-retention-interval = "8760h" # 1 year

# rw-max-retries / rw-retry-delay
#
# If any of the read (from master) or write (to slave) queries fails,
# the query will be retried up to rw-max-retries times, pausing at least
# rw-retry-delay between attempts.

rw-max-retries = 5
rw-retry-delay = "10s"

# Number of parallel workers querying and writing at a time on both databases (master & slave)

# num-workers = 4

# syncflux splits each data chunk into multiple writes of max-points-on-single-write points;
# this bounds the HTTP request body size, avoiding errors like "Request Entity Too Large"

max-points-on-single-write = 20000

# ---- HTTP API SECTION (only valid on hamonitor action)
# Enables an HTTP API endpoint to check the cluster health

[http]
name = "example-http-influxdb"
# bind on all interfaces so the port published by docker (-p 4090:4090) is reachable
bind-addr = "0.0.0.0:4090"
admin-user = "admin"
admin-passwd = "admin"
cookie-id = "mysupercokie"

# ---- INFLUXDB SECTION
# Sets the list of available DBs that can be used
# as master or slave DBs in any of the possible actions

[[influxdb]]
release = "1x"
name = "influxdb01"
# node address, admin user and password
location = "http://192.168.83.139:8086/"
admin-user = "my_admin"
admin-passwd = "my_password"
timeout = "10s"
tls-insecure-skip-verify = true
tls-ca = ""
tls-cert = ""
tls-key = ""

[[influxdb]]
release = "1x"
name = "influxdb02"
# node address, admin user and password
location = "http://192.168.83.139:8090/"
admin-user = "my_admin"
admin-passwd = "my_password"
timeout = "10s"
tls-insecure-skip-verify = true
tls-ca = ""
tls-cert = ""
tls-key = ""

Run the container:

docker run -d --restart always --name=syncflux_instance00 -p 4090:4090 \
-v /mylocal/conf:/opt/syncflux/conf \
-v /mylocal/log:/opt/syncflux/log \
tonimoreno/syncflux
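
Once the container is up, SyncFlux's HTTP API can report which backend currently holds the full data set. This is the same /api/queryactive endpoint that influxdb-srelay is pointed at in the next section, so it doubles as a quick health check (run from the docker host):

# returns the name of the InfluxDB backend that currently has all available data
curl http://localhost:4090/api/queryactive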

Source

https://github.com/toni-moreno/syncflux

4. Configuring the Cluster with influxdb-srelay

Create the cluster configuration file:

nano /mylocal/influxdb-srelay.conf

The content is as follows:

###############################
##
## InfluxDB Single instances Config
##
###############################

# InfluxDB Backend InfluxDB01
[[influxdb]]
name = "myinfluxdb01"
location = "http://192.168.83.139:8086/"
timeout = "10s"

# InfluxDB Backend InfluxDB02
[[influxdb]]
name = "myinfluxdb02"
location = "http://192.168.83.139:8090/"
timeout = "10s"

#################################
##
## InfluxDB Cluster Configs as a set
## of influxdb Single Instances
##
#################################

# HA cluster built from the two instances above
[[influxcluster]]
# name = cluster id for route configs and logs
name = "ha_cluster"
# members = array of influxdb backends
members = ["myinfluxdb01","myinfluxdb02"]
# where to write logs for all operations in the cluster
log-file = "ha_cluster.log"
# log level can be
# "panic","fatal","Error","warn","info","debug"
log-level = "info"
# mode of querying and sending data
# * HA:
#       incoming write data will be sent to all members;
#       queries will be sent to the active node holding all the
#       data if the query-router-endpoint-api config exists, else to the first
# * Single:
#       incoming write data will be sent to the first member (there should be only one);
#       queries will be sent to the first member only (there should be only one)
# * LB:  // NOT IMPLEMENTED YET //
type = "HA"
# query-router-endpoint-api:
#  list of API urls which tell us the name of the influxdb backend that has all
#  available data (during a recovery process);
#  use any available sync tool, such as https://github.com/toni-moreno/syncflux, if needed
# the SyncFlux HTTP API configured in section 3 (use the host address:
# localhost inside this container would not reach the syncflux container)
query-router-endpoint-api = ["http://192.168.83.139:4090/api/queryactive"]

# HTTP Server
[[http]]
name = "ha-relay"
bind-addr = "0.0.0.0:9096"
log-file = "http_harelay_9096.log"
log-level = "info"
access-log = "ha_access.log"
rate-limit = 1000000
burst-limit = 2000000

## Define endpoints base config
## endpoints can have read and write properties
## Example: /query endpoint
## Each HTTP request tries each route; if it fits the filter it will be routed.
## Any request that does not pass the filter tries the next route.

# IQL /query endpoint
[[http.endpoint]]
uri=["/query"]
# type
#  * RD = http endpoint for querying the db
#  * WR = http endpoint for sending data to the db
type="RD"
# source_format
# Supported formats
# * ILP = influxdb-line-protocol
# * prom-write = prometheus write format
# * IQL = Influx Query Language
source_format="IQL"

## READ request route
[[http.endpoint.route]]
name="any_read"
# level:
#   * http => all following rules will work only with http params
#   * data => any of the following rules will need data inspection
level="http" # http or data
# true  => will use the endpoint log as this route's log
# false => will use its own log file; if not set, the name is <logdir>/http_route_<route_name>.log
log-inherit = false
#log-file = "query_route_linux_metrics.log"
log-level = "info"

## Filter: pass every request through (any db)
[[http.endpoint.route.filter]]
name="pass_all"
#------------------------------------------------------------------------------------
# Keys for filter usage can only be http-level parameters (header/query)
#  Header based
#  -------------
#    * authorization: Authorization Header
#    * remote-address: Remote Address Header
#    * referer: Referer Header
#    * user-agent: User-Agent Header
#  Query based
#  -------------
#   (https://docs.influxdata.com/influxdb/v1.7/guides/querying_data/)
#   (https://docs.influxdata.com/influxdb/v1.7/tools/api/#write-http-endpoint)
#    * db [r/w]: InfluxDB to read / write
#    * q [r]: InfluxQL query
#    * epoch [r]: precision on read queries
#    * precision [w]: precision on write queries
#    * chunked [r]: (see referenced doc)
#    * chunksize [r]: (see referenced doc)
#    * pretty [r]: (see referenced doc)
#    * u [r/w]: read/write user
#    * p [r/w]: read/write password
#    * rp [w]: retention policy
#    * consistency [w]: (see referenced doc)
#  Computed
#    * username: computed from the authorization header or the u parameter
# Keys for rule usage (not at this level) can also be data-level parameters
#   * measurement: match the measurement name
#   * tag: match the tag value with the tag key in key_aux
#   * field: match the field value with the field key in key_aux (always as string at this level)
# ----------------------------------------------------------------------------------------------
key="db" # available http params
match=".*"

## Rule: route everything to ha_cluster
[[http.endpoint.route.rule]]
name="route_all"
# Actions
#   * route:
#       If the key value (usually an http-level key) matches the match parameter, the complete
#       request will be sent to the cluster configured in the to_cluster param.
#       The next rule step will have untouched data available for any other process.
#   * route_db_from_data (enables multitenancy):
#       Renames the db parameter for each point in the data depending on
#       the match with one point parameter (for example one tag), enabling writes
#       to several databases (split data) from the same source.
#       With this rule 1 HTTP request becomes N HTTP requests to our backends;
#       HTTP responses will be logged without feedback to the original request.
#       The next rule step will have untouched data available for any other process.
#   * rename_http
#   * rename_data
#   * drop_data
#   * break
action="route"
key="db"
match=".*"
to_cluster="ha_cluster"

# ILP /write endpoint
[[http.endpoint]]
uri=["/write"]
source_format="ILP"
type = "WR"

## WRITE request route
[[http.endpoint.route]]
name="any_write"
level="http"

[[http.endpoint.route.filter]]
name="pass_all"
key="db"
match=".*"

## Rule: route everything to ha_cluster
[[http.endpoint.route.rule]]
name="route_all"
action="route"
key="db"
match=".*"
to_cluster="ha_cluster"

Run the container:

docker run -d --restart always -p 9096:9096 \
-v /mylocal/influxdb-srelay.conf:/etc/influxdb-srelay/influxdb-srelay.conf \
-v /mylocal/conf:/opt/influxdb-srelay/conf \
-v /mylocal/log:/opt/influxdb-srelay/log \
tonimoreno/influxdb-srelay:latest
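
The relay writes the log files named in the configuration (ha_cluster.log, http_harelay_9096.log, ha_access.log) into the mounted log directory, so a quick way to confirm it started cleanly is to tail them on the host:

tail -n 20 /mylocal/log/ha_cluster.log /mylocal/log/http_harelay_9096.log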

Source

https://github.com/toni-moreno/influxdb-srelay

5. Testing

Check the cluster status:

curl http://localhost:9096/status/ha_cluster

Write data through the relay:

curl -i -XPOST 'http://localhost:9096/write?u=my_admin&p=my_password&db=my_database' --data-binary 'cpu_load_short,host=server02 value=0.67
cpu_load_short,host=server02,region=us-west value=0.55 1422568543702900257
cpu_load_short,direction=in,host=server01,region=us-west value=2.0 1422568543702900257'

Query data:

curl -G "http://localhost:9096/query?u=my_admin&p=my_password&db=my_database" --data-urlencode "q=select * from cpu_load_short"

Import test data (ead4c271ebd5 is the ID of one of the InfluxDB containers; substitute your own):

curl https://s3.amazonaws.com/noaa.water-database/NOAA_data.txt -o NOAA_data.txt

docker cp NOAA_data.txt ead4c271ebd5:/

docker exec -it ead4c271ebd5 /bin/bash

influx -host 'localhost' -port '8086' -username 'my_admin' -password 'my_password' -import -path=NOAA_data.txt -precision=ns
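
Note that this import connects to localhost:8086 inside the container, so it bypasses the relay and the data lands only on that node. Still inside the container, you can confirm the import worked; assuming the stock InfluxDB sample file, the import creates its own database named NOAA_water_database:

# list the measurements created by the import
influx -host localhost -port 8086 -username my_admin -password my_password -execute 'SHOW MEASUREMENTS ON NOAA_water_database'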
