Contents:

  1. InfluxDB installation
    1.1. Download InfluxDB
    1.2. Extract the tar package
    1.3. Modify the configuration file
    1.4. Configuration parameter reference
    1.5. Actual configuration (mainly paths and port)
    1.6. Start the server
    1.7. Start the InfluxDB console (the first start can be slow; be patient)
  2. Usage
    2.1. Common commands
    2.2. Using the influx command
    2.3. InfluxQL examples
    2.4. InfluxDB data format
  3. InfluxDB client tools

1. InfluxDB installation

https://blog.csdn.net/weixin_43786255/article/details/107133899

1.1. Download InfluxDB

Download the tar package:

wget https://dl.influxdata.com/influxdb/releases/influxdb-1.8.0_linux_amd64.tar.gz

1.2. Extract the tar package

cd /root/software
tar -zxvf influxdb-1.8.0_linux_amd64.tar.gz -C /root/installed/
cd /root/installed/
mv influxdb-1.8.0-1/ influxdb

1.3. Modify the configuration file

The executables live in {influxdb directory}/usr/bin. After unpacking InfluxDB, that directory contains the following files:

influxd          the InfluxDB server daemon
influx           the InfluxDB command-line client
influx_inspect   inspection tool
influx_stress    stress-testing tool
influx_tsm       database conversion tool (converts databases from the b1 or bz1 format to tsm1)

The default configuration file is located at {influxdb directory}/etc/influxdb/influxdb.conf.

mkdir -p /data/influxdb/conf
cp /root/installed/influxdb/etc/influxdb/influxdb.conf  /data/influxdb/conf/
cd /data/influxdb
mkdir meta
mkdir data
mkdir wal

Then edit the configuration:

1.4. Configuration parameter reference

# reporting-disabled = false                       # whether usage data is reported to InfluxData; default: false
bind-address = "127.0.0.1:8085"                    # RPC address used for backup and restore; default port: 8088

### [meta]
[meta]
  dir = "/data/influxdb/meta"                      # directory where the meta data is stored
  # retention-autocreate = true                    # automatically create the autogen retention policy when a database is created; default: true
  # logging-enabled = true                         # whether meta service logging is enabled; default: true

### [data]
[data]
  dir = "/data/influxdb/data"                      # directory where the final data (TSM files) is stored
  wal-dir = "/data/influxdb/wal"                   # directory where the write-ahead log (WAL) is stored
  # wal-fsync-delay = "0s"                         # how long a write waits before fsyncing; default: 0s
  # index-version = "inmem"                        # type of shard index used for new shards
  # trace-logging-enabled = false                  # trace logging provides more verbose output around the TSM engine
  # query-log-enabled = true                       # whether TSM engine queries are logged; default: true
  # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
  # cache-max-memory-size = "1g"                   # maximum size of a shard's cache before writes are rejected; default: 1g
  # cache-snapshot-memory-size = "25m"             # cache size at which a snapshot is taken and flushed to a TSM file; default: 25m
  # cache-snapshot-write-cold-duration = "10m"     # how long a shard must be write-cold before the cache is snapshotted to disk; default: 10m
  # compact-full-write-cold-duration = "4h"        # how long a shard must be write-cold before its TSM files are fully compacted; default: 4h
  # max-concurrent-compactions = 0                 # maximum number of concurrent compactions; 0 means 50% of runtime.GOMAXPROCS(0)
  # max-index-log-file-size = "1m"                 # size at which an index write-ahead log file is compacted into an index file
  # max-series-per-database = 1000000              # maximum number of series per database; 0 disables the limit; default: 1000000
  # max-values-per-tag = 100000                    # maximum number of values per tag key; 0 disables the limit; default: 100000
  # tsm-use-madv-willneed = false                  # if true, the mmap advise value MADV_WILLNEED is given to the kernel

### [coordinator]
[coordinator]
  # write-timeout = "10s"                          # timeout for write requests; default: 10s
  # max-concurrent-queries = 0                     # maximum number of concurrent queries; 0 = unlimited; default: 0
  # query-timeout = "0s"                           # maximum time a query may run; 0 = unlimited; default: 0s
  # log-queries-after = "0s"                       # threshold after which a query is logged as a slow query; 0 = disabled; default: 0s
  # max-select-point = 0                           # maximum number of points a SELECT may process; 0 = unlimited; default: 0
  # max-select-series = 0                          # maximum number of series a SELECT may process; 0 = unlimited; default: 0
  # max-select-buckets = 0                         # maximum number of "GROUP BY time()" buckets a SELECT may create; 0 = unlimited; default: 0

### [retention]
[retention]
  # enabled = true                                 # whether retention policy enforcement is enabled; default: true
  # check-interval = "30m"                         # interval between enforcement checks; default: "30m"

### [shard-precreation]
[shard-precreation]
  # enabled = true                                 # whether shard pre-creation is enabled; default: true
  # check-interval = "10m"                         # interval between pre-creation checks; default: "10m"
  # advance-period = "30m"                         # maximum period ahead of a shard group's end time that its successor is created; default: "30m"

### [monitor]
[monitor]
  # store-enabled = true                           # whether internal statistics are recorded; default: true
  # store-database = "_internal"                   # destination database for recorded statistics; default: "_internal"
  # store-interval = "10s"                         # interval at which statistics are recorded; default: "10s"

### [http]
[http]
  # enabled = true                                 # whether the HTTP endpoint is enabled; default: true
  # bind-address = ":8086"                         # bind address for the HTTP service; default: ":8086"
  # auth-enabled = false                           # whether authentication is enabled; default: false
  # realm = "InfluxDB"                             # realm sent back when issuing a basic auth challenge; default: "InfluxDB"
  # log-enabled = true                             # whether HTTP request logging is enabled; default: true
  # suppress-write-log = false                     # whether HTTP write request logs are suppressed when logging is enabled
  # access-log-path = ""                           # path for the HTTP request log; if influxd cannot access it, an error is logged and the request log falls back to stderr
  # write-tracing = false                          # whether detailed write logging is enabled; if true, every write is logged; default: false
  # pprof-enabled = true                           # whether the pprof endpoint is enabled; default: true
  # debug-pprof-enabled = false                    # whether a pprof endpoint on localhost:6060 is enabled at startup for debugging; default: false
  # https-enabled = false                          # whether HTTPS is enabled; default: false
  # https-certificate = "/etc/ssl/influxdb.pem"    # HTTPS certificate path; default: "/etc/ssl/influxdb.pem"
  # https-private-key = ""                         # separate HTTPS private key location; no default
  # shared-secret = ""                             # shared secret used for JWT signing; no default
  # max-row-limit = 0                              # maximum number of rows returned per query; 0 = unlimited; default: 0
  # max-connection-limit = 0                       # maximum number of open HTTP connections; 0 = unlimited; default: 0
  # unix-socket-enabled = false                    # whether to serve HTTP over a unix domain socket; default: false
  # bind-socket = "/var/run/influxdb.sock"         # unix domain socket path; default: "/var/run/influxdb.sock"
  # max-body-size = 25000000                       # maximum size of a client request body in bytes; 0 = unlimited; default: 25000000
  # max-concurrent-write-limit = 0                 # maximum number of writes processed concurrently; 0 = unlimited; default: 0
  # max-enqueued-write-limit = 0                   # maximum number of writes queued for processing; 0 = unlimited; default: 0
  # enqueued-write-timeout = 0                     # maximum time a write may wait in the queue; 0 (or max-concurrent-write-limit = 0) = unlimited; default: 0

### [ifql]
[ifql]
  # enabled = true                                 # whether the module is enabled; default: true
  # log-enabled = true                             # whether logging is enabled; default: true
  # bind-address = ":8082"                         # bind address used by the ifql RPC service; default: ":8082"

### [logging]
[logging]
  # format = "auto"                                # log format; default: "auto"
  # level = "info"                                 # log level; default: "info"
  # suppress-logo = false                          # suppress the logo printed when the program starts

### [subscriber]
[subscriber]
  # enabled = true                                 # whether the module is enabled; default: true
  # http-timeout = "30s"                           # HTTP timeout; default: "30s"
  # insecure-skip-verify = false                   # whether insecure certificates are allowed
  # ca-certs = ""                                  # path to CA certificates
  # write-concurrency = 40                         # number of concurrent writers; default: 40
  # write-buffer-size = 1000                       # write buffer size; default: 1000

### [[graphite]]
[[graphite]]
  # enabled = false                                # whether the module is enabled; default: false
  # database = "graphite"                          # database name; default: "graphite"
  # retention-policy = ""                          # retention policy; no default
  # bind-address = ":2003"                         # bind address; default: ":2003"
  # protocol = "tcp"                               # protocol; default: "tcp"
  # consistency-level = "one"                      # consistency level; default: "one"
  # batch-size = 5000                              # batch size; default: 5000
  # batch-pending = 10                             # number of batches that may be pending in memory; default: 10
  # batch-timeout = "1s"                           # flush timeout; default: "1s"
  # udp-read-buffer = 0                            # UDP read buffer size; 0 uses the OS default; fails if set above the OS maximum; default: 0
  # separator = "."                                # separator joining multiple measurement parts; default: "."
  # tags = ["region=us-east", "zone=1c"]           # default tags added to all metrics; can be overridden at the template level or by tags extracted from metrics
  # templates = [
  #   "*.app env.service.resource.measurement",
  #   # Default template
  #   "server.*",
  # ]

### [collectd]
[[collectd]]
  # enabled = false                                # whether the module is enabled; default: false
  # bind-address = ":25826"                        # bind address; default: ":25826"
  # database = "collectd"                          # database name; default: "collectd"
  # retention-policy = ""                          # retention policy; no default
  # typesdb = "/usr/local/share/collectd"          # types.db path; default: "/usr/share/collectd/types.db"
  # security-level = "none"                        # security level
  # auth-file = "/etc/collectd/auth_file"
  # batch-size = 5000                              # number of points fetched from the buffer per batch; default: 5000
  # batch-pending = 10                             # number of batches that may be pending in memory; default: 10
  # batch-timeout = "10s"                          # flush at least this often even if the buffer limit is not reached; default: "10s"
  # read-buffer = 0                                # UDP read buffer size; 0 uses the OS default; fails if set above the OS maximum; default: 0
  # parse-multivalue-plugin = "split"              # "split" stores multi-value plugin data in separate measurements, "join" stores it as a single record; default: "split"

### [opentsdb]
[[opentsdb]]
  # enabled = false                                # whether the module is enabled; default: false
  # bind-address = ":4242"                         # bind address; default: ":4242"
  # database = "opentsdb"                          # database name; default: "opentsdb"
  # retention-policy = ""                          # retention policy; no default
  # consistency-level = "one"                      # consistency level; default: "one"
  # tls-enabled = false                            # whether TLS is enabled; default: false
  # certificate = "/etc/ssl/influxdb.pem"          # certificate path; default: "/etc/ssl/influxdb.pem"
  # log-point-errors = true                        # whether malformed points are logged; default: true
  # batch-size = 1000                              # number of points fetched from the buffer per batch; default: 1000
  # batch-pending = 5                              # number of batches that may be pending in memory; default: 5
  # batch-timeout = "1s"                           # flush at least this often even if the buffer limit is not reached; default: "1s"

### [[udp]]
[[udp]]
  # enabled = false                                # whether the module is enabled; default: false
  # bind-address = ":8089"                         # bind address; default: ":8089"
  # database = "udp"                               # database name; default: "udp"
  # retention-policy = ""                          # retention policy; no default
  # precision = ""                                 # timestamp precision of received points ("" or "n", "u", "ms", "s", "m", "h")
  # batch-size = 5000                              # number of points fetched from the buffer per batch; default: 5000
  # batch-pending = 10                             # number of batches that may be pending in memory; default: 10
  # batch-timeout = "1s"                           # flush at least this often even if the buffer limit is not reached; default: "1s"
  # read-buffer = 0                                # UDP read buffer size; 0 uses the OS default; fails if set above the OS maximum; default: 0

### [continuous_queries]
[continuous_queries]
  # enabled = true                                 # whether the module is enabled; default: true
  # log-enabled = true                             # whether logging is enabled; default: true
  # query-stats-enabled = false                    # whether query statistics are recorded to the self-monitoring data store
  # run-interval = "1s"                            # interval at which continuous queries are checked; default: "1s"

### [tls]
[tls]
  # ciphers = [
  #   "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
  #   "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
  # ]
  # min-version = "tls1.2"
  # max-version = "tls1.2"
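Of all the parameters above, this guide only changes the backup/restore bind-address and the three data paths. A quick sanity-check sketch, assuming the edits were made in place in the copied file at /data/influxdb/conf/influxdb.conf:

# Show the uncommented settings this guide cares about (bind-address, dir, wal-dir):
grep -nE '^\s*(bind-address|dir|wal-dir)\s*=' /data/influxdb/conf/influxdb.conf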

1.5. Actual configuration (mainly paths and port)

### Welcome to the InfluxDB configuration file.

# The values in this file override the default values used by the system if
# a config option is not specified. The commented out lines are the configuration
# field and the default value used. Uncommenting a line and changing the value
# will change the value used at runtime when the process is restarted.

# Once every 24 hours InfluxDB will report usage data to usage.influxdata.com
# The data includes a random ID, os, arch, version, the number of series and other
# usage data. No data from user databases is ever transmitted.
# Change this option to true to disable reporting.
# reporting-disabled = false

# Bind address to use for the RPC service for backup and restore.
# bind-address = ":8088"
bind-address = ":8085"

###
### [meta]
###
### Controls the parameters for the Raft consensus group that stores metadata
### about the InfluxDB cluster.
###

[meta]
  # Where the metadata/raft database is stored
  dir = "/data/influxdb/meta"

  # Automatically create a default retention policy when creating a database.
  # retention-autocreate = true

  # If log messages are printed for the meta service
  # logging-enabled = true

###
### [data]
###
### Controls where the actual shard data for InfluxDB lives and how it is
### flushed from the WAL. "dir" may need to be changed to a suitable place
### for your system, but the WAL settings are an advanced configuration. The
### defaults should work for most systems.
###

[data]
  # The directory where the TSM storage engine stores TSM files.
  dir = "/data/influxdb/data"

  # The directory where the TSM storage engine stores WAL files.
  wal-dir = "/data/influxdb/wal"

  # The amount of time that a write will wait before fsyncing. A value of 0s fsyncs every write to the WAL.
  # wal-fsync-delay = "0s"

  # The type of shard index to use for new shards ("inmem" or "tsi1").
  # index-version = "inmem"

  # Trace logging provides more verbose output around the tsm engine.
  # trace-logging-enabled = false

  # Whether queries should be logged before execution.
  # query-log-enabled = true

  # Validates incoming writes to ensure keys only have valid unicode characters.
  # validate-keys = false

  # The maximum size a shard's cache can reach before it starts rejecting writes.
  # cache-max-memory-size = "1g"

  # The size at which the engine will snapshot the cache and write it to a TSM file, freeing up memory.
  # cache-snapshot-memory-size = "25m"

  # The length of time at which the engine will snapshot the cache and write it to
  # a new TSM file if the shard hasn't received writes or deletes.
  # cache-snapshot-write-cold-duration = "10m"

  # The duration at which the engine will compact all TSM files in a shard if it
  # hasn't received a write or delete.
  # compact-full-write-cold-duration = "4h"

  # The maximum number of concurrent full and level compactions. A value of 0
  # results in 50% of runtime.GOMAXPROCS(0) used at runtime.
  # max-concurrent-compactions = 0

  # The rate limit, in bytes per second, that TSM compactions may write to disk.
  # compact-throughput = "48m"
  # compact-throughput-burst = "48m"

  # If true, the mmap advise value MADV_WILLNEED will be provided to the kernel for TSM files.
  # tsm-use-madv-willneed = false

  # The maximum series allowed per database before writes are dropped. 0 disables the limit.
  # max-series-per-database = 1000000

  # The maximum number of tag values per tag that are allowed before writes are dropped. 0 disables the limit.
  # max-values-per-tag = 100000

  # The threshold, in bytes, at which an index write-ahead log file is compacted into an index file (tsi1).
  # max-index-log-file-size = "1m"

  # The size of the internal cache used in the TSI index to store previously calculated series results.
  series-id-set-cache-size = 100

###
### [coordinator]
###
### Controls the clustering service configuration.
###

[coordinator]
  # The default time a write request will wait until a "timeout" error is returned to the caller.
  # write-timeout = "10s"

  # The maximum number of concurrent queries allowed to be executing at one time. 0 disables the limit.
  # max-concurrent-queries = 0

  # The maximum time a query is allowed to execute before being killed by the system. 0 disables the limit.
  # query-timeout = "0s"

  # The time threshold when a query will be logged as a slow query. 0 disables slow query logging.
  # log-queries-after = "0s"

  # The maximum number of points a SELECT can process. 0 makes the point count unlimited.
  # max-select-point = 0

  # The maximum number of series a SELECT can run. 0 makes the series count unlimited.
  # max-select-series = 0

  # The maximum number of group by time buckets a SELECT can create. 0 makes the number of buckets unlimited.
  # max-select-buckets = 0

###
### [retention]
###
### Controls the enforcement of retention policies for evicting old data.
###

[retention]
  # Determines whether retention policy enforcement is enabled.
  # enabled = true

  # The interval of time when retention policy enforcement checks run.
  # check-interval = "30m"

###
### [shard-precreation]
###
### Controls the precreation of shards, so they are available before data arrives.
### Only shards that, after creation, will have both a start- and end-time in the
### future, will ever be created. Shards are never precreated that would be wholly
### or partially in the past.

[shard-precreation]
  # Determines whether shard pre-creation service is enabled.
  # enabled = true

  # The interval of time when the check to pre-create new shards runs.
  # check-interval = "10m"

  # The default period ahead of the endtime of a shard group that its successor group is created.
  # advance-period = "30m"

###
### [monitor]
###
### Controls the system self-monitoring, statistics and diagnostics.
###
### The internal database for monitoring data is created automatically if
### it does not already exist. The target retention within this database
### is called 'monitor' and is also created with a retention period of 7 days
### and a replication factor of 1, if it does not exist. In all cases the
### this retention policy is configured as the default for the database.
###

[monitor]
  # Whether to record statistics internally.
  # store-enabled = true

  # The destination database for recorded statistics.
  # store-database = "_internal"

  # The interval at which to record statistics.
  # store-interval = "10s"

###
### [http]
###
### Controls how the HTTP endpoints are configured. These are the primary
### mechanism for getting data into and out of InfluxDB.
###

[http]
  # Determines whether the HTTP endpoint is enabled.
  # enabled = true

  # Determines whether the Flux query endpoint is enabled.
  # flux-enabled = false

  # Determines whether Flux query logging is enabled.
  # flux-log-enabled = false

  # The bind address used by the HTTP service.
  # bind-address = ":8086"

  # Determines whether user authentication is enabled over HTTP/HTTPS.
  # auth-enabled = false

  # The default realm sent back when issuing a basic auth challenge.
  # realm = "InfluxDB"

  # Determines whether HTTP request logging is enabled.
  # log-enabled = true

  # Determines whether the HTTP write request logs should be suppressed when the log is enabled.
  # suppress-write-log = false

  # When HTTP request logging is enabled, this option specifies the path where log entries
  # should be written. If unspecified, the default is to write to stderr. If influxd is unable
  # to access the specified path, it will log an error and fall back to stderr.
  # access-log-path = ""

  # Filters which requests should be logged (patterns such as "5xx"). The default is no filters,
  # which causes every request to be printed.
  # access-log-status-filters = []

  # Determines whether detailed write logging is enabled.
  # write-tracing = false

  # Determines whether the pprof endpoint is enabled.
  # pprof-enabled = true

  # Enables authentication on pprof endpoints.
  # pprof-auth-enabled = false

  # Enables a pprof endpoint that binds to localhost:6060 immediately on startup (debugging only).
  # debug-pprof-enabled = false

  # Enables authentication on the /ping, /metrics, and deprecated /status endpoints.
  # ping-auth-enabled = false

  # Determines whether HTTPS is enabled.
  # https-enabled = false

  # The SSL certificate to use when HTTPS is enabled.
  # https-certificate = "/etc/ssl/influxdb.pem"

  # Use a separate private key location.
  # https-private-key = ""

  # The JWT auth shared secret to validate requests using JSON web tokens.
  # shared-secret = ""

  # The default chunk size for result sets that should be chunked.
  # max-row-limit = 0

  # The maximum number of HTTP connections that may be open at once. 0 disables the limit.
  # max-connection-limit = 0

  # Enable the HTTP service over a unix domain socket.
  # unix-socket-enabled = false

  # The path of the unix domain socket.
  # bind-socket = "/var/run/influxdb.sock"

  # The maximum size of a client request body, in bytes. 0 disables the limit.
  # max-body-size = 25000000

  # The maximum number of writes processed concurrently. 0 disables the limit.
  # max-concurrent-write-limit = 0

  # The maximum number of writes queued for processing. 0 disables the limit.
  # max-enqueued-write-limit = 0

  # The maximum duration for a write to wait in the queue to be processed.
  # Setting this to 0 or setting max-concurrent-write-limit to 0 disables the limit.
  # enqueued-write-timeout = 0

###
### [logging]
###
### Controls how the logger emits logs to the output.
###

[logging]
  # Determines which log encoder to use for logs: auto, logfmt, or json.
  # format = "auto"

  # Determines which level of logs will be emitted: error, warn, info, or debug.
  # level = "info"

  # Suppresses the logo output that is printed when the program is started.
  # suppress-logo = false

###
### [subscriber]
###
### Controls the subscriptions, which can be used to fork a copy of all data
### received by the InfluxDB host.
###

[subscriber]
  # Determines whether the subscriber service is enabled.
  # enabled = true

  # The default timeout for HTTP writes to subscribers.
  # http-timeout = "30s"

  # Allows insecure HTTPS connections to subscribers (useful when testing with self-signed certificates).
  # insecure-skip-verify = false

  # The path to the PEM encoded CA certs file. If empty, the default system certs will be used.
  # ca-certs = ""

  # The number of writer goroutines processing the write channel.
  # write-concurrency = 40

  # The number of in-flight writes buffered in the write channel.
  # write-buffer-size = 1000

###
### [[graphite]]
###
### Controls one or many listeners for Graphite data.
###

[[graphite]]
  # Determines whether the graphite endpoint is enabled.
  # enabled = false
  # database = "graphite"
  # retention-policy = ""
  # bind-address = ":2003"
  # protocol = "tcp"
  # consistency-level = "one"

  # Batching buffers points in memory if many are coming in; keep it enabled to avoid
  # dropped metrics or poor performance.
  # Flush if this many points get buffered.
  # batch-size = 5000

  # Number of batches that may be pending in memory.
  # batch-pending = 10

  # Flush at least this often even if the buffer limit hasn't been hit.
  # batch-timeout = "1s"

  # UDP read buffer size, 0 means OS default. The UDP listener will fail if set above the OS max.
  # udp-read-buffer = 0

  # This string joins multiple matching 'measurement' values, providing more control over the final measurement name.
  # separator = "."

  # Default tags that will be added to all metrics; these can be overridden at the template level
  # or by tags extracted from the metric.
  # tags = ["region=us-east", "zone=1c"]

  # Each template line requires a template pattern, with an optional filter before it and optional
  # extra tags after it. There can be only one default template.
  # templates = [
  #   "*.app env.service.resource.measurement",
  #   # Default template
  #   "server.*",
  # ]

###
### [collectd]
###
### Controls one or many listeners for collectd data.
###

[[collectd]]
  # enabled = false
  # bind-address = ":25826"
  # database = "collectd"
  # retention-policy = ""

  # The collectd service supports either scanning a directory for multiple types db files,
  # or specifying a single db file.
  # typesdb = "/usr/local/share/collectd"

  # security-level = "none"
  # auth-file = "/etc/collectd/auth_file"

  # Flush if this many points get buffered.
  # batch-size = 5000

  # Number of batches that may be pending in memory.
  # batch-pending = 10

  # Flush at least this often even if the buffer limit hasn't been hit.
  # batch-timeout = "10s"

  # UDP read buffer size, 0 means OS default. The UDP listener will fail if set above the OS max.
  # read-buffer = 0

  # Multi-value plugins can be handled two ways: "split" parses and stores the data in separate
  # measurements, "join" stores it as a single multi-value measurement. "split" is the default.
  # parse-multivalue-plugin = "split"
###
### [opentsdb]
###
### Controls one or many listeners for OpenTSDB data.
###

[[opentsdb]]
  # enabled = false
  # bind-address = ":4242"
  # database = "opentsdb"
  # retention-policy = ""
  # consistency-level = "one"
  # tls-enabled = false
  # certificate= "/etc/ssl/influxdb.pem"

  # Log an error for every malformed point.
  # log-point-errors = true

  # Only points received over the telnet protocol undergo batching.
  # Flush if this many points get buffered.
  # batch-size = 1000

  # Number of batches that may be pending in memory.
  # batch-pending = 5

  # Flush at least this often even if the buffer limit hasn't been hit.
  # batch-timeout = "1s"

###
### [[udp]]
###
### Controls the listeners for InfluxDB line protocol data via UDP.
###

[[udp]]
  # enabled = false
  # bind-address = ":8089"
  # database = "udp"
  # retention-policy = ""

  # InfluxDB precision for timestamps on received points ("" or "n", "u", "ms", "s", "m", "h")
  # precision = ""

  # Flush if this many points get buffered.
  # batch-size = 5000

  # Number of batches that may be pending in memory.
  # batch-pending = 10

  # Flush at least this often even if the buffer limit hasn't been hit.
  # batch-timeout = "1s"

  # UDP read buffer size, 0 means OS default. The UDP listener will fail if set above the OS max.
  # read-buffer = 0

###
### [continuous_queries]
###
### Controls how continuous queries are run within InfluxDB.
###

[continuous_queries]
  # Determines whether the continuous query service is enabled.
  # enabled = true

  # Controls whether queries are logged when executed by the CQ service.
  # log-enabled = true

  # Controls whether queries are logged to the self-monitoring data store.
  # query-stats-enabled = false

  # Interval for how often continuous queries will be checked to see if they need to run.
  # run-interval = "1s"

###
### [tls]
###
### Global configuration settings for TLS in InfluxDB.
###

[tls]
  # Determines the available set of cipher suites. See https://golang.org/pkg/crypto/tls/#pkg-constants
  # for a list of available ciphers, which depends on the version of Go used to build InfluxDB.
  # If not specified, uses the default settings from Go's crypto/tls package.
  # ciphers = [
  #   "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
  #   "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305",
  # ]

  # Minimum version of the tls protocol that will be negotiated. If not specified, uses the
  # default settings from Go's crypto/tls package.
  # min-version = "tls1.2"

  # Maximum version of the tls protocol that will be negotiated. If not specified, uses the
  # default settings from Go's crypto/tls package.
  # max-version = "tls1.3"

The setting that changes the port (the backup/restore RPC bind address) is:

bind-address = "127.0.0.1:8085"
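Because this bind-address is the RPC endpoint used by backup and restore, the relocated port has to be passed explicitly when taking backups. A sketch only; the target directory /data/influxdb/backup is an arbitrary choice for illustration:

# Portable backup of all databases over the relocated RPC port 8085:
/root/installed/influxdb/usr/bin/influxd backup -portable -host 127.0.0.1:8085 /data/influxdb/backup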

1.6. Start the server

cd /root/installed/influxdb/usr/bin
nohup ./influxd -config /data/influxdb/conf/influxdb.conf  > influxdb.log 2>&1 &
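Before reading the log, it can be worth confirming that the daemon actually stayed up; a minimal sketch (ss is part of iproute2, and 8086 is the default HTTP port that appears in the log below):

# The bracket trick keeps grep from matching its own process:
ps -ef | grep '[i]nfluxd'
# Confirm the HTTP (8086) and backup RPC (8085) listeners are bound:
ss -tlnp | grep -E ':(8085|8086)'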

The startup log looks like this:

[root@node1 bin]# pwd
/root/installed/influxdb/usr/bin
[root@node1 bin]# ls
influx  influxd  influxdb.log  influx_inspect  influx_stress  influx_tsm
[root@node1 bin]# tail -f influxdb.log
ts=2021-08-10T09:40:34.625789Z lvl=info msg="Starting precreation service" log_id=0Vsro8Hl000 service=shard-precreation check_interval=10m advance_period=30m
ts=2021-08-10T09:40:34.626089Z lvl=info msg="Starting snapshot service" log_id=0Vsro8Hl000 service=snapshot
ts=2021-08-10T09:40:34.627613Z lvl=info msg="Starting continuous query service" log_id=0Vsro8Hl000 service=continuous_querier
ts=2021-08-10T09:40:34.626355Z lvl=info msg="Storing statistics" log_id=0Vsro8Hl000 service=monitor db_instance=_internal db_rp=monitor interval=10s
ts=2021-08-10T09:40:34.627886Z lvl=info msg="Starting HTTP service" log_id=0Vsro8Hl000 service=httpd authentication=false
ts=2021-08-10T09:40:34.628627Z lvl=info msg="opened HTTP access log" log_id=0Vsro8Hl000 service=httpd path=stderr
ts=2021-08-10T09:40:34.628868Z lvl=info msg="Listening on HTTP" log_id=0Vsro8Hl000 service=httpd addr=[::]:8086 https=false
ts=2021-08-10T09:40:34.629726Z lvl=info msg="Starting retention policy enforcement service" log_id=0Vsro8Hl000 service=retention check_interval=30m
ts=2021-08-10T09:40:34.629991Z lvl=info msg="Listening for signals" log_id=0Vsro8Hl000
ts=2021-08-10T09:40:34.630847Z lvl=info msg="Sending usage statistics to usage.influxdata.com" log_id=0Vsro8Hl000
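The log above shows the HTTP API listening on the default port 8086. A quick health-check sketch against the /ping endpoint (an HTTP 204 No Content response means the server is up):

curl -i http://localhost:8086/ping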

1.7. Start the InfluxDB console (the first start can be slow; be patient)

[root@node1 bin]# ./influx -port 8085
Connected to http://localhost:8085 version 1.8.0
InfluxDB shell version: 1.8.0
> create user "tuzq" with password '123456' with all privileges
> show users
user admin
---- -----
tuzq true
>
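The user above is created while authentication is still disabled (the startup log shows authentication=false). If auth-enabled = true is later set under [http] in influxdb.conf and influxd is restarted, the console has to be opened with credentials; a sketch reusing the port and the user created above:

./influx -port 8085 -username tuzq -password '123456'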

2. Usage

2.1. Common commands

[root@node1 bin]# ./influx -help
Usage of influx:
  -version
        Display the version and exit.
  -path-prefix 'url path'
        Path that follows the host and port
  -host 'host name'
        Host to connect to.
  -port 'port #'
        Port to connect to.
  -socket 'unix domain socket'
        Unix socket to connect to.
  -database 'database name'
        Database to connect to the server.
  -password 'password'
        Password to connect to the server.  Leaving blank will prompt for password (--password '').
  -username 'username'
        Username to connect to the server.
  -ssl
        Use https for requests.
  -unsafeSsl
        Set this when connecting to the cluster using https and not use SSL verification.
  -execute 'command'
        Execute command and quit.
  -type 'influxql|flux'
        Type specifies the query language for executing commands or when invoking the REPL.
  -format 'json|csv|column'
        Format specifies the format of the server responses:  json, csv, or column.
  -precision 'rfc3339|h|m|s|ms|u|ns'
        Precision specifies the format of the timestamp:  rfc3339, h, m, s, ms, u or ns.
  -consistency 'any|one|quorum|all'
        Set write consistency level: any, one, quorum, or all
  -pretty
        Turns on pretty print for the json format.
  -import
        Import a previous database export from file
  -pps
        How many points per second the import will allow.  By default it is zero and will not throttle importing.
  -path
        Path to file to import
  -compressed
        Set to true if the import file is compressed

Examples:

    # Use influx in a non-interactive mode to query the database "metrics" and pretty print json:
    $ influx -database 'metrics' -execute 'select * from cpu' -format 'json' -pretty

    # Connect to a specific database on startup and set database context:
    $ influx -database 'metrics' -host 'localhost' -port '8086'
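The -execute flag from the help output above is handy for scripting. A sketch mirroring this installation's port; 'mydb' is a placeholder database name:

# Run a single query and print the result as CSV:
./influx -port 8085 -execute 'SHOW DATABASES' -format 'csv'
# Query a specific database and pretty-print JSON:
./influx -port 8085 -database 'mydb' -execute 'SELECT * FROM "cpu"' -format 'json' -pretty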

2.2. Using the influx command

influx is the CLI client used to connect to InfluxDB. Once InfluxDB is installed locally and influxd is running, the influx command can be used directly. InfluxDB's HTTP service runs on port 8086 by default, and the influx CLI connects to localhost:8086 by default; the -host and -port options connect to another machine or port (this guide passes -port 8085, as in the console session above).

[root@node1 bin]# ./influx -port 8085
Connected to http://localhost:8085 version 1.8.0
InfluxDB shell version: 1.8.0
>

2.3. InfluxQL examples

After connecting with the influx command, you can query and write data with SQL-like (InfluxQL) statements. The table below lists common statements; a sketched console session follows the table.

Command                                          Description
CREATE DATABASE database_name                    Create a database
SHOW DATABASES                                   List all databases
USE database_name                                Switch to the given database
INSERT cpu,host=host_name,region=cn value=66     Write a point into the cpu measurement with tags host=host_name, region=cn and field value=66
SELECT "host", "region", "value" FROM "cpu"      Query the listed columns from the cpu measurement
SELECT * FROM "cpu"                              Query all data from the cpu measurement
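A sketch of how the statements above might be used in sequence inside the influx console; mydb, the tag values, and the field value are placeholders:

[root@node1 bin]# ./influx -port 8085
> CREATE DATABASE mydb
> SHOW DATABASES
> USE mydb
Using database mydb
> INSERT cpu,host=host_name,region=cn value=66
> SELECT "host", "region", "value" FROM "cpu"
> SELECT * FROM "cpu"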

2.4. InfluxDB data format

InfluxDB is a time-series database: it stores the state of metrics at a series of points in time. Each point contains a timestamp, a measurement name, at least one key-value pair called a field, and zero or more tags (e.g. host=bigdata01).

In InfluxDB, a measurement can be thought of as a table in a relational database, with tags and fields as its columns; tags are indexed, fields are not. Unlike a relational database, no schema has to be defined up front, and points without a value for a field are simply not stored. The line protocol format is:

<measurement>[,<tag-key>=<tag-value>...] <field-key>=<field-value>[,<field2-key>=<field2-value>...] [unix-nano-timestamp]

Examples:

cpu,host=bigdata01,region=cn value=66
payment,device=mobile,product=Notepad,method=credit billed=88,licenses=2i 1434067469156329323
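The same line protocol can also be written and queried directly over the HTTP API (port 8086, as shown in the startup log); a sketch, with 'mydb' as a placeholder database that must already exist:

# Write one point using line protocol:
curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'cpu,host=bigdata01,region=cn value=66'
# Read it back:
curl -G 'http://localhost:8086/query?db=mydb' --data-urlencode 'q=SELECT * FROM "cpu"'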

3. InfluxDB client tools

Download InfluxDBStudio-0.2.0, extract it, run InfluxDBStudio.exe, and fill in the connection settings (host, port, and credentials if authentication is enabled).
