ELK Log Analysis Platform

  • Logstash data collection
    • 1. Logstash Overview
    • 2. Installing Logstash
    • 3. Standard Input to Standard Output
    • 4. Standard Input to a File
    • 5. Standard Input to an Elasticsearch Host
    • 6. File Input to an Elasticsearch Host
    • 7. sincedb
    • 8. Remote Log Input
    • 9. Multiline Filter Plugin
    • 10. grok Filtering
    • 11. Apache Access-Log Filtering in Practice

Logstash Data Collection

1. Logstash Overview

Logstash is an open-source server-side data processing pipeline.

Logstash has more than 200 plugins. It can ingest data from multiple sources simultaneously, transform it, and ship it to your favorite "stash" (most commonly Elasticsearch).

A Logstash pipeline has two required elements, input and output, plus one optional element, the filter.

Input: ingest data of all shapes, sizes, and sources

  • Logstash supports a wide variety of inputs and can capture events from many common sources at once.
  • It can continuously stream data from your logs, metrics, web applications, data stores, and various AWS services.

Filter: parse and transform data in real time

  • As data travels from source to stash, Logstash filters parse each event, identify named fields to build structure, and transform them into a common format for easier, faster analysis and business value.

    • Derive structure from unstructured data with grok
    • Decipher geographic coordinates from IP addresses
    • Anonymize PII data and exclude sensitive fields entirely
    • Simplify overall processing, independent of data source, format, or schema

Output: choose your stash, ship your data

  • Elasticsearch is the preferred output, opening up endless possibilities for search and analytics, but it is not the only choice.
  • Logstash offers many output options, so you can route data wherever you want and flexibly unlock a range of downstream use cases.

2. Installing Logstash

Install the JDK and Logstash RPMs:

[root@server6 ~]# ls
anaconda-ks.cfg  jdk-8u171-linux-x64.rpm  logstash-7.6.1.rpm
[root@server6 ~]# rpm -ivh jdk-8u171-linux-x64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:jdk1.8-2000:1.8.0_171-fcs        ################################# [100%]
Unpacking JAR files...
	tools.jar...
	plugin.jar...
	javaws.jar...
	deploy.jar...
	rt.jar...
	jsse.jar...
	charsets.jar...
	localedata.jar...
[root@server6 ~]# rpm -ivh logstash-7.6.1.rpm
warning: logstash-7.6.1.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:logstash-1:7.6.1-1               ################################# [100%]
Using provided startup.options file: /etc/logstash/startup.options
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/pleaserun-0.0.30/lib/pleaserun/platform/base.rb:112: warning: constant ::Fixnum is deprecated
Successfully created system startup script for Logstash

Logstash commands:

[root@server6 ~]# cd /usr/share/logstash/
[root@server6 logstash]# ls
bin           Gemfile       LICENSE.txt               modules     vendor
CONTRIBUTORS  Gemfile.lock  logstash-core             NOTICE.TXT  x-pack
data          lib           logstash-core-plugin-api  tools
[root@server6 logstash]# cd bin/
[root@server6 bin]# ls
benchmark.sh         logstash               logstash.lib.sh      pqrepair
cpdump               logstash.bat           logstash-plugin      ruby
dependencies-report  logstash-keystore      logstash-plugin.bat  setup.bat
ingest-convert.sh    logstash-keystore.bat  pqcheck              system-install
[root@server6 bin]# pwd
/usr/share/logstash/bin
[root@server6 bin]# /usr/share/logstash/bin/logstash
benchmark.sh           logstash-keystore      pqrepair
cpdump                 logstash-keystore.bat  ruby
dependencies-report    logstash.lib.sh        setup.bat
ingest-convert.sh      logstash-plugin        system-install
logstash               logstash-plugin.bat
logstash.bat           pqcheck

3. Standard Input to Standard Output

 /usr/share/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'
[root@server6 bin]# /usr/share/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2021-08-13 22:18:22.870 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ] 2021-08-13 22:18:22.898 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[WARN ] 2021-08-13 22:18:23.483 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2021-08-13 22:18:23.500 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.6.1"}
[INFO ] 2021-08-13 22:18:23.542 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"3ef2b8c0-2d53-4d04-b99a-e82f2699855f", :path=>"/usr/share/logstash/data/uuid"}
[INFO ] 2021-08-13 22:18:25.724 [Converge PipelineAction::Create<main>] Reflections - Reflections took 55 ms to scan 1 urls, producing 20 keys and 40 values
[WARN ] 2021-08-13 22:18:27.485 [[main]-pipeline-manager] LazyDelegatingGauge - A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[INFO ] 2021-08-13 22:18:27.487 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["config string"], :thread=>"#<Thread:0x5b034ce4 run>"}
[INFO ] 2021-08-13 22:18:28.949 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2021-08-13 22:18:28.989 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
The stdin plugin is now waiting for input:
[INFO ] 2021-08-13 22:18:29.573 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
hello world
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{"@version" => "1","host" => "server6","message" => "hello world","@timestamp" => 2021-08-14T02:18:47.164Z
}
lalal
{"@version" => "1","host" => "server6","message" => "lalal","@timestamp" => 2021-08-14T02:18:51.972Z
}
^C[WARN ] 2021-08-13 22:18:55.198 [SIGINT handler] runner - SIGINT received. Shutting down.
[INFO ] 2021-08-13 22:18:55.457 [Converge PipelineAction::Stop<main>] javapipeline - Pipeline terminated {"pipeline.id"=>"main"}
[INFO ] 2021-08-13 22:18:55.555 [LogStash::Runner] runner - Logstash shut down.

4. Standard Input to a File

Run a config file that writes to /tmp/testfile with a custom output format:

[root@server6 bin]# cat test.conf
input {
  stdin {}
}
output {
  file {
    path => "/tmp/testfile"
    codec => line { format => "custom format: %{message}" }
  }
}
[root@server6 bin]# /usr/share/logstash/bin/logstash -f test.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2021-08-13 22:20:32.191 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2021-08-13 22:20:32.212 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.6.1"}
[INFO ] 2021-08-13 22:20:34.637 [Converge PipelineAction::Create<main>] Reflections - Reflections took 69 ms to scan 1 urls, producing 20 keys and 40 values
[WARN ] 2021-08-13 22:20:35.445 [[main]-pipeline-manager] LazyDelegatingGauge - A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[INFO ] 2021-08-13 22:20:35.467 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/usr/share/logstash/bin/test.conf"], :thread=>"#<Thread:0x1126f599 run>"}
[INFO ] 2021-08-13 22:20:36.655 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2021-08-13 22:20:36.716 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
The stdin plugin is now waiting for input:
[INFO ] 2021-08-13 22:20:37.080 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
redhat
[INFO ] 2021-08-13 22:20:46.481 [[main]>worker0] file - Opening file {:path=>"/tmp/testfile"}
zhangyi
[INFO ] 2021-08-13 22:21:05.421 [[main]>worker0] file - Closing file /tmp/testfile
^C[WARN ] 2021-08-13 22:21:06.948 [SIGINT handler] runner - SIGINT received. Shutting down.
[INFO ] 2021-08-13 22:21:07.192 [Converge PipelineAction::Stop<main>] javapipeline - Pipeline terminated {"pipeline.id"=>"main"}
[INFO ] 2021-08-13 22:21:07.476 [LogStash::Runner] runner - Logstash shut down.

Check the output file:

[root@server6 bin]# cat /tmp/testfile
custom format: redhat
custom format: zhangyi
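The `format` option above interpolates event fields into each output line. A minimal Python sketch of that `%{field}` substitution (illustrative only; Logstash implements this internally):

```python
import re

def render(fmt, event):
    # Replace each %{field} reference with the matching event value
    # (empty string if the field is missing), as the line codec does.
    return re.sub(r"%\{(\w+)\}", lambda m: str(event.get(m.group(1), "")), fmt)

print(render("custom format: %{message}", {"message": "redhat"}))
```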

5. Standard Input to an Elasticsearch Host

Config file contents:

[root@server6 bin]# cat es.conf
input {
  stdin {}
}
output {
  stdout {}
  elasticsearch {
    hosts => ["172.25.3.3:9200"]        # target Elasticsearch host and port
    index => "logstash-%{+yyyy.MM.dd}"  # custom index name
  }
}

Run the config:

[root@server6 bin]# /usr/share/logstash/bin/logstash -f es.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2021-08-13 22:23:02.202 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2021-08-13 22:23:02.210 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.6.1"}
[INFO ] 2021-08-13 22:23:04.610 [Converge PipelineAction::Create<main>] Reflections - Reflections took 66 ms to scan 1 urls, producing 20 keys and 40 values
[INFO ] 2021-08-13 22:23:06.522 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://172.25.3.3:9200/]}}
[WARN ] 2021-08-13 22:23:06.842 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://172.25.3.3:9200/"}
[INFO ] 2021-08-13 22:23:07.153 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2021-08-13 22:23:07.155 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2021-08-13 22:23:07.328 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//172.25.3.3:9200"]}
[INFO ] 2021-08-13 22:23:07.459 [Ruby-0-Thread-6: :1] elasticsearch - Using default mapping template
[WARN ] 2021-08-13 22:23:07.485 [[main]-pipeline-manager] LazyDelegatingGauge - A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[INFO ] 2021-08-13 22:23:07.512 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/usr/share/logstash/bin/es.conf"], :thread=>"#<Thread:0x798305c5 run>"}
[INFO ] 2021-08-13 22:23:07.569 [Ruby-0-Thread-6: :1] elasticsearch - Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[INFO ] 2021-08-13 22:23:07.637 [Ruby-0-Thread-6: :1] elasticsearch - Installing elasticsearch template to _template/logstash
[INFO ] 2021-08-13 22:23:09.064 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2021-08-13 22:23:09.122 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
The stdin plugin is now waiting for input:
[INFO ] 2021-08-13 22:23:09.518 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
hello
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{"host" => "server6","@version" => "1","message" => "hello","@timestamp" => 2021-08-14T02:23:14.157Z
}
zhangyi
{"host" => "server6","@version" => "1","message" => "zhangyi","@timestamp" => 2021-08-14T02:23:18.376Z
}
^C[WARN ] 2021-08-13 22:24:26.428 [SIGINT handler] runner - SIGINT received. Shutting down.
[INFO ] 2021-08-13 22:24:26.707 [Converge PipelineAction::Stop<main>] javapipeline - Pipeline terminated {"pipeline.id"=>"main"}
[INFO ] 2021-08-13 22:24:27.415 [LogStash::Runner] runner - Logstash shut down.

View the output in the elasticsearch-head plugin.

6. File Input to an Elasticsearch Host

Read /var/log/messages as the input file and ship it to 172.25.3.3:9200:

/usr/share/logstash/bin/logstash -f  /etc/logstash/conf.d/file-es.conf
[root@server6 conf.d]# cat file-es.conf
input {
  file {
    path => "/var/log/messages"
    start_position => "beginning"
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["172.25.3.3:9200"]
    index => "logstash-%{+yyyy.MM.dd}"
  }
}

Check the output.

7. sincedb

How does Logstash tell apart devices, file names, and different versions of the same file?

  • Logstash records its read progress in a sincedb file.
  • To re-read a file from the beginning, delete the sincedb file.
# find / -name .sincedb*
/usr/share/logstash/data/plugins/inputs/file/.sincedb_452905a167cf4509fd08acb964fdb20c
# cd /usr/share/logstash/data/plugins/inputs/file/
# cat .sincedb_452905a167cf4509fd08acb964fdb20c
20297 0 64768 119226 1551859343.6468308
/var/log/messages
# ls -i /var/log/messages
20297 /var/log/messages

What the sincedb file contains:

 # cat .sincedb_452905a167cf4509fd08acb964fdb20c
20297 0 64768 119226 1551859343.6468308   /var/log/messages

A sincedb record has six fields:

  1. the inode number
  2. the major device number of the file system
  3. the minor device number of the file system
  4. the current byte offset within the file
  5. the last active timestamp (a floating-point number)
  6. the last known path that matched this record
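Given those six fields, a sincedb record can be decoded with a few lines of Python (a hypothetical helper for inspection, not part of Logstash):

```python
def parse_sincedb_line(line):
    # Split a sincedb record into its six documented fields.
    fields = line.split()
    return {
        "inode": int(fields[0]),
        "major_device": int(fields[1]),
        "minor_device": int(fields[2]),
        "byte_offset": int(fields[3]),
        "last_active": float(fields[4]),
        "path": fields[5] if len(fields) > 5 else None,
    }

record = parse_sincedb_line("20297 0 64768 119226 1551859343.6468308 /var/log/messages")
print(record["inode"], record["byte_offset"], record["path"])
```

The inode (20297) matches `ls -i /var/log/messages`, which is how Logstash re-identifies the file across renames.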

8. Remote Log Input

Logstash can act as a log server and receive remote logs directly.

On server3 and server4, configure rsyslog to forward everything to 172.25.3.6 on port 514 (the `@@` prefix forwards over TCP; a single `@` would use UDP), then restart rsyslog:

$ModLoad imudp
$UDPServerRun 514
*.* @@172.25.3.6:514
systemctl  restart  rsyslog.service

Verify that port 514 is listening:

[root@server6 ~]# netstat  -antlp|grep :514
tcp6       0      0 :::514                  :::*                    LISTEN      21934/java
tcp6       0      0 172.25.3.6:514          172.25.3.4:43182        ESTABLISHED 21934/java
tcp6       0      0 172.25.3.6:514          172.25.3.3:54166        ESTABLISHED 21934/java

log.conf defines a syslog input:

[root@server6 conf.d]# cat log.conf
input {
  syslog {
    port => 514
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["172.25.3.3:9200"]
    index => "syslog-%{+yyyy.MM.dd}"
  }
}

Run the config and watch the syslog events arrive.
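To sanity-check the listener, a message in roughly RFC 3164 shape can be pushed to it over TCP. The `syslog_line` helper below is hypothetical, and the address/port come from the lab setup above:

```python
import socket

def syslog_line(facility, severity, host, tag, msg):
    # PRI = facility * 8 + severity; e.g. user.info = 1*8 + 6 = 14.
    pri = facility * 8 + severity
    return "<%d>Aug 14 00:00:00 %s %s: %s\n" % (pri, host, tag, msg)

line = syslog_line(1, 6, "server4", "test", "hello logstash")
print(line.strip())

# To actually send it (requires the log.conf pipeline to be running):
# with socket.create_connection(("172.25.3.6", 514)) as s:
#     s.sendall(line.encode())
```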

9. Multiline Filter Plugin

Input is normally one event per line, but an error message such as a Java stack trace spans many lines that belong to a single log event:

[2021-08-14T00:53:42,821][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [server3] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: Failed to parse value [fals] as only [true] or [false] are allowed.
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:174) ~[elasticsearch-7.6.1.jar:7.6.1]
	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:161) ~[elasticsearch-7.6.1.jar:7.6.1]
	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-7.6.1.jar:7.6.1]
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:125) ~[elasticsearch-cli-7.6.1.jar:7.6.1]
	at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.6.1.jar:7.6.1]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126) ~[elasticsearch-7.6.1.jar:7.6.1]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-7.6.1.jar:7.6.1]
Caused by: java.lang.IllegalArgumentException: Failed to parse value [fals] as only [true] or [false] are allowed.
	at org.elasticsearch.common.Booleans.parseBoolean(Booleans.java:73) ~[elasticsearch-core-7.6.1.jar:7.6.1]
	at org.elasticsearch.common.settings.Setting.parseBoolean(Setting.java:1279) ~[elasticsearch-7.6.1.jar:7.6.1]
	at org.elasticsearch.common.settings.Setting.lambda$boolSetting$24(Setting.java:1256) ~[elasticsearch-7.6.1.jar:7.6.1]
	at org.elasticsearch.common.settings.Setting.get(Setting.java:433) ~[elasticsearch-7.6.1.jar:7.6.1]
	at org.elasticsearch.common.settings.Setting.get(Setting.java:427) ~[elasticsearch-7.6.1.jar:7.6.1]
	at org.elasticsearch.cluster.node.DiscoveryNode.isDataNode(DiscoveryNode.java:69) ~[elasticsearch-7.6.1.jar:7.6.1]
	at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:321) ~[elasticsearch-7.6.1.jar:7.6.1]
	at org.elasticsearch.node.Node.<init>(Node.java:277) ~[elasticsearch-7.6.1.jar:7.6.1]
	at org.elasticsearch.node.Node.<init>(Node.java:257) ~[elasticsearch-7.6.1.jar:7.6.1]
	at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:221) ~[elasticsearch-7.6.1.jar:7.6.1]
	at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:221) ~[elasticsearch-7.6.1.jar:7.6.1]
	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:349) ~[elasticsearch-7.6.1.jar:7.6.1]
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170) ~[elasticsearch-7.6.1.jar:7.6.1]

Single-line input first (multiline codec commented out):

[root@server6 conf.d]# cat es-log.conf
input {
  file {
    path => "/var/log/my-es.log"
    start_position => "beginning"
    #codec => multiline {
    #  pattern => "^\["
    #  negate => "true"
    #  what => "previous"
    #}
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["172.25.3.3:9200"]
    index => "eslog-%{+yyyy.MM.dd}"
  }
}
 /usr/share/logstash/bin/logstash -f es-log.conf

Enable the multiline codec and run again:

[root@server6 conf.d]# cat es-log.conf
input {
  file {
    path => "/var/log/my-es.log"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\["
      negate => "true"
      what => "previous"
    }
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["172.25.3.3:9200"]
    index => "eslog-%{+yyyy.MM.dd}"
  }
}

Delete the sincedb file before running so the log is re-read from the beginning:

[root@server6 file]# ls
[root@server6 file]# ls -a
.  ..  .sincedb_13f094911fdac7ab3fa6f4c93fee6639
[root@server6 file]# rm -rf .sincedb_13f094911fdac7ab3fa6f4c93fee6639
[root@server6 file]# pwd
/usr/share/logstash/data/plugins/inputs/file
 /usr/share/logstash/bin/logstash -f es-log.conf

Check the multiline filter result.
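The codec's `pattern => "^\["`, `negate => "true"`, `what => "previous"` combination means: any line that does not start with `[` is appended to the previous event. A rough Python sketch of that merging logic (illustrative, not Logstash's implementation):

```python
import re

def merge_multiline(lines, pattern=r"^\["):
    events = []
    for line in lines:
        if re.match(pattern, line) or not events:
            events.append(line)           # line matches: start a new event
        else:
            events[-1] += "\n" + line     # negated match: join to previous event
    return events

raw = [
    "[2021-08-14T00:53:42,821][ERROR] uncaught exception in thread [main]",
    "org.elasticsearch.bootstrap.StartupException: ...",
    "    at org.elasticsearch.bootstrap.Elasticsearch.init(...)",
    "[2021-08-14T00:53:43,000][INFO] next event",
]
for ev in merge_multiline(raw):
    print("---\n" + ev)
```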

10. grok Filtering

A grok filter config:

[root@server6 conf.d]# cat grok.conf
input {
  stdin {}
}
filter {
  grok {
    match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
  }
}
output {
  stdout {}
}

Run grok.conf:

 /usr/share/logstash/bin/logstash -f grok.conf

Enter a sample line and inspect the extracted fields:

55.3.244.1 GET /index.html 15824 0.043
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
       "message" => "55.3.244.1 GET /index.html 15824 0.043",
        "client" => "55.3.244.1",
        "method" => "GET",
          "host" => "server6",
    "@timestamp" => 2021-08-14T05:53:23.078Z,
      "duration" => "0.043",
      "@version" => "1",
       "request" => "/index.html",
         "bytes" => "15824"
}
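For intuition, the grok pattern above corresponds roughly to a named-group regex. The character classes below are simplified stand-ins for grok's `IP`, `WORD`, `URIPATHPARAM`, and `NUMBER` definitions, not their exact patterns:

```python
import re

# Simplified analogue of:
# %{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}
pattern = re.compile(
    r"(?P<client>\d{1,3}(?:\.\d{1,3}){3}) "
    r"(?P<method>\w+) "
    r"(?P<request>\S+) "
    r"(?P<bytes>\d+) "
    r"(?P<duration>[\d.]+)"
)

m = pattern.match("55.3.244.1 GET /index.html 15824 0.043")
print(m.groupdict())
```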

11. Apache Access-Log Filtering in Practice

grok's built-in httpd pattern file:

[root@server6 conf.d]# cd /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.1.2/patterns
[root@server6 patterns]# ls
aws     bind  exim       grok-patterns  httpd  junos         maven        mcollective-patterns  nagios      rails  ruby
bacula  bro   firewalls  haproxy        java   linux-syslog  mcollective  mongodb               postgresql  redis  squid
[root@server6 patterns]# vim httpd
[root@server6 patterns]# cat httpd
HTTPDUSER %{EMAILADDRESS}|%{USER}
HTTPDERROR_DATE %{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{YEAR}

# Log formats
HTTPD_COMMONLOG %{IPORHOST:clientip} %{HTTPDUSER:ident} %{HTTPDUSER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
HTTPD_COMBINEDLOG %{HTTPD_COMMONLOG} %{QS:referrer} %{QS:agent}

# Error logs
HTTPD20_ERRORLOG \[%{HTTPDERROR_DATE:timestamp}\] \[%{LOGLEVEL:loglevel}\] (?:\[client %{IPORHOST:clientip}\] ){0,1}%{GREEDYDATA:message}
HTTPD24_ERRORLOG \[%{HTTPDERROR_DATE:timestamp}\] \[%{WORD:module}:%{LOGLEVEL:loglevel}\] \[pid %{POSINT:pid}(:tid %{NUMBER:tid})?\]( \(%{POSINT:proxy_errorcode}\)%{DATA:proxy_message}:)?( \[client %{IPORHOST:clientip}:%{POSINT:clientport}\])?( %{DATA:errorcode}:)? %{GREEDYDATA:message}
HTTPD_ERRORLOG %{HTTPD20_ERRORLOG}|%{HTTPD24_ERRORLOG}

# Deprecated
COMMONAPACHELOG %{HTTPD_COMMONLOG}
COMBINEDAPACHELOG %{HTTPD_COMBINEDLOG}

View the Apache access_log:

[root@server6 conf.d]# cat /var/log/httpd/access_log
172.25.3.6 - - [14/Aug/2021:01:56:11 -0400] "GET / HTTP/1.1" 200 11 "-" "curl/7.29.0"
172.25.3.6 - - [14/Aug/2021:01:56:12 -0400] "GET / HTTP/1.1" 200 11 "-" "curl/7.29.0"
172.25.3.6 - - [14/Aug/2021:01:56:13 -0400] "GET / HTTP/1.1" 200 11 "-" "curl/7.29.0"
172.25.3.6 - - [14/Aug/2021:01:56:27 -0400] "GET / HTTP/1.1" 200 11 "-" "curl/7.29.0"
172.25.3.6 - - [14/Aug/2021:01:56:28 -0400] "GET / HTTP/1.1" 200 11 "-" "curl/7.29.0"
172.25.3.6 - - [14/Aug/2021:01:56:28 -0400] "GET / HTTP/1.1" 200 11 "-" "curl/7.29.0"
172.25.3.6 - - [14/Aug/2021:01:56:43 -0400] "GET / HTTP/1.1" 200 11 "-" "curl/7.29.0"
172.25.3.6 - - [14/Aug/2021:01:56:48 -0400] "GET / HTTP/1.1" 200 11 "-" "curl/7.29.0"

Give the log directory 755 permissions so Logstash can read it:

[root@server6 patterns]# chmod 755 /var/log/httpd/
[root@server6 patterns]#  ll -d  /var/log/httpd
drwxr-xr-x 2 root root 41 Aug 14 01:55 /var/log/httpd

apache.conf contents:

[root@server6 conf.d]# cat apache.conf
input {
  file {
    path => "/var/log/httpd/access_log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{HTTPD_COMBINEDLOG}" }
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["172.25.0.3:9200"]
    index => "apachelog-%{+yyyy.MM.dd}"
  }
}

Run it:

 /usr/share/logstash/bin/logstash -f apache.conf

Inspect the parsed fields:

{"agent" => "\"curl/7.29.0\"","auth" => "-","host" => "server6","@version" => "1","referrer" => "\"-\"","httpversion" => "1.1","ident" => "-","bytes" => "11","verb" => "GET","@timestamp" => 2021-08-14T06:08:50.340Z,"path" => "/var/log/httpd/access_log","request" => "/","message" => "172.25.3.6 - - [14/Aug/2021:01:56:48 -0400] \"GET / HTTP/1.1\" 200 11 \"-\" \"curl/7.29.0\"","clientip" => "172.25.3.6","timestamp" => "14/Aug/2021:01:56:48 -0400","response" => "200"
}
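As a rough Python analogue, `HTTPD_COMBINEDLOG` can be approximated with a named-group regex. This is a simplified sketch that covers the sample lines above; the real grok pattern handles more variants (e.g. missing HTTP versions and raw requests):

```python
import re

# Simplified stand-in for grok's HTTPD_COMBINEDLOG pattern.
combined = re.compile(
    r'(?P<clientip>\S+) (?P<ident>\S+) (?P<auth>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) HTTP/(?P<httpversion>[\d.]+)" '
    r'(?P<response>\d{3}) (?P<bytes>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('172.25.3.6 - - [14/Aug/2021:01:56:48 -0400] '
        '"GET / HTTP/1.1" 200 11 "-" "curl/7.29.0"')
print(combined.match(line).groupdict())
```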
