1. Download and Installation

# Windows download (this article uses Windows commands for the demo)
https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.6.2-windows-x86_64.zip

# macOS download (on Linux, use the linux-x86_64 tarball instead)
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.6.2-darwin-x86_64.tar.gz
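After downloading, unpack the archive. A minimal sketch, assuming the archive names above and an E:\ install directory on Windows (both details are only examples):

# Windows (PowerShell)
Expand-Archive .\filebeat-6.6.2-windows-x86_64.zip -DestinationPath E:\

# macOS / Linux
tar -xzvf filebeat-6.6.2-darwin-x86_64.tar.gz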

1.1. Configure filebeat.yml

The complete configuration is at the end of this article.

1.2. ES template and fields.yml

When Filebeat writes directly to ES, the index template can be loaded automatically; otherwise it has to be imported manually.

# Manual import
.\filebeat.exe setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'

# Delete the documents indexed with the old template
curl -XDELETE 'http://localhost:9200/filebeat-*'

1.3. Set up the Kibana dashboards

# You can skip this step on first use and configure Kibana later, once you are familiar with it
.\filebeat.exe setup --dashboards

1.4. Set up the dashboards for the Logstash output

# You can skip this step on first use and configure Kibana later, once you are familiar with it
.\filebeat.exe setup -e `
  -E output.logstash.enabled=false `
  -E output.elasticsearch.hosts=['140.143.206.106:9200'] `
  -E output.elasticsearch.username=filebeat_internal `
  -E output.elasticsearch.password=YOUR_PASSWORD `
  -E setup.kibana.host=localhost:5601

2. Startup

2.1. Installation

On Windows, you can adjust the installation path (E:\filebeat-6.6.2-windows-x86_64) inside install-service-filebeat.ps1.

# Delete and stop the service if it already exists.
if (Get-Service filebeat -ErrorAction SilentlyContinue) {
  $service = Get-WmiObject -Class Win32_Service -Filter "name='filebeat'"
  $service.StopService()
  Start-Sleep -s 1
  $service.delete()
}

$workdir = Split-Path $MyInvocation.MyCommand.Path

# Create the new service.
New-Service -name filebeat `
  -displayName Filebeat `
  -binaryPathName "`"$workdir\filebeat.exe`" -c `"$workdir\filebeat.yml`" -path.home `"$workdir`" -path.data `"E:\filebeat-6.6.2-windows-x86_64\datas`" -path.logs `"E:\filebeat-6.6.2-windows-x86_64\logs`""

2.2. Start

# On macOS and Linux, run:
sudo chown root filebeat.yml
sudo ./filebeat -e
sudo ./filebeat -e -c filebeat.yml -d "publish"

# On Windows, run as administrator; remember to adjust the paths inside the .ps1 scripts
# Install (success if no error is reported)
PS E:\filebeat-6.6.2-windows-x86_64> .\install-service-filebeat.ps1

# Uninstall
PS E:\filebeat-6.6.2-windows-x86_64> .\uninstall-service-filebeat.ps1

# Start
PS E:\filebeat-6.6.2-windows-x86_64> Start-Service filebeat

After a successful start, check the logs to verify that collection is working normally.
During testing, the read offsets of the collected logs are stored in Filebeat's registry; after deleting the registry files under the datas directory, the logs are collected again from the beginning.
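A minimal sketch of resetting the registry on Windows, assuming the E:\filebeat-6.6.2-windows-x86_64\datas data path configured in the install script above:

# Stop the service so the registry is not rewritten while it is being deleted
Stop-Service filebeat

# Remove the registry files under the data directory (path assumed from the install step)
Remove-Item E:\filebeat-6.6.2-windows-x86_64\datas\registry* -Force

# Start the service again; Filebeat re-reads the monitored logs from the beginning
Start-Service filebeat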

3. Configure multiline collection

3.1. Configuring multiline in Filebeat (first approach, recommended):

filebeat.prospectors:
- paths:
    - /home/project/elk/logs/test.log
  input_type: log
  multiline:
    pattern: '^\['
    negate: true
    match: after
output:
  logstash:
    hosts: ["localhost:5044"]

pattern: the regular expression to match
negate: defaults to false, meaning lines that match pattern are merged into the previous line; true means lines that do not match pattern are merged into the previous line
match: after means the line is appended to the end of the previous line, before means it is merged at the beginning of the previous line

# This configuration merges lines that do not match the pattern into the end of the previous line
pattern: '\['
negate: true
match: after
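As a concrete illustration (the log lines below are hypothetical), with the settings above the stack-trace lines, which do not match the [ pattern, are appended to the preceding bracketed line, so the three physical lines become a single event:

[2019-03-20 12:00:00,123] ERROR com.example.OrderService - order failed
java.lang.NullPointerException: null
    at com.example.OrderService.create(OrderService.java:42)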

3.2. Configuring multiline in Logstash (second approach)

input {
  beats {
    port => 5044
  }
}
filter {
  multiline {
    pattern => "%{LOGLEVEL}\s*\]"
    negate => true
    what => "previous"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
}

(1) In Logstash, what => "previous" is the equivalent of Filebeat's after, and what => "next" is the equivalent of Filebeat's before.

(2) The LOGLEVEL in pattern => "%{LOGLEVEL}\s*\]" is one of Logstash's predefined grok patterns; many other commonly used patterns are predefined as well. For details see: https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns

3.3. Replacing the log timestamp field with the time from the log message

https://juejin.im/entry/5a2761446fb9a045186a9b36
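The usual approach is to parse the timestamp out of the message with grok and then write it into @timestamp with Logstash's date filter. A minimal sketch, assuming the log line starts with an ISO-style timestamp in square brackets (the grok pattern and the temporary log_time field are only illustrative):

filter {
  grok {
    # Extract the leading timestamp into a temporary field
    match => { "message" => "^\[%{TIMESTAMP_ISO8601:log_time}\]" }
  }
  date {
    # Parse log_time and use it as the event's @timestamp
    match => ["log_time", "yyyy-MM-dd HH:mm:ss,SSS", "ISO8601"]
    target => "@timestamp"
  }
}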

3.4. Identifying different system modules with a field, or building ES indices per module

Filebeat identifies the logs of different system modules by adding a log_from field:

filebeat.prospectors:
- paths:
    - /home/project/elk/logs/account.log
  input_type: log
  multiline:
    pattern: '^\['
    negate: true
    match: after
  fields: # add the log_from field
    log_from: account
- paths:
    - /home/project/elk/logs/customer.log
  input_type: log
  multiline:
    pattern: '^\['
    negate: true
    match: after
  fields:
    log_from: customer
output:
  logstash:
    hosts: ["localhost:5044"]

Then modify the output section of the Logstash configuration:
add an index attribute to output; %{type} means an ES index is created per document_type value:

output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "%{type}"
  }
}
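Since the configuration above distinguishes modules with the custom log_from field (which Filebeat nests under fields), you can also index per module by referencing that field instead of type; a sketch, assuming Logstash's field-reference syntax:

output {
  elasticsearch {
    hosts => "localhost:9200"
    # one index per module, taken from the custom field added in filebeat.yml
    index => "%{[fields][log_from]}"
  }
}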

Below is the complete configuration used for testing; remember to adjust the log paths:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - D:\IDEA\infrastructure\logs\*\*.log
    #- /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options
  multiline:
    pattern: '^\['
    negate: true
    match: after

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after

  fields:
    log_from: api-gateway-application

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["140.143.206.106:9200"]

  # Enabled ilm (beta) to use index lifecycle management instead daily indices.
  #ilm.enabled: false

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:
