Filebeat is one of the Beats products, a lightweight log shipper. It consumes far fewer resources than Logstash, so a Filebeat instance is typically deployed on each service whose logs need to be collected. Filebeat can be configured to read from log files, Redis, TCP, and other inputs, and to output to Logstash, Elasticsearch, Redis, Kafka, files, and other destinations. Each event Filebeat collects is wrapped, together with some metadata, into a JSON object before output, and fields can be added or removed via configuration.

Download Filebeat:

curl -o filebeat.tar.gz https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.1.0-linux-x86_64.tar.gz

Filebeat ships with a set of module configuration templates under the modules.d directory, but it can also be configured by hand. The example below uses manual configuration to collect log data from one file and write it to another file.

Edit filebeat.yml: enable the input, point it at the log file to collect, configure the output file path and name (only one output may be enabled at a time), and configure processors to drop the redundant fields.
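Conceptually, the drop_fields processor configured below simply strips the named top-level keys from each event before it is published. A minimal sketch of that behavior (the event dict here is illustrative, not Filebeat's actual internal representation):

```python
# Illustrative sketch of what the drop_fields processor does: remove the
# named top-level keys from each event before it is published.
def drop_fields(event, fields):
    return {k: v for k, v in event.items() if k not in fields}

# Hypothetical event, loosely modeled on what Filebeat publishes.
event = {
    "@timestamp": "2019-05-21T08:00:00.000Z",
    "message": "hello filebeat",
    "source": "/usr/local/java/elk/a.log",
    "offset": 0,
    "input": {"type": "log"},
}

slim = drop_fields(event, ["beat", "input", "source", "offset"])
print(sorted(slim))  # ['@timestamp', 'message']
```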

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all
# the supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /usr/local/java/elk/a.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines
  # that are matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines
  # that are matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the
  # files that are matching any regular expression from the list. By default,
  # no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering.
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is
  # common for Java stack traces or C line continuation.

  # The regexp pattern that has to be matched. The example pattern matches all
  # lines starting with [.
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not.
  # Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines
  # should be appended to a pattern that was (not) matched before or after,
  # or as long as a pattern is not matched based on negate.
  # Note: "after" is equivalent to "previous" and "before" to "next" in Logstash.
  #multiline.match: after

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to
# group all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================

# These settings control loading the sample dashboards to the Kibana index.
# Loading the dashboards is disabled by default and can be enabled either by
# setting the options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For
# released versions, this URL points to the dashboard archive on the
# artifacts.elastic.co website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana
# API. This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601).
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By
  # default, the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud
# (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------------- File output ---------------------------------
output.file:
  path: "/usr/local/java/elk/filebeat-test"
  filename: filebeat-test.log
  #rotate_every_kb: 10000
  #number_of_files: 7
  #permissions: 0600

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - drop_fields:
      fields: ["beat", "input", "source", "offset"]

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================

# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well; any setting that is not set
# is automatically inherited from the Elasticsearch output configuration.
#xpack.monitoring.elasticsearch:

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
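The multiline options in the config above control how continuation lines (for example, a Java stack trace) are folded into the preceding event. With multiline.pattern: '^\[', multiline.negate: true, and multiline.match: after, every line that does not start with [ is appended to the previous event. A simplified sketch of that grouping logic, not Filebeat's actual implementation:

```python
import re

# Simplified model of multiline.pattern: '^\[', negate: true, match: after —
# a line that does NOT match the pattern is appended to the previous event.
def group_multiline(lines, pattern=r"^\["):
    events = []
    matcher = re.compile(pattern)
    for line in lines:
        if matcher.match(line) or not events:
            events.append(line)           # starts a new event
        else:
            events[-1] += "\n" + line     # continuation line
    return events

logs = [
    "[2019-05-21 10:00:00] ERROR something failed",
    "java.lang.NullPointerException",
    "    at com.example.Main.run(Main.java:42)",
    "[2019-05-21 10:00:01] INFO recovered",
]
events = group_multiline(logs)
print(len(events))  # 2
```

The stack-trace lines end up inside the first event, so downstream consumers see one log record instead of three.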

Start Filebeat:

./filebeat -e -c filebeat.yml

Append a line to the source log file a.log: the console prints the published event, including its JSON object, and the target file filebeat-test.log receives the collected test data.
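The file output writes one JSON document per line (newline-delimited JSON), so the result can be checked programmatically. A sketch of parsing one such line; the sample event below is illustrative, not actual captured Filebeat output:

```python
import json

# The file output emits newline-delimited JSON: one event object per line.
# This sample line is illustrative, not real Filebeat output.
sample = ('{"@timestamp":"2019-05-21T08:00:00.000Z",'
          '"message":"test line",'
          '"log":{"file":{"path":"/usr/local/java/elk/a.log"}}}')

event = json.loads(sample)
print(event["message"])  # test line
```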
