Centralizing Docker Container Logs with ELK

The ELK stack runs on the ovr0 overlay network.
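
The compose file below assumes an external overlay network named ovr0 already exists. A minimal sketch of creating one, assuming the Docker engine is already set up for multi-host networking (Swarm mode or an external key-value store):

docker network create --driver overlay ovr0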

docker-compose.yaml

version: '2'

networks:
  network-test:
    external:
      name: ovr0

services:
  elasticsearch:
    image: elasticsearch
    networks:
      network-test:
    hostname: elasticsearch
    container_name: elasticsearch
    restart: always
    volumes:
      - /opt/elasticsearch/data:/usr/share/elasticsearch/data

  kibana:
    image: kibana
    networks:
      network-test:
    hostname: kibana
    container_name: kibana
    restart: always
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200/
    ports:
      - 5601:5601

  logstash:
    image: logstash
    networks:
      network-test:
    hostname: logstash
    container_name: logstash
    restart: always
    volumes:
      - /opt/logstash/conf:/opt/logstash/conf
    command: logstash -f /opt/logstash/conf/

  filebeat:
    image: prima/filebeat
    networks:
      network-test:
    hostname: filebeat
    container_name: filebeat
    restart: always
    volumes:
      - /opt/filebeat/conf/filebeat.yml:/filebeat.yml
      - /opt/upload:/data/logs
      - /opt/filebeat/registry:/etc/registry

Notes on the filebeat service:
filebeat.yml is mounted as filebeat's configuration file.
/data/logs is the directory where the containers' log files are mounted.
/etc/registry holds filebeat's read-position records, so that if the filebeat container dies it does not have to re-read all logs from the beginning.
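
If you ever need to see how far filebeat has read, the registry is just a small JSON file. A hypothetical check on the host, based on the volume mapping above and the registry_file path set later in filebeat.yml:

# /etc/registry inside the container is backed by /opt/filebeat/registry on the host
cat /opt/filebeat/registry/mark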

The logstash configuration files are as follows.

Approach 1: using patterns.

logstash.conf (two inputs are configured: beats and syslog):

input {
  beats {
    port => 5044
    type => beats
  }
  tcp {
    port => 5000
    type => syslog
  }
}

filter {
  if [type] == "tomcat-log" {
    multiline {
      patterns_dir => "/opt/logstash/conf/patterns"
      pattern => "(^%{TOMCAT_DATESTAMP})|(^%{CATALINA_DATESTAMP})"
      negate => true
      what => "previous"
    }
    if "ERROR" in [message] {    # if the message contains ERROR, replace type with a custom tag
      mutate { replace => { type => "tomcat_catalina_error" } }
    } else if "WARN" in [message] {
      mutate { replace => { type => "tomcat_catalina_warn" } }
    } else if "DEBUG" in [message] {
      mutate { replace => { type => "tomcat_catalina_debug" } }
    } else {
      mutate { replace => { type => "tomcat_catalina_info" } }
    }
    grok {
      patterns_dir => "/opt/logstash/conf/patterns"
      match => [ "message", "%{TOMCATLOG}", "message", "%{CATALINALOG}" ]
      remove_field => ["message"]    # drop the raw message after a successful match; optional, but it saves storage space
    }
    date {
      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss,SSS Z", "MMM dd, yyyy HH:mm:ss a" ]
    }
  }
  if [type] == "nginx-log" {
    if '"status":"404"' in [message] {
      mutate { replace => { type => "nginx_error_404" } }
    } else if '"status":"500"' in [message] {
      mutate { replace => { type => "nginx_error_500" } }
    } else if '"status":"502"' in [message] {
      mutate { replace => { type => "nginx_error_502" } }
    } else if '"status":"403"' in [message] {
      mutate { replace => { type => "nginx_error_403" } }
    } else if '"status":"504"' in [message] {
      mutate { replace => { type => "nginx_error_504" } }
    } else if '"status":"200"' in [message] {
      mutate { replace => { type => "nginx_200" } }
    }
    grok {
      remove_field => ["message"]    # drop the raw message after a successful match; optional, but it saves storage space
    }
  }
}

output {
  elasticsearch { hosts => ["elasticsearch:9200"] }
  # stdout { codec => rubydebug }    # print events to the console for debugging
}
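
Before starting everything, Logstash can syntax-check this pipeline. A sketch that reuses the same image and mount as the compose file (the -t flag corresponds to --configtest on Logstash 2.x and --config.test_and_exit on newer releases):

docker-compose run --rm logstash logstash -f /opt/logstash/conf/ -t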

The grok pattern files live under /opt/logstash/conf/patterns:

grok-patterns:

USERNAME [a-zA-Z0-9._-]+
USER %{USERNAME}
INT (?:[+-]?(?:[0-9]+))
BASE10NUM (?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+)))
NUMBER (?:%{BASE10NUM})
BASE16NUM (?<![0-9A-Fa-f])(?:[+-]?(?:0x)?(?:[0-9A-Fa-f]+))
BASE16FLOAT \b(?<![0-9A-Fa-f.])(?:[+-]?(?:0x)?(?:(?:[0-9A-Fa-f]+(?:\.[0-9A-Fa-f]*)?)|(?:\.[0-9A-Fa-f]+)))\b
POSINT \b(?:[1-9][0-9]*)\b
NONNEGINT \b(?:[0-9]+)\b
WORD \b\w+\b
NOTSPACE \S+
SPACE \s*
DATA .*?
GREEDYDATA .*
QUOTEDSTRING (?>(?<!\\)(?>"(?>\\.|[^\\"]+)+"|""|(?>'(?>\\.|[^\\']+)+')|''|(?>`(?>\\.|[^\\`]+)+`)|``))
UUID [A-Fa-f0-9]{8}-(?:[A-Fa-f0-9]{4}-){3}[A-Fa-f0-9]{12}

# Networking
MAC (?:%{CISCOMAC}|%{WINDOWSMAC}|%{COMMONMAC})
CISCOMAC (?:(?:[A-Fa-f0-9]{4}\.){2}[A-Fa-f0-9]{4})
WINDOWSMAC (?:(?:[A-Fa-f0-9]{2}-){5}[A-Fa-f0-9]{2})
COMMONMAC (?:(?:[A-Fa-f0-9]{2}:){5}[A-Fa-f0-9]{2})
IPV6 ((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){5}(((:[0-9A-Fa-f]{1,4}){1,2})|:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){4}(((:[0-9A-Fa-f]{1,4}){1,3})|((:[0-9A-Fa-f]{1,4})?:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){3}(((:[0-9A-Fa-f]{1,4}){1,4})|((:[0-9A-Fa-f]{1,4}){0,2}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){2}(((:[0-9A-Fa-f]{1,4}){1,5})|((:[0-9A-Fa-f]{1,4}){0,3}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){1}(((:[0-9A-Fa-f]{1,4}){1,6})|((:[0-9A-Fa-f]{1,4}){0,4}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(:(((:[0-9A-Fa-f]{1,4}){1,7})|((:[0-9A-Fa-f]{1,4}){0,5}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:)))(%.+)?
IPV4 (?<![0-9])(?:(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2}))(?![0-9])
IP (?:%{IPV6}|%{IPV4})
HOSTNAME \b(?:[0-9A-Za-z][0-9A-Za-z-]{0,62})(?:\.(?:[0-9A-Za-z][0-9A-Za-z-]{0,62}))*(\.?|\b)
HOST %{HOSTNAME}
IPORHOST (?:%{HOSTNAME}|%{IP})
HOSTPORT (?:%{IPORHOST=~/\./}:%{POSINT})

# paths
PATH (?:%{UNIXPATH}|%{WINPATH})
UNIXPATH (?>/(?>[\w_%!$@:.,-]+|\\.)*)+
TTY (?:/dev/(pts|tty([pq])?)(\w+)?/?(?:[0-9]+))
WINPATH (?>[A-Za-z]+:|\\)(?:\\[^\\?*]*)+
URIPROTO [A-Za-z]+(\+[A-Za-z+]+)?
URIHOST %{IPORHOST}(?::%{POSINT:port})?
# uripath comes loosely from RFC1738, but mostly from what Firefox
# doesn't turn into %XX
URIPATH (?:/[A-Za-z0-9$.+!*'(){},~:;=@#%_\-]*)+
#URIPARAM \?(?:[A-Za-z0-9]+(?:=(?:[^&]*))?(?:&(?:[A-Za-z0-9]+(?:=(?:[^&]*))?)?)*)?
URIPARAM \?[A-Za-z0-9$.+!*'|(){},~@#%&/=:;_?\-\[\]]*
URIPATHPARAM %{URIPATH}(?:%{URIPARAM})?
URI %{URIPROTO}://(?:%{USER}(?::[^@]*)?@)?(?:%{URIHOST})?(?:%{URIPATHPARAM})?

# Months: January, Feb, 3, 03, 12, December
MONTH \b(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\b
MONTHNUM (?:0?[1-9]|1[0-2])
MONTHDAY (?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])

# Days: Monday, Tue, Thu, etc...
DAY (?:Mon(?:day)?|Tue(?:sday)?|Wed(?:nesday)?|Thu(?:rsday)?|Fri(?:day)?|Sat(?:urday)?|Sun(?:day)?)

# Years?
YEAR (?>\d\d){1,2}
HOUR (?:2[0123]|[01]?[0-9])
MINUTE (?:[0-5][0-9])
# '60' is a leap second in most time standards and thus is valid.
SECOND (?:(?:[0-5][0-9]|60)(?:[:.,][0-9]+)?)
TIME (?!<[0-9])%{HOUR}:%{MINUTE}(?::%{SECOND})(?![0-9])
# datestamp is YYYY/MM/DD-HH:MM:SS.UUUU (or something like it)
DATE_US %{MONTHNUM}[/-]%{MONTHDAY}[/-]%{YEAR}
DATE_EU %{MONTHDAY}[./-]%{MONTHNUM}[./-]%{YEAR}
ISO8601_TIMEZONE (?:Z|[+-]%{HOUR}(?::?%{MINUTE}))
ISO8601_SECOND (?:%{SECOND}|60)
TIMESTAMP_ISO8601 %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}?
DATE %{DATE_US}|%{DATE_EU}
DATESTAMP %{DATE}[- ]%{TIME}
TZ (?:[PMCE][SD]T|UTC)
DATESTAMP_RFC822 %{DAY} %{MONTH} %{MONTHDAY} %{YEAR} %{TIME} %{TZ}
DATESTAMP_OTHER %{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{TZ} %{YEAR}

# Syslog Dates: Month Day HH:MM:SS
SYSLOGTIMESTAMP %{MONTH} +%{MONTHDAY} %{TIME}
PROG (?:[\w._/%-]+)
SYSLOGPROG %{PROG:program}(?:\[%{POSINT:pid}\])?
SYSLOGHOST %{IPORHOST}
SYSLOGFACILITY <%{NONNEGINT:facility}.%{NONNEGINT:priority}>
HTTPDATE %{MONTHDAY}/%{MONTH}/%{YEAR}:%{TIME} %{INT}

# Shortcuts
QS %{QUOTEDSTRING}

# Log formats
SYSLOGBASE %{SYSLOGTIMESTAMP:timestamp} (?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logsource} %{SYSLOGPROG}:
COMMONAPACHELOG %{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}

# Log Levels
LOGLEVEL ([Aa]lert|ALERT|[Tt]race|TRACE|[Dd]ebug|DEBUG|[Nn]otice|NOTICE|[Ii]nfo|INFO|[Ww]arn?(?:ing)?|WARN?(?:ING)?|[Ee]rr?(?:or)?|ERR?(?:OR)?|[Cc]rit?(?:ical)?|CRIT?(?:ICAL)?|[Ff]atal|FATAL|[Ss]evere|SEVERE|EMERG(?:ENCY)?|[Ee]merg(?:ency)?)

# Java Logs
JAVATHREAD (?:[A-Z]{2}-Processor[\d]+)
JAVACLASS (?:[a-zA-Z0-9-]+\.)+[A-Za-z0-9$]+
JAVAFILE (?:[A-Za-z0-9_.-]+)
JAVASTACKTRACEPART at %{JAVACLASS:class}\.%{WORD:method}\(%{JAVAFILE:file}:%{NUMBER:line}\)
JAVALOGMESSAGE (.*)
# MMM dd, yyyy HH:mm:ss eg: Jan 9, 2014 7:13:13 AM
CATALINA_DATESTAMP %{MONTH} %{MONTHDAY}, 20%{YEAR} %{HOUR}:?%{MINUTE}(?::?%{SECOND}) (?:AM|PM)
# yyyy-MM-dd HH:mm:ss,SSS ZZZ eg: 2014-01-09 17:32:25,527 -0800
TOMCAT_DATESTAMP 20%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:?%{MINUTE}(?::?%{SECOND}) %{ISO8601_TIMEZONE}
CATALINALOG %{CATALINA_DATESTAMP:timestamp} %{JAVACLASS:class} %{JAVALOGMESSAGE:logmessage}
# 2014-01-09 20:03:28,269 -0800 | ERROR | com.example.service.ExampleService - something compeletely unexpected happened...
TOMCATLOG %{TOMCAT_DATESTAMP:timestamp} \| %{LOGLEVEL:level} \| %{JAVACLASS:class} - %{JAVALOGMESSAGE:logmessage}
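
As a quick illustration, the sample line in the comment above would be parsed by TOMCATLOG into roughly the following fields (the names come straight from the pattern):

timestamp  => 2014-01-09 20:03:28,269 -0800
level      => ERROR
class      => com.example.service.ExampleService
logmessage => something compeletely unexpected happened...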

Next, let's configure the containers' log output.

In Docker, the standard way to log is to write to stdout. To ship those standard-output logs, all we have to do is point the container's logging driver at syslog.

For Docker logs that go to stdout, logstash by itself is enough to collect them (via the tcp/syslog input on port 5000).

In docker-compose, the following on a service is all that is needed:

    logging:
      driver: syslog
      options:
        syslog-address: 'tcp://logstash:5000'
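
For a container started with plain docker run rather than compose, the equivalent flags look roughly like this (image and container name are placeholders; note the syslog address is resolved by the Docker daemon on the host, so logstash must be reachable from there):

docker run -d --name some-app \
  --log-driver syslog \
  --log-opt syslog-address=tcp://logstash:5000 \
  some-image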

In most cases, though, applications write their logs to files, and for those we use filebeat directly.

For filebeat we use the prima/filebeat image from Docker Hub.

With this image we need to provide a filebeat.yml file; its documentation describes two ways of doing so:

The first is to mount it with -v: -v /path/filebeat.yml:/filebeat.yml

The second is to copy it in at image build time with a Dockerfile:

FROM prima/filebeat
COPY filebeat.yml /filebeat.yml

which builds an image that already contains the filebeat.yml file.
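
A build-and-run sketch for that route (the my-filebeat tag and run flags below are illustrative, mirroring the volumes from the compose file above):

docker build -t my-filebeat .
docker run -d --name filebeat --net ovr0 \
  -v /opt/upload:/data/logs \
  -v /opt/filebeat/registry:/etc/registry \
  my-filebeat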

filebeat.yml supports a prospector with a single path, as well as multiple prospectors or multiple paths per prospector.

paths accepts globs across multiple levels, e.g. /var/log/messages*, /var/log/*, /opt/nginx/*/*.log

Example:

filebeat:
  prospectors:
    -
      paths:
        - "/data/logs/catalina.*.out"
      input_type: log
      document_type: tomcat-log
    -
      paths:
        - "/data/logs/nginx*/logs/*.log"
      input_type: log
      document_type: nginx-log
  registry_file: /etc/registry/mark

output:
  logstash:
    hosts: ["logstash:5044"]

logging:
  files:
    rotateeverybytes: 10485760 # = 10MB

A filebeat container must be started on every machine whose logs you want to collect.

Run docker-compose up -d and check the started containers.

Load the filebeat index template into Elasticsearch. Enter the elasticsearch container (docker exec -it elasticsearch bash) and run:

curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json
curl -XPUT 'http://elasticsearch:9200/_template/filebeat?pretty' -d@filebeat-index-template.json
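
To confirm the template was accepted, you can query the template API from inside the same container:

curl 'http://elasticsearch:9200/_template/filebeat?pretty'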


filebeat-index-template.json
{"mappings": {"_default_": {"_all": {"enabled": true,"norms": {"enabled": false}},"dynamic_templates": [{"template1": {"mapping": {"doc_values": true,"ignore_above": 1024,"index": "not_analyzed","type": "{dynamic_type}"},"match": "*"}}],"properties": {"@timestamp": {"type": "date"},"message": {"type": "string","index": "analyzed"},"offset": {"type": "long","doc_values": "true"},"geoip"  : {"type" : "object","dynamic": true,"properties" : {"location" : { "type" : "geo_point" }}}}}},"settings": {"index.refresh_interval": "5s"},"template": "filebeat-*"
}

Visit http://kibana-IP:5601 and you will see that Kibana is up, but there is no data yet.

Start an nginx container.

docker-compose:

  nginx:
    image: alpine-nginx
    networks:
      network-test:
    hostname: nginx
    container_name: nginx
    restart: always
    ports:
      - 80:80
    volumes:
      - /opt/upload/nginx/conf/vhost:/etc/nginx/vhost
      - /opt/upload/nginx/logs:/opt/nginx/logs

The host directory /opt/upload/nginx must be mounted into the filebeat container so filebeat can pick the logs up. With the volumes above, nginx writes to /opt/upload/nginx/logs on the host, which the filebeat container sees as /data/logs/nginx/logs, matching the /data/logs/nginx*/logs/*.log prospector path.
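
To get a few log lines flowing, hit nginx once or twice (using the host's IP, in the same spirit as the kibana-IP URL above):

curl http://nginx-IP/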

You can now see data coming through in Kibana.

Reposted from: https://www.cnblogs.com/jicki/p/5913622.html
