Installing Fluentd on Kubernetes for Log Collection
Fluentd collects the logs produced by containers running in the Kubernetes cluster, and the collected logs are written to Elasticsearch. In my setup, Elasticsearch runs directly on a server outside the cluster; whether to move Elasticsearch into Kubernetes will be decided after further testing.
fluentd-es-configmap.yaml: the Fluentd configuration, packaged as a ConfigMap.
kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-es-config-v0.2.0
  namespace: efk-log
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  system.conf: |-
    <system>
      root_dir /tmp/fluentd-buffers/
    </system>

  containers.input.conf: |-
    # This configuration file for Fluentd / td-agent is used
    # to watch changes to Docker log files. The kubelet creates symlinks that
    # capture the pod name, namespace, container name & Docker container ID
    # to the docker logs for pods in the /var/log/containers directory on the host.
    # If running this fluentd configuration in a Docker container, the /var/log
    # directory should be mounted in the container.
    #
    # These logs are then submitted to Elasticsearch which assumes the
    # installation of the fluent-plugin-elasticsearch & the
    # fluent-plugin-kubernetes_metadata_filter plugins.
    # See https://github.com/uken/fluent-plugin-elasticsearch &
    # https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter for
    # more information about the plugins.
    #
    # Example
    # =======
    # A line in the Docker log file might look like this JSON:
    #
    # {"log":"2014/09/25 21:15:03 Got request with path wombat\n",
    #  "stream":"stderr",
    #  "time":"2014-09-25T21:15:03.499185026Z"}
    #
    # The time_format specification below makes sure we properly
    # parse the time format produced by Docker. This will be
    # submitted to Elasticsearch and should appear like:
    # $ curl 'http://elasticsearch-logging:9200/_search?pretty'
    # ...
    # {
    #   "_index" : "logstash-2014.09.25",
    #   "_type" : "fluentd",
    #   "_id" : "VBrbor2QTuGpsQyTCdfzqA",
    #   "_score" : 1.0,
    #   "_source":{"log":"2014/09/25 22:45:50 Got request with path wombat\n",
    #              "stream":"stderr","tag":"docker.container.all",
    #              "@timestamp":"2014-09-25T22:45:50+00:00"}
    # },
    # ...
    #
    # The Kubernetes fluentd plugin is used to write the Kubernetes metadata to the log
    # record & add labels to the log record if properly configured. This enables users
    # to filter & search logs on any metadata.
    # For example a Docker container's logs might be in the directory:
    #
    #  /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b
    #
    # and in the file:
    #
    #  997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
    #
    # where 997599971ee6... is the Docker ID of the running container.
    # The Kubernetes kubelet makes a symbolic link to this file on the host machine
    # in the /var/log/containers directory which includes the pod name and the Kubernetes
    # container name:
    #
    #  synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #  ->
    #  /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
    #
    # The /var/log directory on the host is mapped to the /var/log directory in the container
    # running this instance of Fluentd and we end up collecting the file:
    #
    #  /var/log/containers/synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # This results in the tag:
    #
    #  var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # The Kubernetes fluentd plugin is used to extract the namespace, pod name & container name
    # which are added to the log message as a kubernetes field object & the Docker container ID
    # is also added under the docker field object.
    # The final tag is:
    #
    #  kubernetes.var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # And the final log record look like:
    #
    # {
    #   "log":"2014/09/25 21:15:03 Got request with path wombat\n",
    #   "stream":"stderr",
    #   "time":"2014-09-25T21:15:03.499185026Z",
    #   "kubernetes": {
    #     "namespace": "default",
    #     "pod_name": "synthetic-logger-0.25lps-pod",
    #     "container_name": "synth-lgr"
    #   },
    #   "docker": {
    #     "container_id": "997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b"
    #   }
    # }
    #
    # This makes it easier for users to search for logs by pod name or by
    # the name of the Kubernetes container regardless of how many times the
    # Kubernetes pod has been restarted (resulting in a several Docker container IDs).
    #
    # Json Log Example:
    # {"log":"[info:2016-02-16T16:04:05.930-08:00] Some log text here\n","stream":"stdout","time":"2016-02-17T00:04:05.931087621Z"}
    # CRI Log Example:
    # 2016-02-17T00:04:05.931087621Z stdout F [info:2016-02-16T16:04:05.930-08:00] Some log text here
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>

    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>

    # Concatenate multi-line logs
    <filter **>
      @id filter_concat
      @type concat
      key message
      multiline_end_regexp /\n$/
      separator ""
    </filter>

    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @id filter_kubernetes_metadata
      @type kubernetes_metadata
    </filter>

    # Fixes json fields in Elasticsearch
    <filter kubernetes.**>
      @id filter_parser
      @type parser
      key_name log
      reserve_data true
      remove_key_name_field true
      <parse>
        @type multi_format
        <pattern>
          format json
        </pattern>
        <pattern>
          format none
        </pattern>
      </parse>
    </filter>

  system.input.conf: |-
    # Logs from systemd-journal for interesting services.
    # TODO(random-liu): Remove this after cri container runtime rolls out.
    <source>
      @id journald-docker
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "docker.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-docker.pos
      </storage>
      read_from_head true
      tag docker
    </source>

    <source>
      @id journald-container-runtime
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "{{ fluentd_container_runtime_service }}.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-container-runtime.pos
      </storage>
      read_from_head true
      tag container-runtime
    </source>

    <source>
      @id journald-etcd
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "etcd.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-etcd.pos
      </storage>
      read_from_head true
      tag etcd
    </source>

    <source>
      @id journald-kube-apiserver
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "kube-apiserver.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-kube-apiserver.pos
      </storage>
      read_from_head true
      tag kube-apiserver
    </source>

    <source>
      @id journald-kube-controller-manager
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "kube-controller-manager.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-kube-controller-manager.pos
      </storage>
      read_from_head true
      tag kube-controller-manager
    </source>

    <source>
      @id journald-kube-scheduler
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "kube-scheduler.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-kube-scheduler.pos
      </storage>
      read_from_head true
      tag kube-scheduler
    </source>

    <source>
      @id journald-kubelet
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "kubelet.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-kubelet.pos
      </storage>
      read_from_head true
      tag kubelet
    </source>

    <source>
      @id journald-kube-proxy
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "kube-proxy.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-kube-proxy.pos
      </storage>
      read_from_head true
      tag kube-proxy
    </source>

    <source>
      @id journald-node-problem-detector
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "node-problem-detector.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-node-problem-detector.pos
      </storage>
      read_from_head true
      tag node-problem-detector
    </source>

    <source>
      @id kernel
      @type systemd
      matches [{ "_TRANSPORT": "kernel" }]
      <storage>
        @type local
        persistent true
        path /var/log/kernel.pos
      </storage>
      <entry>
        fields_strip_underscores true
        fields_lowercase true
      </entry>
      read_from_head true
      tag kernel
    </source>

  forward.input.conf: |-
    # Takes the messages sent over TCP
    <source>
      @id forward
      @type forward
    </source>

  monitoring.conf: |-
    # Prometheus Exporter Plugin
    # input plugin that exports metrics
    <source>
      @id prometheus
      @type prometheus
    </source>

    <source>
      @id monitor_agent
      @type monitor_agent
    </source>

    # input plugin that collects metrics from MonitorAgent
    <source>
      @id prometheus_monitor
      @type prometheus_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

    # input plugin that collects metrics for output plugin
    <source>
      @id prometheus_output_monitor
      @type prometheus_output_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

    # input plugin that collects metrics for in_tail plugin
    <source>
      @id prometheus_tail_monitor
      @type prometheus_tail_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

  output.conf: |-
    <match **>
      @id elasticsearch
      @type elasticsearch
      @log_level info
      type_name _doc
      include_tag_key true
      host 10.10.5.78
      port 9200
      logstash_format true
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
        flush_mode interval
        retry_type exponential_backoff
        flush_thread_count 2
        flush_interval 5s
        retry_forever
        retry_max_interval 30
        chunk_limit_size 2M
        queue_limit_length 8
        overflow_action block
      </buffer>
    </match>
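The two container log formats described in the comments of containers.input.conf (Docker's JSON logs and CRI-style plain-text logs) are what the multi_format parser tries in order. The sketch below is illustrative Python, not part of the deployment; parse_container_line is a hypothetical helper that mirrors the two <pattern> entries, including the same CRI regular expression:

```python
import json
import re

# Same regex as the second <pattern> in containers.input.conf:
# "TIME STREAM TAG LOG" as produced by CRI runtimes.
CRI_RE = re.compile(r'^(?P<time>.+) (?P<stream>stdout|stderr) [^ ]* (?P<log>.*)$')

def parse_container_line(line):
    """Try Docker's JSON format first, then fall back to the CRI
    format, in the same order as the multi_format parser."""
    try:
        record = json.loads(line)
        if isinstance(record, dict) and 'log' in record:
            return record
    except ValueError:
        pass
    m = CRI_RE.match(line)
    return m.groupdict() if m else None

docker_line = '{"log":"Got request with path wombat\\n","stream":"stderr","time":"2014-09-25T21:15:03.499185026Z"}'
cri_line = '2016-02-17T00:04:05.931087621Z stdout F [info] Some log text here'

print(parse_container_line(docker_line)['stream'])  # stderr
print(parse_container_line(cri_line)['log'])        # [info] Some log text here
```

Either way, the parsed record ends up with time, stream, and log fields before the Kubernetes metadata filter enriches it.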
fluentd-es-ds.yaml: the manifests that deploy Fluentd (ServiceAccount, RBAC, and DaemonSet).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: efk-log
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "namespaces"
  - "pods"
  verbs:
  - "get"
  - "watch"
  - "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: fluentd-es
  namespace: efk-log
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es-v2.4.0
  namespace: efk-log
  labels:
    k8s-app: fluentd-es
    version: v2.4.0
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-es
      version: v2.4.0
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
        version: v2.4.0
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#40573).
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-node-critical
      serviceAccountName: fluentd-es
      containers:
      - name: fluentd-es
        #image: k8s.gcr.io/fluentd-elasticsearch:v2.4.0
        image: mirrorgooglecontainers/fluentd-elasticsearch:v2.4.0
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent/config.d
      #nodeSelector:
      #  beta.kubernetes.io/fluentd-ds-ready: "true"
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config-volume
        configMap:
          name: fluentd-es-config-v0.2.0
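With both files saved, deployment is the usual apply-and-verify cycle. A sketch of the commands, assuming the namespace efk-log does not exist yet and using the Elasticsearch address 10.10.5.78:9200 from output.conf (change it to your own):

```shell
# Create the namespace referenced by both manifests, then apply them.
kubectl create namespace efk-log
kubectl apply -f fluentd-es-configmap.yaml
kubectl apply -f fluentd-es-ds.yaml

# The DaemonSet should start one fluentd-es pod per node.
kubectl -n efk-log get daemonset,pods -o wide

# Once the pods are Running, logstash_format true means daily
# logstash-YYYY.MM.DD indices should appear in Elasticsearch.
curl 'http://10.10.5.78:9200/_cat/indices/logstash-*?v'
```

If no indices show up, kubectl -n efk-log logs on one of the fluentd-es pods is the first place to look.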
Reprinted from: https://blog.51cto.com/liuzhengwei521/2406253