1. Architecture Diagram

2. CRD Overview
Fluent Operator defines two API groups, one for Fluent Bit and one for Fluentd: fluentbit.fluent.io and fluentd.fluent.io.

  • The fluentbit.fluent.io group contains the following 6 CRDs (a minimal sketch of how they fit together follows this list):
FluentBit                 Defines the Fluent Bit instance itself: image version, tolerations, affinity, and other parameters.
ClusterFluentBitConfig    Defines the Fluent Bit configuration; it selects the plugin CRDs below by label.
ClusterInput              Defines Fluent Bit input plugins, i.e. which logs to collect.
ClusterFilter             Defines Fluent Bit filter plugins, which filter and process the records Fluent Bit has collected.
ClusterParser             Defines Fluent Bit parser plugins, which parse log records into other formats.
ClusterOutput             Defines Fluent Bit output plugins, which forward the processed logs to their destinations.
  • The fluentd.fluent.io group contains the following 7 CRDs:
Fluentd                   Defines the Fluentd instance itself: image version, tolerations, affinity, and other parameters.
ClusterFluentdConfig      Defines the cluster-scoped Fluentd configuration.
FluentdConfig             Defines the namespace-scoped Fluentd configuration.
ClusterFilter             Defines cluster-scoped Fluentd filter plugins, which filter and process the records Fluentd receives; if Fluent Bit is installed, the logs can be processed one step further here.
Filter                    Defines namespace-scoped Fluentd filter plugins, which filter and process the records Fluentd receives; if Fluent Bit is installed, the logs can be processed one step further here.
ClusterOutput             Defines cluster-scoped Fluentd output plugins, which forward the processed logs to their destinations.
Output                    Defines namespace-scoped Fluentd output plugins, which forward the processed logs to their destinations.
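
A minimal sketch of how these objects are wired together, using hypothetical demo-* names (the field names and labels are taken from the full examples in section 4): a ClusterFluentBitConfig selects ClusterInput/ClusterFilter/ClusterOutput objects via label selectors, and a FluentBit object references the configuration by name through fluentBitConfigName.

apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterFluentBitConfig
metadata:
  name: demo-config
spec:
  inputSelector:
    matchLabels:
      fluentbit.fluent.io/enabled: "true"
  outputSelector:
    matchLabels:
      fluentbit.fluent.io/enabled: "true"
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterInput
metadata:
  name: demo-tail
  labels:
    fluentbit.fluent.io/enabled: "true"   # matched by the inputSelector above
spec:
  tail:
    tag: kube.*
    path: /var/log/containers/*.log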

3. Three Working Modes of Fluent Operator

Fluent Bit only mode       If you only need to collect logs, apply some light processing, and send them to the final destination, Fluent Bit alone is enough.
Fluentd only mode          If you need to receive logs over the network (e.g. via HTTP or syslog), process them, and send them to the final destination, Fluentd alone is enough.
Fluent Bit + Fluentd mode  If you also need more advanced processing of the collected logs, or need to send them to more sinks, combine Fluent Bit and Fluentd.
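
Which mode you get is controlled by the fluentbit.enable and fluentd.enable switches in the chart's values.yaml; a rough sketch, assuming the chart version installed in section 4 (the keys are the same ones edited there):

containerRuntime: containerd
Kubernetes: true
fluentbit:
  enable: true      # Fluent Bit only mode: collect, lightly process, and ship logs
fluentd:
  enable: false     # set this to true as well for the Fluent Bit + Fluentd mode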

4. Fluent Operator in Practice

Environment preparation

  • kubernetes 1.23.4 (containerd)
  • helm 3.8
  • elasticsearch 7.13.1
  • kibana 7.13.1
  • log-generator(nginx)

The scripts below create the log-generator (nginx), Elasticsearch, Kibana, and fluent-operator components.

  • Deploy log-generator (nginx)

mkdir -p /data/fluent-operator
cat > /data/fluent-operator/nginx.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
  annotations:
    fluentbit.io/parser: nginx        # use the nginx parser
#    fluentbit.io/exclude: "true"     # do not collect this pod's logs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - env:
        - name: TZ
          value: Asia/Shanghai
        name: nginx
        image: banzaicloud/log-generator:0.3.2
        ports:
        - containerPort: 80
EOF
kubectl apply -f /data/fluent-operator/nginx.yaml
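
Before wiring up the collectors, it can help to confirm that the log generator really emits nginx-style access-log lines; a quick check with plain kubectl:

kubectl rollout status deployment/nginx -n default
kubectl logs deployment/nginx -n default --tail=3    # expect nginx access-log formatted lines
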
  • Deploy Elasticsearch and Kibana
mkdir -p /var/lib/container/elasticsearch/data \
&& chmod 777 /var/lib/container/elasticsearch/data
kubectl create ns elasticsearch
cat > /data/fluent-operator/elasticsearch.yaml << 'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: elasticsearch-password
  namespace: elasticsearch
data:
  ES_PASSWORD: RWxhc3RpY3NlYXJjaDJPMjE=
  ES_USER: ZWxhc3RpYw==
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      volumes:
      - name: elasticsearch-data
        hostPath:
          path: /var/lib/container/elasticsearch/data
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      containers:
      - env:
        - name: TZ
          value: Asia/Shanghai
        - name: xpack.security.enabled
          value: "true"
        - name: discovery.type
          value: single-node
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
        - name: ELASTIC_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-password
              key: ES_PASSWORD
        name: elasticsearch
        image: elasticsearch:7.13.1
        imagePullPolicy: Always
        ports:
        - containerPort: 9200
        - containerPort: 9300
        resources:
          requests:
            memory: 1000Mi
            cpu: 200m
          limits:
            memory: 1000Mi
            cpu: 500m
        volumeMounts:
        - name: elasticsearch-data
          mountPath: /usr/share/elasticsearch/data
        - name: localtime
          mountPath: /etc/localtime
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch
  namespace: elasticsearch
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: 9200
  selector:
    app: elasticsearch
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: elasticsearch
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
#      nodeSelector:
#        es: log
      containers:
      - name: kibana
        image: kibana:7.13.1
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 1000m
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        - name: ELASTICSEARCH_USERNAME
          valueFrom:
            secretKeyRef:
              name: elasticsearch-password
              key: ES_USER
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-password
              key: ES_PASSWORD
        ports:
        - containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: elasticsearch
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
    nodePort: 5601
  type: NodePort
  selector:
    app: kibana
EOF
kubectl apply -f /data/fluent-operator/elasticsearch.yaml
  • Deploy fluent-operator with Helm
app=fluent-operator
version=v1.0.1
mkdir /data/$app
cd /data/$app
wget https://github.com/fluent/fluent-operator/releases/download/$version/fluent-operator.tgz
tar zxvf $app.tgz --strip-components 1 $app/values.yaml
cat > /data/$app/start.sh << EOF
helm upgrade --install --wait $app $app.tgz \
--create-namespace \
-f values.yaml \
-n $app
EOF

Edit the values.yaml configuration

containerRuntime: containerd
Kubernetes: true
fluentbit:
  enable: false     # do not install Fluent Bit automatically
fluentd:
  enable: false     # do not install Fluentd automatically
bash /data/fluent-operator/start.sh
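
After start.sh finishes, a quick sanity check that the operator and its CRDs are in place might look like this (plain kubectl):

kubectl get pods -n fluent-operator      # the fluent-operator controller pod should be Running
kubectl get crd | grep fluent.io         # the fluentbit.fluent.io and fluentd.fluent.io CRDs should be listed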

4.1 Fluent Bit only mode (hands-on 1)

The first example uses Fluent Bit to collect systemd (kubelet) logs and write them to Elasticsearch:

mkdir -p /data/fluent-operator/fluent-bit_only
cat > /data/fluent-operator/fluent-bit_only/fluentbit_systemd.yaml  << 'EOF'
apiVersion: fluentbit.fluent.io/v1alpha2
kind: FluentBit
metadata:
  name: fluent-bit-only
  namespace: fluent-operator
  labels:
    app.kubernetes.io/name: fluent-bit
spec:
  image: kubesphere/fluent-bit:v1.9.3
  positionDB:
    hostPath:
      path: /var/lib/fluent-bit/
  resources:
    requests:
      cpu: 10m
      memory: 25Mi
    limits:
      cpu: 500m
      memory: 200Mi
  fluentBitConfigName: fluent-bit-only-config
  tolerations:
  - operator: Exists
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterFluentBitConfig
metadata:
  name: fluent-bit-only-config
  labels:
    app.kubernetes.io/name: fluent-bit
spec:
  service:
    parsersFile: parsers.conf
  inputSelector:
    matchLabels:
      fluentbit.fluent.io/enabled: "true"
      fluentbit.fluent.io/mode: "fluentbit-only"
  filterSelector:
    matchLabels:
      fluentbit.fluent.io/enabled: "true"
      fluentbit.fluent.io/mode: "fluentbit-only"
  outputSelector:
    matchLabels:
      fluentbit.fluent.io/enabled: "true"
      fluentbit.fluent.io/mode: "fluentbit-only"
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterInput
metadata:
  name: systemd
  labels:
    fluentbit.fluent.io/enabled: "true"
    fluentbit.fluent.io/mode: "fluentbit-only"
spec:
  systemd:
    tag: service.kubelet
    path: /var/log/journal
    db: /fluent-bit/tail/systemd.db
#    systemdFilter:
#      - _SYSTEMD_UNIT=kubelet.service
#      - _SYSTEMD_UNIT=containerd.service
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterFilter
metadata:
  name: systemd
  labels:
    fluentbit.fluent.io/enabled: "true"
    fluentbit.fluent.io/mode: "fluentbit-only"
spec:
  match: service.*
  filters:
  - lua:
      script:
        key: systemd.lua
        name: fluent-bit-lua
      call: add_time
      timeAsTable: true
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: operator
    app.kubernetes.io/name: fluent-bit-lua
  name: fluent-bit-lua
  namespace: fluent-operator
data:
  systemd.lua: |
    function add_time(tag, timestamp, record)
      new_record = {}
      timeStr = os.date("!*t", timestamp["sec"])
      t = string.format("%4d-%02d-%02dT%02d:%02d:%02d.%sZ",
        timeStr["year"], timeStr["month"], timeStr["day"],
        timeStr["hour"], timeStr["min"], timeStr["sec"],
        timestamp["nsec"])
      kubernetes = {}
      kubernetes["pod_name"] = record["_HOSTNAME"]
      kubernetes["container_name"] = record["SYSLOG_IDENTIFIER"]
      kubernetes["namespace_name"] = "kube-system"
      new_record["time"] = t
      new_record["log"] = record["MESSAGE"]
      new_record["kubernetes"] = kubernetes
      return 1, timestamp, new_record
    end
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterOutput
metadata:
  name: es-systemd
  labels:
    fluentbit.fluent.io/enabled: "true"
    fluentbit.fluent.io/mode: "fluentbit-only"
spec:
  matchRegex: (?:kube|service).(.*)
  es:
    host: elasticsearch.elasticsearch.svc
    port: 9200
    generateID: true
    logstashPrefix: fluent-log-fb-system
    logstashFormat: true
    timeKey: "@timestamp"
EOF
kubectl apply -f /data/fluent-operator/fluent-bit_only/fluentbit_systemd.yaml
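
Once applied, the operator should render a DaemonSet for the FluentBit object; a rough check (assuming the operator propagates the app.kubernetes.io/name=fluent-bit label from the FluentBit CR to the pods — adjust the selector if it does not):

kubectl get daemonset -n fluent-operator
kubectl get pods -n fluent-operator -l app.kubernetes.io/name=fluent-bit
kubectl logs -n fluent-operator -l app.kubernetes.io/name=fluent-bit --tail=20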

  • Use Fluent Bit to collect containerd application logs and ship them to Elasticsearch

mkdir -p /data/fluent-operator/fluent-bit_only/
cat > /data/fluent-operator/fluent-bit_only/fluentbit_containerd.yaml  << 'EOF'
apiVersion: fluentbit.fluent.io/v1alpha2
kind: FluentBit
metadata:
  name: fluent-bit
  namespace: fluent-operator
  labels:
    app.kubernetes.io/name: fluent-bit
spec:
  image: kubesphere/fluent-bit:v1.9.3
  positionDB:
    hostPath:
      path: /var/lib/fluent-bit/
  resources:
    requests:
      cpu: 10m
      memory: 25Mi
    limits:
      cpu: 500m
      memory: 200Mi
  fluentBitConfigName: fluent-bit-config
  tolerations:
  - operator: Exists
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterFluentBitConfig
metadata:
  name: fluent-bit-config
  labels:
    app.kubernetes.io/name: fluent-bit
spec:
  service:
    parsersFile: parsers.conf
  inputSelector:
    matchLabels:
      fluentbit.fluent.io/enabled: "true"
      fluentbit.fluent.io/mode: "k8s"
  filterSelector:
    matchLabels:
      fluentbit.fluent.io/enabled: "true"
      fluentbit.fluent.io/mode: "k8s"
  outputSelector:
    matchLabels:
      fluentbit.fluent.io/enabled: "true"
      fluentbit.fluent.io/mode: "k8s"
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterInput
metadata:
  name: tail
  labels:
    fluentbit.fluent.io/enabled: "true"
    fluentbit.fluent.io/mode: "k8s"
spec:
  tail:
    tag: kube.*
    path: /var/log/containers/*.log
    parser: cri
    refreshIntervalSeconds: 10
    memBufLimit: 5MB
    skipLongLines: true
    db: /fluent-bit/tail/pos.db
    dockerMode: false
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterFilter
metadata:
  name: kubernetes
  labels:
    fluentbit.fluent.io/enabled: "true"
    fluentbit.fluent.io/mode: "k8s"
spec:
  match: kube.*
  filters:
  - kubernetes:
      kubeURL: https://kubernetes.default.svc:443
      kubeCAFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      kubeTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      tlsVerify: false
      bufferSize: 5MB
      k8sLoggingParser: true    # allow pods to suggest a predefined parser via Kubernetes annotations (default: Off)
      k8sLoggingExclude: true   # allow pods to exclude their logs from the log processor via Kubernetes annotations (default: Off)
#      labels: false            # include Kubernetes resource labels in the extra metadata
#      annotations: false       # include Kubernetes resource annotations in the extra metadata
#      mergeLog: true           # when the log field is itself JSON, append its map fields to the record
#      keepLog: true
#      mergeLogTrim: true       # when Merge_Log is enabled, trim trailing \n or \r from field values
  # lift the kubernetes block and prefix its keys with kubernetes_
  - nest:
      operation: lift
      nestedUnder: kubernetes
      addPrefix: kubernetes_
  # keep only records whose kubernetes_container_name matches traefik or fluent-bit
  - grep:
      regex: kubernetes_container_name (traefik|fluent-bit)
  # remove the following keys
  - modify:
      rules:
      - remove: stream
      - remove: kubernetes_pod_id
      - remove: kubernetes_host
      - remove: kubernetes_container_hash
  # nest the kubernetes_* keys back under kubernetes and strip the prefix
  - nest:
      operation: nest
      wildcard:
      - kubernetes_*
      nestUnder: kubernetes
      removePrefix: kubernetes_
  # further process nginx/JSON logs by re-parsing the message field
  - parser:
      keyName: message
      parser: nginx,json
      preserveKey: false  # do not keep the original message key after parsing
      reserveData: true   # keep all original fields, including those present before the parser ran
  - modify:
      rules:
      - remove: logtag
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterOutput
metadata:
  name: es
  labels:
    fluentbit.fluent.io/enabled: "true"
    fluentbit.fluent.io/mode: "k8s"
spec:
  matchRegex: (?:kube|service)\.(.*)
#  stdout: {}
  es:
    host: elasticsearch.elasticsearch.svc
    port: 9200
    httpUser:
      valueFrom:
        secretKeyRef:
          name: elasticsearch-password
          key: ES_USER
    httpPassword:
      valueFrom:
        secretKeyRef:
          name: elasticsearch-password
          key: ES_PASSWORD
    generateID: true
    logstashPrefix: fluent-log-fb-containerd
    logstashFormat: true
    timeKey: "@timestamp"
---
apiVersion: v1
kind: Secret
metadata:
  name: elasticsearch-password
  namespace: fluent-operator
data:
  ES_PASSWORD: RWxhc3RpY3NlYXJjaDJPMjE=
  ES_USER: ZWxhc3RpYw==
type: Opaque
EOF
kubectl apply -f /data/fluent-operator/fluent-bit_only/fluentbit_containerd.yaml
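
To confirm that records are reaching Elasticsearch, one option is to port-forward the Elasticsearch service and list the indices created by the ClusterOutputs above (fluent-log-fb-system and fluent-log-fb-containerd), reading the credentials back from the Secret:

kubectl port-forward -n elasticsearch svc/elasticsearch 9200:9200 &
ES_USER=$(kubectl get secret -n elasticsearch elasticsearch-password -o jsonpath='{.data.ES_USER}' | base64 -d)
ES_PASS=$(kubectl get secret -n elasticsearch elasticsearch-password -o jsonpath='{.data.ES_PASSWORD}' | base64 -d)
curl -s -u "$ES_USER:$ES_PASS" "http://127.0.0.1:9200/_cat/indices/fluent-log-*?v"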

4.2 Fluent Bit + Fluentd mode (hands-on 2)

  • Deploy Fluent Bit
mkdir -p /data/fluent-operator/fluentbit-fluentd
cat > /data/fluent-operator/fluentbit-fluentd/fluentbit.yaml << 'EOF'
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterFluentBitConfig
metadata:
  name: fluent-bit-config
  labels:
    app.kubernetes.io/name: fluent-bit
spec:
  service:
    parsersFile: parsers.conf
  inputSelector:
    matchLabels:
      fluentbit.fluent.io/enabled: "true"
      fluentbit.fluent.io/mode: "fluentbit"
  filterSelector:
    matchLabels:
      fluentbit.fluent.io/enabled: "true"
      fluentbit.fluent.io/mode: "fluentbit"
  outputSelector:
    matchLabels:
      fluentbit.fluent.io/enabled: "true"
      fluentbit.fluent.io/mode: "fluentbit"
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: FluentBit
metadata:
  name: fluent-bit
  namespace: fluent-operator
  labels:
    app.kubernetes.io/name: fluent-bit
spec:
  image: kubesphere/fluent-bit:v1.9.3
  positionDB:
    hostPath:
      path: /var/lib/fluent-bit/
  resources:
    requests:
      cpu: 10m
      memory: 25Mi
    limits:
      cpu: 500m
      memory: 200Mi
  fluentBitConfigName: fluent-bit-config
  tolerations:
  - operator: Exists
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterInput
metadata:
  name: clusterinput-fluentbit-systemd
  labels:
    fluentbit.fluent.io/enabled: "true"
    fluentbit.fluent.io/mode: "fluentbit"
spec:
  systemd:
    tag: service.kubelet
    path: /var/log/journal
    db: /fluent-bit/tail/systemd.db
    systemdFilter:
    - _SYSTEMD_UNIT=kubelet.service
    - _SYSTEMD_UNIT=containerd.service
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterFilter
metadata:
  name: clusterfilter-fluentbit-systemd
  labels:
    fluentbit.fluent.io/enabled: "true"
    fluentbit.fluent.io/mode: "fluentbit"
spec:
  match: service.*
  filters:
  - lua:
      script:
        key: systemd.lua
        name: fluent-bit-lua
      call: add_time
      timeAsTable: true
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: operator
    app.kubernetes.io/name: fluent-bit-lua
  name: fluent-bit-lua
  namespace: fluent-operator
data:
  systemd.lua: |
    function add_time(tag, timestamp, record)
      new_record = {}
      timeStr = os.date("!*t", timestamp["sec"])
      t = string.format("%4d-%02d-%02dT%02d:%02d:%02d.%sZ",
        timeStr["year"], timeStr["month"], timeStr["day"],
        timeStr["hour"], timeStr["min"], timeStr["sec"],
        timestamp["nsec"])
      kubernetes = {}
      kubernetes["pod_name"] = record["_HOSTNAME"]
      kubernetes["container_name"] = record["SYSLOG_IDENTIFIER"]
      kubernetes["namespace_name"] = "kube-system"
      new_record["time"] = t
      new_record["log"] = record["MESSAGE"]
      new_record["kubernetes"] = kubernetes
      return 1, timestamp, new_record
    end
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterOutput
metadata:
  name: fluentd
  labels:
    fluentbit.fluent.io/enabled: "true"
    fluentbit.fluent.io/mode: "fluentbit"
spec:
  matchRegex: (?:kube|service)\.(.*)
  forward:
    host: fluentd.fluent-operator.svc
    port: 24224
EOF
kubectl apply -f /data/fluent-operator/fluentbit-fluentd/fluentbit.yaml
  • Deploy Fluentd
cat > /data/fluent-operator/fluentbit-fluentd/fluentd.yaml << 'EOF'
apiVersion: fluentd.fluent.io/v1alpha1
kind: Fluentd
metadata:
  name: fluentd
  namespace: fluent-operator
  labels:
    app.kubernetes.io/name: fluentd
spec:
  globalInputs:
  - forward:
      bind: 0.0.0.0
      port: 24224
  replicas: 1
  image: kubesphere/fluentd:v1.14.4
  fluentdCfgSelector:
    matchLabels:
      config.fluentd.fluent.io/enabled: "true"
  buffer:
    hostPath:
      path: "/var/log/fluentd-buffer"
---
apiVersion: fluentd.fluent.io/v1alpha1
kind: ClusterFluentdConfig
metadata:
  name: cluster-fluentd-config
  labels:
    config.fluentd.fluent.io/enabled: "true"
spec:
  watchedNamespaces:
  - kube-system
  - default
  clusterFilterSelector:
    matchLabels:
      filter.fluentd.fluent.io/type: "buffer"
      filter.fluentd.fluent.io/enabled: "true"
  clusterOutputSelector:
    matchLabels:
      output.fluentd.fluent.io/scope: "cluster"
      output.fluentd.fluent.io/enabled: "true"
---
apiVersion: fluentd.fluent.io/v1alpha1
kind: ClusterFilter
metadata:
  name: cluster-fluentd-filter-buffer
  labels:
    filter.fluentd.fluent.io/type: "buffer"
    filter.fluentd.fluent.io/enabled: "true"
spec:
  filters:
  - recordTransformer:
      enableRuby: true
      records:
      - key: kubernetes_ns
        value: ${record["kubernetes"]["namespace_name"]}
---
apiVersion: fluentd.fluent.io/v1alpha1
kind: ClusterOutput
metadata:
  name: cluster-fluentd-output-es
  labels:
    output.fluentd.fluent.io/scope: "cluster"
    output.fluentd.fluent.io/enabled: "true"
spec:
  outputs:
  - elasticsearch:
      host: elasticsearch.elasticsearch.svc
      port: 9200
      logstashFormat: true
      logstashPrefix: fluent-log-cluster-fd
    buffer:
      type: file
      path: /buffers/es_buffer   # the node directory /var/log/fluentd-buffer must be writable (e.g. chmod 777 /var/log/fluentd-buffer), otherwise Fluentd reports a permission error
EOF
kubectl apply -f /data/fluent-operator/fluentbit-fluentd/fluentd.yaml
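
Because the Fluentd buffer is backed by a hostPath at /var/log/fluentd-buffer, that directory has to exist and be writable on whichever node runs the Fluentd pod (as the comment above notes); a sketch of the node-side preparation plus a basic health check (the pod label selector is an assumption — adjust it if the operator labels the pods differently):

# on the node(s) that may host the Fluentd pod
mkdir -p /var/log/fluentd-buffer && chmod 777 /var/log/fluentd-buffer

# back on the workstation
kubectl get pods -n fluent-operator -l app.kubernetes.io/name=fluentd
kubectl logs -n fluent-operator -l app.kubernetes.io/name=fluentd --tail=20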

4.3 Ingesting nginx and JSON logs with Fluent Bit (still has unresolved issues)


apiVersion: fluentbit.fluent.io/v1alpha2
kind: FluentBit
metadata:
  name: fluent-bit-nginx
  namespace: fluent-operator
  labels:
    app.kubernetes.io/name: fluent-bit-nginx
spec:
  image: kubesphere/fluent-bit:v1.9.3
  positionDB:
    hostPath:
      path: /var/lib/fluent-bit-nginx/
  resources:
    requests:
      cpu: 10m
      memory: 25Mi
    limits:
      cpu: 500m
      memory: 200Mi
  fluentBitConfigName: fluent-bit-config-nginx
  tolerations:
  - operator: Exists
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterFluentBitConfig
metadata:
  name: fluent-bit-config-nginx
  labels:
    app.kubernetes.io/name: fluent-bit-nginx
spec:
  service:
    parsersFile: parsers.conf
  inputSelector:
    matchLabels:
      fluentbit.fluent.io/enabled: "true"
      fluentbit.fluent.io/mode: "nginx"
  filterSelector:
    matchLabels:
      fluentbit.fluent.io/enabled: "true"
      fluentbit.fluent.io/mode: "nginx"
  outputSelector:
    matchLabels:
      fluentbit.fluent.io/enabled: "true"
      fluentbit.fluent.io/mode: "nginx"
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterInput
metadata:
  name: tail-nginx
  labels:
    fluentbit.fluent.io/enabled: "true"
    fluentbit.fluent.io/mode: "nginx"
spec:
  tail:
    tag: nginx.*
    path: /var/log/containers/*.log
    parser: cri
    refreshIntervalSeconds: 10
    memBufLimit: 5MB
    skipLongLines: true
    db: /fluent-bit/tail/pos.db
    dockerMode: false
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterFilter
metadata:
  name: kubernetes-nginx
  labels:
    fluentbit.fluent.io/enabled: "true"
    fluentbit.fluent.io/mode: "nginx"
spec:
  match: nginx.*
  filters:
  - kubernetes:
      kubeURL: https://kubernetes.default.svc:443
      kubeCAFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      kubeTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      labels: false
      annotations: false
      tlsVerify: false
#      keepLog: false
#      mergeLogTrim: true
  - parser:
      keyName: message
      parser: nginx,json
      preserveKey: true   # keep the original message key
      reserveData: true   # keep all original fields, including those present before the parser ran
  - nest:
      operation: lift
      nestedUnder: kubernetes
      addPrefix: kubernetes_
  - modify:
      rules:
      - remove: stream
#      - remove: kubernetes_pod_id
#      - remove: kubernetes_host
#      - remove: kubernetes_container_hash
  - nest:
      operation: nest
      wildcard:
      - kubernetes_*
      nestUnder: kubernetes
      removePrefix: kubernetes_
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterOutput
metadata:
  name: es-nginx
  labels:
    fluentbit.fluent.io/enabled: "true"
    fluentbit.fluent.io/mode: "nginx"
spec:
  matchRegex: (?:kube|service|nginx).(.*)
  es:
    host: elasticsearch.elasticsearch.svc
    port: 9200
    generateID: true
    logstashPrefix: fluent-log-fb-containerd-nginx
    logstashFormat: true
    timeKey: "@timestamp"
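
These manifests can be saved to a file and applied like the earlier examples (the file path below is just an illustrative choice). Since this setup still has open issues, a useful first debugging step is to dump the configuration the operator rendered for it, following the same approach as section 5; the secret name fluent-bit-config-nginx assumes the operator names the secret after the ClusterFluentBitConfig, as it does for fluent-bit-config in section 5.1:

kubectl apply -f /data/fluent-operator/fluentbit_nginx.yaml    # a file containing the manifests above
kubectl get secrets -n fluent-operator fluent-bit-config-nginx -o json | jq '.data."fluent-bit.conf"' | xargs echo | base64 --decode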

5. Troubleshooting

  • 5.1 Retrieve the generated fluent-bit.conf
# install jq
rpm -ivh http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install jq -y

# dump the rendered fluent-bit.conf
kubectl get secrets -n fluent-operator fluent-bit-config -o json | jq '.data."fluent-bit.conf"' | xargs echo | base64 --decode

[Service]
    Parsers_File    parsers.conf
[Input]
    Name    systemd
    Path    /var/log/journal
    DB    /fluent-bit/tail/systemd.db
    Tag    service.kubelet
    Systemd_Filter    _SYSTEMD_UNIT=kubelet.service
    Systemd_Filter    _SYSTEMD_UNIT=containerd.service
[Filter]
    Name    lua
    Match    service.*
    script    /fluent-bit/config/systemd.lua
    call    add_time
    time_as_table    true
[Output]
    Name    forward
    Match_Regex    (?:kube|service)\.(.*)
    Host    fluentd.fluent-operator.svc
    Port    24224
  • 5.2 Retrieve the generated Fluentd configuration (app.conf)
# dump the rendered app.conf
kubectl get secrets -n fluent-operator fluentd-config -o json | jq '.data."app.conf"' | xargs echo | base64 --decode

<source>
  @type  forward
  bind  0.0.0.0
  port  24224
</source>
<match **>
  @id  main
  @type  label_router
  <route>
    @label  @cc1e154ba6a75c2de510ede5385b61da
    <match>
      namespaces  default,kube-system
    </match>
  </route>
</match>
<label @cc1e154ba6a75c2de510ede5385b61da>
  <match **>
    @id  ClusterFluentdConfig-cluster-cluster-fluentd-config::cluster::clusteroutput::cluster-fluentd-output-es-0
    @type  elasticsearch
    host  elasticsearch.elasticsearch.svc
    logstash_format  true
    logstash_prefix  fluent-log-cluster-fd
    port  9200
  </match>
</label>

References:
https://blog.csdn.net/easylife206/article/details/124507103
https://lequ7.com/guan-yu-yun-ji-suan-fluentoperator-yun-yuan-sheng-ri-zhi-guan-li-de-yi-ba-rui-shi-jun-dao.html
https://github.com/fluent/fluent-operator/tree/master/docs
Parser processing flow: https://www.jianshu.com/p/0ac6e8091d5a
Fluent Bit modify/nest/kubernetes filter examples: https://zhuanlan.zhihu.com/p/425167977
