es-logstash-kibana-filebeat: ELK Log Analysis
Introduction to Elasticsearch
What Elasticsearch does
- Distributed search and analytics engine
- Full-text search, structured search, and data analytics
- Near-real-time processing of massive data sets
Typical Elasticsearch use cases
- Full-text search with highlighting and search suggestions
- News sites, user logs + social network data analysis
- GitHub (open-source code hosting): searching hundreds of millions of lines of code; e-commerce sites: product search
- Log analytics: Logstash collects the logs, Elasticsearch runs complex analysis on them
- BI (business intelligence) systems: Elasticsearch performs the data analysis and mining, Kibana handles the visualization, e.g. analyzing consumer spending trends and the composition of user groups
Characteristics of Elasticsearch
- Large distributed clusters (hundreds of servers) handling PB-scale data for big companies, yet it also runs on a single machine for small ones
- Its uniqueness comes from combining full-text search (built on Lucene), data analytics, and distributed technology in a single system
Core concepts of Elasticsearch
Near Realtime (NRT): there is a small delay (roughly one second) between writing data and it becoming searchable
Cluster: a group of nodes sharing the same configured cluster name
Node: a single server in the cluster. Each node has a name (random by default), and the name matters when doing operations and maintenance. By default a node joins the cluster whose name it is configured with; start a group of nodes with the same cluster name and they form one cluster automatically, and a single node can form a cluster on its own
Index: comparable to a database in an RDBMS
Type: comparable to a table
Document: comparable to a row. The smallest unit of data in ES; a document can be a customer record, a product-category record, an order record, etc., usually represented as JSON
Field: comparable to a column
Shard: a slice of an index's data; 5 primary shards per index by default (note: since ES 7.0 the default is 1)
Replica: a copy of a shard; 1 replica per shard by default
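Why is the shard count fixed at index creation? Because ES routes each document to a shard by hashing its id modulo the number of shards. A minimal Python sketch of that idea (ES actually uses Murmur3; the hash function here is a simplified stand-in):

```python
def hash_key(key: str) -> int:
    # Simple deterministic string hash; ES itself uses Murmur3.
    h = 0
    for ch in key:
        h = (h * 31 + ord(ch)) & 0xFFFFFFFF
    return h

def route(doc_id: str, number_of_shards: int) -> int:
    """Pick the shard a document lands on: hash(id) % number_of_shards."""
    return hash_key(doc_id) % number_of_shards

# The same id always maps to the same shard of the same index...
assert route("order-1001", 5) == route("order-1001", 5)
# ...which is exactly why the shard count cannot change after creation:
# a different modulus would move existing documents to other shards.
print(route("order-1001", 5))
```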
Building the ELK stack
Features
- Centralized querying and management of distributed log data
- Application + system/hardware monitoring
- Troubleshooting
- Security information and event management (SIEM)
- Reporting
Installing the base environment
Uninstall the system's bundled JDK if present, then install the JDK:
tar -xvf jdk-11.0.7_linux-x64_bin.tar.gz -C /data/
vim /etc/profile      # add JAVA_HOME and PATH here
source /etc/profile
# Generate a trimmed JRE if needed:
./bin/jlink --module-path jmods --add-modules java.desktop --output jre
# Verify:
java -version
Installing Node.js
The head plugin is written in Node.js, so this runtime is required:
tar -xvf node-v12.14.1-linux-x64.tar.gz -C /data/
vim /etc/profile
export NODE_HOME=/data/node-v12.14.1-linux-x64
export PATH=$NODE_HOME/bin:$PATH
export NODE_PATH=$NODE_HOME/lib/node_modules:$PATH
source /etc/profile
# Verify:
node -v
Initial kernel and limit tuning
vim /etc/security/limits.conf
* soft nofile 655360
* hard nofile 655360
root soft nofile 655360
root hard nofile 655360
* soft core unlimited
* hard core unlimited
root soft core unlimited
elk soft memlock unlimited
elk hard memlock unlimited

vi /etc/sysctl.conf
# Required so bootstrap.memory_lock: true does not fail at startup
vm.swappiness=0
vm.max_map_count=262144
sysctl -p
Installing the core components
Install Elasticsearch: handles log indexing and storage
Create the elk user:
useradd elk
Install Elasticsearch:
tar -xvf elasticsearch-7.7.0-linux-x86_64.tar.gz -C /data/
mkdir /data/elasticsearch-7.7.0/data
chown -R elk:elk /data/elasticsearch-7.7.0/
Edit the configuration
vim /data/elasticsearch-7.7.0/config/jvm.options
# Heap size (adjust to your workload; no more than 32 GB, usually half of the server's RAM)
-Xms1g
-Xmx1g

vim /data/elasticsearch-7.7.0/config/elasticsearch.yml
# Cluster name; must match on every node of the same cluster
cluster.name: elk
# Node name
node.name: "es-<last octet of node ip>"
# Data path
path.data: /data/elasticsearch-7.7.0/data
# Log path
path.logs: /data/elasticsearch-7.7.0/logs
# Memory locking; true fails to start until the OS tuning above is in place
# (enable true in production once the system is tuned)
bootstrap.memory_lock: true
# Listen address
network.host: <node ip>
# HTTP port
http.port: 9200
# Initial host list
discovery.seed_hosts: ["es1ip", "es2ip", "es3ip"]
cluster.initial_master_nodes: ["es-ip尾号1", "es-ip尾号2", "es-ip尾号3"]
# Transport port
transport.tcp.port: 9300
# Allow the head plugin to access ES
http.cors.enabled: true
http.cors.allow-origin: "*"
Start and test
su - elk -c "/data/elasticsearch-7.7.0/bin/elasticsearch -d"
# Check the log for a clean start
tail -f /data/elasticsearch-7.7.0/logs/elk.log
Verify
# Cluster health
curl http://<node ip>:9200/_cluster/health?pretty
# Current master node
curl http://<any node ip>:9200/_cat/master?v
# Full cluster state
curl http://<any node ip>:9200/_cluster/state?pretty
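The health endpoint returns JSON whose `status` field is green, yellow, or red. A small Python sketch that classifies such a response (the sample body below is hypothetical, not captured from a real cluster; field names follow the ES API):

```python
import json

# Hypothetical sample of a _cluster/health response body.
sample = '''{
  "cluster_name": "elk",
  "status": "yellow",
  "number_of_nodes": 3,
  "active_primary_shards": 5,
  "unassigned_shards": 5
}'''

def health_summary(body: str) -> str:
    """Turn a _cluster/health JSON body into a one-line verdict."""
    h = json.loads(body)
    verdict = {
        "green": "all primary and replica shards allocated",
        "yellow": "all primaries allocated, some replicas are not",
        "red": "at least one primary shard is unassigned",
    }[h["status"]]
    return f'{h["cluster_name"]}: {h["status"]} ({verdict})'

print(health_summary(sample))
```

Yellow is common on a single-node cluster: with nowhere to place replicas, every replica shard stays unassigned.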
Service startup configuration
mkdir /data/elasticsearch-7.7.0/run
touch /data/elasticsearch-7.7.0/run/elasticsearch.pid
chown -R elk:elk /data/elasticsearch-7.7.0/
vi /etc/sysconfig/elasticsearch
################################
# Elasticsearch
################################
# Elasticsearch home directory
#ES_HOME=/usr/share/elasticsearch
ES_HOME=/data/elasticsearch-7.7.0
# Elasticsearch Java path
#JAVA_HOME=/opt/jdk-11.0.7
#CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/jre/lib
JAVA_HOME=/data/jdk-11.0.7
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/jre/lib
# Elasticsearch configuration directory
#ES_PATH_CONF=/etc/elasticsearch
ES_PATH_CONF=/data/elasticsearch-7.7.0/config
# Elasticsearch PID directory
#PID_DIR=/var/run/elasticsearch
PID_DIR=/data/elasticsearch-7.7.0/run
# Additional Java OPTS
#ES_JAVA_OPTS=
# Configure restart on package upgrade (true, every other setting will lead to not restarting)
#RESTART_ON_UPGRADE=true
################################
# Elasticsearch service
################################
# SysV init.d
#
# The number of seconds to wait before checking if Elasticsearch started successfully as a daemon process
ES_STARTUP_SLEEP_TIME=5
################################
# System properties
################################
# Specifies the maximum file descriptor number that can be opened by this process
# When using Systemd, this setting is ignored and the LimitNOFILE defined in
# /usr/lib/systemd/system/elasticsearch.service takes precedence
#MAX_OPEN_FILES=65535
# The maximum number of bytes of memory that may be locked into RAM
# Set to "unlimited" if you use the 'bootstrap.memory_lock: true' option
# in elasticsearch.yml.
# When using systemd, LimitMEMLOCK must be set in a unit file such as
# /etc/systemd/system/elasticsearch.service.d/override.conf.
#MAX_LOCKED_MEMORY=unlimited
# Maximum number of VMA (Virtual Memory Areas) a process can own
# When using Systemd, this setting is ignored and the 'vm.max_map_count'
# property is set at boot time in /usr/lib/sysctl.d/elasticsearch.conf
#MAX_MAP_COUNT=262144

vi /usr/lib/systemd/system/elasticsearch.service
[Unit]
Description=Elasticsearch
Documentation=http://www.elastic.co
Wants=network-online.target
After=network-online.target

[Service]
RuntimeDirectory=elasticsearch
PrivateTmp=true
Environment=ES_HOME=/data/elasticsearch-7.7.0
Environment=ES_PATH_CONF=/data/elasticsearch-7.7.0/config
Environment=PID_DIR=/data/elasticsearch-7.7.0/run
EnvironmentFile=-/etc/sysconfig/elasticsearch
WorkingDirectory=/data/elasticsearch-7.7.0/data
User=elk
Group=elk
ExecStart=/data/elasticsearch-7.7.0/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet
# StandardOutput is configured to redirect to journalctl since some error
# messages may be logged in standard output before the elasticsearch logging
# system is initialized. To also enable journalctl logging, remove the
# "--quiet" option from ExecStart.
StandardOutput=journal
StandardError=inherit
# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65535
# Specifies the maximum number of processes
LimitNPROC=4096
# Specifies the maximum size of virtual memory
LimitAS=infinity
# Specifies the maximum file size
LimitFSIZE=infinity
# Disable timeout logic and wait until process is stopped
TimeoutStopSec=0
# SIGTERM signal is used to stop the Java process
KillSignal=SIGTERM
# Send the signal only to the JVM rather than its control group
KillMode=process
# Java process is never killed
SendSIGKILL=no
# When a JVM receives a SIGTERM signal it exits with code 143
SuccessExitStatus=143

[Install]
WantedBy=multi-user.target
# Start
chown -R elk:elk /data/elasticsearch-7.7.0
chmod +x /usr/lib/systemd/system/elasticsearch.service
# Kill the previously started elasticsearch process first
ps -ef | grep elasticsearch
kill -9 30794      # use the PID from the ps output
# Fix the bootstrap.memory_lock: true startup error
systemctl edit elasticsearch
[Service]
LimitMEMLOCK=infinity

systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch
netstat -tunpl | grep 9200
netstat -tunpl | grep 9300
Check
# Check the log for a clean start
tail -f /data/elasticsearch-7.7.0/logs/elk.log
# Cluster health
curl http://<node ip>:9200/_cluster/health?pretty
# Current master node
curl http://<any node ip>:9200/_cat/master?v
# Full cluster state
curl http://<any node ip>:9200/_cluster/state?pretty
Plugins
8.1 Head: shows cluster topology, index nodes, shards and distributed storage (install on one node)
Install
unzip -o -d /data/ master.zip
mv /data/elasticsearch-head-master/ /data/elasticsearch-head
cd /data/elasticsearch-head
npm install -g cnpm --registry=https://registry.npm.taobao.org
cnpm install -g grunt-cli
cnpm install -g grunt
cnpm install grunt-contrib-clean
cnpm install grunt-contrib-concat
cnpm install grunt-contrib-watch
cnpm install grunt-contrib-connect
cnpm install grunt-contrib-copy
cnpm install grunt-contrib-jasmine
# If a step errors out, just run it again
Edit the configuration
vim /data/elasticsearch-head/Gruntfile.js
# Find the connect property below and add hostname: '0.0.0.0'
connect: {
    server: {
        options: {
            hostname: '0.0.0.0',
            port: 9100,
            base: '.',
            keepalive: true
        }
    }
}
For convenience later, give head an init script:
vi /etc/init.d/elasticsearch-head
#!/bin/bash
#chkconfig: 2345 55 24
#description: elasticsearch-head service manager

data="cd /data/elasticsearch-head/ ; nohup /data/node-v12.14.1-linux-x64/bin/npm run start &"

START() {
    source /etc/profile
    eval ${data}
}

STOP() {
    ps -ef | grep grunt | grep -v "grep" | awk '{print $2}' | xargs kill -s 9 >/dev/null
}

case "$1" in
    start)
        START
        ;;
    stop)
        STOP
        ;;
    restart)
        STOP
        sleep 2
        START
        ;;
    *)
        echo "Usage: elasticsearch-head {start|stop|restart}"
        ;;
esac

chmod +x /etc/init.d/elasticsearch-head
Start
chkconfig elasticsearch-head on
service elasticsearch-head start
Visit: http://192.168.66.51:9100
8.2 X-Pack: an Elastic Stack extension that bundles security, alerting, monitoring, reporting and graph features in one easy-to-install package (install on one node; already bundled with 7.7.0, so no separate install is needed)
Install command:
bin/elasticsearch-plugin install x-pack
Verify X-Pack:
Open http://192.168.1.57:5601/ in a browser
8.3 Install the analysis-ik Chinese word-segmentation plugin (all nodes)
analysis-ik project releases:
https://github.com/medcl/elasticsearch-analysis-ik/releases
Install
mkdir plugins/analysis-ik
unzip -o -d analysis-ik/ elasticsearch-analysis-ik-7.7.0.zip
Dictionary configuration
Change into the Elasticsearch install directory and edit the dictionary config file:
vim plugins/analysis-ik/config/IKAnalyzer.cfg.xml
Restart ES:
systemctl restart elasticsearch
Segmentation test
curl -XPUT "http://192.168.10.163:9200/analysis-ik-cxk-test"
curl -XGET "http://192.168.10.163:9200/analysis-ik-cxk-test/_analyze" -H 'Content-Type: application/json' -d'
{ "text": "中华人民共和国MN", "tokenizer": "ik_max_word" }'
Result: the response lists the tokens produced by ik_max_word
8.4 Install the elasticsearch-analysis-pinyin plugin
Project download:
https://github.com/medcl/elasticsearch-analysis-pinyin
Install Kibana: handles log visualization (install on any one node)
Install
tar -xvf kibana-7.7.0-linux-x86_64.tar.gz -C /data/
mv /data/kibana-7.7.0-linux-x86_64/ /data/kibana-7.7.0
Edit the configuration:
Create the elk user:
useradd elk
mkdir /data/kibana-7.7.0/logs
touch /data/kibana-7.7.0/logs/kibana.log
chown -R elk:elk /data/kibana-7.7.0/
vi /data/kibana-7.7.0/config/kibana.yml
server.port: 5601
# Listen address
server.host: "0.0.0.0"
# ES cluster addresses
elasticsearch.hosts: ["http://es1ip:9200","http://es2ip:9200","http://es3ip:9200"]
# Log path
logging.dest: /data/kibana-7.7.0/logs/kibana.log
# Default index
kibana.index: ".kibana"
# Chinese UI
i18n.locale: "zh-CN"
Configure the startup defaults file:
vi /etc/default/kibana
user="elk"
group="elk"
chroot="/"
chdir="/"
nice=""
# If this is set to 1, then when `stop` is called, if the process has
# not exited within a reasonable time, SIGKILL will be sent next.
# The default behavior is to simply log a message "program stop failed; still running"
KILL_ON_STOP_TIMEOUT=0
vi /etc/systemd/system/kibana.service
[Unit]
Description=Kibana
StartLimitIntervalSec=30
StartLimitBurst=3

[Service]
Type=simple
User=elk
Group=elk
# Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
# Prefixing the path with '-' makes it try to load, but if the file doesn't
# exist, it continues onward.
EnvironmentFile=-/etc/default/kibana
EnvironmentFile=-/etc/sysconfig/kibana
ExecStart=/data/kibana-7.7.0/bin/kibana "-c /data/kibana-7.7.0/config/kibana.yml"
Restart=always
WorkingDirectory=/

[Install]
WantedBy=multi-user.target
启动
chown -R elk:elk /data/kibana-7.7.0/ systemctl daemon-reload systemctl start kibana.service systemctl status kibana.service ss -antup | grep 5601
访问:http://192.168.66.55:5601/
Install Logstash: handles log collection, parsing and processing
Requirements: at least 4 GB of RAM
Install
tar -xvf logstash-7.7.0.tar.gz -C /data/
Edit the configuration
vim /data/logstash-7.7.0/config/logstash.yml
http.host: "<node ip>"
http.port: 9600
path.data: /data/logstash-7.7.0/data
path.config: /data/logstash-7.7.0/conf.d/*.conf
path.logs: /data/logstash-7.7.0/logs
Configure the startup files
# Create the elk user:
useradd elk
mkdir /data/logstash-7.7.0/data
mkdir /data/logstash-7.7.0/conf.d
mkdir /data/logstash-7.7.0/run
touch /data/logstash-7.7.0/run/logstash.pid
mkdir /data/logstash-7.7.0/logs
touch /data/logstash-7.7.0/logs/gc.log
chown -R elk:elk /data/logstash-7.7.0
vim /etc/default/logstash
JAVA_HOME="/data/jdk-11.0.7"
LS_HOME="/data/logstash-7.7.0"
LS_SETTINGS_DIR="/data/logstash-7.7.0"
LS_PIDFILE="/data/logstash-7.7.0/run/logstash.pid"
LS_USER="elk"
LS_GROUP="elk"
LS_GC_LOG_FILE="/data/logstash-7.7.0/logs/gc.log"
LS_OPEN_FILES="16384"
LS_NICE="19"
SERVICE_NAME="logstash"
SERVICE_DESCRIPTION="logstash"
vi /etc/systemd/system/logstash.service
[Unit]
Description=logstash

[Service]
Type=simple
User=elk
Group=elk
# Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
# Prefixing the path with '-' makes it try to load, but if the file doesn't exist, it continues onward.
EnvironmentFile=-/etc/default/logstash
EnvironmentFile=-/etc/sysconfig/logstash
ExecStart=/data/logstash-7.7.0/bin/logstash "--path.settings" "/data/logstash-7.7.0/config" "--path.config" "/data/logstash-7.7.0/conf.d"
Restart=always
WorkingDirectory=/
Nice=19
LimitNOFILE=16384

[Install]
WantedBy=multi-user.target
Start:
systemctl daemon-reload
systemctl enable logstash
systemctl start logstash
netstat -tunpl | grep 9600
Client installation: the Filebeat agent ships web logs to the Logstash side for analysis
Install
tar -xvf filebeat-7.7.0-linux-x86_64.tar.gz -C /data/
mv /data/filebeat-7.7.0-linux-x86_64/ /data/filebeat-7.7.0
Edit the configuration:
vim /data/filebeat-7.7.0/filebeat.yml
# Note: Filebeat allows only one output at a time; the alternative output
# sections below are for reference, so keep only one of them uncommented.
###################### Inputs #########################
filebeat.inputs:
# Container input
- type: container
  # Which stream to read: all, stdout or stderr; defaults to "all"
  stream: all
  paths:
    - "/var/log/containers/*.log"
# Plain log input
- type: log
  # Enable this input
  enabled: false
  # Input file paths
  paths:
    - /var/log/*.log
    - /var/log/*.cxk
  # Tag the events
  tags: ["XXXXX"]
  # Lines to exclude
  #exclude_lines: ['^DBG']
  # Lines to include
  #include_lines: ['^ERR', '^WARN']
  # Files to exclude
  #exclude_files: ['.gz$']
  ### JSON parsing support
  #json.keys_under_root: true
  #json.overwrite_keys: true
  ### Multiline options, for log entries that span several lines
  # Match lines starting with [
  #multiline.pattern: ^\[
  # Whether the pattern set is negated; defaults to false
  #multiline.negate: true
  # "after" or "before"; in Logstash terms, after == previous and before == next
  #multiline.match: after
###################### Filebeat modules #########################
filebeat.config.modules:
  # Path to load module configs from
  path: /codebackup/code/filebeat/modules.d/*.yml
  # Enable reloading
  reload.enabled: false
  # Reload check interval
  reload.period: 10s
###################### ES template settings #########################
# Template name
setup.template.name: "nginx"
# Indices this template applies to
setup.template.pattern: "nginx-*"
# Do not use the built-in template
setup.template.enabled: false
# Overwrite an existing template
setup.template.overwrite: true
setup.template.settings:
  # Index shard and replica counts
  index.number_of_shards: 1
  #index.number_of_replicas: 1
  #index.codec: best_compression
  #_source.enabled: false
###################### Elasticsearch output #########################
output.elasticsearch:
  # ES cluster addresses
  hosts: ["192.168.10.163:9200", "192.168.10.181:9200", "192.168.10.228:9200"]
  indices:
    # Custom index names, one index per month
    - index: "nginx-access-%{[agent.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "access"
    - index: "nginx-error-%{[agent.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "error"
  # e.g. https://10.45.3.2:9220/elasticsearch
  #path: /elasticsearch
  # Protocol: http (default) or https
  #protocol: "https"
  # API key authentication
  #api_key: "id:api_key"
  # Basic authentication
  #username: "elastic"
  #password: "changeme"
  # PKI certificate authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  #ssl.key: "/etc/pki/client/cert.key"
###################### Logstash output #########################
#output.logstash:
  # Logstash hosts
  #hosts: ["localhost:5044", "localhost:5045"]
  # With several Logstash hosts, published events are load-balanced across
  # all of them; defaults to false
  #loadbalance: true
  # Root certificates for HTTPS server verification
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # SSL client authentication certificate
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client certificate key
  #ssl.key: "/etc/pki/client/cert.key"
###################### Kafka output #########################
output.kafka:
  # Initial brokers for cluster metadata
  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]
  # Topic selection + partitioning
  topic: "logs-%{[agent.version]}"
  topics:
    - topic: "critical-%{[agent.version]}"
      when.contains:
        message: "CRITICAL"
    - topic: "error-%{[agent.version]}"
      when.contains:
        message: "ERR"
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  # Events larger than this many bytes are dropped
  max_message_bytes: 1000000
###################### Redis output #########################
output.redis:
  hosts: ["localhost"]
  password: "my_password"
  key: "default_list"
  keys:
    # Send to info_list when the message contains INFO
    - key: "info_list"
      when.contains:
        message: "INFO"
    # Send to debug_list when the message contains DEBUG
    - key: "debug_list"
      when.contains:
        message: "DEBUG"
    - key: "%{[fields.list]}"
      mappings:
        http: "frontend_list"
        nginx: "frontend_list"
        mysql: "backend_list"
  # Redis database number to publish events to; defaults to 0
  db: 0
  timeout: 5
###################### Processors #########################
# Processors enrich or manipulate the events Filebeat produces
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
###################### Logging #########################
# Log level: error, warning, info, debug
#logging.level: debug
###################### X-Pack monitoring #########################
# Enable the monitoring reporter
#monitoring.enabled: false
# UUID of the Elasticsearch cluster
#monitoring.cluster_uuid:
# Point this at your Elasticsearch monitoring cluster by uncommenting the next line
#monitoring.elasticsearch:
###################### Migration #########################
# Enables the 6.7 migration aliases; rarely needed
#migration.6_to_7.enabled: true
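The `indices` settings above route each event to a monthly index based on its tags. A minimal Python sketch of that selection logic (simplified: real Filebeat supports more condition operators than `when.contains`, and the version/date placeholders here are filled in by hand):

```python
from datetime import date

# First rule whose when.contains-style tag condition matches wins.
RULES = [
    {"index": "nginx-access-{version}-{month}", "contains_tag": "access"},
    {"index": "nginx-error-{version}-{month}", "contains_tag": "error"},
]

def pick_index(event_tags, version="7.7.0", day=date(2020, 7, 29)):
    """Return the target index for an event, mimicking when.contains on tags."""
    month = day.strftime("%Y.%m")
    for rule in RULES:
        if any(rule["contains_tag"] in tag for tag in event_tags):
            return rule["index"].format(version=version, month=month)
    return None  # no rule matched; Filebeat would fall back to the default index

print(pick_index(["access"]))  # nginx-access-7.7.0-2020.07
```

Putting the month (not the day) in the index name is what gives one index per month, which keeps the shard count manageable for low-volume logs.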
Configure the Filebeat service
mkdir /data/filebeat-7.7.0/logdata
mkdir /data/filebeat-7.7.0/data
mkdir /data/filebeat-7.7.0/logs
vi /usr/lib/systemd/system/filebeat.service
[Unit]
Description=Filebeat sends log files to Logstash or directly to Elasticsearch.
Documentation=https://www.elastic.co/products/beats/filebeat
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/data/filebeat-7.7.0/filebeat -c /data/filebeat-7.7.0/filebeat.yml -path.home /data/filebeat-7.7.0 -path.config /data/filebeat-7.7.0 -path.data /data/filebeat-7.7.0/data -path.logs /data/filebeat-7.7.0/logs
Restart=always

[Install]
WantedBy=multi-user.target
启动
systemctl daemon-reload systemctl enable filebeat systemctl start filebeat
或者容器启动 导入elk-filebeat-7.7.0V.tar 解压filebeat.zip并修改配置 修改配置文件: vim /data/filebeat-7.7.0/filebeat.yml 见filebeat配置实例.docx vim /opt/docker/filebeat/docker-compose.yml version: '2' services:filebeat:image: elk-filebeat:7.7.0Vrestart: alwayscontainer_name: filebeatvolumes:- /codebackup/code/filebeat:/opt/filebeat #要映射采集的日志文件- /codebackup/XXX:/tmp/XXXnetworks:- filebeat networks:filebeat:ipam:config:- subnet: 10.129.89.0/24gateway: 10.129.89.1
Test
Check in ELK whether the indices created by Filebeat appear
Elasticsearch operations
(Database) Create the index cxktest
curl -X PUT "http://192.168.10.163:9200/cxktest?pretty"
(Table) Insert into index cxktest, type user, id 1 (the id must be unique on insert; POST without an id lets ES generate a random one)
curl -X PUT "http://192.168.10.163:9200/cxktest/user/1?pretty" -H 'Content-Type: application/json' -d '
{ "id": "1", "name": "李白", "age": "25", "about": "爱好", "interests": ["篮球", "泡妞"] }'
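To search the document back, ES accepts a JSON query DSL body on the `_search` endpoint. A small Python sketch that builds such a body (the field name is the one inserted above; sending it over HTTP is left out):

```python
import json

def match_query(field: str, text: str) -> str:
    """Build the JSON body for a simple full-text `match` query,
    as sent to GET /cxktest/_search."""
    return json.dumps({"query": {"match": {field: text}}}, ensure_ascii=False)

print(match_query("name", "李白"))
```

The equivalent curl call would POST this body to `http://<node ip>:9200/cxktest/_search` with a `Content-Type: application/json` header.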
(Table) Update data: index hmtd, type teacher, id 1 (partial updates with a "doc" body go through the _update endpoint, not a plain PUT)
curl -X POST "http://192.168.1.55:9200/hmtd/teacher/1/_update" -H 'Content-Type: application/json' -d '
{ "doc": { "年代": "唐代" } }'
(Table) Query data: index hmtd, type teacher, id 3
curl -X GET "http://192.168.1.55:9200/hmtd/teacher/3?pretty"
{
  "_index" : "hmtd",
  "_type" : "teacher",
  "_id" : "3",
  "found" : false
}
(Table) Delete data: index hmtd, type teacher, id 3
curl -X DELETE "http://192.168.1.55:9200/hmtd/teacher/3?pretty"
{
  "found" : false,
  "_index" : "hmtd",
  "_type" : "teacher",
  "_id" : "3",
  "_version" : 1,
  "_shards" : { "total" : 2, "successful" : 2, "failed" : 0 }
}
(Database) Delete indices
Delete one index (one database), here tedu:
curl -X DELETE http://192.168.1.55:9200/tedu/
Delete all indices (all databases; destructive, use with care):
curl -X DELETE http://192.168.1.65:9200/*
Kibana configuration
Import the indices created by Filebeat into Kibana as index patterns (add the error index the same way)
Visualizations
Add custom visualization templates for dashboards
Logstash configuration
Create a pipeline config from the template:
vim /data/logstash-7.7.0/conf.d/logstash.conf
# Minimal Beats -> Elasticsearch pipeline:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
  }
}

# Reference examples of other input/filter/output plugins:
input {
  # Beats framework listener port
  beats {
    port => 5044
  }
  # Exec input: run a command periodically
  exec {
    # Command to run
    command => "ls"
    # Run interval in seconds
    interval => 30
    # Or a cron-style schedule
    #schedule => "0 * * * *"
  }
  # JDBC input plugin
  jdbc {
    jdbc_driver_library => "/XXXXX/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/p31102"
    jdbc_user => "p31102"
    jdbc_password => "p31102"
    # Codec
    codec => plain { charset => "UTF-8" }
    parameters => { "favorite_artist" => "Beethoven" }
    schedule => "* * * * *"
    statement => "SELECT * from songs where artist = :favorite_artist"
    statement_filepath => "XXXXX/jdbc.sql"
  }
  # File input
  file {
    # Input file paths, e.g. path => ["path1", "path2"]
    path => ["/tmp/a.log"]
    # Exclusions, e.g. exclude => "*.gz"
    sincedb_path => "/var/lib/logstash/since.db"
    start_position => "beginning"
    # Event type tag
    type => "testlog"
  }
}
filter {
  if [type] == "apache_log" {
    grok {
      # Regex group matching
      match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => "rubydebug"
  }
  if [type] == "apache_log" {
    elasticsearch {
      hosts => ["es1:9200", "es2:9200", "es3:9200"]
      index => "weblog"
      flush_size => 2000
      idle_flush_time => 10
    }
  }
}
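The `apache_log` grok pattern in the filter above can be approximated in plain Python. The regexes below are rough stand-ins for the grok `IP`/`WORD`/`URIPATHPARAM`/`NUMBER` patterns (the real grok library is stricter), and the sample log line is made up:

```python
import re

# Rough regex equivalents of %{IP:client} %{WORD:method}
# %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}
GROK = re.compile(
    r"(?P<client>\d{1,3}(?:\.\d{1,3}){3}) "
    r"(?P<method>\w+) "
    r"(?P<request>\S+) "
    r"(?P<bytes>\d+) "
    r"(?P<duration>\d+(?:\.\d+)?)"
)

# Hypothetical log line that matches the filter's pattern.
line = "192.168.10.50 GET /index.html 1024 0.37"
fields = GROK.match(line).groupdict()
print(fields)
```

Each named group becomes an event field, which is exactly what the grok filter does with `IP:client` and friends.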
Logstash: extracting custom log strings and splitting fields
We recently used ELK to collect application audit log information.
2.1 Define the application output format
A line in the log file looks like:
2020-06-27 09:25:01.458 [L:INFO] 11779 --- [io-8090-exec-55] n.e.s.m.d.d.P.insertSelective!selectKey :AD.scan|system|order|delete|2020-07-29|15:53|user1|success.AD
2.2 Extract the useful string in Logstash
A grok script captures the needed substring into a new field.
Grok scripts can be validated at http://grok.51vagaa.com/
2.3 Split the captured string on the delimiter and assign the parts to new fields
The concrete Logstash implementation:
filter {
  grok {
    match => { "message" => "(?<audit_info>(?<=AD.).*?(?=.AD))" }
  }
  if [audit_info] {
    mutate {
      split => ["audit_info", "|"]
      add_field => { "audit_sys" => "%{[audit_info][0]}" }
      add_field => { "audit_type" => "%{[audit_info][1]}" }
      add_field => { "audit_module" => "%{[audit_info][2]}" }
      add_field => { "audit_event" => "%{[audit_info][3]}" }
      add_field => { "audit_date" => "%{[audit_info][4]}" }
      add_field => { "audit_time" => "%{[audit_info][5]}" }
      add_field => { "audit_user" => "%{[audit_info][6]}" }
      add_field => { "audit_result" => "%{[audit_info][7]}" }
    }
  }
}
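The same capture-and-split logic can be checked offline in Python: take the text between "AD." and ".AD", split on "|", and name the parts as the mutate filter does. The sample line is the one from section 2.1:

```python
import re

# Field names in the same order as the mutate/split add_field mappings.
FIELDS = ["audit_sys", "audit_type", "audit_module", "audit_event",
          "audit_date", "audit_time", "audit_user", "audit_result"]

def parse_audit(message: str) -> dict:
    """Capture the delimited payload between AD. and .AD and split it."""
    m = re.search(r"(?<=AD\.)(.*?)(?=\.AD)", message)
    if not m:
        return {}
    return dict(zip(FIELDS, m.group(1).split("|")))

line = ("2020-06-27 09:25:01.458 [L:INFO] 11779 --- [io-8090-exec-55] "
        "n.e.s.m.d.d.P.insertSelective!selectKey "
        ":AD.scan|system|order|delete|2020-07-29|15:53|user1|success.AD")
print(parse_audit(line))
```

Note that the grok pattern uses an unescaped `.` after `AD`, which as a regex also matches "ADX"; escaping it as `AD\.` (done above) is the stricter form.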
More configuration options:
https://www.elastic.co/guide/en/logstash/7.x/ev
Filebeat configuration
Custom collection
nginx
vim /data/filebeat-7.7.0/filebeat.yml
###################### Inputs #########################
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /tmp/access.log
  tags: ["nginx-access-163"]
- type: log
  enabled: true
  paths:
    - /tmp/error.log
  tags: ["nginx-error-163"]
###################### ES template settings #########################
setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true
###################### Elasticsearch output #########################
output.elasticsearch:
  hosts: ["192.168.10.163:9200", "192.168.10.181:9200", "192.168.10.228:9200"]
  indices:
    - index: "nginx-access-163-%{[agent.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "nginx-access-163"
    - index: "nginx-error-163-%{[agent.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "nginx-error-163"
tomcat
vim /data/filebeat-7.7.0/filebeat.yml
###################### Inputs #########################
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/tomcat/localhost_access_log.*.txt
  # The tag must match the when.contains condition below
  tags: ["tomcat-access-163"]
###################### ES template settings #########################
setup.template.name: "tomcat"
setup.template.pattern: "tomcat-*"
setup.template.enabled: false
setup.template.overwrite: true
###################### Elasticsearch output #########################
output.elasticsearch:
  hosts: ["192.168.10.163:9200", "192.168.10.181:9200", "192.168.10.228:9200"]
  indices:
    - index: "tomcat-access-163-%{[agent.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "tomcat-access-163"
java
vim /data/filebeat-7.7.0/filebeat.yml
###################### Inputs #########################
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /tmp/system.log
  tags: ["java-18888"]
  # A new event starts at a line beginning with a yyyy-mm-dd date
  # (the \[0-9]{4} form is a typo; [0-9]{4} is the intended character class)
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
###################### ES template settings #########################
setup.template.name: "java"
setup.template.pattern: "java-*"
setup.template.enabled: false
setup.template.overwrite: true
###################### Elasticsearch output #########################
output.elasticsearch:
  hosts: ["192.168.10.163:9200", "192.168.10.181:9200", "192.168.10.228:9200"]
  indices:
    - index: "java-163-%{[agent.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "java-18888"
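With `negate: true` and `match: after`, a line that starts with a date opens a new event and every non-matching line (a stack trace, for example) is appended to the previous one. A Python sketch of that merging behaviour (the log lines are made up):

```python
import re

# Same pattern as multiline.pattern above: a line starting with yyyy-mm-dd.
START = re.compile(r"^[0-9]{4}-[0-9]{2}-[0-9]{2}")

def merge_multiline(lines):
    """Group physical lines into logical events, Filebeat-style."""
    events, current = [], []
    for line in lines:
        if START.match(line):
            if current:
                events.append("\n".join(current))
            current = [line]          # date line starts a new event
        else:
            current.append(line)      # continuation line joins the previous
    if current:
        events.append("\n".join(current))
    return events

logs = [
    "2020-06-27 09:25:01 ERROR boom",
    "java.lang.NullPointerException",
    "    at com.example.Foo.bar(Foo.java:42)",
    "2020-06-27 09:25:02 INFO recovered",
]
print(len(merge_multiline(logs)))  # 2
```

The whole stack trace arrives in ES as one document instead of three, which is the point of the multiline settings.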
Collecting with modules
Enable a module, e.g. nginx:
filebeat modules enable nginx
vim /etc/filebeat/modules.d/nginx.yml
- module: nginx
  access:
    enabled: true
    var.paths: ["<log path>"]
  error:
    enabled: true
    var.paths: ["<log path>"]
Log format conversion
Nginx to JSON
vim /data/nginx/conf/nginx.conf
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';
log_format json '{ "time_local": "$time_local", '
                '"remote_addr": "$remote_addr", '
                '"referer": "$http_referer", '
                '"request": "$request", '
                '"status": $status, '
                '"bytes": $body_bytes_sent, '
                '"agent": "$http_user_agent", '
                '"x_forwarded": "$http_x_forwarded_for", '
                '"up_addr": "$upstream_addr",'
                '"up_host": "$upstream_http_host",'
                '"upstream_time": "$upstream_response_time",'
                '"request_time": "$request_time"'
                ' }';
access_log /var/log/nginx/access.log json;
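Because each access-log line is now valid JSON, downstream tools can parse it directly instead of grokking it apart. A quick check in Python (the sample line below is hypothetical, with field names matching the `json` log_format above):

```python
import json

# Hypothetical line in the nginx `json` log_format; values are made up.
line = ('{ "time_local": "29/Jul/2020:15:53:00 +0800", '
        '"remote_addr": "192.168.10.50", '
        '"referer": "-", '
        '"request": "GET /index.html HTTP/1.1", '
        '"status": 200, '
        '"bytes": 1024, '
        '"agent": "curl/7.61.1", '
        '"x_forwarded": "-", '
        '"up_addr": "-", '
        '"up_host": "-", '
        '"upstream_time": "-", '
        '"request_time": "0.002" }')

entry = json.loads(line)
print(entry["status"], entry["request"])
```

Note that `$status` and `$body_bytes_sent` are emitted without quotes in the log_format, so they arrive as JSON numbers, which is what lets Kibana aggregate on them without a mapping fix.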
Tomcat to JSON
vim /etc/tomcat/server.xml
(raw double quotes inside the pattern attribute are invalid XML; escape them as &quot;)
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="localhost_access_log." suffix=".txt"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Query?string&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>