Building an ELK Log Monitoring System

I. Installing Elasticsearch (reference: https://es.xiaoleilu.com/index.html)
1. Download the Elasticsearch package from the official site: https://www.elastic.co/cn/downloads
2. Extract the archive to /opt/apps/elasticsearch
3. Elasticsearch refuses to run under the root account, so create a dedicated account, elastic:
groupadd elastic
useradd -g elastic elastic
chown -R elastic /opt/apps/elasticsearch
chgrp -R elastic /opt/apps/elasticsearch
4. Elasticsearch requires Java; install a JRE if one is not already present.
5. System configuration changes:
5.1. vim /etc/sysctl.conf, add the line vm.max_map_count=655360, then apply it with: sysctl -p
5.2. vim /etc/security/limits.conf, add the following:
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
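After logging back in as the elastic user, the new limits can be confirmed (a quick sanity check, not part of the original steps):
ulimit -n                  # expect 65536 (soft nofile)
ulimit -u                  # expect 2048 (soft nproc)
sysctl vm.max_map_count    # expect vm.max_map_count = 655360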
6. Start Elasticsearch as the elastic user (never as root): /opt/apps/elasticsearch/elasticsearch/bin/elasticsearch &
7. After a successful start, http://192.168.40.128:9200/ should return the node's JSON banner (name, cluster name, version); if it does not, look for the error in the Elasticsearch startup logs.
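A quick smoke test from the shell, assuming the address above:
curl http://192.168.40.128:9200/                 # prints the node banner JSON (name, cluster_name, version, tagline)
curl http://192.168.40.128:9200/_cat/health?v    # one-line cluster health: green / yellow / red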
 
Full configuration files:
Master: 
# ======================== Elasticsearch Configuration ========================= 

# NOTE: Elasticsearch comes with reasonable defaults for most settings. 
# Before you set out to tweak and tune the configuration, make sure you 
# understand what are you trying to accomplish and the consequences. 

# The primary way of configuring a node is via this file. This template lists 
# the most important settings you may want to configure for a production cluster. 

# Please consult the documentation for further information on configuration options: 
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html 

# ---------------------------------- Cluster -----------------------------------

# Use a descriptive name for your cluster: 

cluster.name: prj-logcollection 

# ------------------------------------ Node ------------------------------------

# Use a descriptive name for the node: 

node.name: es-node1 

# Add custom attributes to the node: 

node.attr.rack: r1

node.master: true
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
path.logs: /opt/apps/elasticsearch/elasticsearch-run.logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.3.16
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# TCP port for inter-node communication; the default is 9300
transport.tcp.port: 9300
http.enabled: true
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["192.168.3.18"] #slave
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
discovery.zen.minimum_master_nodes: 1
#discovery.zen.ping.multicast.enabled: true
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
http.cors.enabled: true
http.cors.allow-origin: "*"
#index.number_of_shards: 8
#index.number_of_replicas: 2
#-----------------------------------------------------------------------------------------------
Slave:
# ----------------------------------- Paths ------------------------------------
#
# Path to log files:
#
path.logs: /opt/apps/elasticsearch/elasticsearch-run.logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.3.18
#
# Set a custom port for HTTP:
#
http.port: 9200
# TCP port for inter-node communication; the default is 9300
transport.tcp.port: 9300
http.enabled: true
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["192.168.3.18"]
discovery.zen.minimum_master_nodes: 1
http.cors.enabled: true
http.cors.allow-origin: "*"
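Once both nodes are running (slave first, then master; see section V), cluster formation can be verified against the master's HTTP address configured above:
curl http://192.168.3.16:9200/_cat/nodes?v    # should list both nodes, with the elected master marked '*'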

II. Installing Logstash (reference: https://kibana.logstash.es/content/logstash/)
1. Install Logstash (the tar.gz package) on every server whose logs are to be collected; official download: https://www.elastic.co/cn/downloads
2. Extract the archive to /opt/apps/logstash
3. Logstash requires Java; install a JRE if one is not already present.
4. Edit the Logstash config file: vim /opt/apps/logstash/config/logstash.yml
node.name: logstash-node-1  # a pitfall: in ELK YAML config files a space is required after the colon
Save and exit.
5. Create custom grok patterns for the log formats to be matched; a hypothetical sketch of such a patterns file follows.
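The original patterns file is not reproduced in this post. As a hypothetical sketch: a grok patterns file is plain text with one NAME-regex pair per line, so /opt/apps/logstash/config/self_patterns/extra might contain entries like the following (the regexes are illustrative guesses, not the author's originals; only STIME is implied by the multiline pattern used below):
STIME \d{2}:\d{1,2}:\d{1,2}\.\d{1,4}
SUER \S+
SLOGLEVEL (TRACE|DEBUG|INFO|WARN|ERROR|FATAL)
SCLASS [\w.$]+
SMESSAGE .*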
Once the patterns are defined, create the pipeline config: vim /opt/apps/logstash/config/logstash-ebk.config
Three parts need to be defined here: input (the log sources), filter (match rules and log processing), and output (the destinations).
The configuration is as follows:

input {
  file {
    path => "/home/jack/logs/log.*"
    type => "error_log"
    start_position => "beginning"
    codec => multiline {
      # match the timestamp at the head of a log line; continuation lines are merged into the previous event
      pattern => "^\d{2}:\d{1,2}:\d{1,2}\.\d{1,4}"
      negate => true
      what => "previous"
    }
  }
  # input from filebeat
  beats {
    port => "5044"
  }
}

filter {
  if [type] == "RPOD_EBK_LOG" {
    grok {
      patterns_dir => "/opt/apps/logstash/config/self_patterns"
      match => { "message" => "%{STIME:timestamp}\s{1}%{SUER:user}\s{1}%{SLOGLEVEL:loglevel}\s{1,3}%{SCLASS:class}\s{1,3}-\s{1}%{SMESSAGE:msg}" }
    }
  }
  if [type] == "ACCESS_LOG" {
    grok {
      patterns_dir => "/opt/apps/logstash/config/self_patterns"
      match => { "message" => "%{SWIP:remoteIp}\s{1,3}%{SANAME:loginNick}\s{1,3}%{SATIME:timestamp}\s{1,3}%{SAMSG:msg}\s{1,3}%{SNUMUMBER:responseCode}\s{1,3}%{SNUMUMBER:dataSize}" }
    }
    mutate {
      split => ["remoteIp", ","]
      add_field => { "client_ip" => "%{[remoteIp][0]}" }
      remove_field => ["remoteIp"]
      add_field => { "remote_ip" => "%{[client_ip]}" }
      remove_field => ["client_ip"]
    }
    mutate {
      remove_field => ["message"]
      gsub => ["timestamp", "\[", ""]
      gsub => ["timestamp", "\]", ""]
    }
    date {
      match => ["timestamp", "dd/MMM/YYYY:HH:mm:ss Z"]
      target => "@timestamp"
    }
    #geopoint {
    #  source => "remote_ip"
    #}
    geoip {
      source => "remote_ip"
    }
    mutate {
      split => ["msg", " "]
      add_field => { "method" => "%{[msg][0]}" }
      add_field => { "requesturltemp" => "%{[msg][1]}" }
      # remove_field => ["msg"]
    }
    mutate {
      split => ["requesturltemp", "?"]
      add_field => { "requesturl" => "%{[requesturltemp][0]}" }
      add_field => { "params" => "%{[requesturltemp][1]}" }
      remove_field => ["requesturltemp"]
    }
  }
}

output {
  if [type] == "ERROR_LOG" {
    if "ERROR" in [loglevel] {
      elasticsearch {
        hosts => ["192.168.3.175:9200"]           # address of the es master
        index => "prod-ebk-info-%{+YYYY.MM.dd}"   # index names must not contain uppercase letters
        document_type => "%{type}"
        flush_size => 500
        idle_flush_time => 150
        sniffing => true
        template => "/opt/apps/logstash/config/logstash-template.json"
        template_overwrite => true
      }
    }
  }
  if [type] == "ACCESS_LOG" {
    if [tags] and "_grokparsefailure" in [tags] {
      # DO NOTHING
    } else {
      elasticsearch {
        hosts => ["192.168.3.175:9200"]
        index => "access-%{+YYYY.MM.dd}"
        document_type => "%{type}"
        flush_size => 500
        idle_flush_time => 300
        sniffing => true
        template => "/opt/apps/logstash/config/logstash-template.json"
        template_overwrite => true
      }
    }
  }
}
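Before starting the pipeline, the syntax can be checked with Logstash's built-in test flag (available in the Logstash 5.x used here), and the service then launched in the background with the paths above:
/opt/apps/logstash/bin/logstash -f /opt/apps/logstash/config/logstash-ebk.config --config.test_and_exit
nohup /opt/apps/logstash/bin/logstash -f /opt/apps/logstash/config/logstash-ebk.config >/dev/null 2>&1 &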

III. Installing Filebeat

The installation procedure is the same as for Logstash and is omitted here; the package can be found on the official Elastic site. The Filebeat configuration file is as follows:

filebeat:
  spool_size: 1024            # flush once 1024 events have accumulated; tune this to the service's actual log volume, an unsuitable value adds extra load on the server
  idle_timeout: "10s"         # otherwise flush every n seconds regardless
  registry_file: ".filebeat"  # records read offsets; it lives in the current working directory, so running filebeat from a different directory causes logs to be shipped again!
  prospectors:
    - input_type: log                  # input type: log
      paths:                           # log paths to monitor
        - /home/jack/prj/logs/prj.*
        - /home/admin/prj/.default/logs/prj1.*
      include_lines: ["\\s\\S*ERROR"]
      exclude_files: ["\\s\\S*INFO"]
      ignore_older: "5m"
      scan_frequency: "6000s"
      backoff: "19s"
      tail_files: false
      harvester_buffer_size: 16384
      document_type: RPOD_LOG
      multiline.pattern: ^\d{2}:\d{1,2}:\d{1,2}\.\d{1,4}
      multiline.negate: true
      multiline.match: before
    - input_type: log
      paths:
        - /home/jack/prj/.default/logs/localhost_access_log.*
      ignore_older: "5m"
      scan_frequency: "6000s"
      backoff: "19s"
      tail_files: false
      harvester_buffer_size: 16384
      # regex matching the IP address at the start of an access-log line
      multiline.pattern: ^(25[0-5]|2[0-4][0-9]|[0-1]{1}[0-9]{2}|[1-9]{1}[0-9]{1}|[1-9])\.(25[0-5]|2[0-4][0-9]|[0-1]{1}[0-9]{2}|[1-9]{1}[0-9]{1}|[1-9]|0)\.(25[0-5]|2[0-4][0-9]|[0-1]{1}[0-9]{2}|[1-9]{1}[0-9]{1}|[1-9]|0)\.(25[0-5]|2[0-4][0-9]|[0-1]{1}[0-9]{2}|[1-9]{1}[0-9]{1}|[0-9])
      multiline.negate: true
      multiline.match: before
      document_type: PROD_ACCESS_LOG

# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"

output.logstash:
  # The Logstash hosts
  hosts: ["192.168.16.15:5044"]
  #worker: 1
  #loadbalance: true
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
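As with Logstash, the configuration can be sanity-checked before starting; in Filebeat 5.x the flag is -configtest (newer releases use filebeat test config instead):
cd /opt/apps/filebeat
./filebeat -c filebeat.yml -configtest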

IV. Installing Kibana (documentation: https://kibana.logstash.es/content/kibana/)
1. Download the Kibana package from the official site: https://www.elastic.co/cn/downloads
2. Extract the archive to /opt/apps/kibana
3. Edit the Kibana config file: vim /opt/apps/kibana/config/kibana.yml
server.host: "192.168.40.128"
elasticsearch.url: "http://192.168.40.128:9200"
kibana.index: ".kibana"
4. Start Kibana: /opt/apps/kibana/bin/kibana &
5. Once started, open http://192.168.40.128:5601 in a browser.
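Kibana also exposes a status endpoint that is handy for a headless check, assuming the host and port above:
curl http://192.168.40.128:5601/api/status    # returns JSON including the overall state of Kibana and its plugins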

V. Startup and shutdown scripts

**ES start order: slave --> master. Start script:**
#!/bin/sh
nohup ./bin/elasticsearch > ./elasticsearch.log 2>&1 &

**ES stop script:**
#!/bin/sh
pid=`ps -ef | grep elasticsearch | grep -v "$0" | grep -v "grep" | awk '{print $2}'`
echo $pid
kill -9 $pid

**Filebeat start script:**
#!/bin/sh
nohup ./filebeat -c filebeat.yml -e >/dev/null 2>&1 &

**Filebeat stop script:**
#!/bin/sh
pid=`ps -ef | grep filebeat | grep -v "$0" | grep -v "grep" | awk '{print $2}'`
echo $pid
kill -9 $pid

**Logstash start script:**
#!/bin/sh
nohup ./bin/logstash -f ./config/logstash_log.config >/dev/null 2>&1 &

**Kibana start script:**
#!/bin/sh
nohup ./bin/kibana > ./kibana.log 2>&1 &

**Kibana stop script:**
#!/bin/sh
pid=`fuser -n tcp 5601 | grep -v "$0" | grep -v "grep" | awk '{print $0}'`
echo $pid
kill -9 $pid

VI. Using AutoNavi (Gaode) maps in Kibana
Configuration changes:
1. Edit the Kibana config file kibana.yml and append at the end:
tilemap.url: 'http://webrd02.is.autonavi.com/appmaptile?lang=zh_cn&size=1&scale=1&style=7&x={x}&y={y}&z={z}'
2. On the Logstash server, download the IP geolocation database:
wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz && gunzip GeoLite2-City.mmdb.gz
3. Edit the Logstash config file:
filter {
  geoip {
    source => "message"
    target => "geoip"
    database => "/usr/local/logstash-5.1.1/config/GeoLite2-City.mmdb"
    add_field => ["[geoip][coordinates]", "%{[geoip][longitude]}"]
    add_field => ["[geoip][coordinates]", "%{[geoip][latitude]}"]
  }
}

• geoip: the IP lookup plugin
• source: the field the geoip plugin should process, normally an IP address. It is message here because the IP was typed into the console by hand for this test; in a production environment, e.g. when geolocating nginx visitors, first extract the client IP, then use clientip here.
• target: the field in which to store the parsed GeoIP data; the default is geoip
• database: the path to the downloaded database file
• add_field: these two lines add the longitude and latitude; the map locates regions by these coordinates (see the companion step below)
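One caveat the snippet above does not show: for Kibana's tile map to plot points, [geoip][coordinates] must end up as a numeric geo_point, so pipelines commonly convert the string values added by add_field to floats (a companion step assumed here, not part of the original config) and map the field as geo_point in the index template:
mutate {
  # convert the coordinate strings added above into numbers
  convert => ["[geoip][coordinates]", "float"]
}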
