Introduction to the ELK Stack

ELK is not a single piece of software; it is an acronym built from the initials of three products: Elasticsearch, Logstash and Kibana. All three are open source, are usually deployed together, and have successively come under the Elastic.co umbrella, hence the shorthand ELK Stack. According to Google Trends, the ELK Stack has become the most popular centralized logging solution.
The stack is composed of the following components:

Elasticsearch: a distributed search and analytics engine that is highly scalable, highly reliable and easy to manage. Built on Apache Lucene, it can store, search and analyze large volumes of data in near real time. It is often used as the underlying search engine of applications that need complex search features;
Logstash: a data-collection engine. It can dynamically gather data from a wide range of sources, then filter, parse, enrich and normalize it before shipping it to a destination of the user's choice;
Kibana: a data-analysis and visualization platform. It is normally used together with Elasticsearch to search and analyze the stored data and to present it as charts and dashboards;
Filebeat: a newer member of the ELK stack, a lightweight open-source log-file shipper developed from the Logstash-Forwarder source code as its replacement. Install Filebeat on each server whose logs need to be collected and point it at the log directories or files; Filebeat then reads the data and quickly forwards it either to
Logstash for parsing, or directly to Elasticsearch for centralized storage and analysis (a minimal configuration sketch follows this list).
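As a rough illustration of that Filebeat item, a minimal filebeat.yml could look like the sketch below; the log path and the Logstash host are placeholder assumptions, not taken from this article.

# filebeat.yml -- minimal sketch; paths and hosts are placeholders
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/httpd/*.log            # log files to tail on this server
output.logstash:
  hosts: ["logstash-host:5044"]         # ship to Logstash for parsing
# output.elasticsearch:                 # ...or ship straight to Elasticsearch instead
#   hosts: ["es-host:9200"]
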
Common ELK Architectures and Use Cases
In the simplest architecture there is only a single Logstash, Elasticsearch and Kibana instance. Logstash reads data from one or more sources (for example log files or standard input) through its input plugins, processes it with filter plugins, writes it to Elasticsearch via the Elasticsearch output plugin, and Kibana is used for display. See Figure 1.
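As a minimal sketch of such a single-instance pipeline (the input file, grok pattern, host and index name are illustrative assumptions, not the configuration used later in this walkthrough):

input {
        file {
                path => "/var/log/messages"                    # any local log source
                start_position => "beginning"
        }
}
filter {
        grok {
                match => { "message" => "%{SYSLOGLINE}" }      # parse each raw line into fields
        }
}
output {
        elasticsearch {
                hosts => "localhost:9200"
                index => "demo-%{+YYYY.MM.dd}"                 # one index per day
        }
}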

Logstash as the log collector

This architecture extends the one above: the single Logstash collection node is scaled out to several instances distributed across multiple machines. They send the parsed data to the Elasticsearch server for storage, and Kibana is then used to query the data and build log reports.
Beats as the log collector

This architecture introduces Beats as the log collectors. Currently there are four kinds of Beats:

Packetbeat (collects network traffic data);
Topbeat (collects system-, process- and filesystem-level CPU and memory usage data);
Filebeat (collects file data);
Winlogbeat (collects Windows event-log data).

The Beats send the collected data to Logstash, which parses and filters it and forwards it to Elasticsearch for storage; Kibana then presents it to the user.
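A hedged sketch of the Logstash side of this Beats-based architecture (the port, Elasticsearch host and index naming below are assumptions) could look like:

input {
        beats {
                port => 5044                                   # Filebeat and the other Beats ship here
        }
}
output {
        elasticsearch {
                hosts => "localhost:9200"
                index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}" # one index family per Beat type
        }
}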

II. Workflow

Deploy Logstash on every server whose logs need to be collected; these instances act as logstash agents (logstash shippers) that monitor the logs, filter them and collect them.

The filtered content is sent to Redis. A logstash indexer then pulls the logs together from Redis and hands them to the full-text search service Elasticsearch.

Elasticsearch can then serve custom searches, and Kibana builds its web-page visualizations on top of those searches.

The Logstash community conventionally uses the terms shipper, broker and indexer to describe the roles of the different processes in this data flow, as sketched below:
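A rough sketch of this shipper / broker / indexer split (the Redis host, list key and file path are placeholders, not taken from the original) might be:

# shipper -- runs on every application server
input {
        file { path => "/var/log/messages" }
}
output {
        redis {
                host => "redis-host"
                data_type => "list"
                key => "logstash"                              # acts as the broker queue
        }
}

# indexer -- runs centrally, drains the broker and writes to Elasticsearch
input {
        redis {
                host => "redis-host"
                data_type => "list"
                key => "logstash"
        }
}
output {
        elasticsearch { hosts => "localhost:9200" }
}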

Official site: https://www.elastic.co/

III. Environment preparation (three virtual machines with 2 GB of RAM each; note: firewalld and SELinux are temporarily disabled to avoid interfering with the tests.)
1. Elasticsearch deployment:

[root@server1 ~]# ls
elasticsearch-6.6.1.rpm  elasticsearch-head-master.zip  jdk-8u171-linux-x64.rpm
[root@server1 ~]# rpm -ivh elasticsearch-6.6.1.rpm
warning: elasticsearch-6.6.1.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
could not find java; set JAVA_HOME or ensure java is in PATH   ## error: no Java environment found
error: %pre(elasticsearch-0:6.6.1-1.noarch) scriptlet failed, exit status 1
error: elasticsearch-0:6.6.1-1.noarch: install failed
[root@server1 ~]# rpm -ivh  jdk-8u171-linux-x64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:jdk1.8-2000:1.8.0_171-fcs        ################################# [100%]
Unpacking JAR files...
        tools.jar...plugin.jar...javaws.jar...deploy.jar...rt.jar...jsse.jar...charsets.jar...localedata.jar...
[root@server1 ~]# rpm -ivh elasticsearch-6.6.1.rpm
warning: elasticsearch-6.6.1.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Updating / installing...
   1:elasticsearch-0:6.6.1-1          ################################# [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
sudo systemctl start elasticsearch.service
Created elasticsearch keystore in /etc/elasticsearch

Edit the configuration file:

[root@server1 ~]# cd /etc/elasticsearch/  ## created automatically by the installation
[root@server1 elasticsearch]# ls
elasticsearch.keystore  jvm.options        role_mapping.yml  users
elasticsearch.yml       log4j2.properties  roles.yml         users_roles
[root@server1 elasticsearch]# vim elasticsearch.yml
17 cluster.name: my-es   # give the cluster a name
23 node.name: server1   # this host's hostname
33 path.data: /var/lib/elasticsearch   ## data storage path (more than one path is allowed)
43 bootstrap.memory_lock: true  ## lock memory for the service (about 1 GB is locked; if there is not enough memory the service will not start -- leave this disabled on small hosts)
55 network.host: 172.25.3.1  # host IP
59 http.port: 9200

After editing the configuration file, start the service:

[root@server1 elasticsearch]# systemctl start elasticsearch
[root@server1 elasticsearch]# netstat -antlp | grep :9200   ## the port is not open -- the service failed to start
[root@server1 elasticsearch]# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2019-07-25 09:58:16 CST; 24s ago
     Docs: http://www.elastic.co
  Process: 2846 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=78)
 Main PID: 2846 (code=exited, status=78)


Check the log to find the cause of the error:

[root@server1 elasticsearch]# cat /var/log/elasticsearch/my-es.log
[2019-07-25T09:57:53,982][WARN ][o.e.b.JNANatives         ] [server1] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
[2019-07-25T09:57:53,985][WARN ][o.e.b.JNANatives         ] [server1] This can result in part of the JVM being swapped out.
[2019-07-25T09:57:53,985][WARN ][o.e.b.JNANatives         ] [server1] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2019-07-25T09:57:53,985][WARN ][o.e.b.JNANatives         ] [server1] These can be adjusted by modifying /etc/security/limits.conf, for example:
        # allow user 'elasticsearch' mlockall
        elasticsearch soft memlock unlimited
        elasticsearch hard memlock unlimited    ## these memlock limits are not yet set for the elasticsearch user in /etc/security/limits.conf


Modify the limits

[root@server1 elasticsearch]# sysctl -a | grep file  ## check the host's maximum number of open files
fs.file-max = 184182   # the maximum
fs.file-nr = 800   0   184182
fs.xfs.filestream_centisecs = 3000
[root@server1 elasticsearch]# vim  /etc/security/limits.conf  # append at the end of the file
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
elasticsearch   -    nofile  65536
[root@server1 elasticsearch]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1839          70        1253          14         515        1597
Swap:          2047           2        2045
[root@server1 elasticsearch]# swapoff -a  ## disable the swap partition
[root@server1 elasticsearch]# vim /etc/fstab
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0  # comment out this line to disable swap permanently
[root@server1 elasticsearch]# vim  /etc/security/limits.conf
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
elasticsearch   -    nofile  65536   # must be smaller than the host's maximum number of open files
elasticsearch   -    nproc   4096
[root@server1 elasticsearch]# vim /usr/lib/systemd/system/elasticsearch.service
LimitNOFILE=65536
LimitMEMLOCK=infinity   # add the memory lock
# Specifies the maximum number of processes
LimitNPROC=4096
# Specifies the maximum size of virtual memory
LimitAS=infinity
# Specifies the maximum file size
LimitFSIZE=infinity
[root@server1 elasticsearch]# systemctl daemon-reload   ## reload systemd, then restart the service
[root@server1 elasticsearch]# systemctl restart elasticsearch
[root@server1 elasticsearch]# netstat -antlp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      658/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      833/master
tcp        0      0 172.25.3.1:22           172.25.3.250:56776      ESTABLISHED 2065/sshd: root@pts
tcp6       0      0 172.25.3.1:9200         :::*                    LISTEN      13108/java
tcp6       0      0 172.25.3.1:9300         :::*                    LISTEN      13108/java
tcp6       0      0 :::22                   :::*                    LISTEN      658/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      833/master
[root@server1 elasticsearch]# tail -f /var/log/elasticsearch/my-es.log   # check the log

Visit http://172.25.3.1:9200/:
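As an extra check (these commands are an addition, not part of the original transcript), the node can also be queried from the shell:

[root@server1 ~]# curl http://172.25.3.1:9200/              ## should return the node and cluster info as JSON
[root@server1 ~]# curl http://172.25.3.1:9200/_cat/health?v ## one-line cluster health summary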

Unpack elasticsearch-head-master

[root@server1 ~]# yum install -y unzip
[root@server1 ~]# unzip
.bash_logout                   elasticsearch-6.6.1.rpm        .ssh/
.bash_profile                  elasticsearch-head-master.zip  .tcshrc
.bashrc                        jdk-8u171-linux-x64.rpm        .viminfo
.cshrc                         .oracle_jre_usage/
[root@server1 ~]# unzip elasticsearch-head-master.zip   # unzip the archive
[root@server1 ~]# ls
6.6                      elasticsearch-head-master      jdk-8u171-linux-x64.rpm   # elasticsearch-head-master is created by the unzip
elasticsearch-6.6.1.rpm  elasticsearch-head-master.zip
[root@server1 6.6]# rpm -ivh  nodejs-9.11.2-1nodesource.x86_64.rpm    ## install the Node.js package (nodesource build)
warning: nodejs-9.11.2-1nodesource.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID 34fa74dd: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:nodejs-2:9.11.2-1nodesource      ################################# [100%]
[root@server1 6.6]# node -v   ## check the node version
v9.11.2
[root@server1 6.6]# npm -v  # check the npm version
5.6.0
[root@server1 ~]# npm config list
; cli configs
metrics-registry = "https://registry.npmjs.org/"   ## this registry is slow to reach
scope = ""
user-agent = "npm/5.6.0 node/v9.11.2 linux x64"; node bin location = /usr/bin/node
; cwd = /root
; HOME = /root
; "npm config ls -l" to show all defaults.
[root@server1 ~]# npm set registry https://registry.npm.taobao.org/   ## switch to a mirror registry
[root@server1 ~]# npm config list
; cli configs
metrics-registry = "https://registry.npm.taobao.org/"
scope = ""
user-agent = "npm/5.6.0 node/v9.11.2 linux x64"; userconfig /root/.npmrc
registry = "https://registry.npm.taobao.org/"; node bin location = /usr/bin/node
; cwd = /root
; HOME = /root
; "npm config ls -l" to show all defaults.[root@server1 6.6]# yum install bzip2
[root@server1 6.6]# tar jxf phantomjs-2.1.1-linux-x86_64.tar.bz2
[root@server1 6.6]# cd phantomjs-2.1.1-linux-x86_64
[root@server1 phantomjs-2.1.1-linux-x86_64]# cd bin/
[root@server1 bin]# ls
phantomjs
[root@server1 bin]# cp phantomjs /usr/local/bin/
[root@server1 ~]# cd elasticsearch-head-master
[root@server1 elasticsearch-head-master]# npm install
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN elasticsearch-head@0.0.0 license should be a valid SPDX license expression
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.2.9 (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.9: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
up to date in 4.847s
[root@server1 elasticsearch-head-master]# vim _site/app.js
4360                         this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://172.25.3.1:9200";
[root@server1 elasticsearch-head-master]# npm run start &  ## run npm start in the background
[root@server1 elasticsearch-head-master]# netstat -antlp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9100            0.0.0.0:*               LISTEN      13359/grunt
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      658/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      833/master
tcp        0      0 172.25.3.1:22           172.25.3.250:56776      ESTABLISHED 2065/sshd: root@pts
tcp6       0      0 172.25.3.1:9200         :::*                    LISTEN      13385/java
tcp6       0      0 172.25.3.1:9300         :::*                    LISTEN      13385/java
tcp6       0      0 :::22                   :::*                    LISTEN      658/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      833/master

[root@server1 elasticsearch-head-master]# vim /etc/elasticsearch/elasticsearch.yml
http.cors.enabled: true
http.cors.allow-origin: "*"
[root@server1 elasticsearch-head-master]# systemctl restart elasticsearch

Visit http://172.25.3.1:9100/

Query test:
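The query test only returns something once an index holds data; one possible way to create a test document from the shell (the index name and fields here are made up for illustration) is:

[root@server1 ~]# curl -H "Content-Type: application/json" -XPUT \
      http://172.25.3.1:9200/demo/_doc/1 -d '{"user": "westos", "message": "hello elk"}'
[root@server1 ~]# curl http://172.25.3.1:9200/demo/_search?pretty   ## search the document back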


Building the cluster

At this point no cluster has been formed yet:

Deploy Elasticsearch on server2 and server3:

[root@server2 ~]# swapoff -a   ## disable the swap partition on both nodes
[root@server2 ~]# vim /etc/fstab
[root@server2 6.6]# rpm -ivh jdk-8u171-linux-x64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:jdk1.8-2000:1.8.0_171-fcs        ################################# [100%]
Unpacking JAR files...
        tools.jar...plugin.jar...javaws.jar...deploy.jar...rt.jar...jsse.jar...charsets.jar...localedata.jar...
[root@server2 6.6]# rpm -ivh elasticsearch-6.6.1.rpm
warning: elasticsearch-6.6.1.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Updating / installing...
   1:elasticsearch-0:6.6.1-1          ################################# [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
sudo systemctl start elasticsearch.service
Created elasticsearch keystore in /etc/elasticsearch

Edit the configuration files:
[root@server2 ~]# vim /usr/lib/systemd/system/elasticsearch.service
# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65536
LimitMEMLOCK=infinity
# Specifies the maximum number of processes
LimitNPROC=4096
# Specifies the maximum size of virtual memory
LimitAS=infinity
# Specifies the maximum file size
LimitFSIZE=infinity
# Disable timeout logic and wait until process is stopped
TimeoutStopSec=0
[root@server2 ~]# vim /etc/elasticsearch/elasticsearch.yml
17 cluster.name: my-es
23 node.name: server2
33 path.data: /var/lib/elasticsearch
43 bootstrap.memory_lock: true
55 network.host: 172.25.3.2
59 http.port: 9200
61 http.cors.enabled: true
62 http.cors.allow-origin: "*"
[root@server2 ~]# vim /etc/security/limits.conf
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
elasticsearch   -    nofile  65536
elasticsearch   -    nproc   4096
[root@server2 ~]# systemctl daemon-reload
[root@server2 ~]# systemctl start elasticsearch
[root@server2 ~]# tail -f /var/log/elasticsearch/my-es.log
[2019-07-25T14:08:11,386][INFO ][o.e.g.GatewayService     ] [server2] recovered [0] indices into cluster_state
[2019-07-25T14:08:11,808][INFO ][o.e.c.m.MetaDataIndexTemplateService] [server2] adding template [.triggered_watches] for index patterns [.triggered_watches*]
[2019-07-25T14:08:11,982][INFO ][o.e.c.m.MetaDataIndexTemplateService] [server2] adding template [.watch-history-9] for index patterns [.watcher-history-9*]
[2019-07-25T14:08:12,059][INFO ][o.e.c.m.MetaDataIndexTemplateService] [server2] adding template [.watches] for index patterns [.watches*]
[2019-07-25T14:08:12,150][INFO ][o.e.c.m.MetaDataIndexTemplateService] [server2] adding template [.monitoring-logstash] for index patterns [.monitoring-logstash-6-*]
[2019-07-25T14:08:12,266][INFO ][o.e.c.m.MetaDataIndexTemplateService] [server2] adding template [.monitoring-es] for index patterns [.monitoring-es-6-*]
[2019-07-25T14:08:12,356][INFO ][o.e.c.m.MetaDataIndexTemplateService] [server2] adding template [.monitoring-beats] for index patterns [.monitoring-beats-6-*]
[2019-07-25T14:08:12,431][INFO ][o.e.c.m.MetaDataIndexTemplateService] [server2] adding template [.monitoring-alerts] for index patterns [.monitoring-alerts-6]
[2019-07-25T14:08:12,513][INFO ][o.e.c.m.MetaDataIndexTemplateService] [server2] adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-6-*]
[2019-07-25T14:08:12,788][INFO ][o.e.l.LicenseService     ] [server2] license [c3d844ed-586b-4267-b2f4-5dc3dac7e951] mode [basic] - valid

Check the ports:

Add the cluster nodes on server1, server2 and server3:

[root@server1 ~]# vim /etc/elasticsearch/elasticsearch.yml
discovery.zen.ping.unicast.hosts: ["server1", "server2","server3"]
[root@server1 ~]# systemctl restart elasticsearch

Visit again:

At this point, however, it is not yet decided which node acts as the master; this has to be configured in the file:

[root@server1 ~]# vim /etc/elasticsearch/elasticsearch.yml
node.name: server1
#
node.master: true   # act as the master
node.data: false   ## do not store data
[root@server1 ~]# systemctl restart elasticsearch
[root@server2 ~]#  vim /etc/elasticsearch/elasticsearch.yml
node.name: server2
#
node.master: false   ## not a master
node.data: true   # store data
[root@server2 ~]#  systemctl restart elasticsearch
[root@server3 ~]# vim /etc/elasticsearch/elasticsearch.yml
node.name: server3
#
node.master: false
node.data: true
[root@server3 ~]#  systemctl restart elasticsearch

Visit http://172.25.3.1:9100/ again:
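To confirm the membership and the master election from the shell as well (added here as a hedged verification step, not part of the original transcript):

[root@server1 ~]# curl http://172.25.3.1:9200/_cat/nodes?v   ## lists all three nodes; the elected master is marked with *
[root@server1 ~]# curl http://172.25.3.1:9200/_cat/health?v  ## cluster status should be green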

### Deploy Logstash ###
Deploy on server2:

[root@server2 6.6]# rpm -ivh logstash-6.6.1.rpm
warning: logstash-6.6.1.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:logstash-1:6.6.1-1               ################################# [100%]
Using provided startup.options file: /etc/logstash/startup.options
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c5330000, 986513408, 0) failed; error='Cannot allocate memory' (errno=12)     ## error: not enough memory
/usr/share/logstash/bin/system-install: line 88: #: command not found
Unable to install system startup script for Logstash.
chmod: cannot access ‘/etc/default/logstash’: No such file or directory
warning: %post(logstash-1:6.6.1-1.noarch) scriptlet failed, exit status 1
[root@server2 6.6]# free -m   ## check memory: not enough free
              total        used        free      shared  buff/cache   available
Mem:           1819        1372          82          16         364         148
Swap:             0           0           0
[root@server2 6.6]# vim /etc/fstab    ## re-enable the swap partition (uncomment the swap line)
/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
[root@server2 6.6]# mount -a
[root@server2 6.6]# swapon -a
[root@server2 6.6]# swapon -s
Filename                Type        Size    Used    Priority
/dev/dm-1                               partition   2097148 0   -1
[root@server2 6.6]# rpm -ivh logstash-6.6.1.rpm  --force
warning: logstash-6.6.1.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:logstash-1:6.6.1-1               ################################# [100%]
Using provided startup.options file: /etc/logstash/startup.options
Successfully created system startup script for Logstash   ## installed successfully
[root@server2 bin]# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {} }'   ## stdin/stdout smoke test


Edit the data-collection configuration file

[root@server2 bin]# cd /etc/logstash/conf.d/
[root@server2 conf.d]# vim es.conf
input {
        stdin {}
}
output {
        stdout {}
        elasticsearch {
                hosts => "172.25.3.1:9200"
                index => "logstash-%{+YYYY.MM.dd}"
        }
}
[root@server2 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/es.conf


View the collected logs in the interface:


Real-time collection from a log file

[root@server2 conf.d]# vim es.conf
input {
        file {
                path => "/var/log/elasticsearch/my-es.log"
                start_position => "beginning"
        }
}
output {
        stdout {}
        elasticsearch {
                hosts => "172.25.3.1:9200"
                index => "es-%{+YYYY.MM.dd}"
        }
}
[root@server2 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/es.conf


The log-collection page now changes:

To delete the monitored index information from the interface: click the item -> Action -> Delete -> type the confirmation word -> OK.

Note: to collect the same item again, the sincedb record (which tracks the file's inode number) has to be deleted first.

[root@server2 conf.d]# cd /usr/share/logstash/data/plugins/inputs/file/
[root@server2 file]# ls
[root@server2 file]# l.
.  ..  .sincedb_d5a86a03368aaadc80f9eeaddba3a9f5
[root@server2 file]# cat .sincedb_d5a86a03368aaadc80f9eeaddba3a9f5
33637093 0 64768 65109 1564039466.11288 /var/log/elasticsearch/my-es.log   # the inode number here matches the file's inode below; every file has its own inode number
[root@server2 file]# ll -i /var/log/elasticsearch/my-es.log
33637093 -rw-r--r-- 1 elasticsearch elasticsearch 65109 Jul 25 15:18 /var/log/elasticsearch/my-es.log
[root@server2 file]# rm -f .sincedb_d5a86a03368aaadc80f9eeaddba3a9f5   ## the file can only be collected again after this record is removed

Collecting logs from another node:

Edit the rsyslog configuration on server3:
[root@server3 ~]# vim /etc/rsyslog.conf
*.* @@172.25.3.2:514
[root@server3 ~]# systemctl restart rsyslog
[root@server2 conf.d]# vim es.conf
input {
#        file {
#                  path => "/var/log/elasticsearch/my-es.log"
#                  start_position => "beginning"
#
#        }
        syslog {
                port => 514
        }
}
output {
        stdout {}
        elasticsearch {
                hosts => "172.25.3.1:9200"
                index => "syslog-%{+YYYY.MM.dd}"
        }
}
[root@server2 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/es.conf
[root@server3 ~]# logger hello world
[root@server3 ~]# logger  hello westos
[root@server2 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/es.conf
>"0.0.0.0:514"}
[INFO ] 2019-07-25 15:57:48.243 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-07-25 15:57:49.841 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2019-07-25 15:58:22.740 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-syslog-3.4.1/lib/logstash/inputs/syslog.rb:130] syslog - new connection {:client=>"172.25.3.3:59848"}
{"severity" => 5,"@timestamp" => 2019-07-25T07:58:22.000Z,"priority" => 13,"timestamp" => "Jul 25 15:58:22","severity_label" => "Notice","@version" => "1","program" => "root","facility" => 1,"facility_label" => "user-level","host" => "172.25.3.3","message" => "hello world\n","logsource" => "server3"
}
{"severity" => 5,"@timestamp" => 2019-07-25T07:59:29.000Z,"priority" => 13,"timestamp" => "Jul 25 15:59:29","severity_label" => "Notice","@version" => "1","program" => "root","facility" => 1,"facility_label" => "user-level","host" => "172.25.3.3","message" => "hello westos\n","logsource" => "server3"
}

View the collected syslog entries:


As collected above, multi-line log entries are not grouped together, which makes them inconvenient to read.

Configure a collection method that handles multi-line entries:

[root@server2 conf.d]# vim test.conf
input {
        stdin {
                codec => multiline {
                        pattern => "^EOF"   # an event ends at a line starting with EOF
                        negate => "true"
                        what => "previous"
                }
        }
}
output {
        stdout {}
}
[root@server2 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
[INFO ] 2019-07-25 16:09:14.927 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
1223
213243
EOF   ## everything typed before the line starting with EOF is merged into one event
{
       "message" => "1223\n213243",
    "@timestamp" => 2019-07-25T08:09:28.874Z,
      "@version" => "1",
          "host" => "server2",
          "tags" => [
        [0] "multiline"
    ]
}

Change the way the Elasticsearch log is collected in es.conf so that multi-line entries are merged:
[root@server2 conf.d]# vim es.conf
input {
        file {
                path => "/var/log/elasticsearch/my-es.log"
                start_position => "beginning"
                codec => multiline {
                        pattern => "^\["   # a new entry starts with "["
                        negate => "true"
                        what => "previous"
                }
        }
#          syslog {
#                 port => 514
#           }
#
}
output {
        stdout {}
        elasticsearch {
                hosts => "172.25.3.1:9200"
                index => "es-%{+YYYY.MM.dd}"
        }
}
[root@server2 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/es.conf


Visit the page:


### Collecting httpd logs ###
Install httpd

[root@server2 ~]# yum install -y httpd
[root@server2 ~]# cd /var/www/html/
[root@server2 html]# echo www.westos.org > index.html   ## create the page to be served
[root@server2 html]# systemctl start httpd
[root@server2 html]# curl localhost
www.westos.org

A large number of log entries can be generated with a load test:
[root@foundation3 ~]# ab -c 1 -n 100 172.25.3.2/index.html

Edit the log-collection file on server2

input {
#        file {
#                  path => "/var/log/elasticsearch/my-es.log"
#                  start_position => "beginning"
#                  codec => multiline {
#                       pattern => "^\["
#                       negate => "true"
#                       what => "previous"
#                 }
#
#        }
#
#
#         syslog {
#                 port => 514
#           }
         file {
                path => "/var/log/httpd/access_log"
                start_position => "beginning"
         }
}
output {
        stdout {}
        elasticsearch {
                hosts => "172.25.3.1:9200"
                index => "apache-%{+YYYY.MM.dd}"
        }
}
[root@server2 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/es.conf


Visit:


Add a filter (delete the sincedb/inode record and re-collect the front-end Apache access log):

[root@server2 conf.d]# vim es.conf
         file {
                path => "/var/log/httpd/access_log"
                start_position => "beginning"
         }
}
filter {
        grok {
                match => { "message" => "%{HTTPD_COMBINEDLOG}" }
        }
}
output {
        stdout {}
        elasticsearch {
                hosts => "172.25.3.1:9200"
                index => "apache-%{+YYYY.MM.dd}"
        }
}
[root@server2 conf.d]# rm  -f /usr/share/logstash/data/plugins/inputs/file/.sincedb_15940cad53dd1d99808eeaecd6f6ad3f
[root@server2 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/es.conf

Visit:


#### Kibana deployment ####
On server3:

[root@server3 6.6]# rpm -ivh kibana-6.6.1-x86_64.rpm
warning: kibana-6.6.1-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:kibana-6.6.1-1                   ################################# [100%]
[root@server3 6.6]# cd /etc/kibana/
[root@server3 kibana]# ls
kibana.yml
[root@server3 kibana]# vim kibana.yml
2 server.port: 5601
7 server.host: "172.25.3.3"
28 elasticsearch.hosts: ["http://172.25.3.1:9200"]
37 kibana.index: ".kibana"
[root@server3 kibana]# systemctl start kibana
[root@server3 kibana]# netstat -antlp | grep :5601
tcp        0      0 172.25.3.3:5601         0.0.0.0:*               LISTEN      12710/node
[root@server3 kibana]# vim /etc/fstab
[root@server3 kibana]# swapon -a
[root@server3 kibana]# swapon -s
Filename                Type        Size    Used    Priority
/dev/dm-1                               partition   2097148 0   -1
[root@server3 kibana]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1839         391        1065          16         382        1282
Swap:          2047           0        2047

Visit the interface at http://172.25.3.3:5601:


Kibana's interface is entirely in English; it can be localized to Chinese with a plugin (version 6.7 ships with built-in Chinese localization).

[root@server3 ~]# yum install -y unzip
[root@server3 6.6]# unzip Kibana_Hanization-master.zip
[root@server3 6.6]# cd Kibana_Hanization-master
[root@server3 Kibana_Hanization-master]# ls
config  image  main.py  README.md  requirements.txt
[root@server3 Kibana_Hanization-master]# python main.py
Usage example: python main.py "/opt/kibana-5.6.2-darwin-x86_64/"
[root@server3 Kibana_Hanization-master]# python main.py /usr/share/kibana/
File [/usr/share/kibana/dlls/vendors.bundle.dll.js] translated.
File [/usr/share/kibana/node_modules/x-pack/plugins/canvas/index.js] translated.
File [/usr/share/kibana/node_modules/x-pack/plugins/canvas/canvas_plugin/renderers/all.js] translated.
File [/usr/share/kibana/node_modules/x-pack/plugins/canvas/canvas_plugin/uis/arguments/all.js] translated.
File [/usr/share/kibana/node_modules/x-pack/plugins/canvas/canvas_plugin/uis/datasources/all.js] translated.
File [/usr/share/kibana/node_modules/x-pack/plugins/canvas/public/register_feature.js] translated.
File [/usr/share/kibana/node_modules/x-pack/plugins/ml/index.js] translated.
File [/usr/share/kibana/node_modules/x-pack/plugins/ml/public/register_feature.js] translated.
File [/usr/share/kibana/node_modules/x-pack/plugins/spaces/index.js] translated.
File [/usr/share/kibana/node_modules/x-pack/plugins/spaces/public/components/manage_spaces_button.js] translated.
File [/usr/share/kibana/node_modules/x-pack/plugins/spaces/public/views/nav_control/components/spaces_description.js] translated.
File [/usr/share/kibana/optimize/bundles/apm.bundle.js] translated.
File [/usr/share/kibana/optimize/bundles/canvas.bundle.js] translated.
File [/usr/share/kibana/optimize/bundles/commons.bundle.js] translated.
File [/usr/share/kibana/optimize/bundles/infra.bundle.js] translated.
File [/usr/share/kibana/optimize/bundles/kibana.bundle.js] translated.
File [/usr/share/kibana/optimize/bundles/login.bundle.js] translated.
File [/usr/share/kibana/optimize/bundles/ml.bundle.js] translated.
File [/usr/share/kibana/optimize/bundles/monitoring.bundle.js] translated.
File [/usr/share/kibana/optimize/bundles/timelion.bundle.js] translated.
File [/usr/share/kibana/src/legacy/core_plugins/kibana/server/tutorials/kafka_logs/index.js] translated.
File [/usr/share/kibana/src/ui/public/chrome/directives/global_nav/global_nav.js] translated.
Congratulations, the Kibana localization is complete!

Refresh the page and the interface is now in Chinese: http://172.25.3.3:5601
