1. Environment overview

Server roles:

192.168.50.211         kafka+zookeeper

192.168.50.212          kafka+zookeeper

192.168.50.213         kafka+zookeeper

192.168.50.214         nginx filebeat logstash

192.168.50.215        elasticsearch kibana logstash

Software versions:

kafka_2.12-0.10.2.1.tgz

zookeeper-3.4.10.tar.gz

filebeat-5.3.2-linux-x86_64.tar.gz

kibana-5.3.2-linux-x86_64.tar.gz

logstash-5.3.2.tar.gz

elasticsearch-5.3.2.tar.gz

x-pack-5.3.2.zip

OS version of the host where Elasticsearch is installed:

# uname -a

Linux ansibleer 2.6.32-431.el6.x86_64 #1 SMP Fri Nov 22 03:15:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/issue

CentOS release 6.5 (Final)

Kernel \r on an \m

# uname -r

2.6.32-431.el6.x86_64

2. Elasticsearch installation and configuration

2.1 Kernel upgrade

Elasticsearch 5.x must run on a system with Linux kernel 3.0 or later.

Kernel download:

https://www.kernel.org/

longterm: 3.10.105    2017-02-10    [tar.xz]

Install build dependencies:

# yum groupinstall "Development Tools"

# yum install ncurses-devel

# yum install qt-devel

# yum install hmaccalc zlib-devel binutils-devel elfutils-libelf-devel

Upgrade the kernel:

Prepare the kernel build configuration:

# tar -xf linux-3.10.105.tar.xz -C /usr/local

# cd /usr/local/linux-3.10.105

# cp /boot/config-2.6.32-431.el6.x86_64 .config

# sh -c 'yes "" | make oldconfig'

HOSTCC  scripts/basic/fixdep

HOSTCC  scripts/kconfig/conf.o

SHIPPED scripts/kconfig/zconf.tab.c

SHIPPED scripts/kconfig/zconf.lex.c

SHIPPED scripts/kconfig/zconf.hash.c

HOSTCC  scripts/kconfig/zconf.tab.o

HOSTLD  scripts/kconfig/conf

scripts/kconfig/conf --oldconfig Kconfig

............

CRC8 function (CRC8) [M/y/?] (NEW)

XZ decompression support (XZ_DEC) [Y/?] (NEW) y

x86 BCJ filter decoder (XZ_DEC_X86) [Y/n] (NEW)

PowerPC BCJ filter decoder (XZ_DEC_POWERPC) [N/y] (NEW)

IA-64 BCJ filter decoder (XZ_DEC_IA64) [N/y] (NEW)

ARM BCJ filter decoder (XZ_DEC_ARM) [N/y] (NEW)

ARM-Thumb BCJ filter decoder (XZ_DEC_ARMTHUMB) [N/y] (NEW)

SPARC BCJ filter decoder (XZ_DEC_SPARC) [N/y] (NEW)

XZ decompressor tester (XZ_DEC_TEST) [N/m/y/?] (NEW)

Averaging functions (AVERAGE) [Y/?] y

CORDIC algorithm (CORDIC) [M/y/?] m

JEDEC DDR data (DDR) [N/y/?] (NEW)

#

# configuration written to .config

#

Use the number of CPU cores as the -j argument for the kernel build:

# grep pro /proc/cpuinfo | wc -l

2

# make -j2 bzImage

# make -j2 modules

# make -j2 modules_install

# make install

sh /usr/local/linux-3.10.105/arch/x86/boot/install.sh 3.10.105 arch/x86/boot/bzImage \

System.map "/boot"

ERROR: modinfo: could not find module vsock

ERROR: modinfo: could not find module vmci

ERROR: modinfo: could not find module vmware_balloon

# vi /etc/grub.conf

default=1

timeout=5

splashimage=(hd0,0)/grub/splash.xpm.gz

hiddenmenu

title CentOS (3.10.105)

root (hd0,0)

kernel /vmlinuz-3.10.105 ro root=/dev/mapper/vg_ansibleer-lv_root rd_LVM_LV=vg_ansibleer/lv_swap rd_NO_LUKS rd_LVM_LV=vg_ansibleer/lv_root rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=128M  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM LANG=en_US.UTF-8 rhgb quiet

initrd /initramfs-3.10.105.img

title CentOS (2.6.32-431.el6.x86_64)

root (hd0,0)

kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/mapper/vg_ansibleer-lv_root rd_LVM_LV=vg_ansibleer/lv_swap rd_NO_LUKS rd_LVM_LV=vg_ansibleer/lv_root rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=128M  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM LANG=en_US.UTF-8 rhgb quiet

initrd /initramfs-2.6.32-431.el6.x86_64.img

Change default=1 to default=0.
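The same change can be made non-interactively; a minimal sketch, assuming the new kernel is the first title entry and keeping in mind that on CentOS 6 /etc/grub.conf is normally a symlink to /boot/grub/grub.conf:

# sed -i 's/^default=1/default=0/' /boot/grub/grub.conf
# grep ^default /boot/grub/grub.conf
default=0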

# reboot

# uname -r

3.10.105

2.2 Elasticsearch installation and configuration

System settings:

# vi /etc/security/limits.conf

Add:

*               soft    nproc           65536

*               hard    nproc           65536

*               soft    nofile          65536

*               hard    nofile          65536

# vi /etc/sysctl.conf

Add:

fs.file-max=65536

vm.max_map_count=262144

# sysctl -p

# vi /etc/security/limits.d/90-nproc.conf

*          soft    nproc     1024

Change to:

*          soft    nproc     2048
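After logging in again, the new limits and kernel parameters can be checked quickly; a small verification, assuming the settings above were applied:

# ulimit -n
65536
# sysctl vm.max_map_count
vm.max_map_count = 262144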

Configure the Java environment:

# tar -zxf jdk-8u101-linux-x64.tar.gz -C /usr/local

# vi /etc/profile

Add:

export JAVA_HOME=/usr/local/jdk1.8.0_101

export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

export PATH=$PATH:$JAVA_HOME/bin

# source /etc/profile
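A quick sanity check of the Java setup (the exact version banner depends on the JDK package used):

# java -version
# echo $JAVA_HOME
/usr/local/jdk1.8.0_101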

Install and configure Elasticsearch:

# groupadd esuser

# useradd -g esuser -d /home/esuser -m esuser

# passwd esuser

# tar -zxf elasticsearch-5.3.2.tar.gz -C /usr/local

# cd /usr/local

# ln -sv elasticsearch-5.3.2 elasticsearch

`elasticsearch' -> `elasticsearch-5.3.2'

# mkdir -pv /data/elasticsearch/{data,logs}

mkdir: created directory `/data'

mkdir: created directory `/data/elasticsearch'

mkdir: created directory `/data/elasticsearch/data'

mkdir: created directory `/data/elasticsearch/logs'

# chown -R esuser:esuser /data/

# ll -d /data

drwxr-xr-x. 3 esuser esuser 4096 May  5 13:21 /data

# vi /usr/local/elasticsearch/config/elasticsearch.yml

cluster.name: es-cluster

node.name: es-node

path.data: /data/elasticsearch/data

path.logs: /data/elasticsearch/logs

network.host: 182.180.117.200

http.port: 9200

bootstrap.memory_lock: false

bootstrap.system_call_filter: false

Start the service:

# su - esuser -c "/usr/local/elasticsearch/bin/elasticsearch &"

Note: /usr/local/elasticsearch and everything under it must be owned by the esuser user and group.

Open the corresponding firewall port:

# vi /etc/sysconfig/iptables

Add a rule for port 9200 after the existing port 80 rule:

-A INPUT -p tcp --dport 80 -j ACCEPT

-A INPUT -p tcp --dport 9200 -j ACCEPT

Open http://192.168.50.215:9200/ in a browser; the response should be:

{

"name" : "es-node",

"cluster_name" : "es-cluster",

"cluster_uuid" : "F3vEJMuvTxmwrT5j049GPA",

"version" : {

"number" : "5.3.2",

"build_hash" : "3068195",

"build_date" : "2017-04-24T16:15:59.481Z",

"build_snapshot" : false,

"lucene_version" : "6.4.2"

},

"tagline" : "You Know, for Search"

}
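The cluster state can also be checked from the command line with the standard health and cat APIs, for example:

# curl 'http://192.168.50.215:9200/_cluster/health?pretty'
# curl 'http://192.168.50.215:9200/_cat/nodes?v'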

2.3 Logstash setup on 215

# tar -zxf logstash-5.3.2.tar.gz -C /usr/local

# cd /usr/local

# ln -sv logstash-5.3.2 logstash

`logstash' -> `logstash-5.3.2'

# cd /usr/local/logstash/config

The following simple pipeline can be used to test connectivity between the components:

# vi logstash-simple.conf

input { stdin { } }

output {
  elasticsearch {
    hosts => ["182.180.117.200:9200"]
    user => "elastic"
    password => "changeme"
  }
  stdout { codec => rubydebug }
}
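To try the pipeline, start Logstash with this file and type a test line on stdin; the event is echoed by the rubydebug codec and indexed into the default logstash-* index, which can then be verified from another shell (the user/password settings only become meaningful once X-Pack is installed in section 3):

# cd /usr/local/logstash
# bin/logstash -f config/logstash-simple.conf
# curl '192.168.50.215:9200/logstash-*/_search?pretty&size=1'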

3. Kibana setup and plugin installation

# tar -zxf kibana-5.3.2-linux-x86_64.tar.gz -C /usr/local

# cd /usr/local

# ln -sv kibana-5.3.2-linux-x86_64 kibana

`kibana' -> `kibana-5.3.2-linux-x86_64'

# cd kibana/bin

# ./kibana-plugin install file:///root/x-pack-5.3.2.zip

Install the X-Pack plugin for Elasticsearch:

# cd /usr/local/elasticsearch/bin

# su - esuser -c "/usr/local/elasticsearch/bin/elasticsearch-plugin install file:///root/x-pack-5.3.2.zip"

-> Downloading file:///root/x-pack-5.3.2.zip

[=================================================] 100%

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

@     WARNING: plugin requires additional permissions     @

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

* java.lang.RuntimePermission accessClassInPackage.com.sun.activation.registries

* java.lang.RuntimePermission getClassLoader

* java.lang.RuntimePermission setContextClassLoader

* java.lang.RuntimePermission setFactory

* java.security.SecurityPermission createPolicy.JavaPolicy

* java.security.SecurityPermission getPolicy

* java.security.SecurityPermission putProviderProperty.BC

* java.security.SecurityPermission setPolicy

* java.util.PropertyPermission * read,write

* java.util.PropertyPermission sun.nio.ch.bugLevel write

* javax.net.ssl.SSLPermission setHostnameVerifier

See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html

for descriptions of what these permissions allow and the associated risks.

Continue with installation? [y/N]y

-> Installed x-pack

# cd /usr/local/elasticsearch/config

# vi elasticsearch.yml

Add:

#x-pack authc
xpack.security.authc:
  anonymous:
    username: guest
    roles: superuser
    authz_exception: true

# cd /usr/local/kibana/config

# vi kibana.yml

server.host: "182.180.117.200"

elasticsearch.url: "http://182.180.117.200:9200"

elasticsearch.username: "elastic"

elasticsearch.password: "changeme"

pid.file: /var/run/kibana.pid

# ps -ef | grep elasticsearch

Kill the running Elasticsearch process, then start it again:

# su - esuser -c "/usr/local/elasticsearch/bin/elasticsearch &"

When the port is listening, as shown below, startup is complete:

# netstat -an | grep 9200

tcp        0      0 ::ffff:182.180.117.200:9200 :::*                        LISTEN

# cd /usr/local/logstash/bin

# ./logstash -f ../config/logstash-simple.conf

# cd /usr/local/kibana/bin

# ./kibana

# vi /etc/sysconfig/iptables

Add the following after the port 9200 rule:

-A INPUT -p tcp --dport 5601 -j ACCEPT

Open http://192.168.50.215:5601 in a browser.

The Kibana login page appears (figure: kibana-1).

Log in with username elastic and password changeme.
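Once the login works, it is advisable to change the default elastic password; a sketch using the X-Pack 5.x security API (newpassword is a placeholder, and kibana.yml plus any Logstash outputs that authenticate must be updated to match afterwards):

# curl -XPUT -u elastic:changeme 'http://192.168.50.215:9200/_xpack/security/user/elastic/_password?pretty' -H 'Content-Type: application/json' -d '{ "password" : "newpassword" }'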

4. Kafka and ZooKeeper cluster setup

For building the Kafka and ZooKeeper clusters, refer to my earlier article on the ES 2.x deployment:

Link:

http://xiaoxiaozhou.blog.51cto.com/4681537/1854684
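Once that cluster is running, the topic used by the pipelines below can be created and inspected up front; a sketch, assuming Kafka is unpacked under /usr/local/kafka on the 211-213 nodes and ZooKeeper listens on 2181:

# /usr/local/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.50.211:2181,192.168.50.212:2181,192.168.50.213:2181 --replication-factor 3 --partitions 3 --topic access-nginx-messages
# /usr/local/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.50.211:2181 --topic access-nginx-messages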

5. Nginx log processing

Server 214 runs nginx as a reverse proxy in front of several services.

nginx configuration:

# cd /usr/local/nginx

# vi conf/nginx.conf

log_format  main  '$remote_addr - $upstream_addr - $remote_user [$time_local] "$request" '

'$status $upstream_status $body_bytes_sent "$http_referer" '

'"$http_user_agent" "$http_x_forwarded_for"'

'$request_time - $upstream_cache_status' ;

Sample nginx access log entry:

IP1 - IP2:port - - [11/May/2017:14:18:31 +0800] "GET /content/dam/phone/emv/index.html HTTP/1.1" 304 304 0 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/600.1.3 (KHTML, like Gecko) Version/8.0 Mobile/12A4345d Safari/600.1.4" "IP3, IP4, IP5, IP6"0.007 - -

$remote_addr            client address
$upstream_addr          address of the upstream backend that actually served the request
$remote_user            client user name
$time_local             access time and time zone
$request                request URI and HTTP protocol
$status                 HTTP response status
$upstream_status        upstream response status
$body_bytes_sent        size of the response body sent to the client
$http_referer           referring URL
$http_user_agent        client browser / user-agent information
$http_x_forwarded_for   client IPs recorded along the proxy chain (client IP, load-balancer IPs)
$request_time           total time spent on the request
$upstream_cache_status  cache status

The matching grok pattern, broken down field by field:

%{IPORHOST:client_ip}

(%{URIHOST:upstream_host}|-)

%{USER:ident} %{USER:auth}

\[%{HTTPDATE:timestamp}\]

\"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version})?|-)"

%{HOST:domain} %{NUMBER:response}

(?:%{NUMBER:bytes}|-)

%{QS:referrer}

%{QS:agent}

"(%{WORD:x_forword}|-)"

(%{BASE16FLOAT:request_time})

0\"-\"

# tar -zxf logstash-5.3.2.tar.gz -C /usr/local

# cd /usr/local

# ln -sv logstash-5.3.2 logstash

`logstash' -> `logstash-5.3.2'

# cd logstash

# mkdir patterns

# vi patterns/nginx

NGINXACCESS %{IPORHOST:client_ip} (%{URIHOST:upstream_host}|-) %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version})?|-)" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} "(%{WORD:x_forword}|-)" (%{BASE16FLOAT:request_time}) 0\"-\"
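The pattern can be checked in isolation before wiring it into the real pipelines; a minimal sketch (logstash_grok_test.conf is a hypothetical file name) that parses a pasted access-log line from stdin:

# vi config/logstash_grok_test.conf

input { stdin { } }
filter {
  grok {
    patterns_dir => "/usr/local/logstash/patterns"
    match => { "message" => "%{NGINXACCESS}" }
  }
}
output { stdout { codec => rubydebug } }

# bin/logstash -f config/logstash_grok_test.conf

Paste a line from access.log; the parsed fields (or a _grokparsefailure tag) are printed by the rubydebug codec.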

Note: Logstash also requires JDK 1.8.

Logstash pipeline file on 214:

# vi /usr/local/logstash/config/logstash_in_nginx.conf

input {
  file {
    type => "nginx-access"
    path => "/usr/local/nginx/logs/access.log"
    tags => [ "nginx","access" ]
  }
  file {
    type => "nginx-error"
    path => "/usr/local/nginx/logs/error.log"
    tags => [ "nginx","error" ]
  }
}

output {
  stdout { codec => rubydebug }
  kafka {
    bootstrap_servers => "192.168.50.211:9092,192.168.50.212:9092,192.168.50.213:9092"
    topic_id => "access-nginx-messages"
  }
}
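A sketch of starting this pipeline and confirming that events actually reach Kafka (the consumer command assumes Kafka is unpacked under /usr/local/kafka on a broker node):

# cd /usr/local/logstash
# nohup bin/logstash -f config/logstash_in_nginx.conf &
# /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.50.211:9092 --topic access-nginx-messages --from-beginning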

Logstash pipeline file on 215:

# vi /usr/local/logstash/config/logstash_nginx_indexer.conf

input {
  kafka {
    bootstrap_servers => "192.168.50.211:9092,192.168.50.212:9092,192.168.50.213:9092"
    topics => ["access-nginx-messages"]
  }
}

filter {
  if [type] == "nginx-access" {
    grok {
      patterns_dir => "/usr/local/logstash/patterns"
      match => {
        "message" => "%{NGINXACCESS}"
      }
    }
    date {
      match => ["timestamp","dd/MMM/yyyy:HH:mm:ss Z"]
      target => "logdate"
    }
    ruby {
      code => "event.set('logdateunix',event.get('logdate').to_i)"
    }
  }
  else if [type] == "nginx-error" {
    grok {
      match => [
        "message", "(?<time>\d{4}/\d{2}/\d{2}\s{1,}\d{2}:\d{2}:\d{2})\s{1,}\[%{DATA:err_severity}\]\s{1,}(%{NUMBER:pid:int}#%{NUMBER}:\s{1,}\*%{NUMBER}|\*%{NUMBER}) %{DATA:err_message}(?:,\s{1,}client:\s{1,}(?<client_ip>%{IP}|%{HOSTNAME}))(?:,\s{1,}server:\s{1,}%{IPORHOST:server})(?:, request: %{QS:request})?(?:, host: %{QS:client_ip})?(?:, referrer: \"%{URI:referrer})?",
        "message", "(?<time>\d{4}/\d{2}/\d{2}\s{1,}\d{2}:\d{2}:\d{2})\s{1,}\[%{DATA:err_severity}\]\s{1,}%{GREEDYDATA:err_message}"
      ]
    }
    date {
      match => ["time","yyyy/MM/dd HH:mm:ss"]
      target => "logdate"
    }
    ruby {
      code => "event.set('logdateunix',event.get('logdate').to_i)"
    }
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["192.168.50.215:9200"]
    index => "access-nginx-messages-%{+YYYY.MM.dd}"
    flush_size => 20000
    idle_flush_time => 10
    template_overwrite => true
  }
}
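A sketch of bringing up the indexer on 215 and confirming that documents arrive in the daily index:

# cd /usr/local/logstash
# nohup bin/logstash -f config/logstash_nginx_indexer.conf &
# curl '192.168.50.215:9200/_cat/indices/access-nginx-messages-*?v'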

6. Elasticsearch index operations

List the current indices:

# curl 192.168.50.215:9200/_search?pretty=true  | grep _index

Delete the indices starting with nginx-access-messages and access-nginx-messages:

# curl -XDELETE 'http://192.168.50.215:9200/nginx-access-messages*'

{"acknowledged":true}[root@ansibleer config]#

# curl -XDELETE 'http://192.168.50.215:9200/access-nginx-messages*'

Create an index:

# curl -XPUT '192.168.50.215:9200/customer?pretty'

{

"acknowledged" : true,

"shards_acknowledged" : true

}

List all indices:

# curl '192.168.50.215:9200/_cat/indices?v'
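The same REST API also handles individual documents; for example, indexing and reading back a test document in the customer index created above:

# curl -XPUT '192.168.50.215:9200/customer/external/1?pretty' -H 'Content-Type: application/json' -d '{ "name": "John Doe" }'
# curl -XGET '192.168.50.215:9200/customer/external/1?pretty'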

7. Filebeat and Logstash configuration on server 214

# tar -zxf filebeat-5.3.2-linux-x86_64.tar.gz -C /usr/local

# cd /usr/local

# ln -s filebeat-5.3.2-linux-x86_64 filebeat

# cd filebeat

# egrep -v "#|^$" filebeat.yml

filebeat.prospectors:
- input_type: log
  paths:
    - /usr/local/nginx/logs/access.log
output.logstash:
  hosts: ["192.168.50.214:5043"]
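Before backgrounding it, Filebeat can be run in the foreground with logging to stderr to confirm that events are shipped to Logstash on port 5043; a sketch:

# cd /usr/local/filebeat
# ./filebeat -e -c filebeat.yml -d "publish"

Once the output looks right, stop it and start it with nohup as shown for 215 below.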

# cd /usr/local/logstash

# cat config/logstash_in_nginx.conf

input {
  beats {
    port => "5043"
  }
}

filter {
  grok {
    patterns_dir => "/usr/local/logstash/patterns"
    match => {
      "message" => "%{NGINXACCESS}"
    }
  }
}

output {
  stdout { codec => rubydebug }
  kafka {
    bootstrap_servers => "192.168.50.211:9092,192.168.50.212:9092,192.168.50.213:9092"
    topic_id => "access-nginx-messages"
  }
}

Filebeat configuration on server 215:

# tar -zxf filebeat-5.3.2-linux-x86_64.tar.gz -C /usr/local

# cd /usr/local

# ln -s filebeat-5.3.2-linux-x86_64 filebeat

# cd filebeat

# egrep -v "#|^$" filebeat.yml

filebeat.prospectors:
- input_type: log
  paths:
    - /usr/local/nginx/logs/access.log
output.logstash:
  hosts: ["192.168.50.215:5043"]

# nohup ./filebeat -c filebeat.yml &

Logstash pipeline file on server 215:

# cat /usr/local/logstash/config/logstash_in_nginx.conf

input {
  beats {
    port => "5043"
  }
}

filter {
  grok {
    patterns_dir => "/usr/local/logstash/patterns"
    match => {
      "message" => "%{NGINXACCESS}"
    }
  }
}

output {
  stdout { codec => rubydebug }
  kafka {
    bootstrap_servers => "192.168.50.211:9092,192.168.50.212:9092,192.168.50.213:9092"
    topic_id => "access-nginx-messages"
  }
}

Reposted from: https://blog.51cto.com/xiaoxiaozhou/2106926
