A logging service architecture consists of three main parts: a log collection agent, log storage, and log browsing.

This logging solution is built on the logstash + elasticsearch + kibana stack: logstash handles log collection and ingestion, elasticsearch handles log storage and indexing, and kibana handles log search and front-end display.

In actual deployment, to ensure availability and conserve resources, pay attention to the following points:

1. logstash offers many socket-based remote input and output plugins, but to avoid interfering with the main business flow, prefer asynchronously reading local log files and writing them into ES. log4j, for example, supports a SocketAppender, but using it requires modifying the existing application's configuration, and if the server side fails, the application keeps no local log file, so logs are easily lost.

2. A logstash agent can write to ES directly, but to reduce the pressure on ES, redis and a logstash indexer are added as intermediate roles, lengthening the message-delivery pipeline and lowering the load at each stage.

3. logstash runs on the JVM, so to conserve system resources, each physical machine (whether agent or indexer) is given only a single configuration file, and the service is started by a single daemon process.
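To make points 1 and 2 concrete, a file-based agent config for one of the apps might look like the sketch below. This file does not appear in the original listings; the log path is an assumption for illustration, while the redis host and the `log4j-account` key are taken from the indexer config:

```
input {
  file {
    type => "log4j-account"
    # assumed path; point this at the app's actual log4j output files
    path => [ "/data/logs/account/*.log" ]
  }
}

output {
  redis {
    host => "10.241.223.112"
    data_type => "list"
    key => "log4j-account"
  }
}
```

The agent only tails local files and pushes them onto a redis list, so a redis or indexer outage never blocks the application itself.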

cat /usr/local/logserver/logstash/conf/allinone_indexer.conf

```
input {
  redis { host => "10.241.223.112" type => "log4j-account"      data_type => "list" key => "log4j-account"      format => "json_event" }
  redis { host => "10.241.223.112" type => "log4j-activity"     data_type => "list" key => "log4j-activity"     format => "json_event" }
  redis { host => "10.241.223.112" type => "log4j-databag"      data_type => "list" key => "log4j-databag"      format => "json_event" }
  redis { host => "10.241.223.112" type => "log4j-document"     data_type => "list" key => "log4j-document"     format => "json_event" }
  redis { host => "10.241.223.112" type => "log4j-job"          data_type => "list" key => "log4j-job"          format => "json_event" }
  redis { host => "10.241.223.112" type => "log4j-monitor"      data_type => "list" key => "log4j-monitor"      format => "json_event" }
  redis { host => "10.241.223.112" type => "log4j-notification" data_type => "list" key => "log4j-notification" format => "json_event" }
  redis { host => "10.241.223.112" type => "log4j-qto"          data_type => "list" key => "log4j-qto"          format => "json_event" }
  redis { host => "10.241.223.112" type => "log4j-search"       data_type => "list" key => "log4j-search"       format => "json_event" }
  redis { host => "10.241.223.112" type => "log4j-social"       data_type => "list" key => "log4j-social"      format => "json_event" }
  redis { host => "10.241.223.112" type => "log4j-storage"      data_type => "list" key => "log4j-storage"      format => "json_event" }
  redis { host => "10.241.223.112" type => "tomcat-account"     data_type => "list" key => "tomcat-account"     format => "json_event" }
  redis { host => "10.241.223.112" type => "tomcat-databag"     data_type => "list" key => "tomcat-databag"     format => "json_event" }
  redis { host => "10.241.223.112" type => "tomcat-monitor"     data_type => "list" key => "tomcat-monitor"     format => "json_event" }
}

filter {
  multiline { type => "log4j-account"      pattern => "^\s" what => "previous" }
  multiline { type => "log4j-activity"     pattern => "^\s" what => "previous" }
  multiline { type => "log4j-databag"      pattern => "^\s" what => "previous" }
  multiline { type => "log4j-document"     pattern => "^\s" what => "previous" }
  multiline { type => "log4j-job"          pattern => "^\s" what => "previous" }
  multiline { type => "log4j-monitor"      pattern => "^\s" what => "previous" }
  multiline { type => "log4j-notification" pattern => "^\s" what => "previous" }
  multiline { type => "log4j-qto"          pattern => "^\s" what => "previous" }
  multiline { type => "log4j-search"       pattern => "^\s" what => "previous" }
  multiline { type => "log4j-social"       pattern => "^\s" what => "previous" }
  multiline { type => "log4j-storage"      pattern => "^\s" what => "previous" }
  multiline { type => "tomcat-account"     pattern => "^\s" what => "previous" }
  multiline { type => "tomcat-databag"     pattern => "^\s" what => "previous" }
  multiline { type => "tomcat-monitor"     pattern => "^\s" what => "previous" }
}

output {
  elasticsearch {
    host    => "10.241.223.112"
    cluster => "logstashelasticsearch"
    port    => 9300
  }
}
```

cat /usr/local/logserver/logstash/conf/nginx_agent.conf

```
input {
  file {
    type => "nginx"
    path => [ "/data/logs/nginx/*.log" ]
  }
}

output {
  redis {
    host => "10.241.223.112"
    data_type => "list"
    key => "nginx"
    type => "nginx"
  }
}
```
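Note that this agent pushes events onto the `nginx` redis key, but allinone_indexer.conf above defines no redis input for that key, so these events would never reach elasticsearch. Following the pattern of the existing entries, the indexer's input section would additionally need something like this (a sketch, not part of the original listing):

```
redis { host => "10.241.223.112" type => "nginx" data_type => "list" key => "nginx" }
```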

cat /usr/local/logserver/logstash/startup.sh

```
#!/bin/bash
workdir=$(cd $(dirname $0); pwd)
#echo $workdir

usage(){
cat <<EOF1
This script starts, stops and restarts the logstash agents and indexers of apps
Usage: $(basename $0) [ agent | indexer | all ] app1 app2 app3 ...
EOF1
}

[ $# -lt 2 ] && usage && exit

start_mode=$1
for appname in "$@"; do
    case "$appname" in
        "agent" )
            continue
            ;;
        "indexer" )
            continue
            ;;
        "all" )
            continue
            ;;
        *)
            if [ "$start_mode" = "all" ]; then
                java -jar $workdir/logstash.jar agent -f $workdir/conf/"$appname"_indexer.conf &
                java -jar $workdir/logstash.jar agent -f $workdir/conf/"$appname"_agent.conf &
            else
                java -jar $workdir/logstash.jar agent -f $workdir/conf/"$appname"_"$start_mode".conf &
            fi
            ;;
    esac
    echo
done
```

cat /usr/local/logserver/logstash/indexer_startup.sh

```
#!/bin/bash
#/usr/local/logserver/logstash/startup.sh indexer account activity databag
/usr/local/logserver/logstash/startup.sh indexer allinone
```
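The dispatch logic in startup.sh can be checked without launching any JVMs. The sketch below re-creates the argument handling with the `java` invocation replaced by `echo`, so you can see which config files a given command line would start (the function name `list_confs` is made up for this dry run):

```shell
#!/bin/bash
# Dry-run re-creation of startup.sh's dispatch logic: prints the
# config files that would be launched instead of starting logstash.
list_confs() {
    local workdir=/usr/local/logserver/logstash
    local start_mode=$1
    local appname
    for appname in "$@"; do
        case "$appname" in
            # mode keywords are skipped, exactly as in startup.sh
            agent|indexer|all) continue ;;
            *)
                if [ "$start_mode" = "all" ]; then
                    echo "$workdir/conf/${appname}_indexer.conf"
                    echo "$workdir/conf/${appname}_agent.conf"
                else
                    echo "$workdir/conf/${appname}_${start_mode}.conf"
                fi
                ;;
        esac
    done
}

# prints account_indexer.conf and activity_indexer.conf paths
list_confs indexer account activity
```

Because the mode word is also part of `"$@"`, the `case` branches that `continue` past `agent`, `indexer` and `all` are what keep the mode itself from being treated as an app name.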
