sýnesis™ Lite for Snort is built using the Elastic Stack, including Elasticsearch, Logstash and Kibana. To install and configure sýnesis™ Lite for Snort, you must first have a working Elastic Stack environment. The latest release requires Elastic Stack version 6.2 or later.
Refer to the following compatibility chart to choose a release of sýnesis™ Lite for Snort that is compatible with the version of the Elastic Stack you are using.
Elastic Stack    v1.x
6.2    ✓
Setting up Elasticsearch
Currently there is no specific configuration required for Elasticsearch. As long as Kibana and Logstash can talk to your Elasticsearch cluster you should be ready to go. The index template required by Elasticsearch will be uploaded by Logstash.
At high ingest rates (>5K logs/s), or for data redundancy and high availability, a multi-node cluster is recommended.
If you are new to the Elastic Stack, this video goes beyond a simple default installation of Elasticsearch and Kibana. It discusses real-world best practices for hardware sizing and configuration, providing production-level performance and reliability.
 
Additionally, local SSD storage should be considered mandatory! For an in-depth look at how different storage options compare, and in particular how poorly HDD-based storage performs for Elasticsearch (even in multi-drive RAID0 configurations), you should watch this video...
 
Filebeat
As Snort is usually run on one or more Linux servers, the solution includes both Filebeat and Logstash. Filebeat collects the log data on the system where Snort is running and ships it to Logstash via the Beats input. An example Filebeat prospector configuration is included in filebeat/filebeat.yml.
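A minimal sketch of such a prospector, for Filebeat 6.x (both the Snort alert log path and the Logstash host shown here are illustrative assumptions — see the shipped filebeat/filebeat.yml for the actual configuration):

```yaml
filebeat.prospectors:
  - type: log
    paths:
      - /var/log/snort/alert        # illustrative Snort alert log path

output.logstash:
  hosts: ["logstash.example.com:5044"]   # illustrative Logstash host, default Beats port
```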
Setting up Logstash
The sýnesis™ Lite for Snort Logstash pipeline is the heart of the solution. It is here that the raw log data is collected, decoded, parsed, formatted and enriched. It is this processing that makes possible the analytics options provided by the Kibana dashboards.
Follow these steps to ensure that Logstash and sýnesis™ Lite for Snort are optimally configured to meet your needs.
1. Set JVM heap size.
To increase performance, sýnesis™ Lite for Snort takes advantage of the caching and queueing features available in many of the Logstash plugins. These features increase the consumption of the JVM heap. The JVM heap space used by Logstash is configured in jvm.options. It is recommended that Logstash be given at least 2GB of JVM heap. This is configured in jvm.options as follows:
-Xms2g
-Xmx2g
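For example, the heap settings could be adjusted with sed (a sketch only — the real file usually lives at /etc/logstash/jvm.options on RedHat/CentOS and Ubuntu; a local stand-in file is used here for illustration):

```shell
# Create a stand-in jvm.options with the shipped 1g defaults (illustrative).
printf -- '-Xms1g\n-Xmx1g\n' > jvm.options

# Raise both the initial (-Xms) and maximum (-Xmx) heap to 2GB.
sed -i 's/^-Xms.*/-Xms2g/; s/^-Xmx.*/-Xmx2g/' jvm.options

# Verify the change: should print -Xms2g and -Xmx2g.
grep -E '^-Xm[sx]' jvm.options
```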
2. Add and Update Required Logstash plugins
It is recommended that you always use the latest version of the DNS filter. This can be achieved by running the following command:
LS_HOME/bin/logstash-plugin update logstash-filter-dns
3. Copy the pipeline files to the Logstash configuration path.
There are four sets of configuration files provided within the logstash/synlite_snort folder:
logstash
  `- synlite_snort
       |- conf.d  (contains the logstash pipeline)
       |- dictionaries (yaml files used to enrich the raw log data)
       |- geoipdbs  (contains GeoIP databases)
       `- templates  (contains index templates)
Copy the synlite_snort directory to the location of your Logstash configuration files (e.g. on RedHat/CentOS or Ubuntu this would be /etc/logstash/synlite_snort ). If you place the pipeline within a different path, you will need to modify the following environment variables to specify the correct location:
Environment Variable    Description    Default Value
SYNLITE_SNORT_DICT_PATH    The path where the dictionary files are located    /etc/logstash/synlite_snort/dictionaries
SYNLITE_SNORT_TEMPLATE_PATH    The path to where index templates are located    /etc/logstash/synlite_snort/templates
SYNLITE_SNORT_GEOIP_DB_PATH    The path where the GeoIP DBs are located    /etc/logstash/synlite_snort/geoipdbs
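For example, if the pipeline were placed under a hypothetical /opt/logstash/synlite_snort instead, the variables could be set like this (a sketch; the path is illustrative):

```shell
# Point the pipeline at a non-default location (path is hypothetical).
export SYNLITE_SNORT_DICT_PATH=/opt/logstash/synlite_snort/dictionaries
export SYNLITE_SNORT_TEMPLATE_PATH=/opt/logstash/synlite_snort/templates
export SYNLITE_SNORT_GEOIP_DB_PATH=/opt/logstash/synlite_snort/geoipdbs
```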
4. Setup environment variable helper files
Rather than directly editing the pipeline configuration files for your environment, environment variables are used to provide a single location for most configuration options. These environment variables will be referred to in the remaining instructions. A reference of all environment variables can be found here.
Depending on your environment there may be many ways to define environment variables. The files profile.d/synlite_snort.sh and logstash.service.d/synlite_snort.conf are provided to help you with this setup.
Recent versions of both RedHat/CentOS and Ubuntu use systemd to start background processes. When deploying sýnesis™ Lite for Snort on a host where Logstash will be managed by systemd, copy logstash.service.d/synlite_snort.conf to /etc/systemd/system/logstash.service.d/synlite_snort.conf. Any configuration changes can then be made by editing this file.
Remember that for your changes to take effect, you must issue the command sudo systemctl daemon-reload.
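A minimal sketch of what such a systemd drop-in might contain (the variables and values shown are illustrative — any of the environment variables described in this guide can be set this way):

```ini
[Service]
Environment="SYNLITE_SNORT_ES_HOST=127.0.0.1:9200"
Environment="SYNLITE_SNORT_BEATS_PORT=5044"
```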
5. Add the sýnesis™ Lite for Snort pipeline to pipelines.yml
Logstash 6.0 introduced the ability to run multiple pipelines from a single Logstash instance. The pipelines.yml file is where these pipelines are configured. While a single pipeline can be specified directly in logstash.yml, it is a good practice to use pipelines.yml for consistency across environments.
Edit pipelines.yml (usually located at /etc/logstash/pipelines.yml) and add the sýnesis™ Lite for Snort pipeline (adjust the path as necessary).
- pipeline.id: synlite_snort
  path.config: "/etc/logstash/synlite_snort/conf.d/*.conf"
6. Configure inputs
By default, Filebeat data will be received on all IPv4 addresses of the Logstash host using the default TCP port 5044. You can change both the IP address and port used by modifying the following environment variables:
Environment Variable    Description    Default Value
SYNLITE_SNORT_BEATS_HOST    The IP address on which to listen for Filebeat messages    0.0.0.0
SYNLITE_SNORT_BEATS_PORT    The TCP port on which to listen for Filebeat messages    5044
7. Configure Elasticsearch output
Obviously the data needs to land in Elasticsearch, so you need to tell Logstash where to send it. This is done by setting these environment variables:
Environment Variable    Description    Default Value
SYNLITE_SNORT_ES_HOST    The Elasticsearch host to which the output will send data    127.0.0.1:9200
SYNLITE_SNORT_ES_USER    The username for the connection to Elasticsearch    elastic
SYNLITE_SNORT_ES_PASSWD    The password for the connection to Elasticsearch    changeme
If you are only using the open-source version of Elasticsearch, it will ignore the username and password. In that case just leave the defaults.
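For example, the connection settings could be exported like this (a sketch; the remote host and credentials are illustrative):

```shell
# Send output to a remote Elasticsearch node (values are illustrative;
# leave the defaults if you use the open-source version of Elasticsearch).
export SYNLITE_SNORT_ES_HOST=es1.example.com:9200
export SYNLITE_SNORT_ES_USER=elastic
export SYNLITE_SNORT_ES_PASSWD=changeme
```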
8. Enable DNS name resolution (optional)
In the past it was recommended to avoid DNS queries as the latency costs of such lookups had a devastating effect on throughput. While the Logstash DNS filter provides a caching mechanism, its use was not recommended. When the cache was enabled all lookups were performed synchronously. If a name server failed to respond, all other queries were stuck waiting until the query timed out. The end result was even worse performance.
Fortunately these problems have been resolved. Release 3.0.8 of the DNS filter introduced an enhancement which caches timeouts as failures, in addition to normal NXDOMAIN responses. This was an important step, as many domain owners intentionally set up their nameservers to ignore the reverse lookups needed to enrich flow data. In addition to this change, I submitted an enhancement which allows for concurrent queries when caching is enabled. The Logstash team approved this change, and it is included in version 3.0.10 of the plugin.
With these changes I can finally give the green light for using DNS lookups to enrich the incoming data. You will see a slight slowdown in throughput until the cache warms up, but that usually lasts only a few minutes. Once the cache is warm, the overhead is minimal, and event rates averaging 10K/s and as high as 40K/s were observed in testing.
The key to good performance is setting up the cache appropriately. DNS timeouts will most likely be the biggest source of latency, so it is most important to ensure that a high volume of such misses can be cached for longer periods of time.
The DNS lookup features of sýnesis™ Lite for Snort can be configured using the following environment variables:
Environment Variable    Description    Default Value
SYNLITE_SNORT_RESOLVE_IP2HOST    Enable/Disable DNS requests    false
SYNLITE_SNORT_NAMESERVER    The DNS server to which the dns filter should send requests    127.0.0.1
SYNLITE_SNORT_DNS_HIT_CACHE_SIZE    The cache size for successful DNS queries    25000
SYNLITE_SNORT_DNS_HIT_CACHE_TTL    The time in seconds successful DNS queries are cached    900
SYNLITE_SNORT_DNS_FAILED_CACHE_SIZE    The cache size for failed DNS queries    75000
SYNLITE_SNORT_DNS_FAILED_CACHE_TTL    The time in seconds failed DNS queries are cached    3600
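Putting this together, enabling DNS enrichment might look like this (a sketch; the nameserver address and cache values shown are simply the defaults repeated for illustration):

```shell
# Enable reverse DNS enrichment against a local caching resolver.
export SYNLITE_SNORT_RESOLVE_IP2HOST=true
export SYNLITE_SNORT_NAMESERVER=127.0.0.1

# Favor a large, long-lived cache for failed lookups, since timeouts
# are the dominant source of latency.
export SYNLITE_SNORT_DNS_FAILED_CACHE_SIZE=75000
export SYNLITE_SNORT_DNS_FAILED_CACHE_TTL=3600
```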
9. Start Logstash
You should now be able to start Logstash and begin collecting Snort log data. Assuming you are running a recent version of RedHat/CentOS or Ubuntu, and using systemd, complete these steps:
Run systemctl daemon-reload to ensure any changes to the environment variables are recognized.
Run systemctl start logstash
NOTICE! Make sure that you have already set up the Logstash init files by running LS_HOME/bin/system-install. If the init files have not been set up, you will receive an error. To follow along as Logstash starts, you can tail its log by running:
tail -f /var/log/logstash/logstash-plain.log
Logstash takes a little time to start... BE PATIENT!
Logstash setup is now complete. If you are receiving data from Filebeat, you should see daily snort- indices in Elasticsearch.
Setting up Kibana
A (yet undocumented) API is available to import and export Index Patterns. The JSON files which contain the Index Pattern configurations are synlite_snort.index_pattern.json and synlite_snort_stats.index_pattern.json. To set up the Index Patterns, run the following commands:
curl -X POST -u USERNAME:PASSWORD http://KIBANASERVER:5601/api/saved_objects/index-pattern/snort-* -H "Content-Type: application/json" -H "kbn-xsrf: true" -d @/PATH/TO/synlite_snort.index_pattern.json
curl -X POST -u USERNAME:PASSWORD http://KIBANASERVER:5601/api/saved_objects/index-pattern/snort_stats-* -H "Content-Type: application/json" -H "kbn-xsrf: true" -d @/PATH/TO/synlite_snort_stats.index_pattern.json
Finally, the visualizations and dashboards can be loaded into Kibana by importing the synlite_snort.dashboards.json file from within the Kibana UI. This is done in the Kibana Management app under Saved Objects.
Recommended Kibana Advanced Settings
You may find that modifying a few of the Kibana advanced settings will produce a more user-friendly experience while using sýnesis™ Lite for Snort. These settings are made in Kibana, under Management -> Advanced Settings.
Advanced Setting    Value    Why make the change?
doc_table:highlight    false    There is a pretty big query performance penalty that comes with using the highlighting feature. As it isn't very useful for this use case, it is better to just turn it off.
filters:pinnedByDefault    true    Pinning a filter will allow it to persist when you change dashboards. This is very useful when drilling down into something of interest and you want to change dashboards for a different perspective on the same data. This is the first setting I change whenever I am working with Kibana.
state:storeInSessionStorage    true    Kibana URLs can get pretty large. Especially when working with Vega visualizations. This will likely result in error messages for users of Internet Explorer. Using in-session storage will fix this issue for these users.
timepicker:quickRanges    see below    The default options in the Time Picker are less than optimal for most logging and monitoring use cases. Fortunately Kibana now allows you to customize the time picker. Our recommended settings can be found below.
Dashboards
The following dashboards are provided.
NOTE: The dashboards are optimized for a monitor resolution of 1920x1080.
Alerts
 
Threats - Public Attackers
 
Threats - At-Risk Servers
 
Threats - At-Risk Services
 
Threats - High-Risk Clients
 
Sankey
 
Geo IP
 
Raw Logs
 
Environment Variable Reference
The supported environment variables are:
Environment Variable    Description    Default Value
SYNLITE_SNORT_DICT_PATH    The path where the dictionary files are located    /etc/logstash/synlite_snort/dictionaries
SYNLITE_SNORT_TEMPLATE_PATH    The path to where index templates are located    /etc/logstash/synlite_snort/templates
SYNLITE_SNORT_GEOIP_DB_PATH    The path where the GeoIP DBs are located    /etc/logstash/synlite_snort/geoipdbs
SYNLITE_SNORT_GEOIP_CACHE_SIZE    The size of the GeoIP query cache    8192
SYNLITE_SNORT_GEOIP_LOOKUP    Enable/Disable GeoIP lookups    true
SYNLITE_SNORT_ASN_LOOKUP    Enable/Disable ASN lookups    true
SYNLITE_SNORT_CLEANUP_SIGS    Enable this option to remove unneeded text from alert signatures.    false
SYNLITE_SNORT_RESOLVE_IP2HOST    Enable/Disable DNS requests    false
SYNLITE_SNORT_NAMESERVER    The DNS server to which the dns filter should send requests    127.0.0.1
SYNLITE_SNORT_DNS_HIT_CACHE_SIZE    The cache size for successful DNS queries    25000
SYNLITE_SNORT_DNS_HIT_CACHE_TTL    The time in seconds successful DNS queries are cached    900
SYNLITE_SNORT_DNS_FAILED_CACHE_SIZE    The cache size for failed DNS queries    75000
SYNLITE_SNORT_DNS_FAILED_CACHE_TTL    The time in seconds failed DNS queries are cached    3600
SYNLITE_SNORT_ES_HOST    The Elasticsearch host to which the output will send data    127.0.0.1:9200
SYNLITE_SNORT_ES_USER    The username for the connection to Elasticsearch    elastic
SYNLITE_SNORT_ES_PASSWD    The password for the connection to Elasticsearch    changeme
SYNLITE_SNORT_BEATS_HOST    The IP address on which to listen for Filebeat messages    0.0.0.0
SYNLITE_SNORT_BEATS_PORT    The TCP port on which to listen for Filebeat messages    5044
Recommended Setting for timepicker:quickRanges
I recommend configuring timepicker:quickRanges with the setting below. The result will look like this:
 
[
  {
    "from": "now/d",
    "to": "now/d",
    "display": "Today",
    "section": 0
  },
  {
    "from": "now/w",
    "to": "now/w",
    "display": "This week",
    "section": 0
  },
  {
    "from": "now/M",
    "to": "now/M",
    "display": "This month",
    "section": 0
  },
  {
    "from": "now/d",
    "to": "now",
    "display": "Today so far",
    "section": 0
  },
  {
    "from": "now/w",
    "to": "now",
    "display": "Week to date",
    "section": 0
  },
  {
    "from": "now/M",
    "to": "now",
    "display": "Month to date",
    "section": 0
  },
  {
    "from": "now-15m",
    "to": "now",
    "display": "Last 15 minutes",
    "section": 1
  },
  {
    "from": "now-30m",
    "to": "now",
    "display": "Last 30 minutes",
    "section": 1
  },
  {
    "from": "now-1h",
    "to": "now",
    "display": "Last 1 hour",
    "section": 1
  },
  {
    "from": "now-2h",
    "to": "now",
    "display": "Last 2 hours",
    "section": 1
  },
  {
    "from": "now-4h",
    "to": "now",
    "display": "Last 4 hours",
    "section": 2
  },
  {
    "from": "now-12h",
    "to": "now",
    "display": "Last 12 hours",
    "section": 2
  },
  {
    "from": "now-24h",
    "to": "now",
    "display": "Last 24 hours",
    "section": 2
  },
  {
    "from": "now-48h",
    "to": "now",
    "display": "Last 48 hours",
    "section": 2
  },
  {
    "from": "now-7d",
    "to": "now",
    "display": "Last 7 days",
    "section": 3
  },
  {
    "from": "now-30d",
    "to": "now",
    "display": "Last 30 days",
    "section": 3
  },
  {
    "from": "now-60d",
    "to": "now",
    "display": "Last 60 days",
    "section": 3
  },
  {
    "from": "now-90d",
    "to": "now",
    "display": "Last 90 days",
    "section": 3
  }
]
Attribution
This product includes GeoLite data created by MaxMind, available from http://www.maxmind.com.
