graylog+kafka+zookeeper (standalone test and source code): kafka+zookeeper component deployment (Part 2)

  • Problem Background
    • graylog+kafka+zookeeper (standalone test and source code): graylog component deployment and troubleshooting (Part 1)
    • graylog+kafka+zookeeper (standalone test and source code): kafka+zookeeper component deployment (Part 2)
    • graylog+kafka+zookeeper (standalone test and source code): graylog test cases and source code (Part 3)
    • graylog+kafka+zookeeper (standalone test and source code): graylog collecting messages stored in kafka (pub/sub created via scripts) (Part 4)
    • graylog+kafka+zookeeper (standalone test and source code): configuring URL alerts in graylog (Part 5)
    • graylog+kafka+zookeeper (standalone test and source code): graylog+filebeat+sidecars collecting log files (Part 6)
    • graylog+kafka+zookeeper (standalone test and source code): querying microservice logs (Part 7)
    • graylog+kafka+zookeeper (standalone test and source code): creating and using Dashboards (Part 8)
    • graylog+kafka+zookeeper (standalone test and source code): creating and using indices and streams, with scheduled log deletion (Part 9)
  • Installation Steps
  • Problem Summary
  • Lyric: 远距离诉说

Problem Background

The previous part installed and deployed graylog and analyzed the problems encountered; this part installs a standalone kafka+zookeeper for testing (kafka+zookeeper download link).

graylog+kafka+zookeeper (standalone test and source code): graylog component deployment and troubleshooting (Part 1)

graylog+kafka+zookeeper (standalone test and source code): kafka+zookeeper component deployment (Part 2)

graylog+kafka+zookeeper (standalone test and source code): graylog test cases and source code (Part 3)

graylog+kafka+zookeeper (standalone test and source code): graylog collecting messages stored in kafka (pub/sub created via scripts) (Part 4)

graylog+kafka+zookeeper (standalone test and source code): configuring URL alerts in graylog (Part 5)

graylog+kafka+zookeeper (standalone test and source code): graylog+filebeat+sidecars collecting log files (Part 6)

graylog+kafka+zookeeper (standalone test and source code): querying microservice logs (Part 7)

graylog+kafka+zookeeper (standalone test and source code): creating and using Dashboards (Part 8)

graylog+kafka+zookeeper (standalone test and source code): creating and using indices and streams, with scheduled log deletion (Part 9)

Installation Steps

1. After downloading the kafka and zookeeper archives, transfer them to the server with Xshell and extract them. Copying to the server requires root privileges by default, otherwise the transfer fails.

tar -xvf kafka+zookeeper.tar

2. Deploy zookeeper in standalone mode; extract the zookeeper archive:

tar -zxvf zookeeper-3.4.14.tar.gz

3. Enter the …zookeeper/conf folder.


4. Rename zoo_sample.cfg to zoo.cfg:

mv zoo_sample.cfg  zoo.cfg
5. Edit the zoo.cfg configuration file:
vim zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/modules/zookeeper/zkdata
dataLogDir=/opt/modules/zookeeper/zkdatalog
# the port at which the clients will connect
clientPort=2181
server.1=10.10.195.199:2888:3888
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
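The myid file in the next step lives under dataDir, so it is simplest to create both directories referenced in zoo.cfg up front. A minimal sketch, assuming the /opt/modules/zookeeper layout used above:

# Create the data and transaction-log directories referenced in zoo.cfg
mkdir -p /opt/modules/zookeeper/zkdata
mkdir -p /opt/modules/zookeeper/zkdatalog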


6. Create a myid file under /opt/modules/zookeeper/zkdata:

vim /opt/modules/zookeeper/zkdata/myid

7. Write the number 1 into it (this matches the server.1 entry in zoo.cfg).
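If you prefer not to open vim for a one-character file, the same result can be had from the shell (a sketch, using the dataDir path above):

# Write this node's id (1) into myid and confirm it
echo 1 > /opt/modules/zookeeper/zkdata/myid
cat /opt/modules/zookeeper/zkdata/myid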


8. Go to /opt/modules/zookeeper/bin and start zookeeper:

./zkServer.sh start


9. Check the zookeeper status; it should report standalone mode:

./zkServer.sh status

10. Use jps to view the Java processes; the QuorumPeerMain process belongs to zookeeper:

jps
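jps only shows that a Java process exists; to ask zookeeper itself, the four-letter-word commands on the client port are handy (a sketch, assuming nc is installed and the 3.4.x default command whitelist is in effect):

# "ruok" returns "imok" when the server is healthy
echo ruok | nc 10.10.195.199 2181
# "stat" prints version, client connections, and Mode: standalone
echo stat | nc 10.10.195.199 2181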


11. Deploy kafka in standalone mode; extract the kafka archive:

tar -zxvf kafka_2.11-1.0.0.tgz


12. Enter the config folder, /opt/modules/kafka/config, and edit server.properties:

vim server.properties

For a standalone kafka deployment these are the main settings that were changed (the complete file follows for reference):

broker.id=0
delete.topic.enable=true
port=9092
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://10.10.195.199:9092
host.name=10.10.195.199
log.dirs=/opt/modules/kafka/kafkalogs
zookeeper.connect=10.10.195.199:2181
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
delete.topic.enable=true
port=9092

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://10.10.195.199:9092
host.name=10.10.195.199

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/opt/modules/kafka/kafkalogs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=10.10.195.199:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
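With a file this long it is easy to lose track of what is actually in effect. A quick way to review only the live settings, stripping comments and blank lines (plain grep, nothing kafka-specific):

grep -v '^#' server.properties | grep -v '^$'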

13. Enter /opt/modules/kafka/bin and start kafka as a daemon; jps should now also show a Kafka process:

./kafka-server-start.sh -daemon ../config/server.properties
jps
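If jps shows a Kafka process, the broker should also have registered itself in zookeeper. One way to confirm (a sketch, assuming the zookeeper install path used earlier):

# Open a zookeeper shell against the same ensemble
/opt/modules/zookeeper/bin/zkCli.sh -server 10.10.195.199:2181
# Then, inside the zkCli prompt, list the registered broker ids; [0] matches broker.id=0
ls /brokers/ids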


14. Create a kafka topic named test:

./kafka-topics.sh --create --zookeeper 10.10.195.199:2181 --replication-factor 1 --partitions 1 --topic test

List the existing topics:

./kafka-topics.sh --list --zookeeper 10.10.195.199:2181
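Beyond --list, --describe shows the leader, replica, and ISR assignment for each partition of the new topic (same script and zookeeper address):

./kafka-topics.sh --describe --zookeeper 10.10.195.199:2181 --topic test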


15. Create a console producer:

./kafka-console-producer.sh --broker-list 10.10.195.199:9092 --topic test

16. Open another Xshell window and create a console consumer:

./kafka-console-consumer.sh --zookeeper 10.10.195.199:2181 --from-beginning --topic test


17. Type messages in the producer window; the consumer window should print them.
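The same round trip can also be scripted instead of typed interactively, which is convenient for later automation (a sketch, using the broker and topic above):

# Pipe a single message into the producer, then read it back and exit after one message
echo "hello kafka" | ./kafka-console-producer.sh --broker-list 10.10.195.199:9092 --topic test
./kafka-console-consumer.sh --zookeeper 10.10.195.199:2181 --from-beginning --topic test --max-messages 1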


18. Inspect the on-disk log segment for the topic with the DumpLogSegments tool:

./kafka-run-class.sh kafka.tools.DumpLogSegments --files /opt/modules/kafka/kafkalogs/test-0/00000000000000000000.log  --print-data-log

19. Delete the topic:

./kafka-topics.sh --zookeeper 10.10.195.199:2181 --delete --topic test
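Since delete.topic.enable=true was set in server.properties, the topic is really removed rather than only marked for deletion; listing the topics again should come back empty:

./kafka-topics.sh --list --zookeeper 10.10.195.199:2181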

Problem Summary

1. Take the installation step by step and there are no major problems. If a service will not come up, always check whether the firewall has been disabled and whether the port is already occupied:

netstat -nltp | grep 2181
systemctl stop firewalld
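The same two checks are worth repeating for kafka's port, and it helps to confirm firewalld stays off after a reboot (standard CentOS 7 commands):

netstat -nltp | grep 9092
systemctl status firewalld
# Keep the firewall from coming back on boot (test environment only)
systemctl disable firewalld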

This is my tenth article as a programmer. I write down one line of lyrics each time, to see how many songs' worth of time a life adds up to, wahahaha …

Lyric: 远距离诉说
