一、Setting up a Zookeeper cluster on Windows

1. Kafka version: 2.8.1 (Kafka 3.0 no longer supports JDK 8 and no longer requires Zookeeper).

2. Zookeeper version: 3.8.0.

3. Configuration files and environment for the three Zookeeper nodes.

The three client ports are 2181, 2182, and 2183.

Create a myid file (plain text, no file extension) in each node's data directory, for example:

D:\BigData\apache-zookeeper-3.8.0-bin\data-file\data1

The three files contain just the digits 1, 2, and 3 respectively.
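A minimal sketch of creating the data directories and myid files from a Windows command prompt, assuming the data-file layout above (the parentheses keep a stray trailing space out of the file, since myid must contain only the number):

REM Create one data directory per node
mkdir D:\BigData\apache-zookeeper-3.8.0-bin\data-file\data1
mkdir D:\BigData\apache-zookeeper-3.8.0-bin\data-file\data2
mkdir D:\BigData\apache-zookeeper-3.8.0-bin\data-file\data3

REM Write the node id into each myid file (no extension)
(echo 1)> D:\BigData\apache-zookeeper-3.8.0-bin\data-file\data1\myid
(echo 2)> D:\BigData\apache-zookeeper-3.8.0-bin\data-file\data2\myid
(echo 3)> D:\BigData\apache-zookeeper-3.8.0-bin\data-file\data3\myid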

  ①、zoo-1.cfg

# Data directory
dataDir=D:\\BigData\\apache-zookeeper-3.8.0-bin\\data-file\\data1
# Log directory
dataLogDir=D:\\BigData\\apache-zookeeper-3.8.0-bin\\data-file\\log1
# the port at which the clients will connect
clientPort=2181
# Cluster server list; identical in all three config files
server.1=192.168.1.6:2881:3881
server.2=192.168.1.6:2882:3882
server.3=192.168.1.6:2883:3883
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpHost=0.0.0.0
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
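For reference, each server.N line has the form server.N=host:quorumPort:electionPort: the first port (2881–2883 here) is used by followers to connect to the leader, and the second (3881–3883) is used for leader election. Because all three nodes run on the same host, each node needs its own pair of ports.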

 ②、zoo-2.cfg

# Data directory
dataDir=D:\\BigData\\apache-zookeeper-3.8.0-bin\\data-file\\data2
# Log directory
dataLogDir=D:\\BigData\\apache-zookeeper-3.8.0-bin\\data-file\\log2
# the port at which the clients will connect
clientPort=2182
# Cluster server list; identical in all three config files
server.1=192.168.1.6:2881:3881
server.2=192.168.1.6:2882:3882
server.3=192.168.1.6:2883:3883
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpHost=0.0.0.0
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true

③、zoo-3.cfg

# Data directory
dataDir=D:\\BigData\\apache-zookeeper-3.8.0-bin\\data-file\\data3
# Log directory
dataLogDir=D:\\BigData\\apache-zookeeper-3.8.0-bin\\data-file\\log3
# the port at which the clients will connect
clientPort=2183
# Cluster server list; identical in all three config files
server.1=192.168.1.6:2881:3881
server.2=192.168.1.6:2882:3882
server.3=192.168.1.6:2883:3883
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpHost=0.0.0.0
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true

④. The corresponding data and log directories.
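The expected layout, sketched from the dataDir/dataLogDir values configured above:

D:\BigData\apache-zookeeper-3.8.0-bin\data-file\
    data1\myid   (contains 1)
    data2\myid   (contains 2)
    data3\myid   (contains 3)
    log1\
    log2\
    log3\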

⑤. Create three startup scripts.

zkServer1.cmd, zkServer2.cmd, and zkServer3.cmd are configured as follows; each one points ZOOCFG at its own zoo-N.cfg (the differing lines are shown after the script):

@echo off
REM Licensed to the Apache Software Foundation (ASF) under one or more
REM contributor license agreements.  See the NOTICE file distributed with
REM this work for additional information regarding copyright ownership.
REM The ASF licenses this file to You under the Apache License, Version 2.0
REM (the "License"); you may not use this file except in compliance with
REM the License.  You may obtain a copy of the License at
REM
REM     http://www.apache.org/licenses/LICENSE-2.0
REM
REM Unless required by applicable law or agreed to in writing, software
REM distributed under the License is distributed on an "AS IS" BASIS,
REM WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
REM See the License for the specific language governing permissions and
REM limitations under the License.

setlocal
call "%~dp0zkEnv.cmd"

set ZOOMAIN=org.apache.zookeeper.server.quorum.QuorumPeerMain
set ZOOCFG=D:\\BigData\\apache-zookeeper-3.8.0-bin\\conf\\zoo-1.cfg
set ZOO_LOG_FILE=zookeeper-%USERNAME%-server-%COMPUTERNAME%.log

echo on
call %JAVA% "-Dzookeeper.log.dir=%ZOO_LOG_DIR%" "-Dzookeeper.log.file=%ZOO_LOG_FILE%" "-XX:+HeapDumpOnOutOfMemoryError" "-XX:OnOutOfMemoryError=cmd /c taskkill /pid %%%%p /t /f" -cp "%CLASSPATH%" %ZOOMAIN% "%ZOOCFG%" %*

endlocal
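zkServer2.cmd and zkServer3.cmd are identical except for the ZOOCFG line, which points at the matching config file:

REM in zkServer2.cmd
set ZOOCFG=D:\\BigData\\apache-zookeeper-3.8.0-bin\\conf\\zoo-2.cfg

REM in zkServer3.cmd
set ZOOCFG=D:\\BigData\\apache-zookeeper-3.8.0-bin\\conf\\zoo-3.cfg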

⑥. Start the Zookeeper nodes.

Run the three startup .cmd scripts one after another.

First node.

Second node.

Third node.
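To confirm that each node is reachable, you can, for example, connect with the CLI client that ships with Zookeeper (run from the bin directory):

zkCli.cmd -server 192.168.1.6:2181
zkCli.cmd -server 192.168.1.6:2182
zkCli.cmd -server 192.168.1.6:2183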

二、Setting up a Kafka cluster on Windows

1. Directory layout: copy the Kafka installation into one directory per broker, for example:

D:\BigData\Kafka-Cluster\kafka-1
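One way to produce the three broker directories is to copy an unpacked Kafka 2.8.1 distribution three times; the source path below is just a placeholder for wherever the distribution was extracted:

xcopy /E /I D:\BigData\kafka_2.13-2.8.1 D:\BigData\Kafka-Cluster\kafka-1
xcopy /E /I D:\BigData\kafka_2.13-2.8.1 D:\BigData\Kafka-Cluster\kafka-2
xcopy /E /I D:\BigData\kafka_2.13-2.8.1 D:\BigData\Kafka-Cluster\kafka-3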

2. Configuration files.

First node: server.properties

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
port=9092
host.name=192.168.1.6

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=D:\BigData\Kafka-Cluster\kafka-1\logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=2

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
#zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
# The host:port addresses of the three Zookeeper nodes configured above
zookeeper.connect=192.168.1.6:2181,192.168.1.6:2182,192.168.1.6:2183

Second node: server.properties

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1
port=9093
host.name=192.168.1.6

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=D:\BigData\Kafka-Cluster\kafka-2\logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=2

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
#zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
# The host:port addresses of the three Zookeeper nodes configured above
zookeeper.connect=192.168.1.6:2181,192.168.1.6:2182,192.168.1.6:2183

Third node: server.properties

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2
port=9094
host.name=192.168.1.6

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=D:\BigData\Kafka-Cluster\kafka-3\logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=2

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
#zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
zookeeper.connect=192.168.1.6:2181,192.168.1.6:2182,192.168.1.6:2183
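Taken together, the three server.properties files differ only in the broker id, the listening port, and the log directory:

kafka-1: broker.id=0  port=9092  log.dirs=D:\BigData\Kafka-Cluster\kafka-1\logs
kafka-2: broker.id=1  port=9093  log.dirs=D:\BigData\Kafka-Cluster\kafka-2\logs
kafka-3: broker.id=2  port=9094  log.dirs=D:\BigData\Kafka-Cluster\kafka-3\logs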

3. Startup commands

Note: kafka-server-start.bat lives in each node's bin\windows directory; watch the paths, spaces, and exact syntax carefully.

kafka-server-start.bat D:\BigData\Kafka-Cluster\kafka-1\config\server.properties

First node:

Second node:

kafka-server-start.bat D:\BigData\Kafka-Cluster\kafka-2\config\server.properties

Third node:

kafka-server-start.bat D:\BigData\Kafka-Cluster\kafka-3\config\server.properties

Once the brokers have joined the cluster, all three report the same Cluster ID in their startup logs, e.g.:

INFO Cluster ID = -7jJzueoRVav3spVetQ4kw (kafka.server.KafkaServer)

4. Create a topic

kafka-topics.bat --create --zookeeper 192.168.1.6:2181 --replication-factor 2 --partitions 3 --topic cluster_test

This creates a topic with three partitions and two replicas.
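The --zookeeper flag is deprecated in Kafka 2.8; the same topic can also be created against the brokers directly (assuming the broker addresses configured above):

kafka-topics.bat --create --bootstrap-server 192.168.1.6:9092 --replication-factor 2 --partitions 3 --topic cluster_test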

5. Produce messages with the console producer

kafka-console-producer.bat --broker-list 192.168.1.6:9092,192.168.1.6:9093,192.168.1.6:9094 --topic cluster_test

6. Consume messages with the console consumer

kafka-console-consumer.bat --bootstrap-server 192.168.1.6:9092,192.168.1.6:9093,192.168.1.6:9094 --topic cluster_test
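By default the console consumer only shows new messages; add --from-beginning to replay everything already in the topic:

kafka-console-consumer.bat --bootstrap-server 192.168.1.6:9092,192.168.1.6:9093,192.168.1.6:9094 --topic cluster_test --from-beginning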

Problem encountered: the classic Kafka error "broker.id doesn't match stored broker.id".

Fix: give each node its own log directory (log.dirs). If all three brokers write to the same log directory, the broker.id stored there can match only one of them, so only the first broker starts successfully.
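When in doubt, you can check which broker.id a log directory has recorded; Kafka stores it in meta.properties inside each log.dirs path (example for the first node):

type D:\BigData\Kafka-Cluster\kafka-1\logs\meta.properties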

Describe the topic:

kafka-topics.bat --describe --zookeeper 192.168.1.6:2181 --topic cluster_test

List all topics:

kafka-topics.bat --zookeeper 192.168.1.6:2181 --list

Summary: this is a simple Zookeeper-based Kafka pseudo-cluster on Windows, intended for local testing only. Production environments typically run large Kafka clusters with 10+ or even 20+ brokers, complete cloud monitoring and alerting, and full authentication. Kafka 3.0 is gradually removing the Zookeeper dependency, which makes setup even simpler since Kafka manages its metadata itself.
