1. A few commands

Console consumer, reading the topic from the beginning (this is the old ZooKeeper-based consumer, hence --zookeeper):

./kafka-console-consumer.sh --zookeeper 192.168.86.133:2181,192.168.86.132:2181,192.168.86.134:2181 --topic shuaige --from-beginning

Console producer, pointed at the broker list:

./kafka-console-producer.sh --broker-list 192.168.86.133:9092,192.168.86.132:9092,192.168.86.134:9092 --topic shuaige

Topic creation, with one partition and one replica:

./kafka-topics.sh --create --zookeeper 192.168.86.133:2181,192.168.86.132:2181,192.168.86.134:2181 --replication-factor 1 --partitions 1 --topic shuaigeck
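
Two more options of the same script are handy for checking the result — --list and --describe are standard kafka-topics.sh options:

./kafka-topics.sh --list --zookeeper 192.168.86.133:2181,192.168.86.132:2181,192.168.86.134:2181

./kafka-topics.sh --describe --zookeeper 192.168.86.133:2181,192.168.86.132:2181,192.168.86.134:2181 --topic shuaigeck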

2. A few errors

A          [2017-06-23 12:31:29,338] WARN Bootstrap broker 192.168.86.133:9092 disconnected (org.apache.kafka.clients.NetworkClient)

Cause unknown at the time. (In practice this warning usually means the TCP connection to the broker was made and then dropped — a broker that is still starting up, or an advertised.listeners/host.name value that does not match the address the client dialed, are the usual suspects.)
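
A quick sanity check is to confirm the broker is actually listening and reachable from the client machine — a minimal sketch using the hosts/ports above:

# on the broker host: is anything bound to 9092?
netstat -tlnp | grep 9092

# from the client host: can we reach it at all?
telnet 192.168.86.133 9092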

B        kafka.common.KafkaException: Failed to acquire lock on file .lock in /opt/kafka/kafka-logs. A Kafka instance in another process or thread is using this directory.

A previous Kafka instance was still holding the .lock file under /opt/kafka/kafka-logs (the message says exactly this: another process had claimed the log directory). Kill the stale broker before starting a new one.
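
Finding and killing the leftover broker usually clears the lock — a sketch:

# look for a broker process still running (the broker's main class is kafka.Kafka)
ps aux | grep -i '[k]afka.Kafka'

# kill the stale instance using the PID from the output above, then restart
kill <pid>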

C        ERROR kafka.admin.AdminOperationException: replication factor: 1 larger than available brokers: 0

The topic could not be created because no broker was running (available brokers: 0). Start the broker on every node first — the trailing & is required so it keeps running in the background:

/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties &
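
You can confirm which broker ids ZooKeeper currently sees — a sketch using the zookeeper-shell.sh that ships with Kafka (on some ZooKeeper versions the command must be typed interactively rather than passed as arguments):

/opt/kafka/bin/zookeeper-shell.sh 192.168.86.133:2181 ls /brokers/ids

kafka-server-start.sh also accepts a -daemon flag, which is a cleaner way to background the broker than a bare &.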

3. A few results

[root@slave2 kafka-logs]# ll
total 12
-rw-r--r--. 1 root root  0 Jun 23 12:10 cleaner-offset-checkpoint
-rw-r--r--. 1 root root 54 Jun 23 12:10 meta.properties
-rw-r--r--. 1 root root 29 Jun 23 12:49 recovery-point-offset-checkpoint
-rw-r--r--. 1 root root 29 Jun 23 12:50 replication-offset-checkpoint
drwxr-xr-x. 2 root root 72 Jun 23 12:48 shuaigeck-0
[root@slave2 shuaigeck-0]# ll
total 4
-rw-r--r--. 1 root root 10485760 Jun 23 12:48 00000000000000000000.index
-rw-r--r--. 1 root root       92 Jun 23 12:50 00000000000000000000.log

(The .index file reads as 10485760 bytes only because segment index files are preallocated — log.index.size.max.bytes defaults to 10 MB — not because it is full.)

Inspect the two files with the DumpLogSegments tool.

The log:

[root@slave2 shuaigeck-0]# ../../bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files 00000000000000000000.log --print-data-log
Dumping 00000000000000000000.log
Starting offset: 0
offset: 0 position: 0 isvalid: true payloadsize: 7 magic: 1 compresscodec: NoCompressionCodec crc: 3388553551 payload: walasla
offset: 1 position: 41 isvalid: true payloadsize: 17 magic: 1 compresscodec: NoCompressionCodec crc: 879170540 payload: ssssssssssssssssss
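
The position values line up with the message-format v1 on-disk layout — a rough check, assuming a null key: each record carries an 8-byte offset, a 4-byte size field, and a 22-byte v1 message header (CRC 4 + magic 1 + attributes 1 + timestamp 8 + key length 4 + value length 4), i.e. 34 bytes of overhead per record:

34 + 7  (payload "walasla")  = 41  ->  offset 1 starts at position 41
34 + 17 (second payload)     = 51, and 41 + 51 = 92 = the segment size shown above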

The index (it is sparse — Kafka appends an entry only every index.interval.bytes, 4096 by default, so a 92-byte segment has essentially nothing in it yet):

[root@slave2 shuaigeck-0]# ../../bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files 00000000000000000000.index --print-data-log
Dumping 00000000000000000000.index
offset: 0 position: 0

4. A few configs

server.properties:

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
host.name=192.168.86.133

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = security_protocol://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://192.168.86.133:9092

# The number of threads handling network requests
num.network.threads=3

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/opt/kafka/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

message.max.bytes=5242880
default.replication.factor=2
replica.fetch.max.bytes=5242880

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=192.168.86.132:2181,192.168.86.133:2181,192.168.86.134:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
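
Only three lines should differ from broker to broker — broker.id, host.name, and advertised.listeners; the rest can be copied verbatim. A minimal sketch for the second node (the broker id and address here are assumptions, adjust per host):

# on 192.168.86.132: give this broker its own id and addresses
sed -i 's/^broker.id=.*/broker.id=1/' /opt/kafka/config/server.properties
sed -i 's/^host.name=.*/host.name=192.168.86.132/' /opt/kafka/config/server.properties
sed -i 's|^advertised.listeners=.*|advertised.listeners=PLAINTEXT://192.168.86.132:9092|' /opt/kafka/config/server.properties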

5. A question

Why does the topic never live on the machine where I ran the create command — the partitions always show up on other nodes?

(kafka-topics.sh only writes topic metadata into ZooKeeper; the controller then assigns partition replicas across the live brokers, roughly round-robin, with no regard for which host issued the command. With --partitions 1 --replication-factor 1 the single replica can land on any broker in the cluster.)
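
--describe (shown earlier) lists which broker ids hold each partition, and an id can be mapped back to a host through ZooKeeper — a sketch:

./kafka-topics.sh --describe --zookeeper 192.168.86.133:2181 --topic shuaigeck

# map broker id 0 to its host and port
/opt/kafka/bin/zookeeper-shell.sh 192.168.86.133:2181 get /brokers/ids/0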
