Add the following settings to server.properties:

port = 9092
advertised.host.name = localhost

Reference: Stack Overflow — "leader not available kafka in console producer"
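As a sanity check, the two settings above can be verified programmatically before restarting the broker. This is a minimal sketch (`parse_properties` and `check_leader_config` are hypothetical helpers written for this post, not Kafka APIs):

```python
def parse_properties(text):
    """Parse simple 'key = value' lines from a server.properties file."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        # skip blanks and comments; keep only key = value lines
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props


def check_leader_config(props):
    """Warn about settings that commonly cause LEADER_NOT_AVAILABLE."""
    warnings = []
    if not (props.get("advertised.host.name")
            or props.get("advertised.listeners")
            or props.get("listeners")):
        warnings.append("no advertised.host.name / listeners set: "
                        "clients cannot locate the partition leader")
    return warnings


conf = "port = 9092\nadvertised.host.name = localhost\n"
print(check_leader_config(parse_properties(conf)))  # → []
```

An empty warning list means clients will at least receive a concrete address for the leader instead of null.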

Log analysis

INFO KafkaConfig values:
    advertised.host.name = null     // used when selecting the partition leader; null on a single node causes the error in the title
    advertised.listeners = null
    advertised.port = null
    listeners = null                // listening endpoint
    host.name =
    port = 9092
    broker.id = 0
    num.partitions = 1              // number of partitions
    ssl.client.auth = none          // whether to authenticate clients
    zookeeper.connect = localhost:2181
    log.dirs = /tmp/kafka-logs
    auto.create.topics.enable = true
    default.replication.factor = 1
    inter.broker.protocol.version = 0.10.0-IV1
    (the remaining entries are the Kafka 0.10.0 defaults and are omitted here)
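The annotated lines above show that advertised.host.name and listeners are null, so the metadata response points clients at an address they cannot resolve. After applying the fix, a quick TCP reachability check of the advertised address (an illustrative helper, not part of Kafka) confirms whether the broker is actually connectable:

```python
import socket


def broker_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False


# With advertised.host.name = localhost and port = 9092, clients will dial:
print(broker_reachable("localhost", 9092))
```

If this prints False while the broker is running, clients are being advertised an address the broker is not actually listening on.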

