Download on Linux: wget http://mirrors.hust.edu.cn/apache/kafka/2.0.0/kafka_2.12-2.0.0.tgz
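After downloading, unpack the archive and work from the distribution directory (a minimal sketch, assuming the standard tarball layout):

tar -xzf kafka_2.12-2.0.0.tgz
cd kafka_2.12-2.0.0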

When installing Kafka and deploying ZooKeeper on a cloud server, note the following points. Most of them go in the server.properties configuration file and are consolidated in the sketch after this list:

1. Open ports 2181 and 9092 in the cloud server's security group.

2. Set zookeeper.connect to the public IP.

3. The IP in listeners=PLAINTEXT:// must be the internal (private) IP:

listeners=PLAINTEXT://**.**.**.**:9092

4. The advertised addresses handed out to external clients must use the public IP:

advertised.listeners=PLAINTEXT://**.**.**.**:9092
advertised.host.name=**.**.**.**

5. host.name takes the internal IP:
host.name=**.**.**.**
port=9092
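Putting the five settings together, the relevant part of server.properties looks like this sketch, where INTERNAL_IP and PUBLIC_IP are placeholders for the private and public addresses:

# bind only to an address the host actually owns (internal IP)
listeners=PLAINTEXT://INTERNAL_IP:9092
host.name=INTERNAL_IP
port=9092

# what the broker advertises to producers and consumers (public IP)
advertised.listeners=PLAINTEXT://PUBLIC_IP:9092
advertised.host.name=PUBLIC_IP

# ZooKeeper connection string (public IP, per point 2)
zookeeper.connect=PUBLIC_IP:2181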

Add the following to ZooKeeper's zoo.cfg:

server.1=**.**.**.**:2888:3888
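For reference, a minimal single-node zoo.cfg might look like the sketch below; only the server.1 line comes from the notes above, and dataDir is a placeholder path. ZooKeeper also expects a file named myid inside dataDir containing the id 1 to match server.1.

tickTime=2000
initLimit=10
syncLimit=5
# placeholder; point this at your actual data directory
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=**.**.**.**:2888:3888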

Start ZooKeeper first, then start Kafka:

./bin/kafka-server-start.sh config/server.properties
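For completeness, the full start sequence might look like this, assuming a standalone ZooKeeper install (its path below is a placeholder) and using the broker's -daemon flag to run in the background:

# start standalone ZooKeeper first (it reads conf/zoo.cfg)
/path/to/zookeeper/bin/zkServer.sh start

# then start the Kafka broker as a background daemon
./bin/kafka-server-start.sh -daemon config/server.properties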

Related errors:

(1) kafka.common.KafkaException: Socket server failed to bind to 10.20.1.154:9092: Cannot assign requested address
In this situation, the VM's external IP (the exposed address) and its real IP (the one ifconfig shows) may be related only by a mapping: when a user accesses the external IP, OpenStack forwards the traffic to the corresponding real IP. If the IP in listeners=PLAINTEXT://10.20.1.153:9092 in Kafka's server.properties is set to the external IP, the broker cannot start, because the socket cannot bind to an address the host does not own, and the error above is thrown.
Solution: change the IP in listeners=PLAINTEXT://10.20.1.153:9092 to the real IP (the one ifconfig shows). Everything else can keep using the external IP; clients never need the real IP.
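A quick way to see which addresses the host can actually bind, and to confirm the fix took effect after a restart (assumes the classic net-tools ifconfig/netstat are installed):

# list the IPs actually assigned to this host; only these are bindable
ifconfig | grep 'inet '

# after restarting the broker, confirm it is listening on the real IP
netstat -tlnp | grep 9092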

(2) Native memory allocation (mmap) failed to map 1073741824 bytes for committing reserved memory

Cause:

Kafka starts the JVM with -Xmx1G -Xms1G by default; if the machine has less memory than that, the startup configuration must be reduced.

Solution:

Open bin/kafka-server-start.sh and change
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
to values that fit the server, e.g. export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M"
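Alternatively, the stock script applies its default only when KAFKA_HEAP_OPTS is unset, so the heap can also be overridden at launch without editing the file (a sketch; check free memory first):

# see how much memory is actually available before picking heap sizes
free -m

# override the heap for this run only
KAFKA_HEAP_OPTS="-Xmx256M -Xms128M" ./bin/kafka-server-start.sh config/server.properties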
(3) WARN Connection request from old client /58.247.201.56:31365; will be dropped if server is in r-o mode (org.apache.zookeeper.server.ZooKeeperServer)

If this warning appears and Kafka cannot be reached from the public network, the cause is that the internal and external IPs in server.properties are misconfigured.
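Once the IPs are right, external access can be verified by producing and consuming against the public address from a machine outside the cloud network. The sketch below assumes a topic named test is created first; the flags match the Kafka 2.0 command-line tools:

# create a test topic (run on the broker host)
./bin/kafka-topics.sh --create --zookeeper 47.124.245.85:2181 --replication-factor 1 --partitions 1 --topic test

# from an external machine, produce against the advertised (public) address
./bin/kafka-console-producer.sh --broker-list 47.124.245.85:9092 --topic test

# and read the messages back
./bin/kafka-console-consumer.sh --bootstrap-server 47.124.245.85:9092 --topic test --from-beginning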

A correct server.properties configuration file looks like this:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
listeners=PLAINTEXT://172.28.54.75:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
advertised.listeners=PLAINTEXT://47.124.245.85:9092
advertised.host.name=47.124.245.85
host.name=172.28.54.75
port=9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/root/flink/logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=47.124.245.85:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
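As a final check, the broker should have registered itself in ZooKeeper. One way to verify, assuming a standalone ZooKeeper install whose path below is a placeholder (a healthy single-broker setup shows [0]):

# list registered broker ids in ZooKeeper
/path/to/zookeeper/bin/zkCli.sh -server 47.124.245.85:2181 ls /brokers/ids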
