Pitfalls of Connecting a Spring Boot Project to Kafka on a Remote Server, with a Complete Example
Versions

| Component | Version |
| --- | --- |
| Spring Boot | 2.1.5.RELEASE |
| Kafka | 2.2 |
Pitfalls encountered
- If you use the latest Spring Boot, use a correspondingly recent Kafka version!
- After I started ZooKeeper on the cloud server and then Kafka, the broker log showed no errors; only the EndPoint entry in the startup log looked a little odd. Yet when the Spring Boot project tried to connect, it kept printing a WARN-level message: "Connection to node -1 could not be established. Broker may not be available." — meaning the client could not reach Kafka.
- The Spring Boot console also threw an exception complaining that the IP address was illegal.

Running `telnet 47.XX.XX.XX 9092` against the cloud server got no response, even though the port had been opened in the server's security group and `netstat` showed 9092 being listened on. What was going on?

It turned out to be a problem with the Kafka configuration file, which kept port 9092 from being listened on correctly; as for the IP address error, the fix is to bind the Kafka server's own IP address.

Note: the three settings flagged with `(!)` in the listing below are crucial — they solved all of my problems!

advertised.host.name must be set to the Kafka server's IP address! If it stays localhost and the application runs on a different machine than Kafka, the client will never connect.
Modify the Kafka server-side configuration file (config/server.properties) as follows:
```properties
############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# (!) The port the broker listens on
listeners=PLAINTEXT://:9092

# (!) The address clients use to connect -- this must be the server's public IP address!
advertised.host.name=47.XX.XX.XX
# (!)
host.name=localhost

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/root/mysoftware/kafka_2.12-2.2.0/logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
```
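For what it's worth, `host.name` and `advertised.host.name` are deprecated in Kafka 2.x in favor of `listeners` / `advertised.listeners`. A minimal sketch of the equivalent modern configuration, assuming the same placeholder public IP 47.XX.XX.XX as above:

```properties
# Bind the listening socket on all interfaces
listeners=PLAINTEXT://0.0.0.0:9092
# The address returned to clients in broker metadata;
# it must be reachable from the machine the application runs on
advertised.listeners=PLAINTEXT://47.XX.XX.XX:9092
```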
Code
pom.xml
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.5.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>xy.study</groupId>
    <artifactId>kafka-demo</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>kafka-demo</name>
    <description>Kafka demo project for Spring Boot</description>

    <properties>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.47</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
```
application.properties
```properties
#============== kafka ===================
# Kafka broker address(es); several can be listed, comma-separated
spring.kafka.bootstrap-servers=47.XX.XX.XX:9092

#=============== producer =======================
spring.kafka.producer.retries=0
# Batch size (in bytes) per send
spring.kafka.producer.batch-size=16384
spring.kafka.producer.buffer-memory=33554432

# Serializers for message keys and values
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer

#=============== consumer =======================
# Default consumer group id
spring.kafka.consumer.group-id=consumer-group-test
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=true
# Auto-commit interval in milliseconds
spring.kafka.consumer.auto-commit-interval=100

# Deserializers for message keys and values
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
```
Producer and consumer
```java
@Component
@Slf4j
public class KafkaProducer {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendADotaHero() {
        DotaHero dotaHero = new DotaHero("虚空假面", "敏捷", "男");
        ListenableFuture<SendResult<String, String>> future =
                kafkaTemplate.send(KafkaTopic.A_DOTA_HERO, JSONObject.toJSONString(dotaHero));

        future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onFailure(Throwable throwable) {
                log.error("kafka sendMessage error, throwable = {}, topic = {}, data = {}",
                        throwable, KafkaTopic.A_DOTA_HERO, dotaHero);
            }

            @Override
            public void onSuccess(SendResult<String, String> sendResult) {
                log.info("kafka sendMessage success topic = {}, data = {}",
                        KafkaTopic.A_DOTA_HERO, dotaHero);
            }
        });
        log.info("kafka sendMessage end");
    }
}
```
```java
@Slf4j
@Component
public class KafkaConsumer {

    // The value arrives as a JSON String because StringDeserializer is configured,
    // so the record must be typed <String, String>, not <String, DotaHero>.
    @KafkaListener(topics = KafkaTopic.A_DOTA_HERO, groupId = "${spring.kafka.consumer.group-id}")
    private void kafkaConsumer(ConsumerRecord<String, String> consumerRecord) {
        log.info("kafkaConsumer: topic = {}, msg = {}", consumerRecord.topic(), consumerRecord.value());
    }
}
```
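If the consumer needs the DotaHero object rather than the raw JSON string, one option is to parse the payload back with fastjson, mirroring the producer's JSONObject.toJSONString call. A minimal sketch of an alternative listener (the class name DotaHeroJsonConsumer is made up for this example; use it instead of, not alongside, the listener above, since two consumers in the same group would compete for the topic's single partition):

```java
import com.alibaba.fastjson.JSON;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Slf4j
@Component
public class DotaHeroJsonConsumer {

    // Parses the JSON payload written by KafkaProducer back into a DotaHero.
    @KafkaListener(topics = KafkaTopic.A_DOTA_HERO, groupId = "${spring.kafka.consumer.group-id}")
    public void onMessage(ConsumerRecord<String, String> record) {
        DotaHero hero = JSON.parseObject(record.value(), DotaHero.class);
        log.info("kafkaConsumer: topic = {}, hero = {}", record.topic(), hero);
    }
}
```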
```java
@Data
@AllArgsConstructor
@NoArgsConstructor
public class DotaHero {

    private String name;
    private String kind;
    private String sex;

    /**
     * Returns a list of distinct sample heroes.
     */
    public static List<DotaHero> bulidDiffObjectList() {
        List<DotaHero> list = new ArrayList<>();
        list.add(new DotaHero("影魔", "敏捷", "男"));
        list.add(new DotaHero("小黑", "敏捷", "女"));
        list.add(new DotaHero("马尔斯", "力量", "男"));
        return list;
    }
}
```
```java
public class KafkaTopic {

    public static final String A_DOTA_HERO = "a_dota_hero";

    private KafkaTopic() {
    }
}
```
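The demo assumes the topic already exists (or that the broker auto-creates it on first use). To make the example self-sufficient, the topic can instead be declared as a bean; Spring Boot's auto-configured KafkaAdmin then creates it on startup if it is missing. A hedged sketch — the class and bean names are illustrative, not part of the original project:

```java
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class KafkaTopicConfig {

    // Declares the demo topic with 1 partition and replication factor 1,
    // matching the single-broker setup above.
    @Bean
    public NewTopic aDotaHeroTopic() {
        return new NewTopic(KafkaTopic.A_DOTA_HERO, 1, (short) 1);
    }
}
```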
Testing
Once the Spring Boot application is running, run the test to trigger the producer:
```java
@Slf4j
@RunWith(SpringRunner.class)
@SpringBootTest
public class KafkaDemoApplicationTests {

    @Autowired
    private KafkaProducer kafkaProducer;

    private Clock clock = Clock.systemDefaultZone();
    private long begin;
    private long end;

    @Before
    public void init() {
        begin = clock.millis();
    }

    @Test
    public void send() {
        kafkaProducer.sendADotaHero();
    }

    @After
    public void end() {
        end = clock.millis();
        log.info("Spend {} millis .", end - begin);
    }
}
```
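Because kafkaTemplate.send() is asynchronous, a short-lived test can finish before the broker acknowledges anything, so connection problems may only surface in the callback. When diagnosing connectivity it can help to block on the returned future. A minimal sketch, assuming a KafkaTemplate<String, String> is @Autowired into the test class (imports java.util.concurrent.TimeUnit and org.springframework.kafka.support.SendResult; the payload and 10-second timeout are arbitrary):

```java
@Test
public void sendAndWaitForAck() throws Exception {
    // Block until the broker acknowledges the record, so the test fails fast
    // with a TimeoutException when the connection is broken.
    SendResult<String, String> result = kafkaTemplate
            .send(KafkaTopic.A_DOTA_HERO, "{\"name\":\"connectivity-check\"}")
            .get(10, TimeUnit.SECONDS);
    log.info("acked: partition = {}, offset = {}",
            result.getRecordMetadata().partition(),
            result.getRecordMetadata().offset());
}
```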