SASL_PLAINTEXT authentication: in essence, clients (consumers and producers) must present a username and password when connecting to the broker.
ACL authorization: per-user control over which operations are allowed, e.g. reading and writing a topic, reading a group, creating or deleting a topic; all of these are individually grantable permissions.
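
For reference, the full set of operations Kafka ACLs can control corresponds to the AclOperation enum in the kafka-clients library. A tiny sketch that just prints them (assumes only that kafka-clients is on the classpath):

import java.util.Arrays;
import org.apache.kafka.common.acl.AclOperation;

public class ListAclOperations {
    public static void main(String[] args) {
        // Prints the operations Kafka ACLs can control,
        // e.g. READ, WRITE, CREATE, DELETE, ALTER, DESCRIBE, ...
        System.out.println(Arrays.toString(AclOperation.values()));
    }
}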

server.properties configuration

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=37

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
##listeners=SASL_PLAINTEXT://100.100.184.145:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
inter.broker.listener.name=INTERNAL_LISTENER
listener.security.protocol.map=LOCAL_LISTENER:SASL_PLAINTEXT,INTERNAL_LISTENER:SASL_PLAINTEXT,EXTERNAL_LISTENER:SASL_PLAINTEXT
listeners=LOCAL_LISTENER://127.0.0.1:9092,INTERNAL_LISTENER://100.100.111.111:9093,EXTERNAL_LISTENER://100.100.111.111:17002
advertised.listeners=INTERNAL_LISTENER://100.100.111.111:9093,EXTERNAL_LISTENER://10.28.88.61:17002
# (See "Notes on a few of the settings above" after the file for what these listener settings do.)

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/home/whtemp/kafka/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=101.913.89.166:2128

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

##security.inter.broker.protocol=SASL_PLAINTEXT
##security.inter.broker.protocol=INTERNAL_LISTENER
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=false
# (See the notes after the file for what allow.everyone.if.no.acl.found controls.)

# admin is the super user
super.users=User:admin

##security.protocol=INTERNAL_LISTENER
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN

Notes on a few of the settings above

100.100.111.111 is the real IP address of the machine this Kafka broker is deployed on. As you can see, this broker exposes three ports (9092, 9093 and 17002) for accepting requests, and each port is given its own listener name and security protocol.

In advertised.listeners, EXTERNAL_LISTENER://10.28.88.61:17002 is the address that the EXTERNAL_LISTENER port registers in ZooKeeper as its metadata. When a client first connects to Kafka through 100.100.111.111:17002 and fetches metadata, it gets back 10.28.88.61:17002, and from then on it keeps talking to Kafka through that address from the metadata. This is the usual setup when an nginx reverse proxy sits in front of Kafka: 10.28.88.61:17002 is the nginx address that reverse-proxies to 100.100.111.111:17002. The client then only needs "bootstrap.servers"=10.28.88.61:17002.

With allow.everyone.if.no.acl.found=false, any client operation for which no ACL is configured is denied; a consumer or producer that authenticates via SASL can only access resources it has explicitly been granted. The super user is not restricted. If it is set to true, a SASL-authenticated user with no ACLs configured can still perform operations, but a user that does have ACLs configured and attempts an operation it was not granted will get an authorization exception.
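
To see what a broker actually hands back in its metadata (i.e. the address clients will use after the initial bootstrap), a small AdminClient check can help. This is a hypothetical sketch, reusing the proxy address and admin credentials from the examples later in this post:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

public class AdvertisedListenerCheck {
    public static void main(String[] args) throws Exception {
        Map<String, Object> configs = new HashMap<>();
        // Connect through the nginx-proxied address from the config above.
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "10.28.88.61:17002");
        configs.put("security.protocol", "SASL_PLAINTEXT");
        configs.put("sasl.mechanism", "PLAIN");
        configs.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"kafka\";");
        try (AdminClient adminClient = AdminClient.create(configs)) {
            // The hosts/ports printed here are exactly what advertised.listeners registered,
            // and are the addresses every client uses after the initial bootstrap.
            for (Node node : adminClient.describeCluster().nodes().get()) {
                System.out.println("broker " + node.id() + " advertises " + node.host() + ":" + node.port());
            }
        }
    }
}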

kafka_server_jaas.conf configuration (the broker JVM is pointed at this file via the java.security.auth.login.config system property, typically set in KAFKA_OPTS)

KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin" # used for inter-broker communication
password="kafka" # used for inter-broker communication
user_zsh="niubi" # user zsh, password niubi
user_dataflow="dataflow"
user_crawler="crawler"
user_taskcenter="taskcenter"
user_admin="kafka"; # user admin, password kafka
};

kafka_client_jaas.conf configuration for the super admin

KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="kafka";
};

It can also be a regular user:

KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="dataflow"
password="dataflow";
};
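
With a file-based client JAAS config like the ones above, the client JVM has to be told where the file lives. One common way is the java.security.auth.login.config system property; the alternative, used in the code below, is the per-client sasl.jaas.config property, which needs no separate file. A minimal sketch (the path is a placeholder):

public class JaasFileSetup {
    public static void main(String[] args) {
        // Must be set before the first Kafka client is created in this JVM.
        // /etc/kafka/kafka_client_jaas.conf is a placeholder path.
        System.setProperty("java.security.auth.login.config", "/etc/kafka/kafka_client_jaas.conf");
        // ... create KafkaConsumer / KafkaProducer here without setting sasl.jaas.config ...
    }
}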

On the Java side, the client configuration is simply:

// You can of course use the super admin credentials here instead
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
props.put("sasl.jaas.config",
        "org.apache.kafka.common.security.plain.PlainLoginModule required username='dataflow' password='dataflow';");
Granting the ACLs themselves is done with the AdminClient, connecting as the admin super user:

package zktest.zktest;

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeAclsResult;
import org.apache.kafka.clients.admin.KafkaAdminClient;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclBindingFilter;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class AclTest {

    public static void main(String[] args) throws Exception {
        Map<String, Object> configs = new HashMap<>();
        // Broker address list, comma separated. The nginx address is used here;
        // if you do not need nginx, use the Kafka broker IP directly.
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "10.28.88.61:17002");
        configs.put("security.protocol", "SASL_PLAINTEXT");
        configs.put("sasl.mechanism", "PLAIN");
        // Account used to log in to the broker; admin is the super user.
        // The cluster uses SASL/PLAIN, so the PlainLoginModule is used here.
        configs.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"kafka\";");
        AdminClient adminClient = KafkaAdminClient.create(configs);

        // principal: User:dataflow is the account being granted permissions
        // host: the host to allow (* for any host)
        // operation: the operation being granted
        // permissionType: ALLOW or DENY
        AccessControlEntry ace = new AccessControlEntry("User:dataflow", "*", AclOperation.READ, AclPermissionType.ALLOW);

        // resourceType: the resource type (topic, group, ...)
        // name: the topic or group name
        // patternType: the resource pattern type
        // The two bindings below mean: when a client authenticates via SASL as dataflow
        // and uses group.id=wwaaaddfw, it only has READ permission on topic-name17.
        ResourcePattern rp = new ResourcePattern(ResourceType.TOPIC, "topic-name17", PatternType.LITERAL);
        ResourcePattern rp1 = new ResourcePattern(ResourceType.GROUP, "wwaaaddfw", PatternType.LITERAL);
        AclBinding ab = new AclBinding(rp, ace);
        AclBinding ab1 = new AclBinding(rp1, ace);
        // Multiple ACLs can be granted in one call by passing a list
        List<AclBinding> ablist = Arrays.asList(ab, ab1);
        adminClient.createAcls(ablist);

        // List all ACLs that have been granted
        DescribeAclsResult b = adminClient.describeAcls(AclBindingFilter.ANY);
        System.out.println(b.values().get());
        adminClient.close();
    }
}
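
Note that the bindings above only grant READ. If the same dataflow user also needs to produce to the topic, a WRITE binding has to be added the same way. A minimal sketch under that assumption (not part of the original setup); newer clients with idempotence enabled may additionally need IDEMPOTENT_WRITE on the cluster:

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class GrantWriteAcl {
    public static void main(String[] args) throws Exception {
        Map<String, Object> configs = new HashMap<>();
        // Same admin connection settings as AclTest above
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "10.28.88.61:17002");
        configs.put("security.protocol", "SASL_PLAINTEXT");
        configs.put("sasl.mechanism", "PLAIN");
        configs.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"kafka\";");
        try (AdminClient adminClient = AdminClient.create(configs)) {
            // Allow User:dataflow to WRITE to topic-name17 from any host
            AccessControlEntry writeAce = new AccessControlEntry(
                    "User:dataflow", "*", AclOperation.WRITE, AclPermissionType.ALLOW);
            ResourcePattern topic = new ResourcePattern(ResourceType.TOPIC, "topic-name17", PatternType.LITERAL);
            adminClient.createAcls(Arrays.asList(new AclBinding(topic, writeAce))).all().get();
        }
    }
}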

Client consumer example

package zktest.zktest;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

public class HelloWorldConsumer {

    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        // The nginx address is used here; if you do not need nginx, use the Kafka broker IP directly.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.28.88.61:17002");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "wwaaaddfw");
        props.put("auto.offset.reset", "earliest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        //props.put("enable.auto.commit", "false");
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1");
        // Authenticate as the dataflow user, which was granted READ on topic-name17 and group wwaaaddfw
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required username='dataflow' password='dataflow';");

        Consumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("topic-name17"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(10));
            Thread.sleep(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("partition: " + record.partition()
                        + ", offset: " + record.offset()
                        + ", key: " + record.key());
            }
        }
    }
}

At this point, if the consumer is switched to any other user (apart from the super user admin), its requests are rejected, because no ACLs have been granted to those users.
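
For the producing side the SASL settings are identical. The sketch below is a hypothetical producer for the same topic; it assumes the dataflow user has also been granted WRITE on topic-name17 (e.g. as sketched after the ACL code above), since with only the READ grant it would be rejected:

package zktest.zktest;

import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

public class HelloWorldProducer {

    public static void main(String[] args) {
        Properties props = new Properties();
        // Same nginx-proxied bootstrap address as the consumer example
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.28.88.61:17002");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // Same SASL/PLAIN settings as the consumer
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required username='dataflow' password='dataflow';");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Without a WRITE ACL on the topic this send fails with an authorization error.
            producer.send(new ProducerRecord<>("topic-name17", "key-1", "hello acl"));
            producer.flush();
        }
    }
}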
